296. Best Meeting Point

This article looks at the problem of finding a point in a 2D grid that minimizes the sum of Manhattan distances to all marked points. The problem is reduced to one dimension per axis and solved by picking the median coordinate, giving an efficient implementation.


Problem

A group of two or more people wants to meet and minimize the total travel distance. You are given a 2D grid of values 0 or 1, where each 1 marks the home of someone in the group. The distance is calculated using Manhattan Distance, where distance(p1, p2) = |p2.x - p1.x| + |p2.y - p1.y|.

For example, given three people living at (0,0), (0,4), and (2,2):

1 - 0 - 0 - 0 - 1
|   |   |   |   |
0 - 0 - 0 - 0 - 0
|   |   |   |   |
0 - 0 - 1 - 0 - 0

The point (0,2) is an ideal meeting point, as the total travel distance of 2 + 2 + 2 = 6 is minimal. So return 6.
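As a quick sanity check, here is a minimal sketch (the totalDistance helper and the hard-coded coordinates are written here just for illustration; they are not part of the original post) that sums the Manhattan distances from the candidate point (0,2) to the three homes:

#include <cstdlib>
#include <iostream>
#include <utility>
#include <vector>

// Illustrative helper: total Manhattan distance from (x, y) to every home.
int totalDistance(const std::vector<std::pair<int,int>>& homes, int x, int y) {
    int total = 0;
    for (const auto& h : homes) {
        total += std::abs(h.first - x) + std::abs(h.second - y);
    }
    return total;
}

int main() {
    std::vector<std::pair<int,int>> homes = {{0, 0}, {0, 4}, {2, 2}};
    std::cout << totalDistance(homes, 0, 2) << std::endl;  // prints 6
    return 0;
}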


Solution

If the problem were one-dimensional, the best meeting point would be the median element (buggy: it is not the mean of the coordinates). Since Manhattan distance decomposes into independent row and column terms, the 2D problem splits into two independent 1D problems, each solved by taking the median.
#include <algorithm>
#include <cstdlib>
#include <vector>
using namespace std;

class Solution {
public:
    int minTotalDistance(vector<vector<int>>& grid) {
        // Collect the row and column indices of every home separately;
        // the two axes are independent under Manhattan distance.
        vector<int> iArr, jArr;
        for (int i = 0; i < grid.size(); i++) {
            for (int j = 0; j < grid[0].size(); j++) {
                if (grid[i][j] == 1) {
                    iArr.push_back(i);
                    jArr.push_back(j);
                }
            }
        }

        const int N = iArr.size();
        // iArr is already sorted because the grid is scanned row by row;
        // jArr must be sorted explicitly before taking its median.
        sort(jArr.begin(), jArr.end());
        int iMid = iArr[N/2], jMid = jArr[N/2], rst = 0;

        // Sum the Manhattan distances from every home to the median point.
        for (int i = 0; i < N; i++) {
            rst += abs(iArr[i] - iMid) + abs(jArr[i] - jMid);
        }
        return rst;
    }
};
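For completeness, a small driver like the following (assuming it is compiled together with the class above; the grid literal is simply the example from the problem statement) should reproduce the expected answer of 6:

#include <iostream>
#include <vector>

int main() {
    // The example grid: 1s mark the homes at (0,0), (0,4), and (2,2).
    std::vector<std::vector<int>> grid = {
        {1, 0, 0, 0, 1},
        {0, 0, 0, 0, 0},
        {0, 0, 1, 0, 0}
    };
    Solution s;
    std::cout << s.minTotalDistance(grid) << std::endl;  // prints 6
    return 0;
}

The overall cost is O(mn) to scan the grid plus O(k log k) to sort the column indices of the k homes.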

