Receiver operating characteristic (ROC) curve
- The ROC curve is created by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings.
- The true positive rate is also known as sensitivity, recall, or probability of detection in machine learning.
- The false positive rate is also known as the fall-out or probability of false alarm and can be calculated as 1 − specificity (a small Python sketch after the list below illustrates how both rates are computed).
- An ROC curve demonstrates several things:
- It shows the tradeoff between sensitivity and specificity: as the decision threshold varies, any increase in sensitivity comes at the cost of a decrease in specificity.
- The closer the curve follows the left-hand border and then the top border of the ROC space, the more accurate the test.
- The closer the curve comes to the 45-degree diagonal of the ROC space, the less accurate the test.
- The area under the curve (AUC) summarizes overall accuracy: an AUC of 1.0 corresponds to a perfect classifier, while 0.5 corresponds to random guessing (see the AUC sketch below).
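The following is a minimal sketch (toy labels and scores invented here, not taken from the referenced pages) showing how TPR and FPR follow directly from confusion-matrix counts at a given threshold; sweeping the threshold from high to low traces out the ROC curve.

```python
# Minimal sketch: compute TPR and FPR at several thresholds on toy data.
import numpy as np

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])                      # ground-truth labels (toy)
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.55, 0.9])   # model scores (toy)

def tpr_fpr(y_true, y_score, threshold):
    """Return (TPR, FPR) for predictions thresholded at `threshold`."""
    y_pred = (y_score >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    tpr = tp / (tp + fn)    # sensitivity / recall / probability of detection
    fpr = fp / (fp + tn)    # fall-out = 1 - specificity
    return tpr, fpr

# Sweep thresholds from high to low; each (FPR, TPR) pair is one ROC point.
for t in [0.9, 0.7, 0.5, 0.3, 0.1]:
    tpr, fpr = tpr_fpr(y_true, y_score, t)
    print(f"threshold={t:.1f}  TPR={tpr:.2f}  FPR={fpr:.2f}")
```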
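For the AUC, here is a short sketch assuming scikit-learn is available: `roc_curve` performs the threshold sweep and `roc_auc_score` computes the area in one call, and trapezoidal integration of the resulting curve recovers the same number.

```python
# Minimal sketch: full ROC curve and AUC for the same toy data as above.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.55, 0.9])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
auc = roc_auc_score(y_true, y_score)

print("FPR:", np.round(fpr, 2))
print("TPR:", np.round(tpr, 2))
print("AUC:", round(auc, 3))   # 1.0 = perfect classifier, 0.5 = random guessing

# The same AUC via trapezoidal integration of the curve:
print("AUC (trapezoid):", round(np.trapz(tpr, fpr), 3))
```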
References:
- http://gim.unmc.edu/dxtests/roc2.htm
- https://www.youtube.com/watch?v=OAl6eAyP-yo