Let’s ROC!#

Machine learning offers a very convenient visual tool not only to assess the performance of a binary classifier but also to compare different classifiers with each other: the ROC curve (beloved by experimental particle physicists). Before explaining how to draw it, let’s first introduce the key ingredients.

Ingredients to ROC#

These ingredients are concepts we have already encountered, but bearing other names: the score and the decision threshold.

We saw that classification is done using the output of the sigmoid function and a decision boundary of \(y=0.5\) (see Definition 17 in section What is the Sigmoid Function?). The classifier’s output is sometimes called the score, i.e. an estimate of a probability, and the decision boundary can also be referred to as the decision threshold. It is a cut value above which a data sample is predicted as a signal event (\(y=1\)) and below which it is classified as background (\(y=0\)). We chose \(y^\text{thres.}=0.5\) to cut our sigmoid halfway through its output range, but to build a ROC curve we will vary this decision threshold.
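To make the role of the threshold concrete, below is a minimal sketch (with made-up scores, not from any real model) showing how a cut value turns sigmoid outputs into class predictions:

```python
import numpy as np

# Hypothetical sigmoid outputs (scores) for a handful of samples
scores = np.array([0.10, 0.40, 0.55, 0.80, 0.95])

threshold = 0.5  # the usual halfway cut
predictions = (scores >= threshold).astype(int)  # 1 = signal, 0 = background
print(predictions)                   # [0 0 1 1 1]

# Raising the threshold makes the classifier stricter about calling something signal
print((scores >= 0.9).astype(int))   # [0 0 0 0 1]
```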

Now let’s recall (pun intended) the True Positive Rate defined above in Definition 30; let’s write it again for convenience and add a couple of other metrics:

Definition 32

 
True Positive Rate (TPR), also called recall and sensitivity

Ratio of positive instances correctly classified as positive.

(35)#\[\begin{equation} \text{TPR} = \frac{\text{True Positives}}{\text{Actual Positives}} = \frac{\text{True Positives}}{\text{True Positives} + \text{False Negatives}} \end{equation}\]

 
True Negative Rate (TNR), also called specificity

Ratio of negative instances correctly classified as negative.

(36)#\[\begin{equation} \text{TNR} = \frac{\text{True Negatives}}{\text{Actual Negatives}} = \frac{\text{True Negatives}}{\text{True Negatives} + \text{False Positives}} \end{equation}\]

 
False Positive Rate (FPR)

Ratio of negative instances that are incorrectly classified as positive.

(37)#\[\begin{equation} \text{FPR} = \frac{\text{False Positives}}{\text{Actual Negatives}} = \frac{\text{False Positives}}{\text{True Negatives} + \text{False Positives}} \end{equation}\]

 
The False Positive Rate (FPR) is equal to:

(38)#\[\begin{equation} \text{FPR} = 1 - \text{TNR} = 1 - \text{specificity} \end{equation}\]

We have our ingredients. So, what is a ROC curve?
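Before drawing anything, here is a minimal sketch (with a hypothetical set of true labels and thresholded predictions) showing how these three rates follow from the raw counts:

```python
import numpy as np

# Hypothetical true labels (1 = signal, 0 = background) and predictions
y_true = np.array([1, 1, 1, 0, 0, 0, 0, 1])
y_pred = np.array([1, 0, 1, 0, 1, 0, 0, 1])

tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
tn = np.sum((y_pred == 0) & (y_true == 0))  # true negatives
fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives

tpr = tp / (tp + fn)   # recall / sensitivity
tnr = tn / (tn + fp)   # specificity
fpr = fp / (tn + fp)   # = 1 - TNR
print(f"TPR = {tpr:.2f}, TNR = {tnr:.2f}, FPR = {fpr:.2f}")
```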

Building the ROC curve#

Definition 33

The Receiver Operating Characteristic (ROC) curve is a graphical display that plots the True Positive Rate (TPR) against the False Positive Rate (FPR) for each value of the decision threshold \(T\) scanned over the classifier’s output score range.

Let’s unpack this. First, recall the logistic function in classification. As input, we have a combination of the input features \(\boldsymbol{x}\) with the model parameters \(\boldsymbol{\theta}\) (in the linear case it is a simple dot product, but let’s stay general: it can be any combination). The output of the sigmoid is the prediction, or score. After training our model, we can use the validation dataset, which contains the true labels, to collect the predicted scores for each class, signal (1) and background (0). It is then possible to draw two distributions from those scores, as seen in the schematic below on the left:

../_images/modEval_score_distrib_roc.png

Fig. 17 : The logistic function predicts scores for two classes, the background (in blue) and the signal (in red). The distributions of the scores are shown on the left as two normalized smooth curves.
Image from the author
#
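A similar picture can be produced in a few lines; the sketch below uses toy beta-distributed scores as a stand-in for the outputs of a trained classifier on a validation set:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Toy scores: background peaks at low values, signal at high values
scores_bkg = rng.beta(2, 5, size=5000)   # class 0 (background)
scores_sig = rng.beta(5, 2, size=5000)   # class 1 (signal)

plt.hist(scores_bkg, bins=50, density=True, alpha=0.5, label="background (y = 0)")
plt.hist(scores_sig, bins=50, density=True, alpha=0.5, label="signal (y = 1)")
plt.axvline(0.5, color="k", linestyle="--", label="threshold = 0.5")
plt.xlabel("classifier score")
plt.ylabel("normalized counts")
plt.legend()
plt.show()
```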

In the previous lecture, the decision boundary was illustrated as a horizontal line on the sigmoid plot. In the distribution of the scores, it now appears as a vertical threshold. In other words: what is predicted as signal, i.e. the data points whose scores are above the decision boundary, corresponds to the integral of the score distributions to the right of the threshold. And what is predicted as background, the scores below the decision boundary, corresponds to the integral of the curves to the left of the threshold.

As these score distributions overlap, there will be errors in the predictions!
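Reusing the same toy scores, the fraction of each class that falls on the signal side of the cut gives an empirical estimate of those integrals, and hence of the rates, for one threshold value:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scores mimicking the two overlapping distributions above
scores_bkg = rng.beta(2, 5, size=5000)   # background (y = 0)
scores_sig = rng.beta(5, 2, size=5000)   # signal (y = 1)

threshold = 0.5
tpr = np.mean(scores_sig >= threshold)   # signal correctly kept: True Positive Rate
fpr = np.mean(scores_bkg >= threshold)   # background leaking in: False Positive Rate
print(f"TPR = {tpr:.2f}, FPR = {fpr:.2f}")
```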

Let’s see it closer:

../_images/modEval_distrib_nolabels.png

Fig. 18 : Overlapping distributions of the scores between our two classes means prediction errors.
Image from the author
#

Exercise

From the figure above, identify where the True Positives, True Negatives, False Positives and False Negatives are.

For a given threshold, it is possible to compute the True Positive Rate (TPR) and the False Positive Rate (FPR). The ROC curve is the set of (FPR, TPR) points obtained for all threshold values.

../_images/modEval_ROC.png

Fig. 20 : ROC curve (dark orange) illustrating the relationship between true positive rate (TPR) and false positive rate (FPR). The dashed diagonal line represents a random classifier.
Image from the author
#
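In practice the threshold scan is rarely done by hand; scikit-learn’s `roc_curve` returns one (FPR, TPR) pair per threshold. Here is a minimal sketch on the same kind of toy scores (real validation labels and scores would slot in the same way):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)

# Toy validation set: true labels and classifier scores
y_true = np.concatenate([np.zeros(5000), np.ones(5000)])   # 0 = background, 1 = signal
y_score = np.concatenate([rng.beta(2, 5, size=5000),       # background scores
                          rng.beta(5, 2, size=5000)])      # signal scores

fpr, tpr, thresholds = roc_curve(y_true, y_score)

plt.plot(fpr, tpr, label="toy classifier")
plt.plot([0, 1], [0, 1], "k--", label="random classifier")
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.legend()
plt.show()
```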

Exercise

If we move the threshold to the right (\(x \to +\infty\)), which direction does this correspond to on the ROC curve, right or left?

Comparing Classifiers#

The ROC curve has the great advantage of showing how different classifiers compare over the whole range of signal and background efficiencies.

../_images/modEval_roc_wiki.png

Fig. 21 : Several ROC curves can be overlaid to compare classifiers. A poor classifier will lie near the “random classifier” line, i.e. it relies on pure luck (it will be right on average 50% of the time). The ideal classifier corresponds to the top-left dot, where 100% of the real signal samples are correctly classified as signal while the False Positive Rate is zero.
Image: Modified work by the author, original by Wikipedia
#

We can see from the picture that the closer the curve gets to the ideal classifier, the better. We can use the area under the ROC curve as a single number to quantitatively compare classifiers on their overall performance.

ROC curves can differ depending on the metrics used. In particle physics, the True Positive Rate is called the signal efficiency: it measures how efficient the classifier is at correctly classifying as signal (numerator) all the real signal (denominator). Zero is bad, TPR = 1 is ideal. The False Positive Rate is the background efficiency, i.e. the fraction of background that leaks into the signal selection. Particle physics builds ROC curves slightly differently from the ones you see in data science: instead of the FPR, it often uses the background rejection, defined as the inverse of the background efficiency. All of this to say: it is important to read the graph axes first!
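As an illustration of the particle-physics convention, the sketch below (again on toy scores) plots the signal efficiency against the background rejection, 1/FPR, on a logarithmic scale:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)

# Toy validation labels and scores (stand-ins for a real trained model)
y_true = np.concatenate([np.zeros(5000), np.ones(5000)])
y_score = np.concatenate([rng.beta(2, 5, size=5000), rng.beta(5, 2, size=5000)])

fpr, tpr, _ = roc_curve(y_true, y_score)

mask = fpr > 0                       # avoid dividing by zero at FPR = 0
plt.plot(tpr[mask], 1.0 / fpr[mask])
plt.xlabel("signal efficiency (TPR)")
plt.ylabel("background rejection (1 / FPR)")
plt.yscale("log")                    # rejection spans orders of magnitude
plt.show()
```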

Definition 34

The Area Under the Curve (AUC) is the integral of the ROC curve, from FPR = 0 to FPR = 1.

A perfect classifier will have AUC = 1.

While it is convenient to have a single number for comparing classifiers, the AUC does not reflect how classifiers perform in specific ranges of signal efficiency. When optimizing or choosing a classifier, it is always important to check its performance in the range relevant to the given problem.
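As a final sketch (again on toy scores), the AUC can be computed directly from the labels and scores with scikit-learn’s `roc_auc_score`. It remains a single summary number, so the curve itself should still be inspected in the region that matters for your problem:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Toy validation labels and scores (stand-ins for a real trained model)
y_true = np.concatenate([np.zeros(5000), np.ones(5000)])
y_score = np.concatenate([rng.beta(2, 5, size=5000), rng.beta(5, 2, size=5000)])

print(f"AUC = {roc_auc_score(y_true, y_score):.3f}")
```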