Confusion Matrix
I first heard this term back when I started working with data and models, and it confused me for quite some time until I got to know its strength. In this article I'll try my level best to keep it simple and help you understand what it is.
What is a Confusion Matrix?
Wikipedia says that in the field of machine learning, and specifically the problem of statistical classification, a confusion matrix, also known as an error matrix, is a specific table layout that allows visualization of the performance of an algorithm, typically a supervised learning one.
To put it simply: once we transform the data, build a model and get some predictions, how do we say how well our model is performing? That is where the confusion matrix comes into the picture. So, it is a tool for measuring the performance of our model.
How do we calculate such a matrix? That would be our next question.
- Prepare a validation dataset that is close to the real-world data our model will run on, along with the ground truth (expected output).
- Determine the number of classes our model predicts.
- Make a prediction for each sample in the validation data.
- Now we have both expected values and predicted values, so we can count the correct and incorrect predictions with respect to each class.
From the above findings we create a table/matrix as follows:
- Expected (actual) classes down the side: each row of the matrix corresponds to an actual class.
- Predicted classes across the top: each column of the matrix corresponds to a predicted class (see the sketch after this list).
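As a minimal sketch (the class names and labels below are made up purely for illustration), the matrix can be built by counting how often each (actual, predicted) pair occurs; scikit-learn's confusion_matrix produces the same table:

```python
from collections import Counter

# Hypothetical expected (ground-truth) and predicted labels for a 3-class problem.
expected  = ["cat", "cat", "dog", "bird", "dog", "cat", "bird", "dog"]
predicted = ["cat", "dog", "dog", "bird", "dog", "cat", "cat",  "dog"]

classes = sorted(set(expected))                  # one row/column per class
pair_counts = Counter(zip(expected, predicted))  # how often each (actual, predicted) pair occurs

# Rows = actual (expected) class, columns = predicted class.
matrix = [[pair_counts[(actual, pred)] for pred in classes] for actual in classes]

for actual, row in zip(classes, matrix):
    print(actual, row)

# The same matrix with scikit-learn, if it is available:
# from sklearn.metrics import confusion_matrix
# confusion_matrix(expected, predicted, labels=classes)
```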
After seeing this, multiple questions might pop up, such as: what are all these TPs, TNs, FNs and FPs? Before jumping into them with a case study, let us first go through some definitions.
Specificity: Specificity measures how often the model correctly identifies samples that do not belong to the class in question (the true negative rate).
Sensitivity: Sensitivity (also called recall) measures how often the model correctly generates a positive result for samples that actually belong to the class (the true positive rate).
Precision: Precision measures what fraction of the samples we predicted as positive are actually positive according to the ground truth.
Negative Predictive Value: It measures the proportion of predicted negatives that are actually negative.
Accuracy: Accuracy is one metric that tells us the fraction of predictions our model got right.
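To tie these definitions to the matrix, here is a minimal sketch of each metric for a binary problem, written in terms of the four counts TP, TN, FP and FN (the function names are my own, not from any particular library):

```python
def specificity(tp, tn, fp, fn):
    # True negative rate: actual negatives that were predicted negative.
    return tn / (tn + fp)

def sensitivity(tp, tn, fp, fn):
    # True positive rate (recall): actual positives that were predicted positive.
    return tp / (tp + fn)

def precision(tp, tn, fp, fn):
    # Of all samples predicted positive, the fraction that really is positive.
    return tp / (tp + fp)

def negative_predictive_value(tp, tn, fp, fn):
    # Of all samples predicted negative, the fraction that really is negative.
    return tn / (tn + fn)

def accuracy(tp, tn, fp, fn):
    # Fraction of all predictions that were correct.
    return (tp + tn) / (tp + tn + fp + fn)
```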
Case Study
Let us make this clearer with a classification problem, facial recognition (deciding whether two photos show the same person), by following the steps we discussed earlier.
- Determine positive and negative classes from our samples along with ground-truth values.
- Positive: the two samples are the same (same person).
- Negative: the two samples are different (different people).
- Let's assume that we have tested our model and gathered both the predicted values and the ground truth; then our confusion matrix looks like this...
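The original figure is not reproduced here, but with some made-up counts (purely for illustration, 100 photo pairs in total) the matrix for this binary problem could look like:

```text
                     Predicted: same    Predicted: different
Actual: same             TP = 40              FN = 10
Actual: different        FP = 5               TN = 45
```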
So, now let's get into some more detail on "same" and "different".
- If the ground truth says the two photos are of the same person and our trained model also says they are the same, it is a correct prediction of the positive class, called a true positive (TP).
- If the ground truth says the two photos belong to the same person but the model says they are different, the model predicted the negative class incorrectly, so it is a false negative (FN).
- If the ground truth says the two people are different but our model predicted them as the same person, the model predicted the positive class incorrectly, so it is a false positive (FP).
- If the ground truth says the two people are different and our model also predicted them as different, the model predicted the negative class correctly, so it is a true negative (TN).
So, accuracy becomes the number of correct predictions divided by the total number of predictions.
Accuracy = (TP + TN) / (TP + TN + FP + FN)
Example
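Using the hypothetical counts from the matrix above (TP = 40, TN = 45, FP = 5, FN = 10):

Accuracy = (40 + 45) / (40 + 45 + 5 + 10) = 85 / 100 = 0.85

Sensitivity = 40 / (40 + 10) = 0.80 and Specificity = 45 / (45 + 5) = 0.90

So, on these made-up numbers, the model would get 85% of the pairs right overall, while catching 80% of the matching pairs and 90% of the non-matching pairs.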
Conclusion
To summarize, the confusion matrix is a great tool for closely analyzing how our classification model behaves on data similar to what it will see in production, and it helps us understand model performance.