In the case of a classification problem, having only one classification accuracy might not give you the whole picture. So, a confusion matrix or error matrix is used for summarizing the performance of a classification algorithm.

Calculating a confusion matrix can give you an idea of where the classification model is right and what types of errors it is making. A confusion matrix is used to check the performance of a classification model on a set of test data for which the true values are known. Most performance measures, such as precision and recall, are calculated from the confusion matrix.

This article covers:

1. What a confusion matrix is and why it is needed.
2. How to calculate a confusion matrix for a 2-class classification problem using a cat-dog example.
3. Summary and intuition on different measures: Accuracy, Recall, Precision & Specificity.
4. How to create a confusion matrix in Python & R.

1. Confusion Matrix:

A confusion matrix provides an easy summary of the predictive results in a classification problem. Correct and incorrect predictions are summarized in a table with their counts, broken down by each class.

Remember, we describe predicted values as Positive/Negative and actual values as True/False.

True Positive: You predicted positive and it's true. You predicted that an animal is a cat and it actually is.
True Negative: You predicted negative and it's true. You predicted that the animal is not a cat and it actually is not (it's a dog).
False Positive (Type 1 Error): You predicted positive and it's false. You predicted that the animal is a cat but it actually is not (it's a dog).
False Negative (Type 2 Error): You predicted negative and it's false. You predicted that the animal is not a cat but it actually is.

2. Let's calculate the confusion matrix using the above cat and dog example.

3. Summary and intuition on different measures:

Classification Accuracy:
Classification Accuracy is given by the relation:

Accuracy = (TP + TN) / (TP + TN + FP + FN)

Recall (aka Sensitivity):
Recall is defined as the ratio of the total number of correctly classified positive classes divided by the total number of positive classes:

Recall = TP / (TP + FN)

Or, out of all the positive classes, how many we predicted correctly. Recall should be high.

Precision:
Precision is defined as the ratio of the total number of correctly classified positive classes divided by the total number of predicted positive classes:

Precision = TP / (TP + FP)

Or, out of all the predicted positive classes, how many we predicted correctly. Precision should be high.

Trick to remember: Precision has Predictive results in the denominator.

F-score:
It is difficult to compare two models with different Precision and Recall, so to make them comparable we use the F-score. It is the Harmonic Mean of Precision and Recall:

F-score = (2 × Precision × Recall) / (Precision + Recall)

Compared to the Arithmetic Mean, the Harmonic Mean punishes extreme values more.

Specificity:
Specificity determines the proportion of actual negatives that are correctly identified:

Specificity = TN / (TN + FP)

4. How to create a confusion matrix in Python & R:

Let's use both Python and R code for the above dog and cat example; this will give you a better understanding of what you have learned about the confusion matrix so far.

PYTHON: First, let's take the Python code to create a confusion matrix. We have to import the confusion matrix module from the sklearn library, which helps us generate the confusion matrix. Below is the Python implementation (the original label vectors were lost, so the 0/1 lists here are illustrative stand-ins):

```python
# Python script for confusion matrix creation.
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report

# Illustrative labels (1 = cat, 0 = dog); not the original post's data.
actual    = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
predicted = [1, 0, 0, 1, 0, 0, 1, 1, 1, 0]

results = confusion_matrix(actual, predicted)
print('Confusion Matrix :')
print(results)
print('Accuracy Score :', accuracy_score(actual, predicted))
print('Report :')
print(classification_report(actual, predicted))
```

R: Let's use R code to create a confusion matrix now. We will use the caret library in R, whose confusionMatrix() function computes the confusion matrix from the predicted and actual factors and reports accuracy, sensitivity, and specificity alongside it.
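The summary measures above can also be computed directly from the four confusion-matrix counts, which makes the formulas concrete. A minimal sketch, using illustrative TP/TN/FP/FN counts (these numbers are assumptions for demonstration, not taken from the cat-dog example):

```python
# Derive the summary measures from confusion-matrix counts by hand.
# TP/TN/FP/FN values are illustrative, not from the original example.
TP, TN, FP, FN = 6, 3, 1, 2

accuracy    = (TP + TN) / (TP + TN + FP + FN)   # all correct / all cases
recall      = TP / (TP + FN)                    # aka sensitivity
precision   = TP / (TP + FP)                    # predictive results in denominator
f_score     = 2 * precision * recall / (precision + recall)
specificity = TN / (TN + FP)                    # true-negative rate

print(accuracy, recall, precision, f_score, specificity)
```

Note that algebraically the F-score simplifies to 2·TP / (2·TP + FP + FN), so it ignores true negatives entirely, unlike accuracy.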