Precision, Recall, and F1 Score
Precision is the fraction of predicted positive instances that are actually positive. In other words, it is the number of true positives divided by the number of true positives plus the number of false positives.
Recall is the fraction of actual positive instances that are predicted positive. In other words, it is the number of true positives divided by the number of true positives plus the number of false negatives.
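To make the two definitions concrete, here is a minimal Python sketch. The `y_true` and `y_pred` labels are made-up examples; the counts are done by hand and then checked against scikit-learn's `precision_score` and `recall_score`:

```python
from sklearn.metrics import precision_score, recall_score

# Made-up binary labels: 1 = positive, 0 = negative
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]

# Count true positives, false positives, and false negatives by hand
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # 3
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # 1
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # 1

precision = tp / (tp + fp)  # 3 / (3 + 1) = 0.75
recall = tp / (tp + fn)     # 3 / (3 + 1) = 0.75

# scikit-learn gives the same numbers
print(precision, precision_score(y_true, y_pred))  # 0.75 0.75
print(recall, recall_score(y_true, y_pred))        # 0.75 0.75
```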
F1 score is the harmonic mean of precision and recall. The harmonic mean is more sensitive to low values than the arithmetic mean, so the F1 score is dragged down sharply whenever either precision or recall is close to 0.
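A small sketch makes this behaviour concrete. The `f1` helper below is just an illustration of the harmonic-mean formula, and the precision/recall values passed to it are arbitrary examples:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# When both scores are equal, the harmonic and arithmetic means agree
print(f1(0.8, 0.8))  # 0.8

# One low score drags the harmonic mean far below the
# arithmetic mean, which would be 0.5 here
print(f1(0.9, 0.1))  # 0.18
```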
A perfect model would have a precision and recall of 1, which would give an F1 score of 1. In practice, however, no model is perfect, so the F1 score will typically be less than 1.
The F1 score is a more comprehensive measure of model performance than accuracy because it takes both precision and recall into account. Accuracy only counts how many predictions are correct overall, while precision and recall distinguish between the two kinds of mistakes a classifier can make: false positives and false negatives.
The F1 score is especially useful when the class distribution is uneven. In that case, accuracy may not be a reliable measure of performance, because a model can achieve high accuracy simply by predicting the majority class while ignoring the minority class entirely.
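As a rough illustration, take a made-up dataset of 95 negatives and 5 positives and a classifier that always predicts the majority class. Accuracy looks excellent, while the F1 score reveals that no positive case was ever found:

```python
from sklearn.metrics import accuracy_score, f1_score

# Made-up imbalanced dataset: 95 negatives, 5 positives
y_true = [0] * 95 + [1] * 5

# A lazy model that always predicts the majority (negative) class
y_pred = [0] * 100

print(accuracy_score(y_true, y_pred))             # 0.95 -- looks great
print(f1_score(y_true, y_pred, zero_division=0))  # 0.0  -- no positive was ever found
```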