Review Article

Applications of Artificial Intelligence in Ophthalmology: General Overview

Table 4

Introduction of metrics to evaluate the performance of a model.

Metric | Definition

Accuracy | The proportion of samples correctly classified by the model among all samples
Sensitivity/recall rate | The number of samples correctly identified as positive by the model divided by the number of all samples labeled positive by the gold standard
Specificity | The number of samples correctly identified as negative by the model divided by the number of all samples labeled negative by the gold standard
Precision/positive predictive value | The number of samples correctly identified as positive by the model divided by the number of all samples the model identified as positive
Kappa value | The chance-corrected agreement between the model's assigned categories and the ground truth
Dice coefficient/F1 score | The harmonic mean of precision and recall; an F1 score reaches its best value at 1 (perfect precision and recall) and its worst at 0
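As an illustration of how the metrics in Table 4 relate to a binary confusion matrix, the sketch below computes each of them from true/false positive and negative counts. The counts are invented for demonstration only, and the kappa computation assumes the standard two-category Cohen's kappa formulation.

```python
# Hypothetical confusion-matrix counts (not from any real study):
# tp/fn are gold-standard positives, tn/fp are gold-standard negatives.
tp, fp, tn, fn = 40, 10, 45, 5
total = tp + fp + tn + fn

accuracy = (tp + tn) / total                  # correct among all samples
sensitivity = tp / (tp + fn)                  # recall rate
specificity = tn / (tn + fp)
precision = tp / (tp + fp)                    # positive predictive value
f1 = 2 * precision * sensitivity / (precision + sensitivity)

# Cohen's kappa: observed agreement corrected for chance agreement,
# where chance agreement is estimated from the marginal frequencies.
p_observed = (tp + tn) / total
p_chance = ((tp + fp) / total) * ((tp + fn) / total) \
         + ((fn + tn) / total) * ((fp + tn) / total)
kappa = (p_observed - p_chance) / (1 - p_chance)

print(f"accuracy={accuracy:.3f} sensitivity={sensitivity:.3f} "
      f"specificity={specificity:.3f} precision={precision:.3f} "
      f"f1={f1:.3f} kappa={kappa:.3f}")
```

With these counts, accuracy is 0.85 while kappa is lower (0.70), showing how kappa discounts the agreement expected by chance alone.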