Review Article
Applications of Artificial Intelligence in Ophthalmology: General Overview
Table 4
Metrics used to evaluate the performance of a model.
| Metrics | Definitions |
| --- | --- |
| Accuracy | The proportion of samples correctly classified by a classifier among all samples |
| Sensitivity/recall | The number of true positives identified by a classifier divided by the number of all samples labeled positive by a gold standard |
| Specificity | The number of true negatives identified by a classifier divided by the number of all samples labeled negative by a gold standard |
| Precision/positive predictive value | The number of true positives divided by the number of all samples identified as positive by a classifier |
| Kappa value | The agreement between a model and the ground truth on the assignment of categories, corrected for agreement expected by chance |
| Dice coefficient/F1 score | The harmonic mean of precision and recall; an F1 score reaches its best value at 1 (perfect precision and recall) and its worst at 0 |
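As an illustration of how the metrics in the table relate to the entries of a binary confusion matrix, the sketch below computes each of them from illustrative true-positive (TP), false-positive (FP), true-negative (TN), and false-negative (FN) counts; the function name and the example counts are hypothetical, not taken from the article.

```python
def classification_metrics(tp, fp, tn, fn):
    """Compute the Table 4 metrics for a binary classifier.

    tp/fp/tn/fn are counts from the confusion matrix against a
    gold-standard reference.
    """
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total                # correct calls / all samples
    sensitivity = tp / (tp + fn)                # recall: TP / gold-standard positives
    specificity = tn / (tn + fp)                # TN / gold-standard negatives
    precision = tp / (tp + fp)                  # TP / classifier positives
    # Dice coefficient / F1: harmonic mean of precision and recall
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_observed = accuracy
    p_chance = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / total**2
    kappa = (p_observed - p_chance) / (1 - p_chance)
    return {
        "accuracy": accuracy,
        "sensitivity": sensitivity,
        "specificity": specificity,
        "precision": precision,
        "f1": f1,
        "kappa": kappa,
    }


# Hypothetical example: 80 TP, 10 FP, 95 TN, 15 FN (200 samples)
m = classification_metrics(80, 10, 95, 15)
```

With these counts, accuracy is (80 + 95) / 200 = 0.875, and the F1 score equals the Dice coefficient 2·TP / (2·TP + FP + FN) = 160/185 ≈ 0.865, matching the harmonic-mean definition in the table.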