Ref. | Dataset | Used method | Outcomes and metrics | Research challenges
--- | --- | --- | --- | ---
[20] | MESSIDOR database | HPTI-V4 (hyperparameter-tuned Inception-V4) model | Sensitivity, specificity, accuracy, and precision | Different classification models are not implemented
[21] | MESSIDOR database with 400 images | Histogram equalization and contrast-limited adaptive histogram equalization (CLAHE) | Sensitivity, specificity, accuracy, precision, recall, F-score, and G-means | Alternative medical databases are not considered for performance evaluation
[22] | MESSIDOR-2 dataset | Principal component analysis (PCA) with firefly algorithm | Accuracy, precision, recall, sensitivity, and specificity | Not suitable for high-dimensional data in various domains |
[23] | EyePACS dataset | CNN-based binocular network with Siamese-like structure | Sensitivity, specificity, and quadratic kappa score | Uses limited fundus image data with missing values collected from the same patient
[24] | Dataset of 35,126 fundus images from Kaggle | Feature transfer learning and hyperparameter tuning | Accuracy, sensitivity, and specificity | Alternative image classification methods are not considered to improve model accuracy
[25] | MESSIDOR dataset | Capsule network architecture | Accuracy | Not all image classes of the dataset are trained in CapsNet
[26] | Dataset of 2,000 images from Kaggle | Fuzzy C-means algorithm | Accuracy | System is not implemented in a GPU environment; limited data sources are considered
[27] | Standard diabetic retinopathy database (DIARETDB1) | CNN with entropy images and grayscale unsharp masking (UM) | Accuracy, sensitivity, and specificity | Larger datasets are not included
[28] | Kaggle dataset of 21,123 images | Patch-based DNN | Accuracy, sensitivity, and specificity | The distinctive labelling model is costly; limited image data are used
[29] | Dataset of 240 images from Kaggle | SVM and random forest techniques | Accuracy | Alternative image classification methods are not implemented
[30] | MESSIDOR-2 and E-ophtha databases | Data-driven algorithm for deep feature extraction | Sensitivity, specificity, and AUC | Time-consuming, with increased sensitivity for datasets with multiple variances
[31] | DDR dataset | Deep neural network algorithms: VGGNet-16, ResNet-18, GoogLeNet, and DenseNet-121 | Average precision (AP) and intersection over union (IoU) for each lesion type | Detection and segmentation of fundus image lesions from various perspectives is critical; larger image datasets are not considered
[32] | Kaggle repository dataset | Score propagation deep learning model | Sensitivity and specificity performance for lesion recognition | Pixel score identification process is not included |
[33] | STARE & DRIVE image dataset | Feature extraction + image segmentation method | Sensitivity and specificity performance for lesion recognition | Hybrid robust DL methods are not included for performance enhancements |
[34] | EyePACS-1 and Messidor-1 dataset | Deep-CNN-based model | High sensitivity and specificity | Excludes the possibility of model implementation in clinical environment |
[35] | STARE, DIARETDB1, MESSIDOR, DRIVE, STARE, REVIEW, and E-ophtha datasets | Deep CNN model with inception-V4 algorithm | Accuracy, precision, and recall | Data from other domains are not included |
[36] | Data collected from patients in central India | PCA (principal component analysis) and linear regression | Accuracy | Larger datasets are not considered |
[37] | EyePACS dataset | Gaussian filters and EfficientNet | Kappa score and accuracy | A balanced dataset is not used, which reduces efficiency
[38] | MESSIDOR-1 dataset | CLAHE preprocessing with a CNN and transfer learning (a minimal sketch of this pipeline follows the table) | Accuracy | Minor diseases are excluded
[39] | EyePACS dataset | Deep CNN with Inception network | Sensitivity, specificity, accuracy, and precision | An automated image prognosis system is not included
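Several of the surveyed pipelines pair a contrast-enhancement step with transfer learning on a pretrained CNN, for example the CLAHE and CNN transfer-learning approach of [38]. The snippet below is a minimal sketch of such a pipeline rather than the cited authors' implementation: the library choices (OpenCV and TensorFlow/Keras), the InceptionV3 backbone, the five-class output, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of a CLAHE + transfer-learning pipeline in the spirit of [38].
# The library choices, the InceptionV3 backbone, and the hyperparameters below
# are illustrative assumptions, not details taken from the cited paper.
import cv2
import numpy as np
import tensorflow as tf


def clahe_preprocess(bgr_image: np.ndarray) -> np.ndarray:
    """Apply CLAHE to the luminance channel of a colour fundus image."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l_ch, a_ch, b_ch = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = cv2.merge((clahe.apply(l_ch), a_ch, b_ch))
    return cv2.cvtColor(enhanced, cv2.COLOR_LAB2BGR)


def build_transfer_model(num_classes: int = 5) -> tf.keras.Model:
    """Frozen ImageNet backbone with a small trainable classification head."""
    base = tf.keras.applications.InceptionV3(
        weights="imagenet", include_top=False, input_shape=(299, 299, 3))
    base.trainable = False  # train only the new head first; fine-tune later if needed
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    x = tf.keras.layers.Dropout(0.3)(x)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(base.input, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

In a full training loop, each fundus image would be CLAHE-enhanced, resized to 299x299, and passed through `tf.keras.applications.inception_v3.preprocess_input` before being fed to the model, with the DR grade (e.g. the five EyePACS/Kaggle classes) used as the label.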