
Toshihiko Nagasawa, Hitoshi Tabuchi, Hiroki Masumoto, Shoji Morita, Masanori Niki, Zaigen Ohara, Yuki Yoshizumi, Yoshinori Mitamura, "Accuracy of Diabetic Retinopathy Staging with a Deep Convolutional Neural Network Using Ultra-Wide-Field Fundus Ophthalmoscopy and Optical Coherence Tomography Angiography", Journal of Ophthalmology, vol. 2021, Article ID 6651175, 10 pages, 2021. https://doi.org/10.1155/2021/6651175

Accuracy of Diabetic Retinopathy Staging with a Deep Convolutional Neural Network Using Ultra-Wide-Field Fundus Ophthalmoscopy and Optical Coherence Tomography Angiography

Academic Editor: Heba A El Gendy
Received: 24 Dec 2020
Revised: 11 Mar 2021
Accepted: 26 Mar 2021
Published: 05 Apr 2021

Abstract

Purpose. The present study aimed to compare the accuracy of diabetic retinopathy (DR) staging with a deep convolutional neural network (DCNN) using two different types of fundus cameras and composite images. Methods. The study included 491 ultra-wide-field fundus ophthalmoscopy and optical coherence tomography angiography (OCTA) images that passed an image-quality review and were graded by three retinal experts, according to the International Clinical Diabetic Retinopathy Severity Scale, as no apparent DR (NDR; 169 images), mild nonproliferative DR (NPDR; 76 images), moderate NPDR (54 images), severe NPDR (90 images), or proliferative DR (PDR; 102 images). Two verifications were then assessed: test 1 distinguished DR from NDR, and test 2 distinguished PDR from NDR. For each verification, the DCNN was evaluated on Optos, OCTA, and combined Optos OCTA images. Results. In the comparison between NDR and DR, the Optos, OCTA, and Optos OCTA images showed mean areas under the curve (AUC) of 0.790, 0.883, and 0.847; sensitivity rates of 80.9%, 83.9%, and 78.6%; and specificity rates of 55.0%, 71.6%, and 69.8%, respectively. In the comparison between NDR and PDR, they showed mean AUCs of 0.981, 0.928, and 0.964; sensitivity rates of 90.2%, 74.5%, and 80.4%; and specificity rates of 97.0%, 97.0%, and 96.4%, respectively. Conclusion. The combination of Optos and OCTA imaging with a DCNN detected DR at desirable levels of accuracy and may be useful in clinical practice and retinal screening. Although combining multiple imaging techniques might overcome their individual weaknesses and provide comprehensive imaging, artificial intelligence classifying multimodal images did not always produce the most accurate results.

1. Introduction

Diabetic retinopathy (DR) is one of the major causes of visual impairment and blindness. According to Sabanayagam et al., the annual incidence of DR ranges from 2.2% to 12.7%, and progression ranges from 3.4% to 12.3% [1]. Moreover, a systematic review that examined the progression of DR to proliferative DR and severe vision loss in high-income countries showed a downward trend since the 1980s [2]. However, 80% of individuals with diabetes reside in developing countries, of which China and India comprise a large proportion [3]. Early diagnosis and prompt treatment of DR have been shown to prevent blindness [4]. Diabetic eye care relies mainly on the number of ophthalmologists and the necessary healthcare infrastructure [5], yet having ophthalmologists perform fundus examinations for all patients with diabetes is unrealistic and expensive. Furthermore, the expenses associated with DR are substantial, and the financial impact may be even more severe given that many patients with this complication live in developing countries [6, 7], many of which have an inadequate number of ophthalmologists [8].

In contrast, automated image processing has proven to be a promising alternative for retinal fundus image analysis and its future application in eye care. Several recent studies have utilized state-of-the-art deep-learning (DL) algorithms for the automated detection of DR from large numbers of fundus images [9–12]. In April 2018, the United States Food and Drug Administration approved the world’s first artificial intelligence (AI) medical device for detecting DR, the IDx-DR. This AI system has allowed specialty-level diagnostics to be applied in primary care settings [10, 13, 14], and studies expect image diagnosis using AI to be a solution to the shortage of physicians and the high medical expenses for specialists [15].

Several studies that examined the efficacy of automated detection have used standard fundus cameras that provide 30° or 50° images. In recent years, however, various fundus cameras have been developed, such as the ultra-wide-field (UWF) imaging fundus camera and optical coherence tomography angiography (OCTA).

UWF, otherwise known as Optos (Optos 200Tx; Optos Plc, Dunfermline, United Kingdom), is a non-contact, noninvasive imaging modality that can capture up to 200° of visible fundus and has become essential for understanding and managing the peripheral retinal pathologies of adult diseases such as diabetes and retinal vein occlusions [16, 17]. Indeed, one report showed the accuracy of UWF-based AI in the detection of DR [18].

OCTA has been devised to noninvasively detect moving objects within the fundus, such as flowing red blood cells, as a flow signal and to visualize them as blood vessels [19, 20]. Similarly, studies have suggested the accuracy of OCTA-based AI for detecting DR [21, 22].

However, manual analysis of multiple fundus images for accurate screening in clinical practice requires a substantial effort from ophthalmologists. As such, the objective of the present study was to investigate the accuracy of AI using different composite images.

2. Methods

2.1. Dataset

The study was approved by the Ethics Committee of Tsukazaki Hospital (Himeji, Japan) (no. 171001) and Tokushima University Hospital (Tokushima, Japan) (no. 3079) and was conducted in accordance with the tenets of the Declaration of Helsinki. Informed consent was obtained from either the participants or their legal guardians after the nature and possible consequences of the study (shown in Supplemental Human Studies Consent File 1) were explained to them.

The study dataset comprised 491 images and associated data from patients with diabetes. Data from patients without other fundus diseases, collected between 2016 and 2019, were extracted from the clinical databases of the ophthalmology departments of Saneikai Tsukazaki Hospital and Tokushima University Hospital. Images were reviewed by three retinal specialists to assess the presence of DR or NDR and registered in an analytical database. All patients underwent Optos (Optos 200Tx®, Nikon), OCTA (OCT Triton plus®, Topcon), and UWF fluorescein angiography. OCTA scans were acquired over a 6 × 6 mm region.

En face images of the superficial plexus, deep plexus, outer retina, and choriocapillaris and the density map were extracted (Figure 1). DR levels were defined using the Early Treatment Diabetic Retinopathy Study (ETDRS) severity scale on the basis of the retinal images of the patients [4]. The 491 images that passed image-quality review were graded as follows: no apparent DR (NDR) (169 images), mild nonproliferative DR (NPDR) (76 images), moderate NPDR (54 images), severe NPDR (90 images), and proliferative DR (PDR) (102 images). All participants underwent comprehensive ophthalmological examinations, including slit-lamp biomicroscopy, dilated ophthalmoscopy, color fundus photography, and SS-OCTA. Data on age, sex, and previous hemoglobin A1c (National Glycohemoglobin Standardization Program) levels were obtained. Diabetes was diagnosed in accordance with the criteria of the 2016 Japanese Clinical Practice Guideline for Diabetes [23].

The present study examined the results of tests 1 and 2, which identified DR (vs. NDR) and PDR (vs. NDR), respectively. For each verification, Optos, OCTA, and Optos OCTA imaging were evaluated. The creation of Optos OCTA images is described in the Image Processing section.

This study used K-fold cross-validation (K = 5), which has been described in detail elsewhere [24, 25]. Briefly, the image data were divided into K groups, after which K − 1 groups were used as training data, while the remaining group was used as validation data. This process was repeated K times until each of the K groups had served as the validation dataset. Images of the training dataset were augmented by brightness adjustment, gamma correction, histogram equalization, noise addition, and inversion, which increased the amount of learning data 18-fold. The deep convolutional neural network (DCNN) model was created and trained using the preprocessed images, similar to methods reported in previous studies [26, 27].
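
The split described above can be sketched as follows. This is an illustrative sketch only, not the authors' code: `image_ids` is a hypothetical stand-in for the study's 491 images, and the 18-fold augmentation step is not shown.

```python
# K-fold cross-validation sketch (K = 5): shuffle, divide into K groups,
# and let each group serve once as the validation set.
import random

def kfold_splits(items, k=5, seed=0):
    """Yield (train, validation) pairs; each fold is validation exactly once."""
    items = list(items)
    random.Random(seed).shuffle(items)
    folds = [items[i::k] for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, val

image_ids = list(range(491))          # stand-in for the 491 study images
splits = list(kfold_splits(image_ids, k=5))
```

Every image appears in exactly one validation fold, so the five validation sets together cover the whole dataset.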

2.2. Image Processing

The original Optos images measured 3900 × 3072 pixels. For analysis, all images were resized to 256 × 192 pixels, changing their aspect ratio.

The concatenated original OCTA image measured 640 × 320 pixels. The images of the four en face zones (superficial plexus, deep plexus, outer retina, and choriocapillaris) were extracted and placed on the upper left, upper right, lower left, and lower right, respectively (Figure 2(a)). The resulting input images were resized to 256 × 192 pixels to reduce the analysis time.

The Optos OCTA image (Figure 2(b)) was created by vertically combining the Optos and OCTA images and resizing the result to 256 × 192 pixels. Representative images of NDR, mild NPDR, and PDR are presented in Figure 3.
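
The tiling and concatenation steps above can be sketched as follows. This is an assumed reconstruction, not the authors' code: dummy zero arrays stand in for real images, each en face zone is assumed to be 320 × 160 pixels so that the 2 × 2 tile matches the stated 640 × 320 size, a naive nearest-neighbour resize stands in for proper image resampling, and 256 × 192 is read as width × height.

```python
import numpy as np

def resize_nn(img, h, w):
    """Naive nearest-neighbour resize (stand-in for a real image resampler)."""
    rows = np.arange(h) * img.shape[0] // h
    cols = np.arange(w) * img.shape[1] // w
    return img[rows][:, cols]

# dummy stand-ins: four 320x160 en face zones and one 3900x3072 Optos image
zones = [np.zeros((160, 320, 3), np.uint8) for _ in range(4)]
optos = np.zeros((3072, 3900, 3), np.uint8)

# 2x2 tile: superficial | deep  /  outer retina | choriocapillaris
octa = np.vstack([np.hstack(zones[:2]), np.hstack(zones[2:])])

optos_s = resize_nn(optos, 192, 256)       # Optos input, 256 x 192
octa_s = resize_nn(octa, 192, 256)         # OCTA input, 256 x 192
combo = np.vstack([optos_s, octa_s])       # vertical Optos OCTA composite
combo_s = resize_nn(combo, 192, 256)       # final composite, 256 x 192
```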

2.3. Deep Learning

In this study, the 16-layer Visual Geometry Group DCNN (VGG16) (Figure 4) [28] was used as the analytical model. The technical details of VGG16 are described in the original paper, and the setup values used in the present study are described later. First, a brief outline is given for ophthalmologists.

2.4. Outline of VGG16

VGG16 automatically learns the local features of images and generates a classification model [29, 30]. It scans the entire image, as many as 13 times, with a small area (the local receptive field) to determine how many partial features (e.g., a long nose for an elephant or a long neck for a giraffe) the target image contains. The scan is performed by moving the area pixel by pixel so that the entire image is examined comprehensively; the operation is called convolution because the values under the receptive field are convolved into a single pixel value [29–31]. For example, if a whole image of 81 pixels (9 × 9) is scanned by shifting one pixel at a time with a local receptive field of 3 × 3 pixels, the scan is performed seven times horizontally and seven times vertically; the result is thus compressed into 49 pixels (7 × 7), collapsing the amount of information to about 60% (49/81). Each such feature detector is called a filter or channel. ReLU [32] was used as the activation function to highlight the extracted features in each layer. During training on correct and incorrect answers, an automatic adjustment called backpropagation strengthens or weakens the features to increase accuracy. In VGG16, the number of feature patterns (channels) increases from 64 to 128, 256, and 512 as the blocks of the convolution process progress. VGG16 also performs an operation called max pooling five times, which halves the number of pixels after each convolution block to emphasize features across the entire image (e.g., red tones for a fire scene or bright tones for a daytime photograph) [33]. The final fully connected layer accepts all the information from the previous layer without thinning it out and links it to probability values through the Softmax function for the binary classification that is the purpose of this study.
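
The 9 × 9 worked example above follows from the standard convolution output-size formula, sketched here for concreteness (the helper name `conv_out` is ours, not from the paper):

```python
def conv_out(n, k, stride=1, pad=0):
    """Side length of the output when an n x n input is scanned by a
    k x k local receptive field (stride 1, no padding by default)."""
    return (n + 2 * pad - k) // stride + 1

side = conv_out(9, 3)      # 7 scan positions horizontally and vertically
compressed = side * side   # 49 output pixels from the 81 input pixels
```

With a 3 × 3 field on a 9 × 9 image this gives 7 × 7 = 49 outputs, i.e., roughly 60% of the 81 input positions, matching the text.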

2.5. VGG16 Settings Used in this Study

The original Optos images measured 3900 × 3072 pixels, whereas the OCTA images measured 640 × 320 pixels. For analysis, we changed the aspect ratio of all input images and resized them to 256 × 192 pixels. Given that RGB input values range from 0 to 255, we normalized them to the range 0–1 by dividing by 255. To increase the learning speed and improve performance even with a small amount of data, the initial weights of the first four convolution blocks were set to parameters learned on ImageNet using transfer learning [34]. The momentum stochastic gradient descent algorithm was used to update the model parameters (learning rate = 0.0005, momentum term = 0.9) [35, 36]. The construction and verification of the neural network were performed using the Keras library (https://keras.io/ja/) in Python, with TensorFlow as the backend.
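
Two of the numerical choices above, the 0–1 normalization and the momentum update rule, can be sketched in a few lines. This is an illustrative sketch under the stated hyperparameters, not the authors' Keras code; the toy weight and gradient values are made up.

```python
import numpy as np

# Normalize an 8-bit RGB image to the 0-1 range by dividing by 255.
img = np.array([[0, 128, 255]], dtype=np.uint8)
x = img.astype(np.float32) / 255.0

# Momentum SGD update, with the paper's settings as defaults
# (learning rate 0.0005, momentum term 0.9).
def momentum_sgd_step(w, grad, velocity, lr=0.0005, mu=0.9):
    velocity = mu * velocity - lr * grad   # accumulate a decaying velocity
    return w + velocity, velocity          # move the weights along it

w, v = np.array([1.0]), np.array([0.0])
w, v = momentum_sgd_step(w, np.array([2.0]), v)   # one toy update step
```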

2.6. Outcome

This study evaluated the performance of six verifications, namely, tests 1 and 2 for each of the Optos, OCTA, and Optos OCTA images. Receiver-operating characteristic (ROC) curves were created on the basis of the abilities of the DL models to discriminate between NDR and DR images (test 1) and between NDR and PDR images (test 2). These curves were evaluated using the area under the curve (AUC), sensitivity, and specificity. An image was counted as positive (DR in test 1, PDR in test 2) when the probability output by the neural network was greater than 0.5. The ROC curves were derived using the Python scikit-learn library (http://scikit-learn.org/stable/tutorial/index.html).
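
The evaluation step can be sketched as follows, with made-up scores and labels. The sketch computes sensitivity and specificity at the 0.5 threshold and the AUC via the Mann-Whitney pair-counting statistic, which equals the area under the empirical ROC curve; the paper itself used scikit-learn for the ROC curves.

```python
# Toy network outputs and ground truth (1 = DR/PDR, 0 = NDR); made up.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   0,   1,   0,   0]

# Sensitivity and specificity at the 0.5 decision threshold.
tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s > 0.5)
tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s <= 0.5)
sensitivity = tp / labels.count(1)
specificity = tn / labels.count(0)

# AUC as the fraction of (positive, negative) pairs ranked correctly,
# with ties counted as half.
pos = [s for s, y in zip(scores, labels) if y == 1]
neg = [s for s, y in zip(scores, labels) if y == 0]
auc = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg) / (len(pos) * len(neg))
```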

2.7. Statistical Analysis

To compare patient backgrounds, age was analyzed using Student's t-test, while male-female ratios were compared using Fisher’s exact test. In all cases, a p value of <0.05 was considered significant. All statistical processing was performed using Python SciPy (https://www.scipy.org/) and Python Statsmodels (http://www.statsmodels.org/stable/index.html).

For the AUC, the 95% confidence intervals (CIs) were obtained using the following formula [37]:

CI = A ± 1.96 × SE(A),

where A is the mean AUC and SE(A) is the standard error of the AUC.

SE(A) was obtained using the following formula [37]:

SE(A) = √{[A(1 − A) + (Np − 1)(Q1 − A²) + (Nn − 1)(Q2 − A²)] / (Np Nn)},

where Np is the number of abnormal (DR or PDR) images, Nn is the number of normal (NDR) images, Q1 is the probability that two randomly chosen abnormal images are both ranked with greater suspicion than a randomly chosen normal image, and Q2 is the probability that one randomly chosen abnormal image is ranked with greater suspicion than two randomly chosen normal images.

Q1 and Q2 were obtained using the following formulas:

Q1 = A / (2 − A),  Q2 = 2A² / (1 + A).

For sensitivity and specificity, 95% CIs were obtained using the Clopper-Pearson method [38]:

lower limit = [1 + (n − k + 1) / (k F0.025(2k, 2(n − k + 1)))]⁻¹,
upper limit = [1 + ((n − k) F0.025(2(n − k), 2(k + 1))) / (k + 1)]⁻¹,

where F0.025(a, b) is the 0.025 quantile of an F-distribution with (a, b) degrees of freedom, k is the number of successes, and n is the number of trials.
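
The two interval calculations can be sketched as follows. This is an illustrative implementation, not the authors' code: the Hanley-McNeil standard error uses the Q1 and Q2 expressions above, and the Clopper-Pearson bounds use the equivalent Beta-quantile form via SciPy (which the paper's toolchain already uses); the test values fed in are made up.

```python
import math
from scipy.stats import beta

def hanley_mcneil_se(a, n_pos, n_neg):
    """Standard error of the AUC (Hanley & McNeil, 1982)."""
    q1 = a / (2 - a)
    q2 = 2 * a ** 2 / (1 + a)
    var = (a * (1 - a)
           + (n_pos - 1) * (q1 - a ** 2)
           + (n_neg - 1) * (q2 - a ** 2)) / (n_pos * n_neg)
    return math.sqrt(var)

def auc_ci(a, n_pos, n_neg, z=1.96):
    """95% CI for the AUC: A +/- 1.96 * SE(A)."""
    se = hanley_mcneil_se(a, n_pos, n_neg)
    return a - z * se, a + z * se

def clopper_pearson(k, n, alpha=0.05):
    """Exact 95% CI for k successes in n trials (Beta-quantile form)."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi
```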

3. Results

3.1. Background

The baseline characteristics of the development and clinical validation datasets are described in Table 1.

                      NDR         Mild NPDR   Moderate NPDR  Severe NPDR  PDR

Number of images      169         76          54             90           102
Patients              95          52          40             58           71
Women (%)             42.6        40.8        38.9           35.6         34.3
Mean age, years (SD)  66.8 ± 9.6  67.2 ± 9.7  67.4 ± 10.3    66.8 ± 8.6   59.0 ± 11.6
Left fundus (%)       49.1        47.4        50.0           48.9         52.0

NDR, no apparent diabetic retinopathy; NPDR, nonproliferative diabetic retinopathy; PDR, proliferative diabetic retinopathy.
3.2. Evaluation of Model Performance

In test 1, the Optos, OCTA, and Optos OCTA images had AUCs of 0.790 (95% CI: 0.751–0.830), 0.883 (95% CI: 0.854–0.912), and 0.847 (95% CI: 0.814–0.880), respectively.

The ROC curves are shown in Figure 5.

In test 2, the Optos, OCTA, and Optos OCTA images had AUCs of 0.981 (95% CI: 0.962–1.064), 0.928 (95% CI: 0.892–0.964), and 0.964 (95% CI: 0.938–0.990), respectively. The ROC curves are shown in Figure 6. Table 2 shows the sensitivity and specificity of the results of the analyses.

Test    Device      Sensitivity, % (95% CI)  Specificity, % (95% CI)

Test 1  Optos       80.9 (76.2–85.1)         55.0 (47.2–62.7)
        OCTA        83.9 (79.4–87.7)         71.6 (64.2–78.3)
        Optos OCTA  78.6 (73.7–82.9)         69.8 (62.3–76.6)

Test 2  Optos       90.2 (82.7–95.2)         97.0 (93.2–99.0)
        OCTA        74.5 (64.9–82.6)         97.0 (93.2–99.0)
        Optos OCTA  80.4 (71.4–87.6)         96.4 (92.4–98.7)

4. Discussion

The present study investigated the efficacy of the DL method in distinguishing NDR from DR on the basis of 491 multimodal images. The DL algorithm showed appropriate sensitivity and specificity for distinguishing NDR from DR (AUC: 0.847; sensitivity: 78.6%; specificity: 69.8%), as well as good results for differentiating NDR from PDR (AUC: 0.964; sensitivity: 80.4%; specificity: 96.4%). The ability to discriminate between NDR and PDR presented herein was comparable with that reported in previous studies [9–15]. All images in this study were obtained from patients with diabetes; even patients with NDR show significantly lower blood vessel density than healthy individuals, especially in the deep layer [39]. The multimodal (Optos OCTA) images used in this study did not always provide the most accurate results. Moreover, although multimodal images were used in both tests 1 and 2, the discriminative abilities of Optos and OCTA were reversed between the two tests.

First, OCTA with DL properly detected the difference between NDR and DR (test 1). The current international classification recommends diagnosis based on the presence of superficial retinal lesions; therefore, the accuracy of OCTA, whose imaging range is narrower than that of UWF imaging, in determining the DR stage has generally been considered poor. However, OCTA images showed significant differences between NDR and DR, even with an unevenly enlarged acircularity index and foveal avascular zone, indicating relatively satisfactory staging accuracy [40]. For patients with early-stage DR, imaging methods that resolve a local area in detail are better than those that only cover the whole area. Given that DR-related microvasculature damage may actually begin around the macula, narrow images can be expected to have the best predictive sensitivity for DR [41].

Second, Optos showed more accurate results in distinguishing NDR from PDR (test 2). Once a patient has developed DR, especially a severe form (e.g., PDR), a wider imaging range can increase the diagnosis rate. Retinopathy lesions that develop predominantly outside the standard fields defined in ETDRS report 7 [42] are considered predominantly peripheral lesions, the extent of which is associated with retinopathy progression [43, 44]. Furthermore, this cohort included eyes treated with and without panretinal photocoagulation (PRP) laser.

Progress in traditional technologies, such as digital fundus photography, along with recent advancements in various imaging modalities, has provided clinicians with new information and improved efficiency. Tran and Pakzad-Vaezi reported the benefits of multimodal imaging of DR and the clinical applications of several imaging techniques in DR including color photography, OCT, OCTA, and adaptive optics [45].

Furthermore, the use of the combination of a DCNN and these multimodal images in diagnosing DR is expected to increase in the future, and the use of DCNNs in the analysis of retinal images is appealing given their suitability for the current trend of teleophthalmology and telemedicine [46] and their cost-effectiveness [47]. Considering that automated DR grading software can potentially offer better efficiency, reproducibility, and early detection of DR, its use in screening the ever-increasing number of individuals with diabetes should help reduce the healthcare burden. The use of multimodal images with a DCNN would enable screening for referable DR in remote areas where ophthalmologist services are unavailable. However, understanding the indications and limitations of each technology allows clinicians to gain the most information from each modality and thereby optimize patient care. In an actual human clinical setting, the combination of multiple imaging techniques can overcome their individual weaknesses and provide a more comprehensive representation. Such an approach helps in accurately localizing a lesion and understanding the pathology in the posterior segment. Considering that the major technological advancements in imaging over the past decade have improved our understanding and knowledge of DR, a multimodal approach to imaging has become the standard of care [48]. However, the present study revealed that multimodal diagnosis using AI did not always yield the best results.

The present study has several limitations. One of the major issues of this study is the small number of images for training. Many DL researchers agree that such a small number of data in each category is insufficient to test the effectiveness of the proposed method. Deep learning generally requires more than a million samples to train without overfitting. Another limitation is that this cohort included eyes treated with and eyes treated without a PRP laser, which may have confounded our results.

In summary, our study suggests that the use of AI in classifying multimodal images did not always produce accurate results and showed advantages and disadvantages depending on the stage. Although the combination of a DCNN and multimodal images certainly provides better results, it is not superior to medical examination. Face-to-face examinations by ophthalmologists remain indispensable for a definite diagnosis.

5. Conclusions

Although UWF fundus ophthalmoscopy and OCTA images with a DCNN were effective in diagnosing DR, the use of AI in diagnosing multimodal images did not always produce accurate results.

Data Availability

The data that support the findings of this study are available from the corresponding author, Hitoshi Tabuchi, upon reasonable request.

Ethical Approval

Approval was obtained from the institutional review boards of Saneikai Tsukazaki Hospital and Tokushima University Hospital to perform this study.

Conflicts of Interest

Toshihiko Nagasawa, Hitoshi Tabuchi, Hiroki Masumoto, and Zaigen Ohara are employees of Tsukazaki Hospital (Himeji, Japan).

Acknowledgments

The authors thank Masayuki Miki and orthoptists at Tsukazaki Hospital for their support in data collection. Hitoshi Tabuchi’s laboratory of Hiroshima University received donations from Topcon Corporation (Tokyo, Japan) and Glory Corporation (Himeji, Japan).

Supplementary Materials

The research consent form provided to patients is shown in Supplemental Human Studies Consent File 1. (Supplementary Materials)

References

  1. C. Sabanayagam, R. Banu, M. L. Chee et al., “Incidence and progression of diabetic retinopathy: a systematic review,” The Lancet Diabetes & Endocrinology, vol. 7, no. 2, pp. 140–149, 2019. View at: Publisher Site | Google Scholar
  2. T. Y. Wong, M. Mwamburi, R. Klein et al., “Rates of progression in diabetic retinopathy during different time periods: a systematic review and meta-analysis,” Diabetes Care, vol. 32, no. 12, pp. 2307–2313, 2009. View at: Publisher Site | Google Scholar
  3. A. Ramachandran, R. C. W. Ma, and C. Snehalatha, “Diabetes in Asia,” The Lancet, vol. 375, no. 9712, pp. 408–418, 2010. View at: Publisher Site | Google Scholar
  4. Early Treatment Diabetic Retinopathy Study Research Group, “Early photocoagulation for diabetic retinopathy: ETDRS report number 9,” Ophthalmology, vol. 98, no. 5, pp. 766–785, 1991. View at: Google Scholar
  5. S. Jones and R. T. Edwards, “Diabetic retinopathy screening: a systematic review of the economic evidence,” Diabetic Medicine, vol. 27, no. 3, pp. 249–256, 2010. View at: Publisher Site | Google Scholar
  6. L. Guariguata, D. R. Whiting, I. Hambleton, J. Beagley, U. Linnenkamp, and J. E. Shaw, “Global estimates of diabetes prevalence for 2013 and projections for 2035,” Diabetes Research and Clinical Practice, vol. 103, no. 2, pp. 137–149, 2014. View at: Publisher Site | Google Scholar
  7. S. Lin, P. Ramulu, E. L. Lamoureux, and C. Sabanayagam, “Addressing risk factors, screening, and preventative treatment for diabetic retinopathy in developing countries: a review,” Clinical & Experimental Ophthalmology, vol. 44, no. 4, pp. 300–320, 2016. View at: Publisher Site | Google Scholar
  8. S. Resnikoff, W. Felch, T.-M. Gauthier, and B. Spivey, “The number of ophthalmologists in practice and training worldwide: a growing gap despite more than 200 000 practitioners,” British Journal of Ophthalmology, vol. 96, no. 6, pp. 783–787, 2012. View at: Publisher Site | Google Scholar
  9. V. Gulshan, L. Peng, M. Coram et al., “Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs,” JAMA, vol. 316, no. 22, pp. 2402–2410, 2016. View at: Publisher Site | Google Scholar
  10. M. D. Abràmoff, Y. Lou, A. Erginay et al., “Improved automated detection of diabetic retinopathy on a publicly available dataset through integration of deep learning,” Investigative Opthalmology & Visual Science, vol. 57, no. 13, pp. 5200–5206, 2016. View at: Publisher Site | Google Scholar
  11. R. Gargeya and T. Leng, “Automated identification of diabetic retinopathy using deep learning,” Ophthalmology, vol. 124, no. 7, pp. 962–969, 2017. View at: Publisher Site | Google Scholar
  12. D. S. W. Ting, C. Y.-L. Cheung, G. Lim et al., “Development and validation of a deep learning system for diabetic retinopathy and related eye diseases using retinal images from multiethnic populations with diabetes,” JAMA, vol. 318, no. 22, pp. 2211–2223, 2017. View at: Publisher Site | Google Scholar
  13. A. A. van der Heijden, M. D. Abramoff, F. Verbraak, M. V. van Hecke, A. Liem, and G. Nijpels, “Validation of automated screening for referable diabetic retinopathy with the IDx-DR device in the Hoorn Diabetes Care System,” Acta Ophthalmologica, vol. 96, no. 1, pp. 63–68, 2018. View at: Publisher Site | Google Scholar
  14. W. H. Yang, B. Zheng, M. N. Wu et al., “An evaluation system of fundus photograph-based intelligent diagnostic technology for diabetic retinopathy and applicability for research,” Diabetes Therapy: Research, Treatment and Education of Diabetes and Related Disorders, vol. 10, no. 5, pp. 1811–1822, 2019. View at: Google Scholar
  15. J. Sahlsten, J. Jaskari, J. Kivinen et al., “Deep learning fundus image analysis for diabetic retinopathy and macular edema grading,” Scientific Reports, vol. 9, no. 1, Article ID 10750, 2019. View at: Google Scholar
  16. M. M. Wessel, G. D. Aaker, G. Parlitsis, M. Cho, D. J. D’Amico, and S. Kiss, “Ultra-wide-field angiography improves the detection and classification of diabetic retinopathy,” Retina, vol. 32, no. 4, pp. 785–791, 2012. View at: Publisher Site | Google Scholar
  17. A. Nagiel, R. A. Lalane, S. R. Sadda, and S. D. Schwartz, “Ultra-widefield fundus imaging,” Retina, vol. 36, no. 4, pp. 660–678, 2016. View at: Publisher Site | Google Scholar
  18. T. Nagasawa, H. Tabuchi, H. Masumoto et al., “Accuracy of ultrawide-field fundus ophthalmoscopy-assisted deep learning for detecting treatment-naïve proliferative diabetic retinopathy,” International Ophthalmology, vol. 39, no. 10, pp. 2153–2159, 2019. View at: Publisher Site | Google Scholar
  19. Y. Jia, O. Tan, J. Tokayer et al., “Split-spectrum amplitude-decorrelation angiography with optical coherence tomography,” Optics Express, vol. 20, no. 4, pp. 4710–4725, 2012. View at: Publisher Site | Google Scholar
  20. N. Takase, M. Nozaki, A. Kato, H. Ozeki, M. Yoshida, and Y. Ogura, “Enlargement of foveal avascular zone in diabetic eyes evaluated by en face optical coherence tomography angiography,” Retina, vol. 35, no. 11, pp. 2377–2383, 2015. View at: Publisher Site | Google Scholar
  21. Y. Guo, T. T. Hormel, H. Xiong et al., “Development and validation of a deep learning algorithm for distinguishing the nonperfusion area from signal reduction artifacts on OCT angiography,” Biomedical Optics Express, vol. 10, no. 7, pp. 3257–3268, 2019. View at: Publisher Site | Google Scholar
  22. T. Nazir, A. Irtaza, Z. Shabbir, A. Javed, U. Akram, and M. T. Mahmood, “Diabetic retinopathy detection through novel tetragonal local octa patterns and extreme learning machines,” Artificial Intelligence in Medicine, vol. 99, Article ID 101695, 2019. View at: Publisher Site | Google Scholar
  23. T. Mita, T. Hiyoshi, H. Yoshii et al., “Japanese clinical practice guideline for diabetes 2016,” Journal of Diabetes Investigation, vol. 9, no. 3, pp. 657–697, 2018. View at: Publisher Site | Google Scholar
  24. F. Mosteller and J. W. Tukey, “Data analysis, including statistics,” Handbook of Social Psychology, vol. 2, pp. 80–203, 1968. View at: Google Scholar
  25. R. Kohavi, “A study of cross-validation and bootstrap for accuracy estimation and model selection,” in Proceedings of the 14th International Joint Conference on Artificial Intelligence, pp. 1137–1145, ACM, New York, NY, USA, 1995. View at: Google Scholar
  26. D. Nagasato, H. Tabuchi, H. Ohsugi et al., “Deep neural network-based method for detecting central retinal vein occlusion using ultrawide-field fundus ophthalmoscopy,” Journal of Ophthalmology, vol. 2018, Article ID 1875431, 6 pages, 2018. View at: Publisher Site | Google Scholar
  27. H. Masumoto, H. Tabuchi, S. Nakakura et al., “Accuracy of a deep convolutional neural network in detection of retinitis pigmentosa on ultrawide-field images,” PeerJ, vol. 7, Article ID e6900, 2019. View at: Publisher Site | Google Scholar
  28. K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” 2014, https://arxiv.org/pdf/1409.1556.pdf. View at: Google Scholar
  29. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and F.-F. Li, “ImageNet: a large-scale hierarchical image database,” in Proceedings of the 2009 IEEE Conference On Computer Vision and Pattern Recognition, pp. 248–255, Miami, FL, USA, 2009. View at: Google Scholar
  30. C.-Y. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu, “Deeply- supervised nets,” in Proceedings of the 18th International Conference on Artificial Intelligence and Statistics, vol. 38, pp. 562–570, San Diego, CA, USA, 2015. View at: Google Scholar
  31. O. Russakovsky, J. Deng, H. Su et al., “ImageNet large scale visual recognition challenge,” International Journal of Computer Vision, vol. 115, no. 3, pp. 211–252, 2015. View at: Publisher Site | Google Scholar
  32. X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” in Proceedings of the 14th International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, April 2011. View at: Google Scholar
  33. D. Scherer, A. Müller, and S. Behnke, “Evaluation of pooling operations in convolutional architectures for object recognition,” Artificial Neural Networks-ICANN 2010, Springer, Berlin, Germany, 2010. View at: Google Scholar
  34. P. Agrawal, R. Girshick, and J. Malik, “Analyzing the performance of multilayer neural networks for object recognition,” in Proceedings of the European Conference on Computer Vision, pp. 329–344, Springer, Cham, Switzerland, 2014. View at: Google Scholar
  35. N. Qian, “On the momentum term in gradient descent learning algorithms,” Neural Networks, vol. 12, no. 1, pp. 145–151, 1999. View at: Publisher Site | Google Scholar
  36. Y. Nesterov, “A method for unconstrained convex minimization problem with the rate of convergence O (1/k2),” Proceedings of the USSR Academy of Sciences, vol. 269, pp. 543–547, 1983. View at: Google Scholar
  37. J. A. Hanley and B. J. McNeil, “The meaning and use of the area under a receiver operating characteristic (ROC) curve,” Radiology, vol. 143, no. 1, pp. 29–36, 1982. View at: Publisher Site | Google Scholar
  38. C. J. Clopper and E. S. Pearson, “The use of confidence or fiducial limits illustrated in the case of the binomial,” Biometrika, vol. 26, no. 4, pp. 404–413, 1934. View at: Publisher Site | Google Scholar
  39. G. Dimitrova, E. Chihara, H. Takahashi, H. Amano, and K. Okazaki, “Quantitative retinal optical coherence tomography angiography in patients with diabetes without diabetic retinopathy,” Investigative Opthalmology & Visual Science, vol. 58, no. 1, pp. 190–196, 2017. View at: Publisher Site | Google Scholar
  40. B. D. Krawitz, S. Mo, L. S. Geyman et al., “Acircularity index and axis ratio of the foveal avascular zone in diabetic eyes and healthy controls measured by optical coherence tomography angiography,” Vision Research, vol. 139, pp. 177–186, 2017. View at: Publisher Site | Google Scholar
  41. T. Hirano, J. Kitahara, Y. Toriyama, H. Kasamatsu, T. Murata, and S. Sadda, “Quantifying vascular density and morphology using different swept-source optical coherence tomography angiographic scan patterns in diabetic retinopathy,” British Journal of Ophthalmology, vol. 103, no. 2, pp. 216–221, 2019. View at: Publisher Site | Google Scholar
  42. Early Treatment Diabetic Retinopathy Study Research Group, “Grading diabetic retinopathy from stereoscopic color fundus photographs—an extension of the modified Airlie House classification: ETDRS report number 10,” Ophthalmology, vol. 98, pp. 786–806, 1991. View at: Google Scholar
  43. P. S. Silva, J. D. Cavallerano, N. M. N. Haddad et al., “Peripheral lesions identified on ultrawide field imaging predict increased risk of diabetic retinopathy progression over 4 years,” Ophthalmology, vol. 122, no. 5, pp. 949–956, 2015. View at: Publisher Site | Google Scholar
  44. P. S. Silva, A. J. Dela Cruz, M. G. Ledesma et al., “Diabetic retinopathy severity and peripheral lesions are associated with nonperfusion on ultrawide field angiography,” Ophthalmology, vol. 122, pp. 2465–2472, 2015. View at: Publisher Site | Google Scholar
  45. K. Tran and K. Pakzad-Vaezi, “Multimodal imaging of diabetic retinopathy,” Current Opinion in Ophthalmology, vol. 29, no. 6, pp. 566–575, 2018. View at: Publisher Site | Google Scholar
  46. F. Arcadu, F. Benmansour, A. Maunz et al., “Deep learning predicts OCT measures of diabetic macular thickening from color fundus photographs,” Investigative Opthalmology & Visual Science, vol. 60, no. 4, pp. 852–857, 2019. View at: Publisher Site | Google Scholar
  47. R. Kanjee, R. I. Dookeran, M. K. Mathen, F. A. Stockl, and R. Leicht, “Six-year prevalence and incidence of diabetic retinopathy and cost-effectiveness of tele-ophthalmology in Manitoba,” Canadian Journal of Ophthalmology, vol. 51, no. 6, pp. 467–470, 2016. View at: Publisher Site | Google Scholar
  48. N. Lois, J. Cook, S. Aldington et al., EMERALD Study Group, “Effectiveness of multimodal imaging for the evaluation of retinal oedema and new vessels in diabetic retinopathy (EMERALD),” BMJ Open, vol. 9, Article ID e027795, 2019. View at: Publisher Site | Google Scholar

Copyright © 2021 Toshihiko Nagasawa et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
