DeepLab and Bias Field Correction Based Automatic Cone Photoreceptor Cell Identification with Adaptive Optics Scanning Laser Ophthalmoscope Images
The identification of cone photoreceptor cells is important for the early diagnosis of eye diseases. We propose an automatic deep-learning algorithm for cone photoreceptor cell identification on adaptive optics scanning laser ophthalmoscope images, based on DeepLab and bias field correction. Taking manual identification as the reference, the algorithm is highly effective, achieving precision, recall, and F1 score of 96.7%, 94.6%, and 95.7%, respectively. To illustrate its performance, we present identification results for images with different cone photoreceptor cell distributions. The experimental results show that the algorithm achieves accurate photoreceptor cell identification on images of human retinas, comparable to manual identification.
Vision is one of the most important human senses. Unfortunately, as a major cause of blindness, retinopathy has become increasingly common. Most retinopathy patients can avoid blindness with early diagnosis and treatment, which provide promising outcomes. Although optical imaging allows observation of the retina, higher-resolution imaging is required for the early diagnosis of retinopathy. However, ocular aberrations limit the resolution of optical imaging. To address this limitation, adaptive optics (AO), which was originally developed to remove aberrations caused by atmospheric instability, has been used to correct ocular aberrations in retinal imaging [2–4]. AO allows the resolution of in vivo retinal imaging to reach the cellular level [4–6]. In particular, AO scanning laser ophthalmoscopy (AO-SLO) integrates AO to image cone photoreceptor cells clearly. Thus, AO-SLO allows observation of pathological changes in the distribution of photoreceptor cells on the retina, outperforming other retinal imaging techniques in the diagnosis of diseases characterized by disorders in the distribution of cone photoreceptor cells [7–11].
To quantitatively characterize the distribution of cone photoreceptor cells, individual cells must be identified. Although manual identification of cone photoreceptor cells is reliable, it is time-consuming and subjective. Therefore, semiautomatic and automatic algorithms for cone photoreceptor cell identification have been devised [12–26]. These can be grouped into nonlearning-based algorithms [12–18], supervised-learning algorithms [19–23], and unsupervised-learning algorithms [24–26]. Among them, supervised deep-learning algorithms have achieved the highest accuracy, making them a promising research direction given their potentially high performance.
In 2014, Google introduced a supervised deep-learning semantic segmentation model called DeepLab. Owing to its remarkable advantages, DeepLab has become a popular topic in research and engineering [28–33], and one of its variants, DeepLab v3, has been widely used in medical image processing [35–41]. We propose an automatic cone photoreceptor cell identification algorithm based on DeepLab v3 for AO-SLO images. The proposed algorithm also uses bias field correction to further improve the identification accuracy. To confirm the effectiveness of the proposed algorithm, we computed several evaluation measures (i.e., precision, recall, and F1 score) with respect to manual identification, which is taken as the ground-truth reference. The performance of the proposed algorithm is further demonstrated by cone photoreceptor cell identification results for AO-SLO images with different cell distributions.
Figure 1 shows the outline of the proposed deep-learning cone photoreceptor cell identification algorithm with its main steps of (1) training, (2) testing, and (3) postprocessing. First, the training dataset, which includes AO-SLO images and their corresponding segmented images, is used to train DeepLab. Second, the bias-field-corrected images obtained from the test dataset after applying bias field correction are input to the trained DeepLab to generate segmented test images. Third, the bias-field-corrected images and segmented test images are processed by a threshold-based algorithm to obtain finely segmented images, from which individual cone photoreceptor cells are identified by calculating their centroids.
To achieve a fine segmentation of cone photoreceptor cells, we magnified the training AO-SLO images and their corresponding segmented images four times isotropically before training. In detail, the training AO-SLO images were interpolated using the antialiasing mode to obtain high-quality images, and the corresponding segmented images were interpolated using the nearest mode to preserve binarization. Both interpolation operations are available in the Python Imaging Library. Then, DeepLab v3 with a ResNet-101 backbone pretrained on the ImageNet dataset was trained using the magnified images. In the training images, the area of the cone photoreceptor cells is larger than that of the background. To compensate for this imbalance, we introduced a cross-entropy loss function that weights the cone photoreceptor cells (0.3) and background (0.7) separately. During training, we set the batch size and number of epochs to 2 and 100, respectively. The outline of the training process is shown in Figure 2.
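The preprocessing and weighted loss described above can be sketched as follows. This is a minimal illustration, not the authors' code: the function name `magnify_pair` is hypothetical, and it assumes Pillow for the two interpolation modes and PyTorch's `CrossEntropyLoss` for the class weighting (background 0.7, cone 0.3).

```python
import torch
import torch.nn as nn
from PIL import Image

def magnify_pair(image_path, mask_path, factor=4):
    """Upsample an AO-SLO image and its segmentation mask isotropically.

    The intensity image uses antialiased (LANCZOS) interpolation for
    quality; the mask uses nearest-neighbour interpolation so that its
    labels stay binary, as described in the text.
    """
    img = Image.open(image_path)
    mask = Image.open(mask_path)
    size = (img.width * factor, img.height * factor)
    img_up = img.resize(size, resample=Image.LANCZOS)    # antialiasing mode
    mask_up = mask.resize(size, resample=Image.NEAREST)  # nearest mode
    return img_up, mask_up

# Weighted cross-entropy: class 0 (background) weighted 0.7 and
# class 1 (cone photoreceptor cells) weighted 0.3, compensating for
# cones covering a larger area than the background.
criterion = nn.CrossEntropyLoss(weight=torch.tensor([0.7, 0.3]))
```

The 4x upsampling before training lets the network resolve boundaries between adjacent cells that would otherwise fall on a single pixel.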
Directly using the trained DeepLab v3 to segment four-times-magnified test AO-SLO images fails with high probability. A representative failure case is shown in Figure 3, where the segmentation follows the local intensity bias instead of the cone photoreceptor cells, leading to segmentation failure.
To solve this problem, we applied bias field correction to the AO-SLO images. First, a bias field image B is generated by applying a Gaussian filter G with a sigma of 22 pixels to the AO-SLO image I: B = G(I).
Second, the AO-SLO image is corrected by subtracting the bias field image: C = I − B.
Third, the four-times-magnified bias-field-corrected image is input to the trained DeepLab to obtain the segmentation results. The outline of the testing process is shown in Figure 4.
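The bias field correction step can be sketched as below. The function name is illustrative, and "extracting" the bias field is interpreted here as subtraction followed by rescaling to the 8-bit range; the paper's exact normalization is not stated in this excerpt. The sketch assumes SciPy's `gaussian_filter` with the sigma of 22 pixels given in the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bias_field_correct(image, sigma=22.0):
    """Remove the slowly varying intensity bias from an AO-SLO image.

    The bias field is estimated with a wide Gaussian filter
    (sigma = 22 px, per the text) and subtracted; the result is
    rescaled to [0, 255] for subsequent segmentation.
    """
    img = image.astype(np.float64)
    bias = gaussian_filter(img, sigma=sigma)  # low-frequency bias field
    corrected = img - bias                    # remove the intensity bias
    corrected -= corrected.min()              # rescale to [0, 255]
    if corrected.max() > 0:
        corrected *= 255.0 / corrected.max()
    return corrected.astype(np.uint8)
```

Because the Gaussian sigma (22 px) is much larger than a cone diameter, the filter output captures only the slowly varying illumination, leaving the cell-scale structure in the corrected image.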
Figure 5 shows that some cone photoreceptor cells are merged after DeepLab segmentation. To mitigate this problem, we applied thresholding to the bias-field-corrected images. The intensity values within the DeepLab segmentation mask were first extracted from the bias-field-corrected image, and their mean was used as the threshold to segment that image. Cone photoreceptor cells were then identified in two steps: the contours of the thresholded segmentation were extracted using the OpenCV function findContours, and the centroids of the areas enclosed by the contours were taken as the identified cone photoreceptor cells. A representative example of postprocessing is shown in Figure 6, where the merging of adjacent cells is mostly resolved and individual cone photoreceptor cells are accurately identified.
We evaluated the proposed algorithm on a publicly available dataset that contains 840 AO-SLO images and their corresponding cone photoreceptor cell segmentation results as ground truth. We used 800 AO-SLO images for the training dataset and the remaining 40 images for the test dataset. The automatic processing took 2.95 hours for training with a batch size of 2 over 100 epochs, 8.77 s for testing, and 0.76 s for postprocessing. These computation times were obtained on a computer running 64-bit Python and equipped with an Intel Core i7-10870H processor (2.20 GHz), 16.0 GB RAM, and an NVIDIA GeForce RTX 2060 graphics card.
To confirm the effectiveness of the proposed algorithm for cone photoreceptor cell identification, we evaluated its identification performance with three measures, namely, precision, recall, and F1 score, taking the manual identification results as reference. The overall precision, recall, and F1 score are listed in Table 1, where the values are compared with those of several algorithms [15, 18, 25, 26]. The proposed algorithm achieves accurate cone photoreceptor cell identification, outperforming the comparison algorithms [18, 25, 26], except for the graph-theory-based algorithm [15], which is often used as a reference standard for cone photoreceptor cell identification but requires heavy computation and a complex implementation.
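The three measures can be computed as sketched below. The matching rule (greedy one-to-one assignment of each detection to the nearest unused reference cell within a tolerance radius) and the tolerance value are assumptions for illustration; the paper's exact matching criterion is not given in this excerpt.

```python
import numpy as np

def precision_recall_f1(detected, reference, tol=4.0):
    """Score detected cone centres against manual (reference) centres.

    A detection matching an unused reference point within `tol` pixels
    counts as a true positive. Greedy one-to-one matching and the
    tolerance radius are illustrative assumptions.
    """
    ref = list(reference)
    tp = 0
    for d in detected:
        if not ref:
            break
        dists = [np.hypot(d[0] - r[0], d[1] - r[1]) for r in ref]
        i = int(np.argmin(dists))
        if dists[i] <= tol:
            tp += 1
            ref.pop(i)  # each reference cell can be matched only once
    fp = len(detected) - tp   # detections with no reference match
    fn = len(reference) - tp  # reference cells never matched
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```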
To illustrate the performance of the proposed algorithm, Figure 7 shows cone photoreceptor cell identification results for different cone photoreceptor cell distributions on AO-SLO images. The cone photoreceptor cells are accurately identified on the three AO-SLO images with different distributions.
In semantic segmentation, the relationship between the target objects and the background is usually complex. Cone photoreceptor cell identification is relatively simple: (1) only one type of object, the cone photoreceptor cell, must be segmented, and (2) cone photoreceptor cells do not contain rich texture details. Thus, an algorithm can segment the images according to area-based information. Because the area occupied by cone photoreceptor cells is much larger than the target area in general semantic segmentation, the DeepLab algorithm is trained with a bias if the cone photoreceptor cells and background are weighted equally. To prevent this bias, we designed a cross-entropy loss function with a smaller weight given to the cone photoreceptor cells.
In general, supervised deep-learning algorithms provide higher accuracy than nonlearning-based and unsupervised-learning algorithms. Therefore, automatic algorithms for the accurate identification of cone photoreceptor cells on AO-SLO images can be developed by applying and modifying deep-learning algorithms that have demonstrated high-performance image segmentation and identification but have not yet been used for cone photoreceptor cell identification. In this regard, we suggest modified versions of three well-known methods [43–45] as promising approaches for developing automatic and accurate cone photoreceptor cell identification algorithms for AO-SLO images.
We proposed an automatic deep-learning algorithm for the identification of cone photoreceptor cells on AO-SLO images. The algorithm implements DeepLab v3 and bias field correction as its core techniques. To confirm the effectiveness of the proposed algorithm, we computed its precision, recall, and F1 score with respect to manual identification, obtaining values of 96.7%, 94.6%, and 95.7%, respectively. Furthermore, to illustrate its performance, we presented cone photoreceptor cell identification results for different cone photoreceptor cell distributions on AO-SLO images.
The original dataset used in this paper is publicly available online (http://people.duke.edu/~sf59/Chiu_BOE_2013_dataset.htm). However, our source codes are not publicly available because they contain information that could compromise research participant privacy.
Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this paper.
This work was supported in part by the Natural Science Foundation of Jiangsu Province (BK20200214), National Key R&D Program of China (2017YFB0403701), Jiangsu Province Key R&D Program (BE2019682, BE2018667), National Natural Science Foundation of China (61605210, 61675226, 61378090), Youth Innovation Promotion Association of Chinese Academy of Sciences (2019320), Frontier Science Research Project of the Chinese Academy of Sciences (QYZDB-SSW-JSC03), and Strategic Priority Research Program of the Chinese Academy of Sciences (XDB02060000).
Y. Kitaguchi, K. Bessho, T. Yamaguchi, N. Nakazawa, T. Mihashi, and T. Fujikado, “In vivo measurements of cone photoreceptor spacing in myopic eyes from images obtained by an adaptive optics fundus camera,” Japanese Journal of Ophthalmology, vol. 51, no. 6, pp. 456–461, 2007.
J. Lammer, S. G. Prager, M. C. Cheney et al., “Cone photoreceptor irregularity on adaptive optics scanning laser ophthalmoscopy correlates with severity of diabetic retinopathy and macular edema,” Investigative Ophthalmology & Visual Science, vol. 57, no. 15, pp. 6624–6632, 2016.
S. Nakatake, Y. Murakami, J. Funatsu et al., “Early detection of cone photoreceptor cell loss in retinitis pigmentosa using adaptive optics scanning laser ophthalmoscopy,” Graefe's Archive for Clinical and Experimental Ophthalmology, vol. 257, no. 6, pp. 1169–1181, 2019.
A. Turpin, P. Morrow, B. Scotney, R. Anderson, and C. Wolsley, “Automated identification of photoreceptor cones using multi-scale modelling and normalized cross-correlation,” in Image Analysis and Processing – ICIAP 2011. Lecture Notes in Computer Science, vol. 6978, G. Maino and G. L. Foresti, Eds., pp. 494–503, Springer, Berlin, Heidelberg.
D. Cunefare, A. L. Huckenpahler, E. J. Patterson, A. Dubra, J. Carroll, and S. Farsiu, “RAC-CNN: multimodal deep learning based automatic detection and classification of rod and cone photoreceptors in adaptive optics scanning light ophthalmoscope images,” Biomedical Optics Express, vol. 10, no. 8, pp. 3815–3832, 2019.
D. Cunefare, L. Fang, R. F. Cooper, A. Dubra, J. Carroll, and S. Farsiu, “Open source software for automatic detection of cone photoreceptors in adaptive optics ophthalmoscopy using convolutional neural networks,” Scientific Reports, vol. 7, no. 1, pp. 1–11, 2017.
L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, “DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 4, pp. 834–848, 2018.
A. Subramanian and K. Srivatsan, Exploring Deep Learning Based Approaches for Endoscopic Artefact Detection and Segmentation, EndoCV@ISBI, 2020.
C.-H. Huang, W.-T. Xiao, L.-J. Chang, W.-T. Tsai, and W.-M. Liu, “Automatic tissue segmentation by deep learning: from colorectal polyps in colonoscopy to abdominal organs in CT exam,” in 2018 IEEE Visual Communications and Image Processing (VCIP), Taichung, Taiwan, December 2018.