Abstract
Optical coherence tomography is a high resolution, rapid, and noninvasive diagnostic tool for angle closure glaucoma. In this paper, we present a new strategy for the classification of angle closure glaucoma using morphological shape analysis of the iridocorneal angle. The angle structure configuration is quantified by the following six features: (1) mean of the continuous measurement of the angle opening distance; (2) area of the trapezoidal profile of the iridocorneal angle centered at Schwalbe's line; (3) mean of the iris curvature from the extracted iris image; (4) a complex shape descriptor, the fractal dimension, to quantify the complexity, or changes, of the iridocorneal angle; (5) the ellipticity moment shape descriptor; and (6) the triangularity moment shape descriptor. Then, the fuzzy k-nearest neighbor (fkNN) classifier is utilized for the classification of angle closure glaucoma. Two hundred and sixty-four swept source optical coherence tomography (SSOCT) images from 148 patients were analyzed in this study. The experimental results show that fkNN achieves the best classification accuracy and AUC with the combination of the fractal dimension and the biometric parameters. This indicates that the proposed approach has promising potential to become a computer aided diagnostic tool for angle closure glaucoma (ACG).
1. Introduction
The detection of angle closure glaucoma is important for preventing irreversible blindness. Since vision loss from glaucoma cannot be recovered, improved screening and detection methods for angle closure glaucoma are essential to preserve vision and maintain a good quality of life. Although glaucoma commonly progresses to blindness over many years, acute angle closure can result in permanent blindness in a matter of hours. Studies suggest that glaucomatous optic neuropathy can be prevented when effective prophylactic treatment such as laser peripheral iridotomy is performed at an early and appropriate time for eyes with anatomically narrow angles [1]. Angle closure glaucoma is characterized by obstruction of aqueous fluid drainage through the trabecular meshwork from the eye's anterior chamber. The width of the angle is one factor affecting the drainage of aqueous humor. A wide, unobstructed iridocorneal angle allows sufficient drainage of aqueous humor, whereas a narrow angle may impede the drainage system and leave the patient susceptible to angle closure glaucoma.
Imaging the angle between the iris and the cornea is key to diagnosing open angle and closed angle glaucoma. Early detection of ACG using imaging technology has recently gained much clinical interest. Gonioscopy [2] is considered the gold standard technique to examine the iridocorneal angle, but it is subjective and operator dependent. OCT is a high resolution, rapid, and noninvasive screening tool for angle closure glaucoma, and its technology has evolved rapidly from time-domain to spectral-domain systems. Swept source OCT (Casia SS1000) [3] is a newly developed imaging technology which provides a detailed examination of the structures of the anterior chamber (AC). It is a Fourier-domain system designed specifically for imaging the anterior segment. With a substantial improvement in scan speed (30,000 A-scans per second), the anterior chamber angle can be imaged in 128 cross-sections (each with 512 A-scans) over 360 degrees around the anterior segment in 1.2 seconds. High-definition (HD) SSOCT imaging has the potential to become an important tool for the assessment of the anterior chamber angle and the detection of angle closure [4, 5].
Anterior segment SSOCT imaging has significantly altered the diagnosis and evaluation of ACG. The information gained with new imaging modalities provides clinicians with both qualitative and quantitative information about the anatomical structure of the anterior chamber. Hu et al. [6] compared gonioscopy with Visante and Cirrus optical coherence tomography (OCT) for identifying angle structures and the presence of angle closure in patients with glaucoma. Several techniques to assess the anterior chamber are based on manual or automatic detection of landmarks such as the scleral spur (SS) [7] and Schwalbe's line (SL) [8, 9]. However, the existing methods use a single distance or area, for example, AOD500, TISA500, and AOD_{sl}, to measure the iridocorneal angle, without considering the whole angle profile. Cheung et al. [10] showed that an irregular iris surface makes AOD_{sl} very inaccurate. Our previous study [11] introduced two new parameters, mAOD and AT_{sl}, based on the continuous AOD, to assess the anterior chamber angle. These two parameters aim to overcome the limitations of a single AOD measurement when the iris surface is irregular and the angle is more occludable [10].
In addition to these two parameters, the iris curvature must also be considered. The iris curvature in anterior segment OCT (ASOCT) is usually calculated by drawing a line from the most peripheral to the most central points of the iris pigment epithelium [12]; a perpendicular line is then extended from this line to the iris pigment epithelium at the point of greatest convexity. However, the iris pigment epithelium cannot be exactly detected, only the stroma. The segmentation of the iris lower boundary is challenging due to the low contrast in this region. Moreover, the dynamic nature of the angle, which is involved in different forms of angle closure and open angle glaucoma, has not been considered in previous studies. Hence, we propose to measure the iris curvature directly from the extracted iris image.
Furthermore, we assume that the iridocorneal angle is an irregular shape whose complexity, or changes, can be measured quantitatively and qualitatively with shape descriptors. Shape analysis gives more freedom in computation and is less sensitive to the accurate detection of the landmarks, the scleral spur (SS) and Schwalbe's line (SL). Using this approach, shape analysis can be applied both quantitatively and objectively to characterize the angle shape of the anterior chamber in SSOCT images. An important property of shapes is their complexity. The description of complex and erratic shapes in terms of self-similarity was introduced by Mandelbrot [13]. The concept of fractal dimension (FD) is useful in the measurement, analysis, and classification of shape and texture. The fractal dimension, therefore, might serve as a sensitive descriptor of the iridocorneal angle shape. Fractals provide a new framework for characterizing irregular and complex, yet self-similar, structures in nature.
Although FD has been used extensively to characterize self-affinity in various kinds of biomedical research [13, 14], little attention has been paid to automated glaucoma subtype classification using fractal and multifractal analysis of the retinal nerve fiber layer (RNFL) and optic disc [15]. Another approach to shape analysis is founded on convexity [16]. Moment invariants such as triangularity and ellipticity [17] have been successfully used in several applications, such as the classification of mammographic masses and lung field boundaries [18]. Both shape measures are invariant to translation, rotation, and scaling of the object, as well as to general affine transformations [19]. Therefore, this paper proposes a new strategy to analyze the iridocorneal angle by morphological shape analysis as well as biometric angle parameters.
In this paper, we present an automatic angle closure glaucoma detection system based on machine learning and image analysis for the estimation of quantitative parameters to classify SSOCT images into two classes: open angle and angle closure glaucoma. We propose new strategies of feature extraction methods: (1) based on measurement of biometric parameters and (2) based on the shape analysis of iridocorneal angle to capture much more information.
The rest of the paper is organized as follows. Section 2 gives an overview of the algorithm: preprocessing, feature extraction, and the classification method for angle closure glaucoma. The experiments and results are presented in Section 3. Finally, Section 4 concludes the paper and outlines future work.
2. Methods
The architecture of the overall proposed system is shown in Figure 1. It mainly consists of four steps: preprocessing, segmentation of the anterior chamber, anterior chamber angle analysis for feature extraction, and classification of angle closure and open angle glaucoma. This section starts by defining the notation used throughout the paper; Figure 2 shows the region of interest for iridocorneal angle analysis.
Firstly, the original image is preprocessed to remove the vertical saturation artifacts as in [11]. The preprocessed image is then segmented to detect the cornea, the iris, and the anterior chamber. The lower boundary of the cornea and the inner boundary of the iris are detected using the same approach as in [9], and the iris image is extracted. Six features are then extracted to quantify the anterior chamber: the anterior chamber assessment parameters mAOD, AT_{sl}, and the mean curvature of the iris, and the shape descriptors FD, ellipticity, and triangularity. Lastly, the classification step is performed. Short descriptions of each step are presented in the following subsections; extended descriptions can be found in the appendices.
2.1. Notation
The definitions of the variables used in this paper are listed here:
(1) the original gray scale image;
(2) the preprocessed image;
(3) the segmented corneal endothelium, the anterior surface of the iris, and the anterior chamber mask;
(4) the extracted iris image;
(5) the automatically detected Schwalbe's line: SL;
(6) the angle opening distance: AOD;
(7) the mean of the continuous measurement of the angle opening distance: mAOD;
(8) the area of the trapezoidal profile of the iridocorneal angle centered at Schwalbe's line: AT_{sl};
(9) the mean of the iris curvature from the extracted iris image;
(10) the extracted iridocorneal angle image;
(11) the complex shape descriptor used to quantify the complexity or changes of the iridocorneal angle: fractal dimension (FD);
(12) the ellipticity moment shape descriptor;
(13) the triangularity moment shape descriptor.
2.2. Preprocessing of SSOCT Images
It is observed that some SSOCT images contain vertical saturation artifacts, which hinder the accurate interpretation of the image if thresholding and component labeling are used for the segmentation. Hence, artifact reduction and removal are performed prior to the segmentation step. As illustrated in Figure 3(a), the vertical saturation artifacts are marked by sudden increases in intensity compared to the surrounding area and are caused by the saturation of intensity due to strong reflections at certain locations. Therefore, we detect such artifacts by searching for abrupt changes in the average intensity of each column, as shown in Figure 3(b).
2.3. Segmentation of Anterior Chamber and Iris
After removing the vertical saturation artifacts, segmentation is performed on the SSOCT images. The method consists of segmenting the anterior chamber, the cornea, and the iris and extracting their edges. The anterior chamber region, the lower cornea boundary, and the upper iris boundary are extracted by an image segmentation method. The details of the segmentation can be found in the appendices.
2.4. Feature Extraction
This section describes the analysis of the ACA for the extraction of six features by biometric parameter measurement and shape analysis.
2.4.1. Measurement of Biometric Parameters
The biometric parameters, previously proposed in [11], are quantified from the iridocorneal angle based on the continuous measurement of AOD as follows.
(i) mAOD: the average of continuous serial AOD measurements taken every 25 µm away from the SL, up to 500 µm in both directions (anterior and posterior to Schwalbe's line), as shown in Figure 4(a).
(ii) AT_{sl}: the trapezoidal area of the iridocorneal angle bounded by the angle recess, the AOD_psl line, the corneal endothelium, and the anterior surface of the iris, as shown in Figure 4(a).
(iii) the mean iris curvature, measured from the whole iris image as shown in Figure 4(b).
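As a sketch of how the two profile-based parameters above could be computed, the snippet below takes a continuous AOD profile (sampled every 25 µm around the SL, as described) and returns its mean (mAOD) and its trapezoidal area (AT_{sl}). The function name and the toy profile are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def biometric_params(aod_profile_mm, step_mm=0.025):
    """mAOD and AT_sl from a continuous AOD profile.

    aod_profile_mm: AOD values sampled every step_mm (25 um = 0.025 mm)
    along the corneal endothelium, centered on Schwalbe's line.
    Returns (mAOD in mm, AT_sl in mm^2).
    """
    aod = np.asarray(aod_profile_mm, dtype=float)
    m_aod = float(aod.mean())                       # mean angle opening distance
    # trapezoidal rule for the area under the AOD profile
    at_sl = float(step_mm * (aod.sum() - 0.5 * (aod[0] + aod[-1])))
    return m_aod, at_sl

# toy profile: 40 samples of a linearly opening angle
profile = np.linspace(0.1, 0.5, 40)
m_aod, at_sl = biometric_params(profile)
```

The trapezoidal area is exactly the limit of summing thin trapezoids between consecutive AOD samples, which matches the trapezoidal-profile definition of AT_{sl} given above.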
2.4.2. Anterior Chamber Shape Analysis
After calculating the biometric parameters, we performed anterior chamber shape analysis on the selected region of the iridocorneal angle image, as shown in Figure 5. We utilized the fractal complex shape descriptor on the extracted iridocorneal angle image. Moment shape descriptors, namely the triangularity and the ellipticity, are used to compare against the performance of the fractal dimension analysis. The angle structure configuration is then quantified by the following three features:
(1) the complex shape descriptor used to quantify the complexity or changes of the iridocorneal angle: fractal dimension (FD);
(2) the ellipticity moment shape descriptor;
(3) the triangularity moment shape descriptor.
2.5. Classification
The ability of each feature extraction method to separate open angle and angle closure glaucoma is quantified by the fuzzy k-nearest neighbour classifier [20]. The basis of the fuzzy kNN algorithm is to assign membership as a function of the vector's distance from its k nearest neighbors and those neighbors' memberships in the possible classes.
3. Experiments and Results
The Singaporean Chinese population study by the Singapore National Eye Centre recruited 148 subjects (90 females and 45 males). All subjects underwent a standard examination of dark room gonioscopy and anterior segment imaging by SSOCT on the same day. The Casia SS1000 OCT (Tomey, Nagoya, Japan) was used as the imaging modality to visualize the anterior segment of the eye. We used the 2D angle high-definition (HD) mode of SSOCT imaging with an (8 mm, 8 mm) scan dimension. In the HD scan mode, the identification of both the SL and SS was possible in over 98% of SSOCT images [21]. The ACA was graded using the modified Shaffer grading system in each SSOCT image. The clinician examined four different quadrants of the eye, namely, the inferior, superior, nasal, and temporal (I, S, N, and T) scans. A subset of 29 subjects (23.4%) was imaged bilaterally to assess the differences between eyes.
We selected the images using two criteria: (1) the SL could be identified automatically and (2) only one image from the four scans of each eye was used, to reduce bias in the classifier. In total, 264 SSOCT HD images in which the SL could be seen were selected from the dataset for further analysis: 132 nasal, 70 temporal, 29 superior, and 33 inferior scan images. Using gonioscopic Shaffer grading as the gold standard, 135 images with closed angle and 129 images with open angle were evaluated for the analysis. Vertical saturation artifacts were present in 77 images (46%) of the 264 images in the dataset, in various regions of the image, so it is necessary to perform artifact removal on the affected images prior to image segmentation. Figure 6 shows the segmentation results with and without artifact removal in an SSOCT image. From the results shown in Figure 6, we found that the preprocessing step improved the segmentation of the anterior chamber region.
Figure 7(a) shows the segmentation results for the anterior chamber, the upper cornea, the lower cornea, and the inner boundary of the iris. The extracted iris image, from which the iris curvature is computed, is shown in Figure 7(b). Figure 7(c) shows an example of the resulting angle profile, consisting of 40 continuous AOD measurements centered on the SL, from which mAOD and AT_{sl} are computed. Then, we find the region of interest for the fractal dimension and moment shape analysis, as shown in Figure 7(d). The triangularity and ellipticity moment shape analysis yields values of 0.899 and 0.165, respectively, as shown in Figure 7(e). Figure 7(f) shows the fractal image of the region of interest of Figure 7(d); its estimated average FD is 1.862.
The comparison of the iridocorneal angle features between open angle and angle closure glaucoma SSOCT images, using gonioscopic grading as a reference, is given in Table 1. The fractal dimensions of the open and closed angle images are 1.87 and 1.84, respectively, as shown in Table 1. The fractal dimension results depend only slightly on the specific open angle and angle closure images. All features of the closed angle SSOCT images are smaller than those of the open angle SSOCT images. The coefficients of variation (COVs) of FD are smaller in both open and closed angle images, whereas the COVs of the iris curvature are the largest; the iris curvature results are therefore considered high variance and do not necessarily correspond to closed and open angle images in this study. Figure 8 shows the scatter plot of the FD, mAOD, and AT_{sl} features for both open angle and closed angle SSOCT images. After the feature vectors were computed, training, cross-validation, and testing sets were formed by 1584 vectors (264 images × 6 features): 3 biometric features (mAOD, AT_{sl}, and the mean iris curvature) and 3 shape descriptors (FD, ellipticity, and triangularity).
Then, we perform feature selection to improve classification accuracy, or to decrease the size of the feature set without significantly decreasing the accuracy of the classifier built using only the selected features [22]. Reducing the number of irrelevant or redundant features drastically reduces the running time of a learning algorithm and yields a more general concept. This helps in getting a better insight into the underlying concept of a real-world classification problem [23]. Feature selection methods try to pick a subset of features that are relevant to the target concept, and these features are used as the input to the classifiers.
The association between the gonioscopic grading and the measured iridocorneal angle features was evaluated for feature selection using the Spearman correlation coefficient. There is a high correlation between FD, mAOD, and AT_{sl} and the gonioscopic grading, as shown in Table 2. We also explored the effect of combining different kinds of features on classifier performance, so we grouped those features for classification and compared the consistency with the individual features. In the system, the features were first normalized to zero mean and unit variance to improve the classification process. Then, classification was performed on the normalized features by the fuzzy k-nearest neighbor classifier (fkNN). The performance of the fkNN was evaluated by comparison with widely used machine learning algorithms, namely, linear discriminant analysis (LDA), k-nearest neighbour (kNN), and support vector machines (SVM), to verify the effectiveness of the proposed model.
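The two preparation steps described above, the Spearman rank correlation between each feature and the gonioscopic grade, and the zero-mean unit-variance normalization, could be sketched as follows. The function names are illustrative and no tie correction is applied in the rank computation:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation (no tie correction) between a
    feature vector and the gonioscopic grades."""
    def ranks(v):
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(1, len(v) + 1)   # rank 1 = smallest value
        return r
    rx, ry = ranks(np.asarray(x)), ranks(np.asarray(y))
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx**2).sum() * (ry**2).sum()))

def zscore(X):
    """Normalize each feature column to zero mean and unit variance."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / X.std(axis=0)
```

For monotonically related data the Spearman coefficient is exactly 1 (or -1 for a reversed ordering), which is the behaviour the feature selection step relies on.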
The classification was performed with 5-fold cross-validation (5-fold CV) on various combinations of features. To verify the effectiveness of the proposed fkNN classifier, we first examined the relationship between the classification performance and the fuzzy strength parameter, which was varied from 1 to 2 with a step size of 0.1. The accuracy ranged between 93% and 99% and the AUC fluctuated between 87% and 99%, depending on the feature group, as shown in Figures 9(a) and 9(b). This reveals that the fuzzy strength parameter has a large impact on the performance of the fkNN classifier. The best classification performance was achieved with a parameter value of 1.2.
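A minimal sketch of the 5-fold split used above: shuffle the 264 sample indices and partition them into five disjoint test folds, training on the remaining four folds each time. The function name and the fixed seed are assumptions for illustration:

```python
import numpy as np

def five_fold_indices(n_samples, n_folds=5, seed=0):
    """Shuffle sample indices and split them into n_folds disjoint test folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    return np.array_split(idx, n_folds)

folds = five_fold_indices(264)
# for each fold: test on `fold`, train on the concatenation of the others
for i, fold in enumerate(folds):
    train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
```

A stratified split (preserving the open/closed class ratio per fold) would be a natural refinement for this roughly balanced dataset.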
The average 5-fold CV accuracy and the corresponding standard deviation of the fkNN are shown in Table 3, together with the corresponding AUC values. The biometric features and the fractal feature perform with similar accuracy. The classification using the FD, mAOD, and AT_{sl} feature group yields the best accuracy and AUC. These findings suggest that the proposed method is highly promising for providing an accurate diagnostic tool for ACG and would make a greater clinical impact if the study were repeated on a larger image database.
We also evaluated the performance of the fuzzy kNN using only three features (FD, mAOD, and AT_{sl}) against other classification methods (LDA, kNN, and SVM). All classification algorithms provided a correct classification rate higher than 81%. SVM slightly outperformed LDA in accuracy, sensitivity, specificity, and AUC. kNN was found to outperform both LDA and SVM in accuracy and AUC. Only minimal differences can be appreciated among the LDA, kNN, and SVM classifiers. The best classification performance was provided by the fuzzy kNN classifier with fuzzy membership of the samples (18%), better than the other classifiers (LDA, kNN, and SVM), as shown in Table 4. This is an indication of how important the fuzziness of the membership function is. Additionally, fkNN has been shown to be an efficient and robust classification method for diagnosing angle closure glaucoma in SSOCT images.
The proposed fractal shape descriptor also showed better accuracy than the moment shape descriptor method. The moment shape descriptor method is reported in the literature as an efficient technique for obtaining shape descriptors. However, the results presented in this work suggest that fractal analysis is a worthy option for providing shape descriptors for classification tasks, as it is invariant to rotation, translation, and scale. Moreover, our fractal dimension analysis is performed for each pixel on gray level images, which captures more detailed information about the angle structure. In addition, fractal shape analysis gives more freedom in computation and is less sensitive to the accurate detection of the landmarks, the scleral spur (SS) and Schwalbe's line (SL).
Moreover, our classification results are comparable to those of Xu et al. [24], who reported AUC and balanced accuracy values at an 85% specificity using histogram equalized pixel (HEP) values and SVM classification on OCT images. We also observed a significant advantage in classification performance from using the fuzzy kNN algorithm in glaucoma diagnosis.
In summary, the proposed framework has been shown to be a useful tool in screening for ACG. While our system can provide useful classification and support to the medical experts through identification of features, human intervention to exploit the extracted knowledge is strongly recommended. We believe that our findings in this study can serve as a basic grading system for angle closure glaucoma diagnosis in the future.
4. Conclusions
We proposed a novel automated angle classification system using SSOCT images. We evaluated several techniques for extracting useful information from SSOCT images, including traditional biometric parameter measurement and complex shape description using fractal dimension analysis. The experimental results demonstrated that the proposed technique, which combines biometric parameters, fractal dimension analysis, and classification by the fuzzy kNN method, achieved high accuracy in the classification of open and closed angle glaucoma images. The performance of the fully automatic system presented here is comparable to that of medical experts in detecting glaucomatous eyes and could augment clinicians' diagnosis of angle closure glaucoma.
Appendices
These appendices present the mathematical formulas and explanations of the methodology of SSOCT image analysis for angle closure detection. Appendices A, B, C, and D cover SSOCT image preprocessing, segmentation of the anterior chamber, feature extraction from SSOCT images, and classification for open and closed angle detection, which are the main parts of the analysis pipeline for ACG detection.
A. Preprocessing of SSOCT Images
We detected the saturation artifacts by searching for abrupt changes in the average intensity of each column of the original image I. The average intensity of column n is defined by

μ(n) = (1/M) Σ_{m=1}^{M} I(m, n),

which is compared with the average intensity of the whole image, given by

μ_img = (1/N) Σ_{n=1}^{N} μ(n).

Therefore, we can define the artifact region by

A = {n : μ(n) − μ_img > T},

where T is the threshold value and is set to 15. Then, the artifacts are removed by subtracting the excess column intensity from each pixel of the affected columns:

I_p(m, n) = I(m, n) − (μ(n) − μ_img) for n ∈ A, and I_p(m, n) = I(m, n) otherwise.
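The column-mean comparison described above can be sketched as follows. The function name is illustrative, and the final removal step (subtracting the excess column mean from flagged columns) is an assumption about how the damping is done, not necessarily the paper's exact formula:

```python
import numpy as np

def remove_saturation_artifacts(img, T=15):
    """Flag columns whose mean intensity exceeds the image mean by more
    than T, then damp them by subtracting the excess (assumed scheme)."""
    img = img.astype(float)
    col_mean = img.mean(axis=0)            # average intensity of each column
    excess = col_mean - col_mean.mean()    # deviation from the global mean
    artifact = excess > T                  # columns flagged as saturated
    out = img.copy()
    out[:, artifact] -= excess[artifact]   # pull flagged columns back down
    return out, artifact
```

On a synthetic image with one saturated column, only that column is flagged and its mean is pulled back to the global level, leaving the remaining columns untouched.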
B. Segmentation of Anterior Chamber and Iris
The anterior chamber region, the lower cornea boundary, and the upper iris boundary are extracted by an image segmentation method. First, we convert the artifact-removed SSOCT image I_p to a binary image I_b defined by

I_b(m, n) = 1 if I_p(m, n) > t, and I_b(m, n) = 0 otherwise,

where the threshold t can be calculated by Otsu's method [25], in which the intraclass variance is minimized and the interclass variance is maximized. The binary image is composed of two sets of pixels, the foreground object (I_b = 1) and the background object (I_b = 0). The cornea and the iris can be separated by the connected component labeling [26] segmentation method. The basic idea of this method is to scan the image, group its pixels into components based on connectivity, and assign each component a unique label. The segmentation and edge detection algorithm for HD-OCT images depends on whether the cornea is connected to or disjoint from the iris, as in Tian et al. [9].
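A minimal numpy sketch of Otsu's threshold selection (maximizing the between-class variance over histogram bins); in practice a library routine such as scikit-image's threshold function would be used, and the function name here is an assumption:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's method: choose the threshold maximizing between-class variance."""
    hist, edges = np.histogram(np.asarray(img).ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()         # bin probabilities
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                           # class-0 (background) weight
    mu = np.cumsum(p * centers)                 # cumulative mean
    mu_t = mu[-1]                               # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0          # guard w0 = 0 or 1
    return centers[np.argmax(sigma_b)]
```

On a bimodal image the returned threshold falls between the two modes, so thresholding at it reproduces the two pixel populations.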
The automatic segmentation algorithm of the anterior chamber for connected SSOCT images is shown in Algorithm 1. The segmentation of disjoint SSOCT images was performed as in Tian et al. [9]. For connected SSOCT images, as shown in Figure 10(a), the background component between the cornea and the iris is identified as the anterior chamber from the binary image by excluding the background components higher than the iris tip point along the vertical axis, as shown in Figure 10(b). The iris tip is the point with the maximum value on the lower boundary of the foreground object, as seen in Figure 10(c). The segmented anterior chamber is illustrated in Figure 10(c). Then, the edges of the upper and lower boundaries of the anterior chamber are extracted, as shown in Figure 10(d).

Then, the location of Schwalbe's line, which is the point at which the distance between the cornea boundary and the regression line is maximal, is automatically detected as in [9]. The iris image is also extracted from the preprocessed image by cropping it from the start point to the end point of the inner iris in the horizontal direction, and from 10 pixels before the start point of the inner iris to the end of the SSOCT image in the vertical direction, in order to calculate the iris curvature as shown in Figure 10(e).
C. Feature Extraction of SSOCT Images
This section describes the analysis of the ACA for the extraction of six features by biometric parameter measurement and shape analysis.
C.1. Measurement of Biometric Parameters
The mean iris curvature measured from the whole iris image can be quantified as follows.
Let I_iris(m, n) be the iris image, where (m, n) are the pixel coordinates. The curvature κ of the iris image is defined in terms of derivatives as

κ = (I_xx I_y^2 − 2 I_x I_y I_xy + I_yy I_x^2) / (I_x^2 + I_y^2)^{3/2},

where I_x, I_y, I_xx, I_xy, and I_yy are the first- and second-order derivatives in the x and y directions of the iris image. Then, the mean curvature of the iris image is calculated by averaging the curvature over all iris pixels:

κ_mean = (1/|Ω|) Σ_{(m, n) ∈ Ω} κ(m, n),

where Ω denotes the set of iris pixels.
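The derivative-based curvature above can be sketched with finite differences; the function name is illustrative and, for simplicity, the average is taken over all pixels rather than an iris mask:

```python
import numpy as np

def mean_iris_curvature(iris):
    """Level-set curvature of the image intensity surface, averaged
    over all pixels (sketch of the mean-curvature feature)."""
    iris = np.asarray(iris, dtype=float)
    Iy, Ix = np.gradient(iris)          # first derivatives (rows = y, cols = x)
    Iyy, Iyx = np.gradient(Iy)          # second derivatives
    Ixy, Ixx = np.gradient(Ix)
    num = Ixx * Iy**2 - 2 * Ix * Iy * Ixy + Iyy * Ix**2
    den = (Ix**2 + Iy**2) ** 1.5 + 1e-12   # avoid division by zero
    return float((num / den).mean())
```

Sanity check: both a constant image and a planar intensity ramp have zero curvature everywhere, so the mean should vanish in both cases.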
C.2. Anterior Chamber Shape Analysis
The following subsections briefly explain the shape descriptors.
C.2.1. Fractal Complex Shape Descriptor (FD)
The fractal dimension for 2D gray level images, quantified by the differential box counting method (DBCM), was proposed by Sankar and Thomas [27]. The following section reviews the DBC method of FD estimation.
For each pixel in the image, we compute the fractal dimension of a small window surrounding that pixel and assign the resulting value to the pixel. DBC presumes that the image belongs to a 3D space (x, y, g), where (x, y) denotes the two-dimensional position and g denotes the gray level of the image. The image of size M × M is scaled down into grids of size s × s, giving the scale ratio r = s/M. If the total number of gray levels is G, then the box height h satisfies

G / h = M / s,

so each grid carries a column of boxes of size s × s × h, and the range of the gray level within one grid determines how many boxes are needed to cover it. Supposing that the maximum and the minimum gray values of the image in the (i, j)th grid fall in the lth and kth box, respectively, the number of boxes needed to cover that grid is

n_r(i, j) = l − k + 1.

Taking contributions from all grids, we have

N_r = Σ_{i, j} n_r(i, j).

The value of N_r is determined for different values of the box size s. After selecting different values of r and finding the corresponding N_r, the fractal dimension FD can be obtained as the slope of the least-squares linear fit of log(N_r) against log(1/r):

FD = lim_{r → 0} log(N_r) / log(1/r).
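A sketch of a global DBC estimate over one (square) window; applying it to a small window around each pixel, as the text describes, amounts to calling this function per window. The function name, box sizes, and the 256-gray-level assumption are illustrative:

```python
import numpy as np

def dbc_fractal_dimension(img, sizes=(2, 4, 8, 16)):
    """Differential box counting: slope of log N_r against log(1/r)."""
    img = np.asarray(img, dtype=float)
    M = img.shape[0]                          # assumes a square M x M window
    G = 256                                   # number of gray levels
    log_inv_r, log_N = [], []
    for s in sizes:
        h = s * G / M                         # box height so that G/h = M/s
        N_r = 0
        for i in range(0, M - M % s, s):
            for j in range(0, M - M % s, s):
                block = img[i:i + s, j:j + s]
                l = int(block.max() // h)     # box index of the maximum
                k = int(block.min() // h)     # box index of the minimum
                N_r += l - k + 1              # boxes covering this grid
        log_inv_r.append(np.log(M / s))       # 1/r = M/s
        log_N.append(np.log(N_r))
    slope, _ = np.polyfit(log_inv_r, log_N, 1)
    return float(slope)
```

For a perfectly flat (constant) gray image each grid needs exactly one box, so N_r = (M/s)^2 and the fitted slope is exactly 2, the lower bound for a 2D gray level surface.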
C.2.2. Moment Shape Descriptors ( and )
The triangularity (T) and ellipticity (E) moment shape descriptors [17] are computed as follows.
(1) The moments of the iridocorneal angle image I_a(x, y) are computed by

m_pq = Σ_x Σ_y x^p y^q I_a(x, y).

(2) The central moments are computed by

μ_pq = Σ_x Σ_y (x − x̄)^p (y − ȳ)^q I_a(x, y),

where x̄ = m_10 / m_00 and ȳ = m_01 / m_00.
(3) Compute the simplest affine moment invariant:

I_1 = (μ_20 μ_02 − μ_11^2) / μ_00^4.

(4) The triangularity T and the ellipticity E are computed by

T = 108 I_1 if I_1 ≤ 1/108, and T = 1 / (108 I_1) otherwise;
E = 16π^2 I_1 if I_1 ≤ 1/(16π^2), and E = 1 / (16π^2 I_1) otherwise.
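These moment descriptors can be sketched directly from the definitions; the function name is an assumption, and the sketch works on any gray level or binary region image:

```python
import numpy as np

def triangularity_ellipticity(img):
    """Moment-based triangularity and ellipticity of a region image,
    via the affine invariant I1 = (mu20*mu02 - mu11^2) / mu00^4."""
    img = np.asarray(img, dtype=float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xbar = (x * img).sum() / m00              # centroid
    ybar = (y * img).sum() / m00
    mu20 = ((x - xbar) ** 2 * img).sum()      # central second moments
    mu02 = ((y - ybar) ** 2 * img).sum()
    mu11 = ((x - xbar) * (y - ybar) * img).sum()
    I1 = (mu20 * mu02 - mu11 ** 2) / m00 ** 4
    T = 108 * I1 if I1 <= 1 / 108 else 1 / (108 * I1)
    c = 16 * np.pi ** 2
    E = c * I1 if I1 <= 1 / c else 1 / (c * I1)
    return T, E
```

As a check, a filled disk has I_1 = 1/(16π^2) in the continuum, so its ellipticity is 1 and its triangularity is 108/(16π^2) ≈ 0.68; a discretized disk lands close to these values.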
D. Classification
Let X = {x_1, ..., x_n} be the training set of sample vectors and let x be a new vector considered as the input to be classified. For a fixed value of k, the first step consists of identifying, among these sample vectors, the k nearest neighbours of the input x. Then, the membership vectors of the selected labeled samples are combined to find the membership vector of the input x, where the membership vector describes the degrees of membership to the possible classes. Let u_i(x) be the membership of the input x to the ith class and let u_ij be the membership of its jth neighbour to the same class (j = 1, ..., k); the assigned memberships are given as follows (with fuzzy strength m > 1):

u_i(x) = [Σ_{j=1}^{k} u_ij (1 / ||x − x_j||^{2/(m−1)})] / [Σ_{j=1}^{k} (1 / ||x − x_j||^{2/(m−1)})].   (D.1)

According to (D.1), the assigned memberships of x are influenced by the inverse of the distances from the nearest neighbors and their class memberships u_ij. The variable m determines how heavily the distance is weighted when calculating each neighbor's contribution to the membership value. If m = 2, the contribution of each neighboring point is weighted by the reciprocal of its distance from the point being classified. As m increases, the neighbors are more evenly weighted, and their relative distances from the point being classified have less effect. As m approaches 1, the closer neighbors are weighted far more heavily than those farther away. The distance ||x − x_j|| in (D.1) denotes the Euclidean distance.
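The membership assignment above can be sketched as follows, using crisp (0/1) training memberships for a toy two-class problem; the function name and toy data are illustrative assumptions:

```python
import numpy as np

def fknn_memberships(X_train, U_train, x, k=5, m=1.2):
    """Fuzzy kNN: memberships of x are the inverse-distance-weighted
    average of the memberships of its k nearest neighbors (m > 1)."""
    d = np.linalg.norm(X_train - x, axis=1)          # Euclidean distances
    nn = np.argsort(d)[:k]                           # k nearest neighbors
    w = 1.0 / (d[nn] ** (2.0 / (m - 1.0)) + 1e-12)   # distance weights
    return (U_train[nn] * w[:, None]).sum(axis=0) / w.sum()

# toy 2-class example with crisp training memberships
X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
U = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
u = fknn_memberships(X, U, np.array([0.2, 0.2]), k=3, m=2.0)
```

A query near the first cluster receives a membership vector dominated by class 0, and the memberships sum to 1 because they are a weighted average of rows that each sum to 1.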
The test performance of the classifiers is determined in terms of classification accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC):

Accuracy = (TP + TN) / (TP + TN + FP + FN),
Sensitivity = TP / (TP + FN),
Specificity = TN / (TN + FP),

where TP and TN denote the number of true positives and negatives, respectively, and FP and FN denote the number of false positives and negatives, respectively.
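These confusion-matrix metrics translate directly into code (the function name is illustrative):

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, and specificity from confusion counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return accuracy, sensitivity, specificity

# example: 90 true positives, 80 true negatives, 10 FP, 20 FN
acc, sens, spec = classification_metrics(90, 80, 10, 20)
```

With these counts, accuracy is 170/200 = 0.85, sensitivity 90/110, and specificity 80/90.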
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors would like to thank Professor Tin Aung, Dr. Mani Baskaran, and technicians at Singapore Eye Research Institute for supplying the dataset of SSOCT images. This work was supported by the Biomedical Research Council (BMRC) grant, A*star, Singapore.