Research Article  Open Access
Hossein Rabbani, Rahele Kafieh, Mahdi Kazemian Jahromi, Sahar Jorjandi, Alireza Mehri Dehnavi, Fedra Hajizadeh, Alireza Peyman, "Obtaining Thickness Maps of Corneal Layers Using the Optimal Algorithm for Intracorneal Layer Segmentation", International Journal of Biomedical Imaging, vol. 2016, Article ID 1420230, 11 pages, 2016. https://doi.org/10.1155/2016/1420230
Obtaining Thickness Maps of Corneal Layers Using the Optimal Algorithm for Intracorneal Layer Segmentation
Abstract
Optical Coherence Tomography (OCT) is one of the most informative methodologies in ophthalmology and provides cross-sectional images from the anterior and posterior segments of the eye. Corneal diseases can be diagnosed by these images, and corneal thickness maps can also assist in treatment and diagnosis. The need for automatic segmentation of cross-sectional images is inevitable since manual segmentation is time-consuming and imprecise. In this paper, segmentation methods such as Gaussian Mixture Model (GMM), Graph Cut, and Level Set are used for automatic segmentation of three clinically important corneal layer boundaries on OCT images. Using the segmentation of the boundaries in three-dimensional corneal data, we obtained thickness maps of the layers which are created by these borders. Mean and standard deviation of the thickness values for normal subjects in epithelial, stromal, and whole cornea are calculated in central, superior, inferior, nasal, and temporal zones (centered on the center of the pupil). To evaluate our approach, the automatic boundary results are compared with the boundaries segmented manually by two corneal specialists. The quantitative results show that the GMM method segments the desired boundaries with the best accuracy.
1. Introduction
Optical Coherence Tomography (OCT) is one of the most informative methodologies in ophthalmology today. It is noninvasive and contact-free and provides cross-sectional images from the anterior and posterior segments of the eye. Imaging of the anterior segment is needed in refractive surgery and contact lens implantation [1].
The cornea was first imaged by OCT in 1994 [2] using a wavelength similar to that used for the retina (830 nm). A longer wavelength of 1310 nm, with the advantages of better penetration through the sclera and real-time imaging at 8 frames per second, was proposed in 2001 [3]. Specific systems for visualization of the anterior eye (anterior segment OCT, AS-OCT) became commercially available in 2011 [4], and there are three main producers of this device: SL-OCT (Heidelberg Engineering), the Visante (Carl Zeiss Meditec, Inc.), and CASIA (Tomey, Tokyo, Japan). Furthermore, many devices based on Fourier-domain OCT (FD-OCT) obtain images from both anterior and posterior segments using the shorter wavelength of 830–870 nm. The main competitor to OCT in anterior segment imaging is ultrasound biomicroscopy (UBM). UBM can visualize some anatomical structures posterior to the iris; on the other hand, good image quality with this method is strongly dependent on the operator's skill [5].
The adult cornea is approximately 0.5 millimeter thick at the center, and its thickness gradually increases toward the periphery. The human cornea comprises five layers: the Epithelium, Bowman's membrane, the stroma, Descemet's membrane, and the Endothelium [5].
Some corneal diseases need to be diagnosed by precise evaluation of the subcorneal layers. A good example of the need for thickness mapping of subcorneal layers is a disease like keratoconus. In this disease, the thickness of the Epithelium alters to reduce corneal surface irregularity [6]. As a result, the presence of an irregular stroma cannot be diagnosed by observing the thickness map of the whole cornea alone. Therefore, analyzing epithelial and stromal thicknesses and shapes separately can improve the diagnosis [7, 8].
Several methods, such as confocal microscopy, ultrasound, and OCT, have already been used to measure corneal epithelial thickness. The average central epithelial thickness is used in many studies [9–12]. Very high-frequency ultrasound has been used to map corneal epithelial and stromal thickness [7]. The first two methods have drawbacks: confocal microscopy is invasive, and the ultrasound method requires immersion of the eye in a coupling fluid [7, 13–16].
Accurate segmentation of corneal boundaries is necessary for the production of correct thickness maps; an error of several micrometers can lead to a wrong diagnosis. The large volume of these data in clinical evaluation makes manual segmentation time-consuming and impractical [17–19].
Current methods for segmentation of the cornea can be summarized as follows.
Graglia et al. [20] proposed a contour-detection algorithm that finds Epithelium and Endothelium points and traces the contour of the cornea pixel by pixel from these two points with a weight criterion. Eichel et al. [21] proposed a semiautomated segmentation algorithm for extraction of five corneal boundaries using a global optimization method. Li et al. [22–24] proposed an automatic method for corneal segmentation using a combination of a fast active contour (FAC) and a second-order polynomial fitting algorithm. Eichel et al. [18, 25] proposed a semiautomatic method for corneal segmentation utilizing Enhanced Intelligent Scissors (EIS) and user interaction. Shen et al. [26] used a threshold-based technique which failed in segmentation of the posterior surface of the cornea. Williams et al. [27] investigated Level Set segmentation, obtaining good results but at low speed. LaRocca et al. [19] proposed an automatic algorithm to segment the boundaries of three corneal layers using graph theory and dynamic programming; it segments three clinically important corneal layer boundaries (Epithelium, Bowman, and Endothelium), but their results agreed well with manual observers only in the central region of the cornea, where the signal-to-noise ratio is highest. A 3D graph-based approach was also proposed by Robles et al. [28] to segment the three main corneal boundaries. In a more recent work by Williams et al. [29], a Graph-Cut-based segmentation technique was proposed to improve the speed and efficiency of the segmentation. None of these previous methods is perfect: some were tested on only a small number of corneal images, and others lacked precision at low contrast-to-noise ratios. Furthermore, none of the previous works produced a thickness map of the corneal sublayers.
In this paper, we segment the boundaries of corneal layers by utilizing Graph Cut (GC), Gaussian Mixture Model (GMM), and Level Set (LS) methods. We evaluate the performance of segmenting Epithelium, Bowman, and Endothelium boundaries in OCT images using these segmentation methods. Finally, using the extracted boundaries of 3D corneal data, the 3D thickness maps of each layer are obtained.
The OCT images captured by high-tech devices may have a high SNR (Figure 1(a)), but in many cases they have a low SNR (Figure 1(b)). Furthermore, some OCT images may be affected by different types of artifact, such as the central artifact (Figure 1(c)). The central artifact is a vertical saturation artifact that occurs around the center of the cornea due to back-reflections from the corneal apex, which saturate the spectrometer line camera [19].
2. Theory of Algorithms
In this section we explain the theory of GMM, GC, and LS algorithms.
2.1. GMM
As we explained in [30], a GMM can be used to model the corneal layers in OCT images. For a \(d\)-dimensional Gaussian random vector \(\mathbf{x}\) with mean vector \(\boldsymbol{\mu}\) and covariance matrix \(\Sigma\), the density is
\[
\mathcal{N}(\mathbf{x};\boldsymbol{\mu},\Sigma)=\frac{1}{(2\pi)^{d/2}|\Sigma|^{1/2}}\exp\Bigl(-\tfrac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^{T}\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu})\Bigr).
\]
A weighted mixture of \(K\) Gaussian distributions is then
\[
p(\mathbf{x})=\sum_{k=1}^{K} w_k\,\mathcal{N}(\mathbf{x};\boldsymbol{\mu}_k,\Sigma_k),
\]
where the \(w_k\) are the weights of the mixture components and
\[
\sum_{k=1}^{K} w_k = 1,\qquad w_k \ge 0.
\]
The parameters of the GMM are estimated with the iterative expectation-maximization (EM) algorithm as follows (for \(n\) training vectors \(\mathbf{x}_1,\dots,\mathbf{x}_n\)). In the E-step, the responsibility of component \(k\) for vector \(\mathbf{x}_i\) is
\[
\gamma_{ik}=\frac{w_k\,\mathcal{N}(\mathbf{x}_i;\boldsymbol{\mu}_k,\Sigma_k)}{\sum_{j=1}^{K} w_j\,\mathcal{N}(\mathbf{x}_i;\boldsymbol{\mu}_j,\Sigma_j)},
\tag{4}
\]
and in the M-step the parameters are re-estimated:
\[
w_k=\frac{1}{n}\sum_{i=1}^{n}\gamma_{ik},\qquad
\boldsymbol{\mu}_k=\frac{\sum_{i}\gamma_{ik}\,\mathbf{x}_i}{\sum_{i}\gamma_{ik}},\qquad
\Sigma_k=\frac{\sum_{i}\gamma_{ik}(\mathbf{x}_i-\boldsymbol{\mu}_k)(\mathbf{x}_i-\boldsymbol{\mu}_k)^{T}}{\sum_{i}\gamma_{ik}}.
\]
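As a concrete illustration, the EM updates above can be sketched in a few lines of numpy for a univariate two-component mixture (the synthetic intensities and the percentile-based initialization are illustrative, not the paper's data):

```python
import numpy as np

def gaussian_pdf(x, mu, var):
    # Univariate Gaussian density
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def fit_gmm_em(x, n_iter=100):
    # Initialize two components from data percentiles
    mu = np.percentile(x, [25, 75]).astype(float)
    var = np.full(2, x.var())
    w = np.full(2, 0.5)
    for _ in range(n_iter):
        # E-step: responsibilities gamma_ik, as in (4)
        dens = np.stack([w[k] * gaussian_pdf(x, mu[k], var[k]) for k in range(2)])
        gamma = dens / dens.sum(axis=0)
        # M-step: re-estimate weights, means, and variances
        nk = gamma.sum(axis=1)
        w = nk / len(x)
        mu = (gamma * x).sum(axis=1) / nk
        var = (gamma * (x - mu[:, None]) ** 2).sum(axis=1) / nk
    return w, mu, var

# Synthetic "background" and "cornea" intensity populations
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(30, 5, 2000), rng.normal(120, 15, 2000)])
w, mu, var = fit_gmm_em(x)
```

With two well-separated intensity populations, the recovered means land near the cluster centers, which is the behavior the segmentation relies on.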
2.2. GC
We use normalized cuts for segmentation, as explained by Shi and Malik [31]. This criterion measures both the total dissimilarity between the different groups and the total similarity within the groups. Suppose we have a graph \(G=(V,E)\) which is decomposed into two distinct parts \(A\) and \(B\) by removing the edges between them, with the property \(A\cup B=V\), \(A\cap B=\emptyset\). Based on the total weight of the edges that are removed, we can calculate the degree of dissimilarity between these two parts, called the cut:
\[
\operatorname{cut}(A,B)=\sum_{u\in A,\; v\in B} w(u,v),
\]
where \(w(u,v)\) is the edge weight between nodes \(u\) and \(v\). Here the graph is divided into two subsections in such a way that the total weight of the edges connecting the two parts is smaller than for any other division. Instead of looking at the value of the total edge weight connecting the two partitions, the cut cost is computed as a fraction of the total edge connections to all the nodes in the graph, which is called the normalized cut (Ncut):
\[
\operatorname{Ncut}(A,B)=\frac{\operatorname{cut}(A,B)}{\operatorname{assoc}(A,V)}+\frac{\operatorname{cut}(A,B)}{\operatorname{assoc}(B,V)},
\]
where \(\operatorname{assoc}(A,V)=\sum_{u\in A,\; t\in V} w(u,t)\) is the total connection from nodes in \(A\) to all nodes in the graph and \(\operatorname{assoc}(B,V)\) is similarly defined. The optimal cut is the cut which minimizes this criterion. In other words, minimizing the dissimilarity between the two subsections is equivalent to maximizing the similarity within each subsection.
Suppose there is an optimal decomposition of a graph with \(N\) vertices into the components \(A\) and \(B\) based on the optimal-cut criterion. Consider the following generalized eigenvalue problem:
\[
(D-W)\,\mathbf{y}=\lambda D\,\mathbf{y},
\]
where \(D\) is an \(N\times N\) diagonal matrix with \(d(i)=\sum_j w(i,j)\), the total connection from node \(i\) to all other nodes, on its diagonal, and \(W\) is an \(N\times N\) symmetric matrix containing the edge weights. We solve this equation for the eigenvectors with the smallest eigenvalues. The eigenvector with the second smallest eigenvalue is then used to bipartition the graph. The divided parts, if necessary, are divided again.
This algorithm has also been used in the processing of fundus images [32, 33].
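The bipartition step can be sketched as follows: build the weight matrix W and degree matrix D for a toy six-node graph (two tight clusters joined by one weak edge), solve the generalized eigenproblem (D − W)y = λDy, and threshold the second-smallest eigenvector. The graph and its weights are illustrative:

```python
import numpy as np
from scipy.linalg import eigh

# Two 3-node clusters with strong internal edges and one weak bridge
W = np.array([
    [0.00, 1.00, 1.00, 0.01, 0.00, 0.00],
    [1.00, 0.00, 1.00, 0.00, 0.00, 0.00],
    [1.00, 1.00, 0.00, 0.00, 0.00, 0.00],
    [0.01, 0.00, 0.00, 0.00, 1.00, 1.00],
    [0.00, 0.00, 0.00, 1.00, 0.00, 1.00],
    [0.00, 0.00, 0.00, 1.00, 1.00, 0.00],
])
D = np.diag(W.sum(axis=1))

# Generalized eigenvalue problem (D - W) y = lambda D y
vals, vecs = eigh(D - W, D)

# The eigenvector with the second smallest eigenvalue bipartitions the graph
fiedler = vecs[:, 1]
labels = (fiedler > 0).astype(int)
```

Thresholding the second eigenvector at zero separates the two clusters regardless of the eigenvector's arbitrary sign.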
2.3. LS
Li et al. [34] presented an LS method for segmentation in the presence of intensity inhomogeneities. Suppose that \(\Omega\) is the image domain and \(I:\Omega\to\mathbb{R}\) is a gray-level image. The image can be modeled as
\[
I = bJ + n,
\]
where \(J\) is the true image, \(b\) is the component that accounts for the intensity inhomogeneity, and \(n\) is additive noise. For local intensity clustering, consider a circular neighborhood with radius \(\rho\) centered at each point \(y\in\Omega\), defined by \(\mathcal{O}_y=\{x:|x-y|\le\rho\}\). A partition \(\{\Omega_i\}_{i=1}^{N}\) of \(\Omega\) induces a partition of the neighborhood \(\mathcal{O}_y\). Therefore, the intensities in the sets \(I(\mathcal{O}_y\cap\Omega_i)\) form \(N\) clusters, consistent with the image model. Now, we define a clustering criterion \(\mathcal{E}_y\) for classifying the intensities in \(\mathcal{O}_y\):
\[
\mathcal{E}_y=\sum_{i=1}^{N}\int_{\mathcal{O}_y\cap\Omega_i} K(y-x)\,\bigl|I(x)-b(y)c_i\bigr|^{2}\,dx,
\]
where \(K\) is a kernel function, defined as a Gaussian with standard deviation \(\sigma\), and \(c_i\) is a constant in each subregion. We need to jointly minimize \(\mathcal{E}_y\) for all \(y\in\Omega\), which is achievable by minimizing the integral of \(\mathcal{E}_y\) with respect to \(y\) over the image domain \(\Omega\). So, the energy formulation is as below:
\[
\mathcal{E}=\int_{\Omega}\Bigl(\sum_{i=1}^{N}\int_{\mathcal{O}_y\cap\Omega_i} K(y-x)\,\bigl|I(x)-b(y)c_i\bigr|^{2}\,dx\Bigr)dy.
\]
It is difficult to minimize this expression directly. Therefore, we express the energy with an LS function: a function \(\phi:\Omega\to\mathbb{R}\) that takes positive and negative signs and can thereby represent a partition of the domain \(\Omega\). For example, for two disjoint regions,
\[
\Omega_1=\{\phi>0\},\qquad \Omega_2=\{\phi<0\}.
\]
For the case \(N>2\), two or more LS functions can be used to represent the regions \(\Omega_i\). Now, we formulate the energy as a Level Set functional:
\[
\mathcal{E}(\phi,\mathbf{c},b)=\int_{\Omega}\sum_{i=1}^{N}\Bigl(\int_{\Omega} K(y-x)\,\bigl|I(x)-b(y)c_i\bigr|^{2}\,dy\Bigr)M_i(\phi(x))\,dx,
\]
where \(M_i\) is a membership function; for example, for \(N=2\), \(M_1(\phi)=H(\phi)\) and \(M_2(\phi)=1-H(\phi)\), where \(H\) is the Heaviside function. This energy is used as the data term in the energy of the proposed variational LS formulation, which is defined by
\[
\mathcal{F}(\phi,\mathbf{c},b)=\mathcal{E}(\phi,\mathbf{c},b)+\nu\,\mathcal{L}(\phi)+\mu\,\mathcal{R}_p(\phi),
\]
where \(\mathcal{L}(\phi)\) and \(\mathcal{R}_p(\phi)\) are the regularization terms. By minimizing this energy, the segmentation results are obtained. This is achieved by an iterative process; as a result, the LS function comes to encompass the desired region.
This algorithm has also been used in the processing of fundus images [35, 36].
3. Segmentation of Intracorneal Layers
The important corneal layers, that is, Epithelium, Bowman, and Endothelium, are shown in Figure 2.
For production of the thickness maps, after preprocessing, the segmentation of the desired boundaries using GMM, GC, and LS is investigated and the best segmentation method is chosen. Finally, using the segmentation results of all B-scans, the intracorneal thickness maps are produced.
3.1. Preprocessing
Due to the noise in the images and their low contrast, a preprocessing stage is proposed. Then, the mentioned algorithms are applied to segment the boundaries of the Epithelium, Bowman, and Endothelium layers.
3.1.1. Noise Reduction
The presence of noise in OCT images causes errors in the final segmentation. To overcome this problem, we apply a low-pass filter via a Gaussian kernel to minimize the effect of noise. The kernel size of the filter applied to the OCT images used as GMM inputs is [] with a standard deviation of 10; the kernel sizes for LS and GC are [] (std of 20/3) and [] (std of 5/3), respectively. The selected kernel sizes make the image noise uniform and prevent oversegmentation. These kernel sizes and stds were obtained by trial and error to give the best results. Figure 3 shows an example of the denoising step.
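A minimal sketch of this denoising step, applying a Gaussian low-pass filter to a synthetic noisy B-scan (the image and the sigma value are illustrative; the paper tunes kernel size and std per method by trial and error):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
# Synthetic noisy B-scan: a bright horizontal band on a dark background
img = np.zeros((200, 300))
img[80:120, :] = 100.0
noisy = img + rng.normal(0, 20, img.shape)

# Gaussian low-pass filtering to make the noise uniform before segmentation
denoised = gaussian_filter(noisy, sigma=5)
```

The smoothed image trades a little edge sharpness for a large reduction in pixel-wise noise, which is what keeps GMM/GC/LS from oversegmenting.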
3.1.2. Contrast Enhancement
OCT images usually have low contrast, which makes segmenting the boundaries (especially the Bowman boundary) problematic. To enhance the contrast, we modified a method proposed by Esmaeili et al. [37], in which each pixel of the image is remapped as a function of the mean, minimum, and maximum intensity values of the image within a square window around that pixel. Figure 4 shows an example of this process. As can be seen, this method corrects the nonuniform background in addition to enhancing the contrast.
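The exact remapping is given in [37]; the sketch below uses a simplified windowed min-max stretch as an illustrative stand-in (not the authors' exact formula), built only from neighborhood statistics like those named above. The window size is an assumption:

```python
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def local_contrast_stretch(img, win=31):
    # Hypothetical windowed stretch: rescale each pixel by the extreme
    # intensities found in a win x win square neighborhood around it.
    img = img.astype(float)
    lo = minimum_filter(img, size=win)
    hi = maximum_filter(img, size=win)
    return (img - lo) / np.maximum(hi - lo, 1e-6)  # values in [0, 1]

# Low-contrast horizontal intensity ramp as a toy input
img = np.tile(np.linspace(0, 100, 100), (50, 1))
enhanced = local_contrast_stretch(img)
```

Because the normalization is local, slowly varying background is flattened while local structure is amplified, mirroring the background-correction effect described in the text.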
3.1.3. Central Artifact
One of the most important artifacts in corneal OCT images is the central artifact, which obscures the corneal boundaries, so reducing its effect makes the algorithm more successful. This artifact is nonuniform, and we implement an algorithm that is robust to variations in the width, intensity, and location of the central artifact. First, to find abrupt changes in the average intensity, we accentuate the central artifact by median filtering the image with a [] kernel. Then, we break the image into three equal-width regions and assume that the second region contains the central artifact. In the next step, we compute the average intensity of the columns of the middle region. We then look for the column where the difference in average intensity between that column and the previous one is greater than 6. Figure 5 shows an example of central artifact positioning.
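The artifact-localization steps above can be sketched as follows (the median kernel size and the synthetic image are illustrative; the intensity-jump threshold of 6 follows the text):

```python
import numpy as np
from scipy.ndimage import median_filter

def find_central_artifact(img, jump=6.0):
    # Accentuate the vertical saturation artifact with a median filter
    smoothed = median_filter(img, size=5)  # kernel size is illustrative
    h, w = smoothed.shape
    # Assume the artifact lies in the middle third of the image
    left, right = w // 3, 2 * w // 3
    col_mean = smoothed[:, left:right].mean(axis=0)
    # Column where the average intensity jumps by more than `jump`
    diffs = np.diff(col_mean)
    hits = np.where(np.abs(diffs) > jump)[0]
    if hits.size == 0:
        return None
    return left + int(hits[0]) + 1

# Synthetic B-scan with a bright vertical artifact around column 150
img = np.full((100, 300), 20.0)
img[:, 148:153] = 200.0
col = find_central_artifact(img)
```

Restricting the search to the middle third keeps bright peripheral structures from being mistaken for the artifact.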
3.2. Segmentation of Three Corneal Layer Boundaries with GMM
3.2.1. Segmenting the Boundaries of Epithelium and Endothelium Layers
To obtain these two boundaries, we first reduce the image noise using the method introduced in Section 3.1.1. Then we feed the denoised image to the GMM algorithm. We use a univariate distribution for the image intensities and 2 components, where one models the background and the other models the main information between the Epithelium and Endothelium layers. Figure 6 shows the final result of GMM-based segmentation. The approximate locations of the Epithelium and Endothelium boundaries are obtained by choosing the maximum responsibility factor in (4) and fitting curves to the boundaries. As can be seen in Figure 6(c), the detected boundary is not exactly fitted to the border of the Epithelium. So, the horizontal gradient of the original image is calculated and, with the help of the current boundary, the lowest gradient in a small neighborhood is chosen (Figure 6).
3.2.2. Correction of Low SNR Regions
As we can see in Figure 6, the obtained Endothelium boundary in the peripheral regions is not accurate because the SNR is low in these areas. So, extrapolation of the central curve into these low SNR regions is proposed. By taking the second derivative of the Endothelium boundary and searching around the middle of this vector, the values below zero are found. It is observed that low SNR regions produce a positive inflection in the second derivative. After finding the location of this positive inflection, the coefficients of the best polynomial of degree 4 fitted to the correctly estimated border are found, yielding a parabolic curve that estimates the Endothelium boundary in the low SNR areas. To obtain a sufficiently smooth curve for the final Endothelium boundary, a local regression using weighted linear least squares that assigns lower weight to outliers is used. Figure 7 shows an example of extrapolation of low SNR regions using this method.
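The degree-4 extrapolation can be sketched on a synthetic boundary as follows (the boundary shape and the low-SNR mask are illustrative, and the robust local-regression smoothing step is omitted for brevity):

```python
import numpy as np

def extrapolate_boundary(y, good):
    # Fit a degree-4 polynomial to the reliable (high SNR) central
    # samples and use it to replace the unreliable peripheral samples
    x = np.arange(len(y), dtype=float)
    coeffs = np.polyfit(x[good], y[good], deg=4)
    fitted = np.polyval(coeffs, x)
    out = y.copy()
    out[~good] = fitted[~good]
    return out

# Synthetic Endothelium boundary: a parabola, corrupted at the periphery
x = np.arange(400, dtype=float)
true = 150 + 0.002 * (x - 200) ** 2
noisy = true.copy()
noisy[:60] += 40.0    # low SNR regions drift away from the true border
noisy[340:] -= 40.0
good = np.zeros(400, dtype=bool)
good[60:340] = True
fixed = extrapolate_boundary(noisy, good)
```

Since the central fit is reliable, evaluating the fitted polynomial at the peripheral columns recovers the true border there.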
3.2.3. Bowman Layer Segmentation
Since in most cases the Bowman boundary is very weak, we first enhance it using the contrast enhancement method explained in Section 3.1.2. After enhancement, the horizontal edges are obtained by applying the Sobel gradient. Figure 8 shows this procedure, which can extract the Bowman boundary. Since the Epithelium boundary has already been obtained (as described in Section 3.2.1), the Bowman boundary can be found by tracing downward from the Epithelium until a white-to-black change in brightness occurs. However, unlike the middle of the Bowman boundary, the periphery may be corrupted by noise. To overcome this problem, the outliers are corrected similarly to the extrapolation method for low SNR areas proposed in the previous subsection.
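The downward trace from the Epithelium to the first white-to-black edge can be sketched as follows (the gradient threshold and the synthetic flattened image are illustrative):

```python
import numpy as np
from scipy.ndimage import sobel

def trace_bowman(img, epithelium_rows):
    # Horizontal edges via a vertical Sobel derivative
    grad = sobel(img, axis=0)
    bowman = np.zeros_like(epithelium_rows)
    for c in range(img.shape[1]):
        # Walk downward from the Epithelium boundary until the first
        # strong white-to-black transition (negative vertical gradient)
        start = epithelium_rows[c] + 2
        column = grad[start:, c]
        hits = np.where(column < -10)[0]  # threshold is illustrative
        bowman[c] = start + (hits[0] if hits.size else 0)
    return bowman

# Synthetic cornea: Epithelium boundary at row 40, Bowman boundary at row 60
img = np.zeros((120, 50))
img[40:60, :] = 120.0   # bright epithelial band
img[60:100, :] = 60.0   # darker stroma below the Bowman boundary
epi = np.full(50, 40, dtype=int)
bow = trace_bowman(img, epi)
```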
3.2.4. Interpolation into the Central Artifact Location
The central artifact causes the estimated boundary to be imprecise. So, we first find the artifact location with the method elaborated in Section 3.1.3 and then linearly interpolate the estimated boundary across this location, as shown in Figure 9.
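The linear interpolation across the artifact columns can be sketched with np.interp (the column indices and the synthetic boundary are illustrative):

```python
import numpy as np

def bridge_artifact(boundary, art_start, art_end):
    # Linearly re-interpolate the boundary across the artifact columns,
    # anchored on the reliable columns on either side
    out = boundary.astype(float).copy()
    cols = np.arange(len(boundary))
    bad = (cols >= art_start) & (cols <= art_end)
    out[bad] = np.interp(cols[bad], cols[~bad], out[~bad])
    return out

# Boundary corrupted over columns 140-160 by the central artifact
b = np.linspace(100, 110, 300)
corrupt = b.copy()
corrupt[140:161] += 25.0
fixed = bridge_artifact(corrupt, 140, 160)
```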
3.3. Segmentation of Three Corneal Layer Boundaries with LS
3.3.1. Segmenting the Boundaries of Epithelium and Endothelium Layers
To obtain these boundaries, we first denoise the images according to the method described in Section 3.1.1; then the LS algorithm is applied to the denoised image. The parameters are configured with one value set to 255 and the remaining parameters derived from it as in [34]. The final result of LS shows the approximate locations of these two boundaries. Extrapolation and interpolation are performed as explained for GMM (Figure 10).
3.3.2. Segmenting the Boundary of Bowman Layer
For this purpose, the contrast enhancement method presented in Section 3.1.2 is first applied to obtain an enhanced Bowman boundary. In the next step, we apply LS to this image with a modified parameter value, leaving the other parameters unchanged. This value causes segmentation of much finer detail, in particular the Bowman boundary. Figure 11(a) is the output of LS; using this image we can localize the Bowman boundary similarly to what we did for GMM. The Bowman boundary is finally detected with the help of the Epithelium boundary (Figure 11(b)).
3.4. Segmentation of Three Corneal Layer Boundaries with GC
3.4.1. Segmenting the Boundaries of Epithelium and Endothelium Layers
To obtain these boundaries, we first denoise the images according to the method proposed in Section 3.1.1. Then we apply GC with the cutting area set to 10. Figure 12(a) shows the output image; the desired boundaries are achieved by looking for a change in intensity starting from the first row and from the last row. The extrapolation in low SNR regions and the interpolation across the central artifact are performed as for GMM.
3.4.2. Segmenting the Boundary of Bowman Layer
For this purpose, similar to the other two algorithms, we first enhance the original image. In the next step, we flatten this image based on the Epithelium boundary obtained in the previous step, with respect to a horizontal line, as shown in Figure 13(a). The flattening algorithm shifts the pixel positions under the Epithelium boundary so that the boundary is fitted to a horizontal line (Figure 13(b)); the Bowman boundary can be tracked better in this horizontal setup. Applying the algorithm to the entire flattened image is both time-consuming and less reliable. So, the image is divided into three parts, the algorithm is applied to the area restricted to the Bowman boundary, and based on the Epithelium boundary the desired boundary is achieved (Figure 14). The estimated boundary in the middle part is accurate, and a linear extrapolation of this boundary to the image width is then performed, as shown in Figure 15 (the estimated boundaries in the two other parts are usually not accurate).
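The column-shifting flattening described above can be sketched as follows (the target row and the synthetic curved boundary are illustrative):

```python
import numpy as np

def flatten(img, boundary, target_row=50):
    # Shift each column so the Epithelium boundary lands on one
    # horizontal line; the Bowman boundary is then easier to track
    flat = np.zeros_like(img)
    for c in range(img.shape[1]):
        shift = target_row - boundary[c]
        flat[:, c] = np.roll(img[:, c], shift)
    return flat

# Synthetic curved Epithelium boundary (a shallow parabola)
img = np.zeros((200, 100))
cols = np.arange(100)
epi = (80 + 0.01 * (cols - 50) ** 2).astype(int)
for c in cols:
    img[epi[c]:epi[c] + 30, c] = 100.0  # bright band below the boundary
flat = flatten(img, epi, target_row=50)
```

After flattening, the boundary sits on a single row in every column, so a horizontal search suffices.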
4. ThreeDimensional Thickness Maps of Corneal Layers
In the previous section, 3 methods for intracorneal layer segmentation were explained. In this section these methods are compared and the best one is selected to produce the intracorneal thickness maps from the segmentation results of all B-scans.
4.1. Comparing Segmentation Results of GMM, LS, and GC Methods
In this study, we used corneal OCT images taken from 15 normal subjects. Each 3D OCT volume includes 40 B-scans of the whole cornea. The images were taken with the Heidelberg Spectralis HRA OCT imaging system at the NOOR ophthalmology center, Tehran. To evaluate the robustness and accuracy of the proposed algorithms, we use manual segmentation by two corneal specialists. For this purpose, 20 images were selected randomly from all subjects, and we calculated the unsigned and signed errors of each algorithm against the manual results.
The boundaries were segmented automatically using a MATLAB (R2011a) implementation of our algorithm. A computer with Microsoft Windows 7 (32-bit), an Intel Core i5 CPU at 2.5 GHz, and 6 GB RAM was used for the processing. The average computation times for GMM, LS, and GC were 7.99, 19.38, and 17.022 seconds per image, respectively. The mean and std of the unsigned and signed errors of the mentioned algorithms were calculated and are shown in Tables 1–6. The mean and std of the unsigned error are defined as follows:
\[
\mu_e=\frac{1}{N}\sum_{i=1}^{N}\bigl|M_i-A_i\bigr|,\qquad
\sigma_e=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\bigl(|M_i-A_i|-\mu_e\bigr)^{2}},
\]
where \(\mu_e\) and \(\sigma_e\) are the mean and std of the unsigned error between the manual and automatic methods, \(N\) is the number of points used to calculate the layer error (the width of the image), and \(M_i\) and \(A_i\) are the boundary positions obtained manually and automatically, respectively. The mean of the manual segmentations by the two independent corneal specialists is taken as \(M_i\) in the above equations, and the interobserver errors are also provided in Tables 1–6. Direct comparison of these values with the reported errors shows that the performance of the GMM algorithm is acceptable relative to manual segmentation.
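The unsigned-error definition above can be computed directly (the toy boundary positions are illustrative):

```python
import numpy as np

def unsigned_error(manual, auto):
    # Mean and std of the per-column unsigned (absolute) distance
    # between manual and automatic boundary positions
    e = np.abs(np.asarray(manual, float) - np.asarray(auto, float))
    return e.mean(), e.std()

manual = np.array([100.0, 101.0, 102.0, 103.0])
auto = np.array([101.0, 100.0, 102.0, 105.0])
mu_e, sigma_e = unsigned_error(manual, auto)
```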






Considering the presented results, namely, the lower segmentation error for each of the layer boundaries and the time spent segmenting the borders, the GMM method gives better results than the other two methods when compared against the position differences between the two expert manual graders (Table 7).

4.2. Producing Intracorneal Thickness Maps
As described above, GMM was selected as the optimal method for segmentation of the corneal layers. Therefore, using the GMM method, the corneal layer boundaries in the volumetric images of each subject are segmented. The segmented values are then interpolated (as described below) to create the thickness maps. To illustrate the layer thickness, we first subtract the two boundaries that delimit a layer, and an interpolation is performed between two consecutive volumetric images. The number of pixels between two successive images used for the interpolation is calculated as shown in Figure 16: from the pixel values and the scale marks in the right and left images, the number of pixels corresponding to the spacing between slices is found and used as the number of interpolation pixels. The OCT slices cover an area of 6 millimeters in width and height. Figure 17 shows the 3D thickness maps of the corneal layers of a normal subject.
Mean and standard deviation of the thickness values for normal subjects in epithelial, stromal, and whole cornea are calculated in central, superior, inferior, nasal, and temporal zones (centered on the center of the pupil). The zones are defined by concentric circles with diameters equal to 2 and 5 millimeters (Figure 17(c)). The thickness averages (±population SD) are shown in Table 8.

5. Conclusion
In this paper we compared three different segmentation methods for segmenting three important corneal layer boundaries in OCT images of normal eyes. Given the good performance of the GMM method in this application, we chose GMM as the optimal method to calculate the boundaries. The proposed method is able to eliminate the artifacts and is capable of automatic segmentation of the three corneal boundaries.
In the next step, we obtained the thickness maps of the corneal layers by interpolating the layer information. Mean and standard deviation of the thickness values in epithelial, stromal, and whole cornea are calculated in the central, superior, inferior, nasal, and temporal zones (centered on the center of the pupil). To the best of our knowledge, this is the first work to produce thickness maps from a set of parallel B-scans; all of the previous methods for construction of thickness maps from OCT data use a pachymetry scan pattern [13, 14, 16].
Competing Interests
The authors declare that they have no competing interests.
References
V. Hurmeric, S. H. Yoo, and F. M. Mutlu, "Optical coherence tomography in cornea and refractive surgery," Expert Review of Ophthalmology, vol. 7, no. 3, pp. 241–250, 2012.
J. A. Izatt, M. R. Hee, E. A. Swanson et al., "Micrometer-scale resolution imaging of the anterior eye in vivo with optical coherence tomography," Archives of Ophthalmology, vol. 112, no. 12, pp. 1584–1589, 1994.
S. Radhakrishnan, A. M. Rollins, J. E. Roth et al., "Real-time optical coherence tomography of the anterior segment at 1310 nm," Archives of Ophthalmology, vol. 119, no. 8, pp. 1179–1185, 2001.
http://eyewiki.aao.org/Anterior_Segment_Optical_Coherence_Tomography#Technology.
https://nei.nih.gov/health/cornealdisease.
D. Z. Reinstein, M. Gobbe, T. J. Archer, R. H. Silverman, and J. Coleman, "Epithelial, stromal, and total corneal thickness in keratoconus: three-dimensional display with Artemis very-high-frequency digital ultrasound," Journal of Refractive Surgery, vol. 26, no. 4, pp. 259–271, 2010.
Y. Li, O. Tan, R. Brass, J. L. Weiss, and D. Huang, "Corneal epithelial thickness mapping by Fourier-domain optical coherence tomography in normal and keratoconic eyes," Ophthalmology, vol. 119, no. 12, pp. 2425–2433, 2012.
C. Temstet, O. Sandali, N. Bouheraoua et al., "Corneal epithelial thickness mapping by Fourier-domain optical coherence tomography in normal, keratoconic, and forme fruste keratoconus eyes," Investigative Ophthalmology & Visual Science, vol. 55, pp. 2455–2455, 2014.
H. F. Li, W. M. Petroll, T. Møller-Pedersen, J. K. Maurer, H. D. Cavanagh, and J. V. Jester, "Epithelial and corneal thickness measurements by in vivo confocal microscopy through focusing (CMTF)," Current Eye Research, vol. 16, no. 3, pp. 214–221, 1997.
J. C. Erie, S. V. Patel, J. W. McLaren et al., "Effect of myopic laser in situ keratomileusis on epithelial and stromal thickness: a confocal microscopy study," Ophthalmology, vol. 109, no. 8, pp. 1447–1452, 2002.
S. Sin and T. L. Simpson, "The repeatability of corneal and corneal epithelial thickness measurements using optical coherence tomography," Optometry & Vision Science, vol. 83, no. 6, pp. 360–365, 2006.
J. Wang, D. Fonn, T. L. Simpson, and L. Jones, "The measurement of corneal epithelial thickness in response to hypoxia using optical coherence tomography," American Journal of Ophthalmology, vol. 133, no. 3, pp. 315–319, 2002.
S. Haque, L. Jones, and T. Simpson, "Thickness mapping of the cornea and epithelium using optical coherence tomography," Optometry & Vision Science, vol. 85, no. 10, pp. E963–E976, 2008.
Y. Li, O. Tan, and D. Huang, "Normal and keratoconic corneal epithelial thickness mapping using Fourier-domain optical coherence tomography," in Medical Imaging 2011: Biomedical Applications in Molecular, Structural, and Functional Imaging, vol. 7965 of Proceedings of SPIE, March 2011.
D. Alberto and R. Garello, "Corneal sublayers thickness estimation obtained by high-resolution FD-OCT," International Journal of Biomedical Imaging, vol. 2013, Article ID 989624, 7 pages, 2013.
W. Zhou and A. Stojanovic, "Comparison of corneal epithelial and stromal thickness distributions between eyes with keratoconus and healthy eyes with corneal astigmatism ≥ 2.0 D," PLoS ONE, vol. 9, no. 1, Article ID e85994, 2014.
D. W. DelMonte and T. Kim, "Anatomy and physiology of the cornea," Journal of Cataract and Refractive Surgery, vol. 37, no. 3, pp. 588–598, 2011.
J. A. Eichel, A. K. Mishra, D. A. Clausi, P. W. Fieguth, and K. K. Bizheva, "A novel algorithm for extraction of the layers of the cornea," in Proceedings of the Canadian Conference on Computer and Robot Vision (CRV '09), pp. 313–320, Kelowna, Canada, May 2009.
F. LaRocca, S. J. Chiu, R. P. McNabb, A. N. Kuo, J. A. Izatt, and S. Farsiu, "Robust automatic segmentation of corneal layer boundaries in SD-OCT images using graph theory and dynamic programming," Biomedical Optics Express, vol. 2, no. 6, pp. 1524–1538, 2011.
F. Graglia, J.-L. Mari, G. Baikoff, and J. Sequeira, "Contour detection of the cornea from OCT radial images," in Proceedings of the 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 5612–5615, Lyon, France, August 2007.
J. A. Eichel, A. K. Mishra, P. W. Fieguth, D. A. Clausi, and K. K. Bizheva, "A novel algorithm for extraction of the layers of the cornea," in Proceedings of the Canadian Conference on Computer and Robot Vision (CRV '09), pp. 313–320, IEEE, 2009.
Y. Li, M. V. Netto, R. Shekhar, R. R. Krueger, and D. Huang, "A longitudinal study of LASIK flap and stromal thickness with high-speed optical coherence tomography," Ophthalmology, vol. 114, no. 6, pp. 1124–1132.e1, 2007.
Y. Li, R. Shekhar, and D. Huang, "Segmentation of 830- and 1310-nm LASIK corneal optical coherence tomography images," in Proceedings of SPIE, vol. 4684, pp. 167–178, San Diego, Calif, USA, 2002.
Y. Li, R. Shekhar, and D. Huang, "Corneal pachymetry mapping with high-speed optical coherence tomography," Ophthalmology, vol. 113, no. 5, pp. 792–799.e2, 2006.
N. Hutchings, T. L. Simpson, C. Hyun et al., "Swelling of the human cornea revealed by high-speed, ultrahigh-resolution optical coherence tomography," Investigative Ophthalmology & Visual Science, vol. 51, no. 9, pp. 4579–4584, 2010.
M. Shen, L. Cui, M. Li, D. Zhu, M. R. Wang, and J. Wang, "Extended scan depth optical coherence tomography for evaluating ocular surface shape," Journal of Biomedical Optics, vol. 16, no. 5, Article ID 056007, 10 pages, 2011.
D. Williams, Y. Zheng, F. Bao, and A. Elsheikh, "Automatic segmentation of anterior segment optical coherence tomography images," Journal of Biomedical Optics, vol. 18, no. 5, Article ID 056003, 2013.
V. A. Robles, B. J. Antony, D. R. Koehn, M. G. Anderson, and M. K. Garvin, "3D graph-based automated segmentation of corneal layers in anterior-segment optical coherence tomography images of mice," in Medical Imaging: Biomedical Applications in Molecular, Structural, and Functional Imaging, vol. 9038 of Proceedings of SPIE, pp. 1–7, March 2014.
D. Williams, Y. Zheng, F. Bao, and A. Elsheikh, "Fast segmentation of anterior segment optical coherence tomography images using graph cut," Eye and Vision, vol. 2, p. 1, 2015.
M. K. Jahromi, R. Kafieh, H. Rabbani et al., "An automatic algorithm for segmentation of the boundaries of corneal layers in optical coherence tomography images using Gaussian mixture model," Journal of Medical Signals and Sensors, vol. 4, article 171, 2014.
J. Shi and J. Malik, "Normalized cuts and image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 888–905, 2000.
X. Chen, M. Niemeijer, L. Zhang, K. Lee, M. D. Abramoff, and M. Sonka, "Three-dimensional segmentation of fluid-associated abnormalities in retinal OCT: probability constrained graph-search-graph-cut," IEEE Transactions on Medical Imaging, vol. 31, no. 8, pp. 1521–1531, 2012.
Z. Hu, M. Niemeijer, K. Lee, M. D. Abràmoff, M. Sonka, and M. K. Garvin, "Automated segmentation of the optic disc margin in 3D optical coherence tomography images using a graph-theoretic approach," in Medical Imaging: Biomedical Applications in Molecular, Structural, and Functional Imaging, vol. 7262 of Proceedings of SPIE, pp. 1–11, February 2009.
C. Li, R. Huang, Z. Ding, J. Gatenby, D. N. Metaxas, and J. C. Gore, "A level set method for image segmentation in the presence of intensity inhomogeneities with application to MRI," IEEE Transactions on Image Processing, vol. 20, no. 7, pp. 2007–2016, 2011.
D. W. K. Wong, J. Liu, J. H. Lim et al., "Level-set based automatic cup-to-disc ratio determination using retinal fundus images in ARGALI," in Proceedings of the 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS '08), pp. 2266–2269, Vancouver, Canada, August 2008.
A. Yazdanpanah, G. Hamarneh, B. R. Smith, and M. V. Sarunic, "Segmentation of intraretinal layers from optical coherence tomography images using an active contour approach," IEEE Transactions on Medical Imaging, vol. 30, no. 2, pp. 484–496, 2011.
M. Esmaeili, H. Rabbani, A. M. Dehnavi, and A. Dehghani, "Automatic detection of exudates and optic disk in retinal images using curvelet transform," IET Image Processing, vol. 6, no. 7, pp. 1005–1013, 2012.
Copyright
Copyright © 2016 Hossein Rabbani et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.