Research Article  Open Access
Hajar Danesh, Raheleh Kafieh, Hossein Rabbani, Fedra Hajizadeh, "Segmentation of Choroidal Boundary in Enhanced Depth Imaging OCTs Using a Multiresolution Texture Based Modeling in Graph Cuts", Computational and Mathematical Methods in Medicine, vol. 2014, Article ID 479268, 9 pages, 2014. https://doi.org/10.1155/2014/479268
Segmentation of Choroidal Boundary in Enhanced Depth Imaging OCTs Using a Multiresolution Texture Based Modeling in Graph Cuts
Abstract
The introduction of enhanced depth imaging optical coherence tomography (EDI-OCT) has provided the advantage of in vivo cross-sectional imaging of the choroid, similar to the retina, with standard commercially available spectral-domain (SD) OCT machines. A texture-based algorithm is introduced in this paper for fully automatic segmentation of choroidal images obtained from an EDI system of the Heidelberg 3D OCT Spectralis. Dynamic programming is utilized to determine the location of the retinal pigment epithelium (RPE). Bruch’s membrane (BM) (the blood-retina barrier which separates the RPE cells of the retina from the choroid) can be segmented by searching for the pixels with the biggest gradient value below the RPE. Furthermore, a novel method is proposed to segment the choroid-sclera interface (CSI), which employs wavelet-based features to construct a Gaussian mixture model (GMM). The model is then used in a graph cut for segmentation of the choroidal boundary. The proposed algorithm is tested on 100 EDI OCTs and is compared with manual segmentation. The results showed an unsigned error of 2.48 ± 0.32 pixels for BM extraction and 9.79 ± 3.29 pixels for choroid detection, implying a significant improvement of the proposed method over other approaches such as k-means and graph cut methods.
1. Introduction
The optical coherence tomography (OCT) imaging technique was introduced by Huang et al. in 1991 [1]. This technique is employed for its ability to take cross-sectional images of the microscopic structure of living tissues, with a resolution on the scale of micrometers. In this technique, a number of A-scans, or linear scans, create a B-scan, or cross-sectional image [2]. OCT images consist of a great amount of data; therefore, nonautomated, visual analysis of such an amount of data would be troublesome for the ophthalmologist. The main goal of automatic segmentation is to assist the ophthalmologist in the diagnosis and monitoring of eye diseases.
The choroid is one of the structural layers located between the sclera and the retina. This pigmented layer contains many capillaries that supply the iris and the retinal light receptor cells [3]. Many diseases, such as polypoidal choroidal vasculopathy and choroidal tumors, cause changes in the structure of this layer [4–8]; therefore, segmentation of this layer has great importance for ophthalmologists. In retinal imaging using conventional OCT, the wavelength of the light source is around 800 nanometers, which is not appropriate for imaging of the choroid, due to the difficulty of signal transmission through the retinal pigment epithelium (RPE) layer and the increased depth of imaging [9]. However, the increased pixel density and high signal-to-noise ratio of SD-OCT (spectral-domain OCT), in comparison to TD-OCT (time-domain OCT), make choroidal imaging possible. In SD-OCT, the structures that are closer to the “zero delay line” have higher signals than those that are farther away. In conventional OCT, the “zero delay line” is near the inner surface of the retina; in EDI-OCT, however, it is placed near the outer retina and choroid, which is the key to EDI imaging. In order to increase the quality of EDI-OCT and to reduce “speckles,” a large number of images (usually 100) are averaged by software, which provides high-quality images with smoother borders [9]. An example of this imaging method is compared with conventional OCT in Figure 1. Another solution for choroidal imaging is a higher wavelength of approximately 1060 nanometers [10, 11].
In several studies, EDI-OCT has been used to measure the thickness of the choroid, find its relation with diseases, and monitor the treatment process [12–14]. In most of these studies, measurement of choroidal thickness is accomplished by manual labeling, which is a time-consuming and tedious process. This problem becomes much more complicated when the number of images is large. Therefore, the need arises for the development of an automatic segmentation algorithm for EDI-OCT.
A limited number of studies have already been conducted on automatic segmentation of these images. Kajić et al. [15–17] proposed a two-stage statistical model to detect the choroid boundaries in 1060 nm OCT images of healthy and pathological eyes. This model needed extensive training, and its mean error was 13%. Tian et al. [18, 19] found the choroidal boundary by finding the shortest path of a graph formed by valley pixels using dynamic programming (DP). The average Dice’s coefficient on 45 EDI-OCT images was 90.5%. Lu et al. [20] proposed a technique to segment the inner boundary of the choroid using a two-stage fast active contour model; a real-time human-supervised automated segmentation of the outer boundary of the choroid was then applied. The reported Dice similarity coefficient value on 30 images captured from patients diagnosed with diabetes was 92.7%.
Many algorithms based on wavelets and graph cuts have already been used in segmentation of retinal layers [21]. However, due to the heterogeneity of the choroid layers, such methods are not directly useful in choroid segmentation. On the other hand, because of the distinct difference between the texture of the choroid and that of the other retinal layers, an algorithm based on texture classification can be effective. With this idea, we use a combination of graph-based methods and wavelet-domain features for choroidal segmentation. A description of our method, including detection of Bruch’s membrane (BM) and segmentation of the choroidal-scleral interface (CSI), is provided in Section 2, and the results are presented in Section 3. The results show the improved robustness of the method compared to other available algorithms. Finally, the conclusion and suggestions for future work are presented in Section 4.
2. Material and Methods
The proposed choroid segmentation method is tested on 100 randomly selected two-dimensional EDI-OCT images obtained from 10 eyes of 6 normal subjects by an EDI system of multimodality diagnostic imaging (wavelength: 870 nm; scan pattern: enhanced depth imaging; Spectralis HRA + OCT; Heidelberg Engineering, Heidelberg, Germany) [22]. The study protocol was reviewed in advance by the review board of the Noor Ophthalmology Research Center. Each participant was informed of the purpose of the study and provided written consent to participate. For evaluation and to rule out any ocular pathology, all patients underwent thorough ophthalmic examinations, including refraction, visual acuity, slit lamp biomicroscopic examination, IOP measurement by Goldmann applanation tonometer, and examination of the fundus with a +90D lens. Each dataset consisted of an SLO image and a limited number of OCT scans of size 496 × 768 (namely, for a dataset with 31 selected OCT slices, the whole data size would be 496 × 768 × 31). The pixel resolutions were 3.9 μm/pixel in the axial direction and 14 μm/pixel in the transversal direction.
2.1. Overview of Algorithm
In the first step, dynamic programming (DP) was applied for boundary detection of the RPE layer. The BM is the blood-retina barrier that separates the RPE cells of the retina from the choroid; it can be segmented by searching for the pixels with the biggest gradient value below the RPE. In this stage, we eliminated the pixels above this boundary from our calculations by making them equal to zero and focused on segmentation of the lower boundary of the choroid.
In the next step, the discrete wavelet transform (DWT), using a filter bank and descriptors, was employed to extract appropriate features from the EDI-OCT images. The features extracted from the training images were used to construct a Gaussian mixture model (GMM). It is assumed that each descriptor extracted using the DWT has a multivariate normal probability density function (pdf), and the number of Gaussian distributions is chosen to be 3, representing the area above the BM, the choroid, and the area beneath the CSI. Finally, the relevance of each data sample to the model is used in creating the graph for the final CSI segmentation. A block diagram of the algorithm can be found in Figure 2.
2.2. BM Detection
We applied a boundary detection algorithm using DP to detect the boundary of the RPE layer, which is the brightest part of EDI-OCT images. DP is a method based on the principle of optimality [23], which decomposes a global optimization into a sequence of simpler, local decisions. In the simple problem of boundary tracking, the goal is to find the best (least expensive) path between one of the possible starting points and one of the possible ending points. In this step, the upper bound of the choroid (the lower bound of the RPE) can be segmented.
For boundary detection using DP, we are looking for a dark boundary that runs from one edge of the image to the other (e.g., from the first column to the last in Figure 3). The DP algorithm associates a cost with each node (the brightness of each pixel), and partial paths are extended according to the cost (brightness) of the next neighboring node. The algorithm proceeds through the columns and, for each node, selects the path with the lowest cumulative cost among the three neighboring pixels of the preceding column. Therefore, a cumulative cost can be computed for each pixel of a column according to the costs of the previously selected nodes.
The following algorithm describes the DP method [24].(1)Specify the initial costs of all nodes in the first graph layer and the partial path costs.(2)Repeat step 3 for all subsequent graph layers.(3)Repeat step 4 for all nodes in the current graph layer.(4)For each node, set its cumulative cost to its own cost plus the minimum cumulative cost among its neighbors in the previous layer, and set a pointer from the node back to its optimal predecessor. Optimality is defined as minimization of the cost function.(5)Find an optimal node in the last graph layer and obtain the optimal path by backtracking through the pointers from the last layer to the first.
The positions of a sample node and its neighbors in the previous graph layer are shown in Figures 3(a) and 3(b); the highlighted pixels indicate the neighborhood considered in the cost minimization.
It should also be noted that DP boundary detection conventionally looks for the darkest boundary in an image; for our application (detection of the brightest boundary) we should invert the brightness values of the image. After segmentation of the RPE layer, the BM can be located by searching for the pixels with the biggest gradient below the RPE. Figure 4 shows the result of BM segmentation for a sample EDI-OCT image. From a computational point of view, DP is efficient and flexible and can be considered a powerful tool even in the presence of noise.
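The column-by-column recursion described above can be sketched in a few lines of code (a minimal illustration under our own naming, not the authors' implementation; the brightness inversion at the end follows the note above):

```python
import numpy as np

def dp_boundary(cost):
    """Track the minimum-cost left-to-right path through a cost image.

    cost : 2-D array (rows x cols); each column is one graph layer.
    Returns the row index of the optimal path in every column.
    """
    rows, cols = cost.shape
    cum = np.full((rows, cols), np.inf)     # cumulative path costs
    pred = np.zeros((rows, cols), dtype=int)
    cum[:, 0] = cost[:, 0]                  # initial costs, first layer
    for c in range(1, cols):                # sweep through the graph layers
        for r in range(rows):
            lo, hi = max(0, r - 1), min(rows, r + 2)    # 3-pixel neighborhood
            best = lo + int(np.argmin(cum[lo:hi, c - 1]))
            cum[r, c] = cost[r, c] + cum[best, c - 1]
            pred[r, c] = best               # pointer to the optimal predecessor
    # backtrack from the optimal node in the last layer
    path = np.empty(cols, dtype=int)
    path[-1] = int(np.argmin(cum[:, -1]))
    for c in range(cols - 1, 0, -1):
        path[c - 1] = pred[path[c], c]
    return path

# For the brightest boundary (e.g. the RPE), invert the brightness first:
# rpe_rows = dp_boundary(image.max() - image)
```

The inner minimum over three neighbors enforces the one-row-per-column continuity described in the text; the `pred` array stores the back-pointers used in step (5).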
2.3. CSI Detection
A novel combination of algorithms is proposed for discrimination of the lower boundary of the choroid (CSI). Among the many texture descriptors used for segmentation of images containing several textures [24], we demonstrate the ability of wavelet texture descriptors in this work.
The algorithm can then be stated in three steps:(1)calculate wavelet descriptors for each pixel (Section 2.3.1);(2)create a model of the classes using training images (Section 2.3.2);(3)take a new image and segment it using the learned model, applying graph cut segmentation to obtain a spatially coherent result (Section 2.3.3).
It is assumed that the descriptors extracted using the DWT have a multivariate normal pdf. This assumption can be justified by the central limit theorem (CLT). In its classical form, the CLT states that the mean of independent identically distributed random variables with finite mean and variance has a limiting distribution that is Gaussian as the number of samples grows. For a finite number of observations, this approximation is reasonable only close to the peak of the normal distribution rather than in the tails (convergence to the limit).
In our datasets from the Heidelberg 3D OCT Spectralis, each image is calculated as the mean of 100 images taken at the same time. The averaging process dramatically improves the quality of the information in the images, without introducing an alteration to the original image or adding any distracting electronic noise [22]. Therefore, according to the “convergence to the limit” behavior near the peak areas (needed for the GMM assumption), and in accord with the linear characteristics of the wavelet transform, we can assume that each descriptor extracted using the DWT has a multivariate normal pdf.
2.3.1. Wavelet Descriptors
The DWT, using a filter bank and descriptors, is employed to extract features for each pixel of the image. In other applications of the wavelet transform for texture analysis, the most prevalent features are wavelet energy signatures and their second-order statistics [25, 26]. In this case, Haar wavelet filters are utilized for ease of use. The filter bank stages are repeated (a multiresolution approach) for four levels. Since we are using a wavelet frame instead of the standard DWT, the number of levels can optionally grow; however, by trial and error, we found that for more than 4 levels the results did not improve and we only faced more complexity. The energy of the high-pass subband at each stage and the energy of the last stage of the low-pass filter are extracted as image features (5 features). The extracted features and a training image with a known segmentation (mask) are then used to construct the GMM for the image.
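A minimal sketch of these per-pixel wavelet-frame energy features, assuming an undecimated (à trous) Haar analysis along the axial direction; the authors' exact filter-bank layout may differ, and all names here are our own:

```python
import numpy as np

def haar_frame_energies(image, levels=4):
    """Per-pixel wavelet-frame energy features (a sketch).

    Undecimated (a trous) Haar analysis along the vertical axis: at
    level j the low-pass output averages samples 2**j apart and the
    high-pass output takes their difference.  Returns one squared
    high-pass response per level plus the energy of the final
    low-pass stage: levels + 1 features per pixel.
    """
    low = image.astype(float)
    feats = []
    for j in range(levels):
        step = 2 ** j
        shifted = np.roll(low, -step, axis=0)
        high = (low - shifted) / 2.0        # dilated Haar high-pass
        low = (low + shifted) / 2.0         # dilated Haar low-pass
        feats.append(high ** 2)             # energy of the high-pass subband
    feats.append(low ** 2)                  # energy of the last low-pass stage
    return np.stack(feats, axis=-1)         # shape (rows, cols, levels + 1)
```

Because the frame is undecimated, every level keeps the full image size, so each pixel directly receives a 5-dimensional feature vector as required by the GMM of Section 2.3.2.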
2.3.2. Gaussian Mixture Model
The GMM is a parametric pdf represented as a weighted sum of Gaussian component distributions. The idea of clustering with GMMs is the same as for k-means: in the conventional GMM, we make an initial guess for the means and covariances and then try to refine them iteratively. The number of Gaussian distributions is chosen to be 3, representing the area above the BM, the choroid, and the area beneath the CSI.
Assume that the $d$-dimensional data are modeled by $K$ Gaussian distributions, where the $k$th component, with mean $\mu_k$ and covariance matrix $\Sigma_k$, has the density
$$\mathcal{N}(x \mid \mu_k, \Sigma_k) = \frac{1}{(2\pi)^{d/2}\,\lvert\Sigma_k\rvert^{1/2}} \exp\left(-\frac{1}{2}(x-\mu_k)^{\top}\Sigma_k^{-1}(x-\mu_k)\right).$$
Considering the weight of the $k$th component as $\pi_k$, with $\pi_k \ge 0$ and $\sum_{k=1}^{K}\pi_k = 1$, the final pdf is defined as
$$p(x) = \sum_{k=1}^{K} \pi_k\,\mathcal{N}(x \mid \mu_k, \Sigma_k).$$
Since we are working on the wavelet features described in Section 2.3.1, $d$ should be set to 5. We now assume that the observed EDI-OCT images, and the values of the pixels within each image, are independent, and that all EDI images have equal size. The parameters of each component are then estimated from the labeled pixels of the training images, where the number of EDI-OCT images used for training is 10 in our case. For each training image, a mask was made manually which classifies the image into three labels: the area above the BM, the choroid, and the area beneath the CSI.
In the conventional GMM, the parameters are estimated using the iterative EM algorithm to fit a mixture model to a set of available data [24, 27].
The EM algorithm is iterated rapidly: at each stage, the expected assignment of every data sample to each Gaussian is estimated (expectation), and the Gaussian parameters are then reestimated (maximization). If current estimates of $\pi_k$, $\mu_k$, and $\Sigma_k$ are at hand, we can calculate the responsibility of the $k$th Gaussian for a sample $x_n$:
$$\gamma_k(x_n) = \frac{\pi_k\,\mathcal{N}(x_n \mid \mu_k, \Sigma_k)}{\sum_{j=1}^{K} \pi_j\,\mathcal{N}(x_n \mid \mu_j, \Sigma_j)},$$
in which the updated weight $\pi_k$ is obtained from the sum of the responsibilities (regardless of which Gaussian produced each sample), balanced against the total number of samples $N$:
$$\pi_k = \frac{1}{N}\sum_{n=1}^{N}\gamma_k(x_n).$$
Similarly, one can estimate the corrected values of the mean and covariance over the data series:
$$\mu_k = \frac{\sum_{n=1}^{N}\gamma_k(x_n)\,x_n}{\sum_{n=1}^{N}\gamma_k(x_n)}, \qquad \Sigma_k = \frac{\sum_{n=1}^{N}\gamma_k(x_n)\,(x_n-\mu_k)(x_n-\mu_k)^{\top}}{\sum_{n=1}^{N}\gamma_k(x_n)}.$$
With a slight modification to the conventional GMM, which uses the EM algorithm to calculate the parameters of each Gaussian (including the mean and covariance), we use the training step to find the parameters of the GMM. Namely, we use training images with a known segmentation (mask) and calculate the mean and covariance of each section from the training data. These parameters are used for all images in our database (i.e., there is no need to run the EM algorithm for each image separately, which speeds up the algorithm). Then we only calculate the responsibility factor to construct a learned model to be fed to the graph cut segmentation.
When a new test image is considered for segmentation, we calculate the wavelet descriptors for each pixel (and, accordingly, the 5-dimensional feature vectors). The probability of the pixel belonging to each class of the learned model can then be obtained. This probability is then used in the construction of the graph cut segmentation.
2.3.3. Graph Cut Segmentation
Graph cut segmentation is constructed on top of the learned model to obtain a spatially coherent segmentation.
The direct use of minimum-cut/maximum-flow optimization algorithms for graph partitioning was first introduced by Greig et al. [28] in the processing of binary images. Using graph optimization algorithms, a powerful method of optimal border detection and region classification in N-dimensional image data was proposed [29, 30].
This method starts with one or more points representing the “object” and one or more points representing the “background,” determined by interactive or automatic identification. The overall shape of the cost function is represented as follows [31]:
$$E(L) = \sum_{p} D_p(L_p) + \sum_{(p,q)\in\mathcal{N}} V_{p,q}(L_p, L_q),$$
where $D_p$ is the data term for pixel $p$, $V_{p,q}$ is the smoothness term over the set $\mathcal{N}$ of neighboring pixel pairs, and $L_p$ is the label assigned to pixel $p$.
To minimize the cost function, a special class of arc-weighted graphs is employed. In addition to the set of nodes corresponding to the pixels of the image (Figure 5(a)), terminal nodes (shown by 1, 2, and 3 in Figure 5(b)) are also added to the graph. These terminals are hard-linked with the segmentation seed points (bold links in Figure 5(b)) and represent the segmentation labels (1, 2, and 3).
The arcs in the graph can be classified into two categories: n-links and t-links. The n-links connect pairs of neighboring pixels, and their costs are derived from the smoothness term. The t-links connect pixels to terminals, and their costs are derived from the data term. Green lines in Figure 5(b) show the n-links, while the red, yellow, and blue curves represent the t-links. A cut is a set of arcs whose removal partitions the nodes into three disjoint subsets (1, 2, and 3 in Figure 5). The cost of a cut is the total cost of its arcs, and a minimum cut is a cut whose cost is minimal. The minimum cut problem, or its dual (the maximum flow problem), can be solved by various algorithms. In a maximum flow algorithm, the maximum amount of “water” is delivered from the source to the sink through directed arcs, where the flow through each arc is limited by its capacity (cost). The maximum flow from source to sink saturates a set of graph arcs; these saturated arcs divide the nodes into two separate sections corresponding to the minimum cut [24, 32, 33].
Let each image pixel take a binary label. The labeling vector of all pixels then defines the resulting binary segmentation.
In this paper, the data term is the distance between each image pixel and the initial class model made by the GMM in Section 2.3.2. For a test image, the wavelet descriptors are calculated for each pixel, the probability of the pixel belonging to each class of the learned model is obtained, and the data term is created from this probability. The smoothness term is a matrix of costs related to adjacent pixel values. In this work, we assigned a square matrix with fixed off-diagonal values and with zeros on the diagonal. Increasing the fixed term strengthens the adjacency constraint and therefore results in a finer separation.
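As an illustration of the two-terminal core of this construction (the three-label cut of Figure 5 additionally requires multiway or iterated binary cuts), the following sketch builds t-links from a unary data term, n-links with a fixed smoothness value, and solves the minimum cut with a textbook Edmonds-Karp maximum flow; the 1-D toy setting and all names are our own:

```python
from collections import defaultdict, deque

def add_edge(cap, u, v, w):
    """Add a directed arc u -> v with capacity w (plus a zero reverse arc)."""
    cap[u][v] = cap[u].get(v, 0.0) + w
    cap[v].setdefault(u, 0.0)

def min_cut_source_side(cap, s, t):
    """Edmonds-Karp max-flow; returns the nodes on the source side of the cut."""
    flow = defaultdict(lambda: defaultdict(float))
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:          # BFS for an augmenting path
            u = q.popleft()
            for v in cap[u]:
                if v not in parent and cap[u][v] - flow[u][v] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        path, v = [], t                       # recover the path s -> t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(cap[u][v] - flow[u][v] for u, v in path)
        for u, v in path:                     # augment along the path
            flow[u][v] += push
            flow[v][u] -= push
    side, q = {s}, deque([s])                 # residual reachability = source side
    while q:
        u = q.popleft()
        for v in cap[u]:
            if v not in side and cap[u][v] - flow[u][v] > 1e-12:
                side.add(v)
                q.append(v)
    return side

def binary_graph_cut(cost_obj, cost_bkg, lam):
    """1-D binary labeling: t-links carry the data term, n-links a fixed lam."""
    cap = defaultdict(dict)
    n = len(cost_obj)
    for p in range(n):
        add_edge(cap, 's', p, cost_bkg[p])    # cut -> p on sink side, pays D_p(bkg)
        add_edge(cap, p, 't', cost_obj[p])    # cut -> p on source side, pays D_p(obj)
        if p + 1 < n:                         # smoothness n-links, both directions
            add_edge(cap, p, p + 1, lam)
            add_edge(cap, p + 1, p, lam)
    side = min_cut_source_side(cap, 's', 't')
    return [1 if p in side else 0 for p in range(n)]   # 1 = object
```

The fixed n-link value plays the role of the constant off-diagonal smoothness cost described above: raising it makes cutting between neighboring pixels more expensive.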
3. Results
One hundred two-dimensional EDI-OCT images were obtained from the Heidelberg 3D OCT Spectralis and used in the statistical analysis to produce the results. From our 100 two-dimensional images, we chose 10 to train the learned model. In fact, any 10 or fewer images can be chosen for training to calculate the means, covariances, and weights of the GMM, and there would be no considerable difference between these parameters. This step expedites our algorithm compared with using the EM algorithm for each image. The performance of the method is reported based on its ability to correctly segment the test images. For evaluation of the proposed method, the manual segmentation of two observers was used as the gold standard.
For validation purposes, the mean signed and unsigned border positioning errors for each border were compared with those of other algorithms, such as graph cut, k-means, and DP, and the results are presented in Tables 1 and 2 for each boundary. We implemented each of these algorithms and tested them on our 90 two-dimensional test images. For the k-means algorithm, we selected the number of clusters and applied k-means directly on the image. For the graph cut method, we used the result of the k-means algorithm to create an initial prototype for each class, and the data term was calculated as the distance between each image pixel and the initial class made by k-means. The smoothness term was calculated by the same method described in Section 2.3.3, with the small change of selecting the fixed value that gave the best results. For the DP method, the first step is similar to the method proposed in this paper (we applied boundary detection using dynamic programming to detect the RPE layer boundary and eliminated the pixels above this boundary by making them equal to zero). Then we eliminated a region beneath the RPE (7 pixels below) and applied DP to search the image a second time. The results were based on the whitest route available after elimination of the RPE layer.
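The signed and unsigned border positioning errors used throughout this section can be computed per boundary as column-wise differences between the automatic and the manual border (a hypothetical helper; the names are our own):

```python
import numpy as np

def border_errors(auto_border, manual_border):
    """Mean signed and unsigned border positioning errors, in pixels.

    auto_border, manual_border : per-column (per A-scan) row positions
    of the same boundary from the algorithm and from the gold standard.
    """
    diff = np.asarray(auto_border, float) - np.asarray(manual_border, float)
    return diff.mean(), np.abs(diff).mean()   # (signed, unsigned)
```

The signed error reveals a systematic bias toward one side of the manual trace, while the unsigned error measures the overall deviation regardless of direction.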
According to Tables 1 and 2, the signed border positioning errors were 0.44 ± 1.18 pixels for BM extraction and 5.77 ± 2.77 pixels for choroid segmentation, and the unsigned border positioning errors were 2.48 ± 0.32 pixels for BM extraction and 9.79 ± 3.29 pixels for choroid segmentation, respectively. The errors between the proposed algorithm and the reference standard were similar to those computed between the observers. For example, the overall interobserver error was 2.64 ± 0.98 pixels and 8.78 ± 2.38 pixels for the BM and CSI, respectively, which is comparable to the results of the algorithm. The border positioning errors of the proposed method show a significant improvement over the other algorithms compared in both tables.
To show the statistically significant improvement of the proposed method over the compared algorithms, Table 3 shows the obtained P values, which confirm that our results represent a significant improvement. For instance, the algorithm’s overall P value against k-means and graph cut was less than 0.001, and against dynamic programming it was less than 0.01.
The pixel resolution of our datasets in the axial direction was 3.9 μm/pixel. Therefore, the mean signed positioning errors for localization of the BM and the choroid are 1.71 and 22.50 μm, respectively. In accordance with repeatability measurements of choroidal thickness in EDI-OCT images [12], it can be concluded that the positioning error has acceptable accuracy. Figure 6 demonstrates two samples showing the performance of the proposed method. Furthermore, Figure 7 shows the results of segmentation using the proposed algorithm in comparison with the other methods.
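The conversion behind these figures is a simple scaling of the signed pixel errors by the axial resolution (the reported values differ from the exact products only by rounding):

```python
AXIAL_RES_UM_PER_PX = 3.9      # axial resolution of the Spectralis data

def px_to_um(error_px):
    """Convert a border positioning error from pixels to micrometers."""
    return error_px * AXIAL_RES_UM_PER_PX

# Signed errors from Tables 1 and 2 in micrometers:
# px_to_um(0.44) for the BM, px_to_um(5.77) for the CSI
```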
The running time of the proposed algorithm is around 13 seconds when using the saved parameters of the constructed model; otherwise, if the computation includes the model construction for one image, it takes around 21 seconds on a PC with Microsoft Windows XP x32 edition, an Intel Core 2 Duo CPU at 3.00 GHz, and 4 GB RAM.
4. Conclusion
In this paper, a new method for choroid segmentation in EDI-OCT is introduced. In the first step, the RPE boundary was segmented using the DP method, and the area above the RPE was eliminated from the next steps. Then, the wavelet descriptors were calculated for each pixel of the training images, and, assuming that the pdf of these descriptors is normal, their parameters were calculated to construct a model.
When a new test image is considered for segmentation, the wavelet descriptors are calculated for each pixel, and the probability of the pixel belonging to each class of the learned model is obtained. This probability is then used in the construction of the graph cut segmentation.
The main limitation of this method is the need for manual segmentation to construct the model. Although only a few training images are sufficient to produce good results, the manual labeling may be troublesome, and replacing this step with an automatic method can be studied in future work. For example, in this work we chose several images as known data and extracted their mixture model parameters for use with all new data in the database. For each database, arbitrary sample data can be used to construct this mixture model (with known parameters), which is then used for all data in the database. Alternatively, a fully automatic version of this method can be considered that does not use a predefined mixture model and instead runs the EM algorithm for each image.
The new method has better accuracy than previous methods, and the algorithm is tested on a larger dataset than older algorithms. The method is also fast, and the implementation is relatively simple in comparison to more complicated algorithms.
As future work, the proposed method should be tested on EDI images taken with other modalities to prove its robustness to the imaging technique. Furthermore, a 3D choroidal thickness map can be constructed using a 3D OCT dataset, which can assist the ophthalmologist in the diagnosis of choroidal diseases. We would also like to work on segmentation of the blood vessels in the choroidal layer and produce a 3D vascular structure to give more information about the distribution of the vessels.
Conflict of Interests
The authors declare that there is no conflict of interests, including any financial or personal relationships with other people or organizations that could inappropriately influence (bias) their work.
References
 D. Huang, E. A. Swanson, C. P. Lin et al., “Optical coherence tomography,” Science, vol. 254, no. 5035, pp. 1178–1181, 1991.
 D. Huang, E. A. Swanson, C. P. Lin et al., Optical Coherence Tomography, Massachusetts Institute of Technology, Whitaker College of Health Sciences and Technology, 1993.
 G. A. Cioffi, E. Granstam, and A. Alm, “Ocular circulation,” in Adler's Physiology of the Eye, Mosby, St. Louis, Mo, USA, 2003.
 G. Barteselli, J. Chhablani, S. El-Emam et al., “Choroidal volume variations with age, axial length, and sex in healthy subjects: a three-dimensional analysis,” Ophthalmology, vol. 119, no. 12, pp. 2572–2578, 2012.
 S. E. Chung, S. W. Kang, J. H. Lee, and Y. T. Kim, “Choroidal thickness in polypoidal choroidal vasculopathy and exudative age-related macular degeneration,” Ophthalmology, vol. 118, no. 5, pp. 840–845, 2011.
 P. Jirarattanasopa, S. Ooto, A. Tsujikawa et al., “Assessment of macular choroidal thickness by optical coherence tomography and angiographic changes in central serous chorioretinopathy,” Ophthalmology, vol. 119, no. 8, pp. 1666–1678, 2012.
 J. R. Ehrlich, J. Peterson, G. Parlitsis, K. Y. Kay, S. Kiss, and N. M. Radcliffe, “Peripapillary choroidal thickness in glaucoma measured with optical coherence tomography,” Experimental Eye Research, vol. 92, no. 3, pp. 189–194, 2011.
 J. Yeoh, W. Rahman, F. Chen et al., “Choroidal imaging in inherited retinal disease using the technique of enhanced depth imaging optical coherence tomography,” Graefe's Archive for Clinical and Experimental Ophthalmology, vol. 248, no. 12, pp. 1719–1728, 2010.
 R. F. Spaide, H. Koizumi, and M. C. Pozonni, “Enhanced depth imaging spectral-domain optical coherence tomography,” The American Journal of Ophthalmology, vol. 146, no. 4, pp. 496–500, 2008.
 R. F. Spaide, “Enhanced depth imaging optical coherence tomography of retinal pigment epithelial detachment in age-related macular degeneration,” The American Journal of Ophthalmology, vol. 147, no. 4, pp. 644–652, 2009.
 B. Považay, B. Hermann, A. Unterhuber et al., “Three-dimensional optical coherence tomography at 1050 nm versus 800 nm in retinal pathologies: enhanced performance and choroidal penetration in cataract patients,” Journal of Biomedical Optics, vol. 12, no. 4, Article ID 041211, 2007.
 W. Rahman, F. K. Chen, J. Yeoh, P. Patel, A. Tufail, and L. Da Cruz, “Repeatability of manual subfoveal choroidal thickness measurements in healthy subjects using the technique of enhanced depth imaging optical coherence tomography,” Investigative Ophthalmology and Visual Science, vol. 52, no. 5, pp. 2267–2271, 2011.
 T. Fujiwara, Y. Imamura, R. Margolis, J. S. Slakter, and R. F. Spaide, “Enhanced depth imaging optical coherence tomography of the choroid in highly myopic eyes,” The American Journal of Ophthalmology, vol. 148, no. 3, pp. 445–450, 2009.
 R. L. Wong, P. Zhao, and W. W. Lai, “Choroidal thickness in relation to hypercholesterolemia on enhanced depth imaging optical coherence tomography,” Retina, vol. 33, no. 2, pp. 423–428, 2013.
 V. Kajić, M. Esmaeelpour, B. Považay, D. Marshall, P. L. Rosin, and W. Drexler, “Automated choroidal segmentation of 1060 nm OCT in healthy and pathologic eyes using a statistical model,” Biomedical Optics Express, vol. 3, no. 1, pp. 86–103, 2012.
 M. Esmaeelpour, S. Brunner, S. Ansari-Shahrezaei et al., “Choroidal thinning in diabetes type 1 detected by 3-dimensional 1060 nm optical coherence tomography,” Investigative Ophthalmology and Visual Science, vol. 53, no. 11, pp. 6803–6809, 2012.
 V. Kajić, M. Esmaeelpour, C. Glittenberg et al., “Automated three-dimensional choroidal vessel segmentation of 3D, 1060 nm OCT retinal data,” Biomedical Optics Express, vol. 4, no. 1, pp. 134–150, 2013.
 J. Tian, P. Marziliano, M. Baskaran, T. A. Tun, and T. Aung, “Automatic measurements of choroidal thickness in EDI-OCT images,” in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '12), pp. 5360–5363, San Diego, Calif, USA, September 2012.
 J. Tian, P. Marziliano, M. Baskaran, T. A. Tun, and T. Aung, “Automatic segmentation of the choroid in enhanced depth imaging optical coherence tomography images,” Biomedical Optics Express, vol. 4, no. 3, pp. 397–411, 2013.
 H. Lu, N. Boonarpha, M. T. Kwong, and Y. Zheng, “Automated segmentation of the choroid in retinal optical coherence tomography images,” in Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '13), pp. 5869–5872, Osaka, Japan, July 2013.
 M. K. Garvin, M. D. Abràmoff, R. Kardon, S. R. Russell, X. Wu, and M. Sonka, “Intraretinal layer segmentation of macular optical coherence tomography images using optimal 3-D graph search,” IEEE Transactions on Medical Imaging, vol. 27, no. 10, pp. 1495–1505, 2008.
 http://www.heidelbergengineering.com/us/wpcontent/uploads/hra2aquiretheperfectimage.pdf.
 R. Bellman, “The theory of dynamic programming,” Bulletin of the American Mathematical Society, vol. 60, pp. 503–515, 1954.
 M. Sonka, V. Hlavac, and R. Boyle, Image Processing, Analysis, and Machine Vision, Brooks/Cole, Albany, NY, USA, 1999.
 S. Arivazhagan and L. Ganesan, “Texture classification using wavelet transform,” Pattern Recognition Letters, vol. 24, no. 9-10, pp. 1513–1521, 2003.
 G. van de Wouwer, P. Scheunders, and D. van Dyck, “Statistical texture characterization from discrete wavelet representations,” IEEE Transactions on Image Processing, vol. 8, no. 4, pp. 592–598, 1999.
 C. E. Rasmussen, “The infinite Gaussian mixture model,” Advances in Neural Information Processing Systems, vol. 12, pp. 554–560, 2000.
 D. Greig, B. Porteous, and A. H. Seheult, “Exact maximum a posteriori estimation for binary images,” Journal of the Royal Statistical Society B, vol. 51, no. 2, pp. 271–279, 1989.
 Y. Boykov and G. Funka-Lea, “Graph cuts and efficient N-D image segmentation,” International Journal of Computer Vision, vol. 70, no. 2, pp. 109–131, 2006.
 Y. Boykov and V. Kolmogorov, “Computing geodesics and minimal surfaces via graph cuts,” in Proceedings of the 9th IEEE International Conference on Computer Vision, pp. 26–33, October 2003.
 S. Geman and D. Geman, “Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 6, no. 6, pp. 721–741, 1984.
 A. V. Goldberg and R. E. Tarjan, “A new approach to the maximum-flow problem,” Journal of the ACM, vol. 35, no. 4, pp. 921–940, 1988.
 L. R. Ford and D. R. Fulkerson, “Maximal flow through a network,” Canadian Journal of Mathematics, vol. 8, pp. 399–404, 1956.
Copyright
Copyright © 2014 Hajar Danesh et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.