Journal of Healthcare Engineering
Volume 2019, Article ID 2745183, 10 pages
Research Article

Automatic Optic Disc Segmentation Based on Modified Local Image Fitting Model with Shape Prior Information

1College of Information Science and Engineering, Northeastern University, Shenyang, Liaoning 110819, China
2Faculty of Robot Science and Engineering, Northeastern University, Shenyang, Liaoning 110819, China

Correspondence should be addressed to Xiaosheng Yu; yuxiaosheng7@163.com and Chengdong Wu; wuchengdong_neu@163.com

Received 29 June 2018; Revised 18 October 2018; Accepted 15 November 2018; Published 14 March 2019

Academic Editor: Cesare Valenti

Copyright © 2019 Yuan Gao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Accurate optic disc (OD) detection is an essential step in retinal disease diagnosis. In this paper, an approach for segmenting the OD boundary without manual intervention, named full-automatic double boundary extraction, is designed. It has two main advantages. (1) Since the performance and the computational cost produced by iterations of contour evolution in active contour model- (ACM-) based approaches greatly depend on the initialization, this paper proposes an effective and adaptive initial level set contour extraction approach using saliency detection and threshold techniques. (2) To handle the unreliable intensity information in abnormal retinal images caused by diseases, a modified local image fitting (LIF) approach is presented by incorporating shape prior information into LIF. We test the effectiveness of the proposed approach on the publicly available DIARETDB0 database. Experimental results demonstrate that our approach outperforms well-known approaches in terms of average overlapping ratio and accuracy rate.

1. Introduction

The optic disc (OD) is a bright, yellowish, approximately circular or oval-shaped object in retinal images [1], as shown in Figure 1.

Figure 1: Major structures of the optic disc. Red line: the optic disc boundary.

Accurate OD localization and segmentation play an important role in retinal image analysis and eye disease diagnosis. For instance, localization of the OD is a crucial step for fovea detection, vessel tracking, measurement, and automated diabetic retinopathy (DR) screening [2]. Meanwhile, segmentation of the OD can be used for diagnosing other diseases including glaucoma, papilledema, hypertensive retinopathy, and neovascularization of the disc (NVD) [3, 4]. However, in many real applications, OD segmentation faces challenging problems due to the complex OD appearance caused by anomalies such as myelinated nerve fibers, peripapillary atrophy (PPA), covering blood vessels, and poor image quality. Hence, many scholars have proposed a series of approaches to improve the precision of OD boundary extraction. These approaches can be divided into four categories: classification-based [5–9], template-based matching [10–17], morphology-based [18–20], and active contour model- (ACM-) based approaches [15, 21–24].

Plenty of classification-based OD boundary extraction methods have been presented by Cheng et al. [5], Dutta et al. [6], Tan et al. [7], and Zhou et al. [8, 9]. They utilize pixel-level or superpixel-level features extracted from retinal fundus images to segment the OD. However, these approaches are easily influenced by sample size: the OD segmentation results exhibit a large bias when only a small amount of training data is available. Besides, they are time-consuming when dealing with a large amount of training data.

Template-based matching methods use the shape prior information of the OD, i.e., its circular or elliptical shape, to match the edge maps extracted from retinal fundus images [10–17]. However, these methods often fail to detect ODs with varied shapes.

Some morphology techniques are used to extract the OD boundary, e.g., by Reza et al. [18] and Welfer et al. [19]. In these approaches, the shape and brightness of the OD are modeled with morphological techniques. Nevertheless, their main disadvantage is that bright lesions can affect their performance.

Srivastava et al. [20] applied a deep neural network composed of (unsupervised) stacked autoencoders followed by a supervised layer to distinguish the OD in retinal fundus images. However, it cannot deal well with cases where the PPA is very similar to the OD.

Compared with the aforementioned approaches, ACMs can obtain excellent OD segmentation results owing to the combination of profound mathematical properties and prior knowledge of the OD. Hence, ACM-based approaches have become the most promising technique for detecting the OD boundary [25]. Lee and Brady [21] first proposed a gradient vector flow- (GVF-) based active contour model for extracting the optic disc boundary with a fixed-size initial contour, followed by reducing the effect produced by high gradients at vessel locations. Mendels [22] presented a novel active contour approach using a gradient vector-flow-driven contour, initialized manually, to determine the OD boundary after preprocessing the image based on local minima detection and morphological filtering. A modified version of the conventional level set method proposed by Wong et al. [15] obtains the OD boundary with a constant-scale initial contour from the red channel; the contour is subsequently smoothed by strictly fitting an ellipse. Yu et al. [23] applied a fast hybrid level set model in which the deformable contour is driven by the local edge vector and region information, converging to the true optic disc boundary from a fixed-size initial contour determined by experience. A variational level set deformable model designed by Esmaeili et al. [24] has better convergence properties and computational efficiency than other active contour segmentation models when extracting the OD boundary from an empirically estimated initial contour around the detected OD center. These ACM-based methods can accurately segment ODs with strong boundaries, but they are influenced by intensity inhomogeneities and blood vessel occlusion and are highly sensitive to interferences around the boundary, especially bright lesions adjacent to the OD boundary, which reduce their performance.

As seen from the above OD detection methods, although the existing ACM-based approaches [15, 21–24] can achieve better performance than classification-based [5–9], template-based matching [10–17], and morphology-based approaches [18–20], most ACMs evolve the contour from an imprecise initial contour that is labeled by hand or set to a fixed size. This not only reduces the performance of the ACM but also generates expensive computational cost. Besides, these ACM-based methods are misguided by unreliable intensity information in extreme situations in abnormal retinal images caused by diseases, e.g., blurry OD boundaries, bright peripapillary atrophy interference, and thick blood vessel coverage. They also need to remedy information lost through image preprocessing, which varies with the segmentation method, losing key information and requiring complex operations. To address these issues, this paper proposes a novel approach that combines the local image fitting energy and shape prior information to extract the OD boundary. The main contributions are as follows: (1) an automatic and robust adaptive initial level set contour extraction method combining saliency detection and threshold techniques is designed to achieve optimized contour evolution; (2) a novel ACM-based approach named local image fitting model with oval-shaped constraint (LIFO) is presented, which integrates the model with an oval-shaped constraint into a united framework, remedying the deficiency of considering only intensity information.

2. Methods

2.1. Optic Disc Localization

In this paper, we use our previous work [26] to locate the OD. In [26], a series of OD candidates is first extracted using morphological opening by reconstruction. Then, a set of features is used to distinguish the true optic disc from the nonoptic disc candidates (for more details, refer to [26]).

2.2. Optic Disc Segmentation
2.2.1. Rough Boundary Extraction of the OD

Based on the cropped region of interest around the optic disc, we can further extract the optic disc boundary. Since contour initialization is the basic step of the proposed active contour model, we propose a novel and robust contour initialization approach by combining saliency detection and threshold techniques. The details are as follows.

Since the optic disc region is usually of a brighter pallor than the surrounding retinal areas, it can be regarded as a salient object in retinal fundus images. Inspired by the saliency detection technique, which aims at finding the most important part of an image, we adopt a cellular (i.e., superpixel) automata-based saliency detection approach [27] that takes both global color and spatial distance matrices into consideration for contour initialization. First, the cellular automata-based saliency detection approach [27] is applied to the cropped image (Figure 2(a)); in the obtained saliency map, the output saliency value of each superpixel is continuous between 0 and 1, as shown in Figure 2(b). Then, a mean filter, found to be a good choice in [5], is applied to the saliency map to obtain smoothed map values, as shown in Figure 2(c). Next, the smoothed map values are thresholded to acquire binary decisions for all pixels. In our experiment, we obtain the threshold by Otsu's method and assign 1 to the optic disc and 0 to the nonoptic disc; pixels with value 1 are regarded as the object (optic disc) and those with 0 as background. Finally, the largest connected object (i.e., the connected region with the largest number of pixels) is obtained through morphological operations, as shown in Figure 2(d), and its boundary is used as the raw estimation of the optic disc, i.e., the optic disc initial contour in green, as shown in Figure 2(e).
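The initialization pipeline above (saliency map, mean filter, Otsu threshold, largest connected object) can be sketched as follows. This is a minimal illustration assuming the saliency map of [27] has already been computed; the function names and the mean-filter window size are hypothetical:

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(img):
    """Simple Otsu threshold for a map with values in [0, 1]."""
    hist, edges = np.histogram(img.ravel(), bins=256, range=(0.0, 1.0))
    prob = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = 0.0, -1.0
    for k in range(1, 256):
        w0, w1 = prob[:k].sum(), prob[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (prob[:k] * centers[:k]).sum() / w0
        mu1 = (prob[k:] * centers[k:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, centers[k]
    return best_t

def initial_od_contour_mask(saliency_map, mean_size=5):
    """Smooth the saliency map, binarize with Otsu's threshold,
    and keep the largest connected object as the initial OD region."""
    smoothed = ndimage.uniform_filter(saliency_map, size=mean_size)
    binary = smoothed > otsu_threshold(smoothed)
    labels, n = ndimage.label(binary)
    if n == 0:
        return binary
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)
```

The boundary of the returned mask would then serve as the initial level set contour.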

Figure 2: Contour initialization. (a) Cropped ROI around optic disc; (b) saliency detection result; (c) smoothed image of (b); (d) the largest connected object; (e) optic disc initial contour in green.
2.2.2. Accurate Boundary Curve Extraction

Considering that intensity inhomogeneity is a frequently occurring phenomenon in the optic disc region [28], the optic disc boundary extracted by general segmentation methods is usually inaccurate owing to intensity inhomogeneity caused by imperfections of imaging devices or illumination variations. To deal with this problem, the local image fitting (LIF) model presented by Zhang et al. [28] is introduced; it defines a local image fitting energy in a variational formulation that incorporates local intensity information into the active contour model. The LIF model can be described as follows:

E^LIF(φ) = (1/2) ∫_Ω |I(x) − I^LFI(x)|² dx, (1)

where

I^LFI(x) = m₁H_ε(φ) + m₂(1 − H_ε(φ)), (2)

m₁(x) = mean(I(y) : y ∈ {φ > 0} ∩ W_k(x)), (3)

m₂(x) = mean(I(y) : y ∈ {φ < 0} ∩ W_k(x)), (4)

where I denotes an input image; I^LFI is the local fitted image (LFI) formulation; m₁ and m₂ are, respectively, the local means near the point x described by equations (3) and (4); x is the variable expressing the global location of a pixel; Ω is the image domain; φ is a level set function; H_ε is the smoothed Heaviside function; and W_k(x) is a rectangular window function defined in [28].
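As a concrete illustration of the LIF quantities, the local means m₁ and m₂ and the local fitted image can be computed with window averages. The sketch below assumes the convention that φ > 0 marks the inside of the contour and uses a box filter in place of the rectangular window W_k of [28]; the function names are hypothetical:

```python
import numpy as np
from scipy import ndimage

def heaviside(phi, eps=1.0):
    """Smoothed Heaviside function H_eps."""
    return 0.5 * (1 + (2 / np.pi) * np.arctan(phi / eps))

def local_fitted_image(I, phi, window=7, eps=1.0):
    """Local fitted image I_LFI = m1*H(phi) + m2*(1 - H(phi)),
    where m1, m2 are window-local means inside/outside the contour."""
    H = heaviside(phi, eps)
    blur = lambda f: ndimage.uniform_filter(f, size=window)
    # local weighted means, guarded against (nearly) empty windows
    m1 = blur(I * H) / (blur(H) + 1e-8)
    m2 = blur(I * (1 - H)) / (blur(1 - H) + 1e-8)
    return m1 * H + m2 * (1 - H), m1, m2
```

On a piecewise-constant image with the contour placed on the true boundary, the fitted image reproduces the input closely, which is the property the LIF energy minimizes.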

Considering the fundamental anatomical structure of the OD, i.e., a bright, approximately circular or elliptic region, we can regard this anatomical structure as a shape prior constraint and take it into our model. In this paper, we incorporate both a smoothing term and an oval-disc prior constraint into the LIF model, and the novel model named local image fitting model with oval-shaped constraint (LIFO) is proposed for OD boundary extraction. The model can remedy the insufficiency of LIF; for example, the LIF model fails to extract the OD boundary in the presence of blood vessels, as shown in Figure 3(b). As seen from the result in Figure 3(c), the novel model overcomes the influence of blood vessels and intensity inhomogeneities, achieving a precise extraction of the OD boundary in Figure 3(a).

Figure 3: The result of OD boundary extraction obtained by LIF model and LIFO model, respectively; the ground truth is marked with a green line.

As seen from the above results, it is necessary to introduce a smoothing term and shape prior information into the LIF model in order to acquire the whole boundary of the OD. They can be formulated as follows:

E_shape(x₀, y₀, θ, a, b) = ∫_Ω δ_ε(φ)(φ − ψ)² dx, (5)

L(φ) = ∫_Ω δ_ε(φ)|∇φ| dx, (6)

x′ = (x − x₀)cos θ + (y − y₀)sin θ,  y′ = −(x − x₀)sin θ + (y − y₀)cos θ, (7)

ψ(x, y) = 1 − (x′/a)² − (y′/b)², (8)

where ∇ is the gradient operator; δ_ε is the smooth Dirac function; x and y are, respectively, the x-coordinate and y-coordinate of the global pixel location; (x₀, y₀) are the oval center coordinates; θ is the angle of rotation; a denotes the semimajor axis length; and b is the semiminor axis length. ψ is the level set based on the ellipse shape. All of these parameters change constantly with the curve evolution. In fact, the purpose of minimizing equation (5) is to acquire the level set ψ that is most similar to φ. The novel model named LIFO is obtained by combining equations (1) and (5) into a unified framework:

E^LIFO(φ) = E^LIF(φ) + νL(φ) + βE_shape, (10)

where β is the constraint coefficient for the ellipse, which decides the weight of the elliptic constraint, and ν is the coefficient of the weighted length of the zero level curve of φ.
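For illustration, an ellipse-based level set ψ of the kind described above can be constructed on a pixel grid as follows. This sketch assumes one common convention (ψ positive inside the ellipse, zero on its boundary, negative outside); the function name is hypothetical:

```python
import numpy as np

def ellipse_level_set(shape, x0, y0, theta, a, b):
    """Level set psi for an ellipse with center (x0, y0), rotation
    theta, semimajor axis a, and semiminor axis b:
    psi > 0 inside, psi = 0 on the boundary, psi < 0 outside."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
    # rotate coordinates into the ellipse frame
    xr = (xx - x0) * np.cos(theta) + (yy - y0) * np.sin(theta)
    yr = -(xx - x0) * np.sin(theta) + (yy - y0) * np.cos(theta)
    return 1.0 - (xr / a) ** 2 - (yr / b) ** 2
```

During curve evolution, this ψ is rebuilt whenever the ellipse parameters are updated.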

There are three terms in the LIFO model (equation (10)), and each has a unique function in dealing with different problems in OD boundary extraction. The first term handles the commonly occurring phenomenon that optic disc regions are influenced by intensity inhomogeneity. The second term is the smoothing term, which handles drastic protrusions and depressions of the evolving contour by penalizing the arc length of the zero level contour of φ. The third term is the oval-shaped constraint term, which ensures that the evolving contour conforms to the physical anatomical structure of the optic disc, reducing the impact of complex environments. The LIFO model can be solved by the standard gradient descent method [28]. After a series of calculations, the solution given in the Appendix is obtained.

The flow diagram for segmentation of the OD is as follows:
(1) Initialization: set the model parameters and the initial ellipse parameters (the initial semiaxis lengths are, respectively, derived from the width and the height of the cropped region of the original image); initialize the level set functions φ and ψ; let n denote the iteration count.
(2) Update m₁ and m₂, respectively, using equations (3) and (4).
(3) Update I^LFI using equation (2).
(4) Using the standard gradient descent method, evolve the parameters of the elliptical level set of the OD, including x₀, y₀, θ, a, b, according to equations (A.1)–(A.5); if x₀, y₀, θ, a, b satisfy the stationary condition, then stop; otherwise, repeat Step 4.
(5) Update ψ using equation (8).
(6) Evolve the level set function φ according to equation (A.6). If φ satisfies the stationary condition, stop; otherwise, set n = n + 1 and return to Step 2.
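Step 4 of the flow above, evolving the ellipse parameters by gradient descent on the shape-fitting energy, can be illustrated with a small numerical sketch. Central finite differences stand in for the analytic derivatives of equations (A.1)–(A.5), and the rotation angle θ is omitted for brevity; all names and step sizes are hypothetical:

```python
import numpy as np

def ellipse_phi(shape, p):
    """Axis-aligned ellipse level set for parameters p = (x0, y0, a, b)."""
    x0, y0, a, b = p
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
    return 1.0 - ((xx - x0) / a) ** 2 - ((yy - y0) / b) ** 2

def shape_energy(phi, p):
    """Shape-fitting energy: mean squared difference between the
    current level set phi and the ellipse level set psi(p)."""
    return ((phi - ellipse_phi(phi.shape, p)) ** 2).mean()

def fit_ellipse_params(phi, p0, lr=0.3, iters=300, h=1e-3):
    """Gradient descent on the ellipse parameters using central
    finite differences (a stand-in for the analytic updates)."""
    p = np.array(p0, dtype=float)
    for _ in range(iters):
        grad = np.zeros_like(p)
        for i in range(len(p)):
            d = np.zeros_like(p)
            d[i] = h
            grad[i] = (shape_energy(phi, p + d) - shape_energy(phi, p - d)) / (2 * h)
        p -= lr * grad
    return p
```

Starting from a slightly wrong center, the descent pulls the ellipse parameters toward the configuration that best matches the fixed level set, mirroring how ψ tracks φ in the full algorithm.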

3. Experimental Results

In this section, the public Standard Diabetic Retinopathy Database "Calibration Level 0" (DIARETDB0) [29] and the public retinal image dataset DRISHTI-GS [30] are used to verify the availability of our method. Both databases are publicly available for download. The DIARETDB0 database is made up of 130 RGB color fundus images, of which 20 are normal and 110 are abnormal (illness), captured at a fixed resolution with a 50° field of view. The ground truth is collected from two ophthalmologists, and the final ground truth is acquired by averaging the boundaries extracted by the two ophthalmologists. The DRISHTI-GS dataset has 101 images in total, of which 31 are normal and 70 are abnormal (illness). These images are captured with a 30° field of view at a fixed resolution. For each image, the OD is marked by four glaucoma experts. To compensate for interobserver marking variations, we also derive a majority-voting manual marking, indicating agreement among at least three experts [30], as the final ground truth for qualitatively evaluating the proposed method.
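The majority-vote ground truth described above (a pixel is kept in the final marking when at least three of the four experts agree) can be computed directly from binary expert masks; a minimal sketch with hypothetical names:

```python
import numpy as np

def majority_vote_mask(expert_masks):
    """Final ground truth by majority voting: a pixel is kept when
    more than half of the experts marked it (>= 3 of 4 experts)."""
    stack = np.stack([np.asarray(m, dtype=bool) for m in expert_masks])
    return stack.sum(axis=0) > len(expert_masks) / 2
```

With four experts, the strict-majority condition `> 2` is exactly the "at least three experts" agreement rule.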

As seen from Figure 4, comparing contour evolution using the adaptive initial contour against different initial circular contours based on the fundamental anatomical structure of the OD, the proposed approach has several advantages. First, most ACM-based approaches are sensitive to the initialization of the contour [32]; the proposed initial contour can better guide the motion of the active contour since it is close to the ground truth of the OD boundary. Second, the adopted initial contour, which is near the OD boundary, reduces the iterations of contour evolution and therefore the computational cost [33, 34]. Furthermore, compared with the original LIF [28], our approach is more robust to the influence of blood vessels because the oval-shaped constraint is incorporated into our model.

Figure 4: Comparison of different segmentation models with different initial contours and the Hough transform method. The rows, respectively, show the comparison results based on the adaptive initial contour and on manual initial circular contours drawn outside the OD, inside the OD, and intersecting the OD. The ground truth is marked with a green line. (a) Initial level set contour. (b) Presented LIFO. (c) LIF [28]. (d) Hough transform [31].

The following criterion is adopted to further assess the availability of the LIFO model with different initial contours: a segmentation is considered successful when the overlapping ratio T, computed from the overlapping area between the true optic disc region in the ground truth and the detected optic disc region, is no less than a fixed threshold, in terms of [11]. The accuracy rate is the percentage of successfully segmented images out of the total number of images. The overlapping ratio T is defined as

T = Area(G ∩ S) / Area(G ∪ S),

where G and S are, respectively, the area of the ground truth and the area extracted by the methods. Table 1 shows the accuracy rate acquired with different initial contours.
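The overlapping ratio T can be computed from binary masks as the intersection-over-union of the ground-truth region G and the segmented region S, which is one common reading of the overlap criterion; a minimal sketch (function name hypothetical):

```python
import numpy as np

def overlap_ratio(gt_mask, seg_mask):
    """Overlapping ratio T = Area(G ∩ S) / Area(G ∪ S)."""
    gt = np.asarray(gt_mask, dtype=bool)
    seg = np.asarray(seg_mask, dtype=bool)
    union = np.logical_or(gt, seg).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return np.logical_and(gt, seg).sum() / union
```

An image then counts toward the accuracy rate when its T value reaches the chosen threshold.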

Table 1: Performance measurement based on overlapping areas between different initial contours on the DIARETDB0 database and the DRISHTI-GS database.

As seen from Table 1, the proposed method achieves the best segmentation result with the adaptive initial contour; the accuracy rate is, respectively, 96.30% and 96.10% on the DIARETDB0 database and the DRISHTI-GS database.

To better verify the effectiveness of the proposed method, we compare it with several related state-of-the-art segmentation approaches in the medical image processing area: the Hough transform method [31], the modified radial symmetry method (MRS) [35], the GVF method [36], the Chan-Vese (CV) ACM [37], the LIF ACM [28], and the LSACM [38]. The segmentation results obtained by these methods on retinal images are given in Figure 5, in which the green line denotes the ground truth obtained from the experts' marking and the red line represents the segmentation results extracted by the different approaches. Examples of ODs having peripapillary atrophy are shown in the first three columns, and an OD with irregular shape and high gradient variations is shown in the fourth column. The Hough transform and the GVF model fail to extract the whole OD boundary because they are sensitive to variations of the local gradient. Although MRS achieves a more accurate result than the Hough transform, it ignores that the OD is an approximately circular or elliptic region rather than a rigid circle. The CV model represents the image as a piecewise constant function, which fails to handle intensity inhomogeneity in retinal images and thereby yields unsatisfactory segmentation results. Although the LIF model deals with local gradient variations well compared to GVF and the Hough transform and reduces the influence of intensity inhomogeneity by considering local intensity information, it is severely influenced by blood vessels covering the OD surface. The LSACM can also handle intensity inhomogeneity and achieves a more integrated OD boundary than the LIF model because it models objects as Gaussian distributions with different means and variances; however, it is still defeated by blood vessels and PPA, yielding deficient segmentation results.
Compared with the aforementioned methods, our method performs better and captures the whole OD boundary, overcoming the influence caused by intensity inhomogeneity, PPA, and blood vessels. The fifth column shows a successful result segmented by the LIFO model in a blurry OD region with a smooth transition boundary. This is mainly because the prior shape information is a stronger cue than the intensity information in some regions. Therefore, combining the prior information and intensity information can obtain a smooth and precise OD boundary.

Figure 5: OD segmentation results: (a) original image with the ground truth; (b) adaptive initialized contour; (c) Hough transform results [31]; (d) MRS results [35]; (e) GVF model results [36]; (f) CV model results [37]; (g) LIF model results [28]; (h) LSACM model results [38]; (i) proposed LIFO model results. Green color indicates boundary marked by the expert and red color indicates achieved boundary by a method.

Table 2 shows the average overlapping ratio and accuracy rate acquired by the different models.

Table 2: Performance measurement based on overlapping areas between the proposed approach and other segmentation approaches on the DIARETDB0 database and DRISHTI-GS database.

As seen from Table 2, our method achieves better performance than the other methods, with an average overlapping ratio of 66.59%/65.61% and an accuracy rate of 96.30%/96.10% on DIARETDB0/DRISHTI-GS over retinal images including both normal and abnormal (illness) cases. On DIARETDB0, the average overlapping ratio for normal/abnormal images is 67.33%/66.25% and the accuracy rate is 98.40%/98.90%; on DRISHTI-GS, the corresponding average overlapping ratio is 65.53%/64.87% and the accuracy rate is 95.90%/94.90%.

Besides, we also use an important evaluation metric, the F-score (F), which is the harmonic mean of precision and recall between the boundary achieved by a method and the ground truth, to test the performance of the proposed model. The pixelwise precision and recall values are, respectively, defined as

precision = tp / (tp + fp),

recall = tp / (tp + fn),

where true positive (tp) is the number of pixels in the overlap between the ground truth and the area segmented by a method; false positive (fp) is the number of pixels classified only in the segmented area and excluded from the ground truth; and false negative (fn) is the number of pixels classified only in the ground truth and excluded from the segmented area. Then, the single performance measure, the F-score (F), is computed as

F = 2 × precision × recall / (precision + recall).

The value of the F-score always lies between 0 and 1 and is high when the performance of the method is good.
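The pixelwise precision, recall, and F-score defined above can be computed directly from the tp, fp, and fn counts of two binary masks; a minimal sketch with hypothetical names:

```python
import numpy as np

def f_score(gt_mask, seg_mask):
    """F = 2PR/(P + R), with pixelwise precision P = tp/(tp + fp)
    and recall R = tp/(tp + fn)."""
    gt = np.asarray(gt_mask, dtype=bool)
    seg = np.asarray(seg_mask, dtype=bool)
    tp = np.logical_and(gt, seg).sum()
    fp = np.logical_and(~gt, seg).sum()
    fn = np.logical_and(gt, ~seg).sum()
    if tp == 0:
        return 0.0  # no overlap at all
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return 2 * p * r / (p + r)
```

A perfect segmentation yields F = 1, and F decreases symmetrically as either spurious or missed pixels accumulate.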

Table 3 depicts the quantitative assessment of the segmentation results in terms of the F-score. The best and the worst values achieved by the proposed method correspond, respectively, to the best-case and worst-case optic disc segmentation results on the DIARETDB0 and the DRISHTI-GS. As seen from Table 3, it can be inferred that our method provides a significant improvement in the segmentation results compared to the other methods.

Table 3: Performance measurement based on F-score between the proposed approach and other segmentation approaches on the DIARETDB0 database and DRISHTI-GS database.

4. Conclusions

In this paper, we design a strategy to accurately segment the OD boundary without manual intervention. First, an automatic and robust adaptive initial level set contour extraction method consisting of saliency detection and threshold techniques is presented to drive the contour evolution. Then, to remedy the deficiency of considering only intensity while ignoring the prior information of the OD shape, a local image fitting model with oval-shaped constraint (LIFO) is presented to extract the whole and precise OD boundary. Compared with the original LIF model, which is based only on intensity information, the LIFO model uses both intensity and shape information, which has the following advantages. First, the original model is easily influenced by PPA, blood vessels, and noise because it considers only the intensity information; in contrast, the proposed model can overcome these issues by using both the intensity information and the shape prior information without any preprocessing. Second, the proposed model introduces shape prior information based on the physical anatomical structure of the optic disc, so it can extract the whole boundary of the optic disc, especially for optic discs with irregular shapes. The experimental results demonstrate the availability of the proposed method. Deep learning has recently attracted attention and achieves good performance when sufficient training samples are available; however, it is hard to collect enough data in the medical field, such as retinal fundus images, which greatly reduces model performance. That is the main reason why we did not employ deep learning techniques to segment the optic disc and optic cup. In the future, we will try deep learning approaches on larger databases.


Appendix

The LIFO model can be solved by the standard gradient descent method [28]. After a series of calculations, the solution is obtained as follows:

∂x₀/∂t = −∂E^LIFO/∂x₀, (A.1)

∂y₀/∂t = −∂E^LIFO/∂y₀, (A.2)

∂θ/∂t = −∂E^LIFO/∂θ, (A.3)

∂a/∂t = −∂E^LIFO/∂a, (A.4)

∂b/∂t = −∂E^LIFO/∂b, (A.5)

∂φ/∂t = −∂E^LIFO/∂φ, (A.6)

where x₀, y₀, θ, a, and b continually vary along with the changing image information and t is the time step of the experiment.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.


Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grant nos. 61701101, U1713216, 61803077, and 61603080, the National Key Robot Project 2017YFB1300900, and the Fundamental Research Fund for the Central Universities of China N172603001 and N172604004.


  1. K. Akita and H. Kuga, “A computer method of understanding ocular fundus images,” Pattern Recognition, vol. 15, no. 6, pp. 431–443, 1982. View at Publisher · View at Google Scholar · View at Scopus
  2. Y. Wang, G. Ji, P. Lin, and E. Trucco, “Retinal vessel segmentation using multiwavelet kernels and multiscale hierarchical decomposition,” Pattern Recognition, vol. 46, no. 8, pp. 2117–2133, 2013. View at Publisher · View at Google Scholar · View at Scopus
  3. J. Xu, O. Chutatape, E. Sung, C. Zheng, and P. C. T. Kuan, “Optic disk feature extraction via modified deformable model technique for glaucoma analysis,” Pattern Recognition, vol. 40, no. 7, pp. 2063–2076, 2007. View at Publisher · View at Google Scholar · View at Scopus
  4. B. Dashtbozorg, A. M. Mendonça, and A. Campilho, “Optic disc segmentation using the sliding band filter,” Computers in Biology and Medicine, vol. 56, pp. 1–12, 2015. View at Publisher · View at Google Scholar · View at Scopus
  5. J. Cheng, J. Liu, Y. Xu et al., “Superpixel classification based optic disc and optic cup segmentation for glaucoma screening,” IEEE Transactions on Medical Imaging, vol. 32, no. 6, pp. 1019–1032, 2013. View at Publisher · View at Google Scholar
  6. M. K. Dutta, A. K. Mourya, A. Singh, M. Parthasarathi, R. Burget, and K. Riha, “Glaucoma detection by segmenting the super pixels from fundus colour retinal images,” in Proceedings of 2014 International Conference on Medical Imaging, m-Health and Emerging Communication Systems (Med Com), pp. 86–90, Noida, India, 2014.
  7. N.-M. Tan, Y. Xu, W. B. Goh, and J. Liu, “Robust multi-scale superpixel classification for optic cup localization,” Computerized Medical Imaging and Graphics, vol. 40, pp. 182–193, 2015. View at Publisher · View at Google Scholar · View at Scopus
  8. W. Zhou, C. D. Wu, D. L. Chen, Z. Z. Wang, Y. G. Yi, and W. Y. Du, “Automatic microaneurysm detection using the sparse principal component analysis based unsupervised classification method,” IEEE Access, vol. 5, no. 1, pp. 2169–3536, 2017. View at Publisher · View at Google Scholar · View at Scopus
  9. W. Zhou, C. Wu, Y. Yi, and W. Du, “Automatic detection of exudates in digital color fundus images using superpixel multi-feature classification,” IEEE Access, vol. 5, no. 1, pp. 17077–17088, 2017. View at Publisher · View at Google Scholar · View at Scopus
  10. A. Pinz, S. Bernogger, P. Datlinger, and A. Kruger, “Mapping the human retina,” IEEE Transactions on Medical Imaging, vol. 17, no. 4, pp. 606–619, 1988. View at Publisher · View at Google Scholar · View at Scopus
  11. M. Lalonde, M. Beaulieu, and L. Gagnon, “Fast and robust optic disc detection using pyramidal decomposition and Hausdorff-based template matching,” IEEE Transactions on Medical Imaging, vol. 20, no. 11, pp. 1193–1200, 2001. View at Publisher · View at Google Scholar · View at Scopus
  12. S. Sekhar, W. Al-Nuaimy, and A. K. Nandi, “Automated localisation of retinal optic disk using Hough transform,” in Proceedings of 2008 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pp. 1577–1580, Paris, France, May 2008.
  13. A. Bhuiyan, R. Kawasaki, T. Y. Wong, and R. Kotagiri, “A new and efficient method for automatic optic disc detection using geometrical features,” in Proceedings of World Congress on Medical Physics and Biomedical Engineering, pp. 1131–1134, Munich, Germany, September 2009.
  14. A. Aquino, M. E. Gegúndez-Arias, and D. Marín, “Detecting the optic disc boundary in digital fundus images using morphological, edge detection, and feature extraction techniques,” IEEE Transactions on Medical Imaging, vol. 29, no. 11, pp. 1860–1869, 2010. View at Publisher · View at Google Scholar · View at Scopus
  15. D. Wong, J. Liu, J. Lim et al., “Level-set based automatic cup-to-disc ratio determination using retinal fundus images in ARGALI,” in Proceedings of 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, pp. 2266–2269, Vancouver, BC, Canada, August 2008.
  16. S. Morales, V. Naranjo, J. Angulo, and M. Alcaniz, “Automatic detection of optic disc based on PCA and mathematical morphology,” IEEE Transactions on Medical Imaging, vol. 32, no. 4, pp. 786–796, 2013. View at Publisher · View at Google Scholar · View at Scopus
  17. S. Roychowdhury, D. Koozekanani, S. Kuchinka, and K. Parhi, “Optic disc boundary and vessel origin segmentation of fundus images,” IEEE Journal of Biomedical and Health Informatics, vol. 20, no. 6, pp. 1562–1574, 2017. View at Publisher · View at Google Scholar · View at Scopus
  18. A. W. Reza, C. Eswaran, and S. Hati, “Automatic tracing of optic disc and exudates from color fundus images using fixed and variable thresholds,” Journal of Medical Systems, vol. 33, no. 1, pp. 73–80, 2008. View at Publisher · View at Google Scholar · View at Scopus
  19. D. Welfer, J. Scharcanski, and D. R. Marinho, “A morphologic two-stage approach for automated optic disk detection in color eye fundus images,” Pattern Recognition Letters, vol. 34, no. 5, pp. 476–485, 2013. View at Publisher · View at Google Scholar · View at Scopus
  20. R. Srivastava, J. Cheng, D. W. K. Wong, and J. Liu, “Using deep learning for robustness to parapapillary atrophy in optic disc segmentation,” in Proceedings of 2015 IEEE 12th International Symposium on Biomedical Imaging, pp. 768–771, Brooklyn, NY, USA, April 2015.
  21. S. Lee and M. Brady, Optic Disk Boundary Detection, Springer London, London, UK, 1991.
  22. F. Mendels, “Identification of the optic disk boundary in retinal images using active contours,” in Proceedings of Irish Machine Vision and Image Processing Conference, pp. 103–115, Dublin, Ireland, September 1999.
  23. H. Yu, E. S. Barriga, C. Agurto et al., “Fast localization and segmentation of optic disk in retinal images using directional matched filtering and level sets,” IEEE Transactions on Information Technology in Biomedicine, vol. 16, no. 4, pp. 644–657, 2012.
  24. M. Esmaeili, H. Rabbani, and A. M. Dehnavi, “Automatic optic disk boundary extraction by the use of curvelet transform and deformable variational level set model,” Pattern Recognition, vol. 45, no. 7, pp. 2832–2842, 2012.
  25. M. C. V. S. Mary, E. B. Rajsingh, J. K. K. Jacob, D. Anandhi, U. Amato, and S. E. Selvan, “An empirical study on optic disc segmentation using an active contour model,” Biomedical Signal Processing and Control, vol. 18, pp. 19–29, 2015.
  26. W. Zhou, C. Wu, Y. Gao, and X. Yu, “Automatic optic disc boundary extraction based on saliency object detection and modified local intensity clustering model in retinal images,” IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. E100-A, no. 9, pp. 2069–2072, 2017.
  27. Y. Qin, H. Lu, Y. Xu, and H. Wang, “Saliency detection via cellular automata,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 110–119, Boston, MA, USA, June 2015.
  28. K. Zhang, H. Song, and L. Zhang, “Active contours driven by local image fitting energy,” Pattern Recognition, vol. 43, no. 4, pp. 1199–1206, 2010.
  29. T. Kauppi, V. Kalesnykiene, J. K. Kamarainen et al., “DIARETDB0: evaluation database and methodology for diabetic retinopathy algorithms,” Technical Report, Lappeenranta University of Technology, Lappeenranta, Finland, 2008.
  30. J. Sivaswamy, S. R. Krishnadas, G. D. Joshi, M. Jain, and A. U. S. Tabish, “Drishti-GS: retinal image dataset for optic nerve head (ONH) segmentation,” in Proceedings of 2014 IEEE 11th International Symposium on Biomedical Imaging, pp. 53–56, Beijing, China, April-May 2014.
  31. R. Chrástek, M. Wolf, K. Donath, G. Michelson, and H. Niemann, “Optic disc segmentation in retinal images,” in Proceedings of Bildverarbeitung für die Medizin, pp. 263–266, Leipzig, Germany, March 2002.
  32. C. Li, R. Huang, Z. Ding, J. C. Gatenby, D. N. Metaxas, and J. C. Gore, “A level set method for image segmentation in the presence of intensity inhomogeneities with application to MRI,” IEEE Transactions on Image Processing, vol. 20, no. 7, pp. 2007–2016, 2011.
  33. L. Wang, C. Li, Q. Sun, D. Xia, and C.-Y. Kao, “Active contours driven by local and global intensity fitting energy with application to brain MR image segmentation,” Computerized Medical Imaging and Graphics, vol. 33, no. 7, pp. 520–531, 2009.
  34. C. Li, C. Y. Kao, J. C. Gore, and Z. Ding, “Implicit active contours driven by local binary fitting energy,” in Proceedings of 2007 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–7, Minneapolis, MN, USA, June 2007.
  35. G. Sciortino, E. Orlandi, C. Valenti, and D. Tegolo, “Wavelet analysis and neural network classifiers to detect mid-sagittal sections for nuchal translucency measurement,” Image Analysis & Stereology, vol. 35, no. 2, pp. 105–115, 2016.
  36. A. Osareh, M. Mirmehdi, B. Thomas, and R. Markham, “Colour morphology and snakes for optic disc localization,” in Proceedings of the 6th Medical Image Understanding and Analysis Conference, pp. 21–24, Portsmouth, UK, July 2002.
  37. G. D. Joshi, J. Sivaswamy, K. Karan, and R. Krishnadas, “Optic disk and cup boundary detection using regional information,” in Proceedings of IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pp. 948–951, Rotterdam, The Netherlands, April 2010.
  38. K. Zhang, L. Zhang, K.-M. Lam, and D. Zhang, “A level set approach to image segmentation with intensity inhomogeneity,” IEEE Transactions on Cybernetics, vol. 46, no. 2, pp. 546–557, 2016.