Computational and Mathematical Methods in Medicine
Volume 2015, Article ID 895267, 15 pages
http://dx.doi.org/10.1155/2015/895267
Research Article

Comparative Study of Retinal Vessel Segmentation Based on Global Thresholding Techniques

1School of Mathematics, Statistics & Computer Science, University of KwaZulu-Natal, Durban 4000, South Africa
2School of Engineering, University of KwaZulu-Natal, Durban 4000, South Africa

Received 8 September 2014; Accepted 13 November 2014

Academic Editor: Chuangyin Dang

Copyright © 2015 Temitope Mapayi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Due to noise from uneven contrast and illumination during the acquisition process of retinal fundus images, efficient preprocessing techniques are highly desirable for producing good retinal vessel segmentation results. This paper develops and compares the performance of different vessel segmentation techniques based on global thresholding, using phase congruency and contrast limited adaptive histogram equalization (CLAHE) for the preprocessing of the retinal images. The results obtained show that the combination of preprocessing technique, global thresholding, and postprocessing techniques must be carefully chosen to achieve a good segmentation performance.

1. Introduction

Diabetic retinopathy (DR) accounts for about five percent of the causes of blindness globally, representing almost five million blind people, as stated by the World Health Organization [1]. Early detection of DR is ensured through the regular examination of retinal images of diabetic patients, thus reducing the incidence of blindness. Automatic vessel segmentation has great potential to assist ophthalmologists in the early detection of DR [2].

There have been various works on the segmentation of vessels in retinal images. These works can be classified into two major categories. The first category is the unsupervised methods, comprising vessel tracking [3–5], matched filter responses [6–8], morphology-based techniques [9, 10], and locally adaptive thresholding [11]. The second category is the supervised methods, which require manually labeled images for training. This category includes the use of neural networks [12], Bayesian classifiers [13], the k-nearest neighbor classifier [14], and SVM classifiers [10, 15] for the classification of image pixels as either blood vessel or background tissue pixels. The method proposed in this paper belongs to the unsupervised category.

Chaudhuri et al. [6] implemented a matched filter by initially approximating the gray level intensity profiles of the cross sections of retinal vessels using a Gaussian-shaped curve. An Otsu thresholding technique was further applied to the matched filter response image to segment the retinal vessels. Hoover et al. [8] segmented retinal vessels by applying a threshold probing technique combining local vessel attributes with region-based attributes on the matched filter response (MFR) image. Compared to [6], where a basic thresholding of an MFR was used, the method proposed by [8] reduced the false positive rate by as much as 15 times. Zhang et al. [16], having identified that the general matched filter responds to both vessel edges and nonvessel edges, extended the general matched filter with the first-order derivative of the Gaussian properties of the retinal vessels. Martinez-Perez et al. [17] applied the combination of scale space analysis and region growing to segment the vasculature. The technique proposed in [17] was, however, unable to segment the thin vessels. Zana and Klein [18] implemented a vessel segmentation method based on the use of mathematical morphology. Although the result achieved in [18] was good, the vascular structures were not always connected to one another. Jiang and Mojon [11] implemented an adaptive local thresholding based on a verification-based multithreshold probing scheme. The technique proposed in [11] was, however, limited by some unconnected vascular structures and an inability to detect the thinner vessels.

Moment features were used by Marín et al. [19] for vessel segmentation. A 7D vector composed of gray level and moment invariants-based features was computed for pixel representation, while a neural network classifier was used for the vessel segmentation. Soares et al. [13] generated a feature vector computed from measurements at different scales of the two-dimensional (2D) Gabor wavelet transform at each pixel. A Bayesian classifier with Gaussian mixtures was further used to classify the resulting feature space as either vessel or nonvessel pixels. Staal et al. [14] implemented a ridge-based vessel segmentation method. The retinal image ridges, which cooccur approximately with vessel centre-lines, were extracted. Primitives in the form of line elements were further composed from the ridges. The feature vectors computed for every pixel were classified using a k-nearest neighbour classifier and sequential forward feature selection. Niemeijer et al. [20] implemented a vessel segmentation method based on pixel classification. Each pixel of the green plane of the retinal image was used to construct a feature vector. Consequently, these feature vectors were trained using a kNN-classifier. A filtered output and the pixel values within a neighborhood were compared, and the best results were obtained from the filter output. Niemeijer et al. [20] further carried out a comparative study of the proposed vessel segmentation technique against the techniques proposed in [11, 14]. Fraz et al. [21, 22] implemented a supervised segmentation technique based on an ensemble classifier of bootstrapped decision trees for the segmentation of the retinal vessel network. Lupaşcu et al. [23] implemented a supervised segmentation technique for detecting vessels using an AdaBoost classifier. A feature vector comprising local and spatial properties of the vessels was generated from the responses of various filters (matched filters, Gabor wavelet transform, and Gaussian filter and its derivatives). The AdaBoost classifier was further trained and used to classify each pixel as either vessel or nonvessel. Ricci and Perfetti [24] proposed two different automated vessel segmentation methods based on line operators. The better of the two segmentation methods constructed a feature vector for supervised classification using a support vector machine.

Szpak and Tapamo [25] used a gradient-based approach and a level set technique. The technique proposed in [25] was, however, unable to detect the thinner vessels. Vlachos and Dermatas [26] implemented multiscale line-tracking combined with a morphological postprocessing technique. Wang et al. [27] proposed multiwavelet kernels and multiscale hierarchical decomposition for the segmentation of retinal vessels. Mendonça and Campilho [28] combined differential filters for center-line extraction with morphological operators for the detection of the retinal vessel network. Xiao et al. [29] proposed a spatially constrained Bayesian method with level sets for the segmentation of retinal vessels. Yin et al. [30] implemented a probabilistic tracking-based method for vessel segmentation.

Lupascu and Tegolo [31, 32] trained a self-organizing map (SOM) on retinal images. The map was further divided into two classes using k-means clustering [31] and modified fuzzy c-means [32] techniques. The entire image was then fed into the SOM again, and the class of the best matching unit on the SOM was assigned to each pixel. A postprocessing technique based on a hill climbing strategy on connected components was used to detect the vessel network. Saffarzadeh et al. [33] implemented a preprocessing phase based on k-means clustering followed by the use of multiscale line operators for the detection of the retinal vessel network. With the help of k-means, the visibility of the vessels was enhanced and the impact of bright lesions was reduced. The retinal vessels were finally detected using the line detection operator at three scales.

Setiawan et al. [34] used contrast limited adaptive histogram equalization (CLAHE) on the green channel to enhance the color retinal fundus image. The enhancement was achieved using histogram manipulation to obtain a uniform distribution of the intensity of the green channel. Contrast limited adaptive histogram equalization spreads the intensity distribution and adjusts the intensity of the original image. The red, green, and blue channels were finally combined into an enhanced color retinal image. Phase congruency, on the other hand, is a technique that is not affected by uneven illumination and contrast of the retinal image. A bank of log-Gabor filters was used by Kovesi to compute the phase congruency of an image, and a binary segmentation was obtained by universal thresholding [35, 36].

Amin and Hong [37] implemented the detection of retinal blood vessels using phase congruency at high speed. Although the technique performed well in terms of speed, there is a need for a higher accuracy rate and a dynamically computed thresholding approach. Tagore et al. [38] used phase congruency to improve the contrast of vessel segments against the retinal background. A hierarchical clustering based histogram thresholding was then used to segment the contrast-enhanced vessels. In a related development, vessel cross-sectional profiles in the Fourier domain were represented and characterized using phase congruency by Zhu [39]. A bank of Gabor filters was used to transform the input image. The performance of the technique proposed in [39] was only described using visual results.

Although a global thresholding technique was used in [6], it has been said to be inefficient for retinal vascular segmentation [8, 11]. This might have also resulted from certain limitations of the preprocessing phase [16]. In order to produce good vessel segmentation, there is a need for an efficient preprocessing phase to enhance the vessels, a good global thresholding technique, and an efficient postprocessing technique. This paper presents a study on the use of different global thresholding techniques combined with different preprocessing and postprocessing techniques. The rest of this paper is organized as follows. Section 2 describes the methods and techniques used in this study. Section 3 explains the experimental setup, results, and discussion, while the conclusion is summarized in Section 4.

2. Methods and Techniques

Retinal fundus images are often characterized by noise due to illumination and contrast variation. As a result, the use of global thresholding techniques for the detection of vessels in these noisy retinal images becomes challenging, and an efficient preprocessing technique is highly desirable. This section describes the two different preprocessing techniques and the different filtering techniques which are used to enhance the vessels. The different thresholding techniques and postprocessing techniques used in this paper are also described in this section. For the purpose of simplification, we group these techniques into two major approaches, namely, the CLAHE global-based thresholding approach and the phase congruence global-based thresholding approach, as described in Algorithms 1 and 3.

Algorithm 1: Algorithm for CLAHE global-based thresholding technique.

Algorithm 2: Algorithm for computing IDM-based threshold.

Algorithm 3: Algorithm for phase congruence global-based thresholding technique.

(1) Preprocessing Phase. The different techniques used in the preprocessing phase are described below.

(a) CLAHE: the CLAHE algorithm partitions the image into contextual regions and applies histogram equalization to each one. Figure 1 shows the colored image, the gray scale image, and the green channel of the retinal fundus image. CLAHE computes the local histogram at each pixel of the retinal image and performs histogram clipping, histogram renormalization, and mapping of each output pixel to an intensity proportional to its rank within the histogram. Given that h_r(j) denotes histogram bin j of contextual region r, the rank for a pixel with intensity i is computed as follows: rank_r(i) = Σ_{j ≤ i} min(h_r(j), β), where the clip limit β determines the contrast enhancement limit and min(h_r(j), β) describes the contribution of bin j to the rank in the clipped histogram. Since each region will have a different number of clipped pixels, it is, however, beneficial to redistribute the part of the histogram that exceeds the clip limit evenly among all histogram bins to normalize the ranks computed in different regions. The rank of intensity i in region r is computed and scaled to produce a fractional rank R_r(i), such that 0 ≤ R_r(i) ≤ 1.

Figure 1: (a) Colored retinal image (b) Gray Scale Retinal Image. (c) Green channel of the colored retinal image.

The output intensity level is then computed in a grey scale ranging between y_min and y_max as follows: y = y_min + (y_max − y_min) · R, where R is the fractional rank computed above.
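For illustration, the clip-redistribute-rank mapping described above can be sketched for a single contextual region as follows. This is a minimal NumPy sketch under assumed parameter names (clip_fraction, n_bins), not the authors' implementation; full CLAHE additionally interpolates the mapping between neighbouring contextual regions.

```python
import numpy as np

def clahe_tile(tile, clip_fraction=0.03, n_bins=256, y_min=0.0, y_max=255.0):
    """Clipped-histogram equalization of one contextual region.

    `tile` is a 2-D array of integer intensities in [0, n_bins).
    Counts above the clip limit are redistributed evenly over all bins
    before the cumulative rank is computed; each pixel is then mapped
    to an intensity proportional to its fractional rank.
    """
    hist = np.bincount(tile.ravel(), minlength=n_bins).astype(float)
    clip = max(1.0, clip_fraction * tile.size)        # clip limit (beta)
    excess = np.maximum(hist - clip, 0.0).sum()       # total clipped mass
    clipped = np.minimum(hist, clip) + excess / n_bins  # redistribute evenly
    rank = np.cumsum(clipped) / tile.size             # fractional rank in (0, 1]
    return y_min + (y_max - y_min) * rank[tile]       # output intensity mapping
```

Applying this to a low-contrast tile stretches its intensity range while the clip limit bounds how steeply any part of the mapping can amplify contrast.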

(b) Phase congruence: the phase congruence model proposed in [35, 36] has been very promising in the detection of object boundaries in the presence of noise. The green channel is enhanced using phase congruence to minimize retinal image noise due to nonuniform illumination and contrast. Phase congruency is computed as follows: PC(x) = E(x) / Σ_n A_n(x) (3), where A_n(x) is the amplitude of the nth frequency component, φ_n(x) is its local phase, and E(x) is the local energy, given that E(x) is the magnitude of the vector sum of the filter responses, which is maximal when the components are in phase.

In order to apply phase congruence to images, (3) is modified to be as follows: PC(x, y) = Σ_o Σ_s W_o(x, y) ⌊A_{so}(x, y) ΔΦ_{so}(x, y) − T_o⌋ / (Σ_o Σ_s A_{so}(x, y) + ε), where (x, y) is the position of the pixel in the green channel of the retinal image, while s and o are the given scale and orientation, respectively. W_o(x, y) is the weighting factor for the distributed frequency, while T_o estimates the image noise. The energy is computed using the log-Gabor filter responses A_{so}(x, y) ΔΦ_{so}(x, y), while the small constant ε is added to the denominator such that the divisor will be nonzero; ⌊·⌋ keeps the enclosed quantity when it is positive and sets it to zero otherwise. The visual results of both CLAHE and phase congruence preprocessing techniques can be seen in Figure 2.

Figure 2: (a) Preprocessed retinal image using CLAHE (b) Preprocessed retinal image using phase congruence.

(c) Filters: the resulting images from the CLAHE preprocessing technique are still affected to some extent by noise. In order to further enhance the retinal images, different filters are considered: an adaptive filter, an average filter, and a Gaussian filter. The combination of the average filter and the Gaussian filter was also used to further enhance the output of the CLAHE preprocessing technique. Each of these filtering approaches was considered in order to investigate its suitability for further enhancement of the retinal image. In a related development, the resulting images from phase congruence were also enhanced using the average filter. The performance of each of the filtering approaches was, however, measured after the final vessel segmentation. The visual results from the DRIVE database can be seen in Figures 4, 5, and 7 and those of the STARE database in Figures 11 and 12.

(2) Global Thresholding. Automatic thresholding is potentially useful to dynamically select an optimal gray level threshold value for separating the retinal vessels from the background tissue based on their intensity distributions. The different global thresholding techniques studied in this paper are as follows.

(a) Otsu thresholding: the global thresholding technique of Otsu [40] is used on the results computed from phase congruence and from CLAHE with filters for the initial estimation of the vessel network. Otsu's method seeks the threshold that minimizes the intraclass variance, expressed as a weighted sum of the variances of the two classes: σ_w²(t) = ω₁(t) σ₁²(t) + ω₂(t) σ₂²(t), such that the weights ω₁(t) and ω₂(t) describe the probabilities of the two classes separated by a threshold t, and σ₁²(t) and σ₂²(t) the variances of the classes. The class probability is then computed from the histogram as ω₁(t) = Σ_{i=1}^{t} P(i), while the class mean is computed as μ₁(t) = Σ_{i=1}^{t} P(i) x(i) / ω₁(t), such that x(i) is the value at the center of the ith histogram bin. ω₂(t) and μ₂(t) can likewise be computed on the histogram for bins greater than t. Otsu further showed that minimizing the intraclass variance is the same as maximizing the interclass variance; thus the desired threshold is given as t* = arg max_t ω₁(t) ω₂(t) [μ₁(t) − μ₂(t)]², where μ₁ and μ₂ are the means of the first and second group, respectively. The visual result of the Otsu threshold on the image obtained from the CLAHE preprocessing technique can be seen in Figure 3.
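This maximization can be carried out directly over the gray level histogram. The following is a minimal NumPy sketch (function and parameter names are ours, not from the paper):

```python
import numpy as np

def otsu_threshold(image, n_bins=256):
    """Return the bin index t* maximizing the inter-class variance
    w1(t) * w2(t) * (mu1(t) - mu2(t))^2 over all candidate thresholds."""
    hist = np.bincount(image.ravel(), minlength=n_bins).astype(float)
    p = hist / hist.sum()                       # class probabilities P(i)
    bins = np.arange(n_bins)
    w1 = np.cumsum(p)                           # weight of class below t
    w2 = 1.0 - w1                               # weight of class above t
    m1 = np.cumsum(p * bins)                    # unnormalized first-class mean
    mu_total = m1[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        mu1 = m1 / w1
        mu2 = (mu_total - m1) / w2
        between = w1 * w2 * (mu1 - mu2) ** 2    # inter-class variance
    between[np.isnan(between)] = 0.0            # empty classes score zero
    return int(np.argmax(between))
```

On a strongly bimodal intensity distribution the maximizer falls between the two modes, which is what makes the method attractive for vessel/background separation once the preprocessing has sharpened the two populations.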

Figure 3: (a) Segmented retinal vessels using CLAHE preprocessing with Otsu threshold. (b) DRIVE database gold standard.
Figure 4: Shows retinal images and their segmentation results obtained through phase congruence using different global-based thresholding techniques. Images (a1), (b1), and (c1) are DRIVE database colored retinal Images. Images (a2), (b2), and (c2) are DRIVE database gold standards. Images (a3), (b3), and (c3) are images segmented using IDM-based threshold values while images (a4), (b4), and (c4) are images segmented using ISODATA threshold values. Images (a5), (b5), and (c5) are images segmented using Otsu threshold values.
Figure 5: Shows different segmentation results obtained through CLAHE with different filters using Otsu thresholding technique. Images (d1), (e1), and (f1) are DRIVE database gold standards. Images (d2), (e2), and (f2) are images segmented using Otsu threshold with Gaussian filter. Images (d3), (e3), and (f3) are images segmented using Otsu threshold with average filter. Images (d4), (e4), and (f4) are images segmented using Otsu threshold with adaptive filter. Images (d5), (e5), and (f5) are images segmented using Otsu threshold with combination of average and Gaussian filters.

(b) ISODATA threshold selection: the ISODATA threshold technique divides the histogram of the image output from the phase congruence method into two parts using an initial threshold value T_0. The threshold is then updated as T_{k+1} = (m₁(T_k) + m₂(T_k)) / 2, where m₁(T_k) and m₂(T_k) are the mean values of the two different parts of the histogram. This process continues until T_{k+1} = T_k.
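The iteration above can be sketched as follows (a minimal NumPy sketch with assumed convergence tolerance and iteration cap, not the authors' code):

```python
import numpy as np

def isodata_threshold(values, initial=None, max_iter=100):
    """Iterate T <- (m1 + m2) / 2, where m1 and m2 are the means of the
    samples below and above T, until the threshold stops changing."""
    values = np.asarray(values, dtype=float)
    t = values.mean() if initial is None else float(initial)
    for _ in range(max_iter):
        below = values[values <= t]
        above = values[values > t]
        if below.size == 0 or above.size == 0:
            break                                   # degenerate split; keep t
        t_new = 0.5 * (below.mean() + above.mean()) # T_{k+1} = (m1 + m2) / 2
        if abs(t_new - t) < 1e-3:                   # converged: T_{k+1} == T_k
            return t_new
        t = t_new
    return t
```

For two well-separated intensity populations the fixed point sits midway between the two class means.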

(c) Inverse difference moment (IDM) based binary thresholding: image signal statistics, particularly first- and second-order statistics, are good texture feature descriptors used for supervised segmentation techniques. Moments, which are first-order statistics, are concerned with individual image pixel properties, while second-order statistics such as the gray level cooccurrence matrix (GLCM) are concerned with individual pixel properties as well as the spatial interdependency of two pixels at particular relative positions. The IDM texture information is computed using the GLCM of the gray scale retinal fundus image. The GLCM for the retinal fundus image is computed for a relative distance d between the pixel pair and their relative orientation θ across four directions (horizontal: 0°, diagonal: 45°, vertical: 90°, and antidiagonal: 135°) as C_{d,θ}(i, j) = Σ_p Σ_q δ(I(p, q) = i, I(p′, q′) = j), where I(p, q) is the gray level of the pixel (p, q), (p′, q′) is the pixel displaced from (p, q) by distance d in orientation θ, and δ is defined as 1 when both of its conditions hold and 0 otherwise. The IDM feature across the different distances d and varying relative orientation θ is defined as follows: IDM(d, θ) = Σ_i Σ_j p(i, j) / (1 + (i − j)²), where p(i, j) is the (i, j)th entry in a normalized gray scale spatial dependence matrix.
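A direct NumPy sketch of the cooccurrence counts and the IDM feature follows (illustrative only; the displacement pair (dp, dq) encodes the distance and orientation, e.g. (0, 1) for 0° at distance 1, and the gray level count n_levels is an assumed parameter):

```python
import numpy as np

def glcm(image, dp, dq, n_levels=8):
    """Gray level cooccurrence counts C[i, j] for displacement (dp, dq)."""
    c = np.zeros((n_levels, n_levels))
    rows, cols = image.shape
    for p in range(rows):
        for q in range(cols):
            p2, q2 = p + dp, q + dq
            if 0 <= p2 < rows and 0 <= q2 < cols:
                c[image[p, q], image[p2, q2]] += 1   # count the pair (i, j)
    return c

def idm(image, dp, dq, n_levels=8):
    """Inverse difference moment: sum_ij p(i, j) / (1 + (i - j)^2)."""
    c = glcm(image, dp, dq, n_levels)
    p = c / c.sum()                                  # normalized GLCM
    i, j = np.indices(p.shape)
    return (p / (1.0 + (i - j) ** 2)).sum()
```

IDM is maximal (equal to 1) for perfectly homogeneous regions, where all cooccurrence mass lies on the matrix diagonal, and drops toward 0 as adjacent gray levels differ more strongly.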

A multiscale IDM feature measurement across the varying distances and relative orientations is used in the computation of an IDM feature matrix M, whose entries are M(k, l) = IDM(d_k, θ_l), with orientations θ₁ = 0°, θ₂ = 45°, θ₃ = 90°, and θ₄ = 135°, and distances d₁, …, d_K. The range measure of M is given below as range(M), a row vector containing the range (maximum minus minimum) of each column of matrix M.
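As a small numerical illustration of this range measure (the matrix values below are hypothetical, not taken from the experiments):

```python
import numpy as np

# Hypothetical IDM feature matrix M: rows index distances d_k,
# columns index orientations (0, 45, 90, 135 degrees).
M = np.array([[0.82, 0.79, 0.81, 0.80],
              [0.74, 0.70, 0.73, 0.71],
              [0.69, 0.62, 0.68, 0.64],
              [0.66, 0.58, 0.65, 0.60]])

# Row vector holding the range (max - min) of each column of M,
# i.e. how much the IDM feature varies with distance per orientation.
col_range = np.ptp(M, axis=0)
```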

The threshold value used for the binarization of the output image from phase congruence and average filtering is then computed from this range measure.

(3) Postprocessing Phase. The different techniques used in the postprocessing phase are described below.

(a) Median filtering and morphological opening: a median filter is used to restore the connectivity of several vessel lines by revealing some hidden pixels that belong to vessel lines; it is also used to get rid of remaining noisy pixels, and a 2 × 2 median filter was found to give good performance. This is followed by the use of morphological opening to remove part of the remaining noisy pixels. Both morphological opening alone, referred to as (MO) in Tables 4 and 5, and the combination of morphological opening and the median filter, referred to as (MOMF) in Tables 4 and 5, were used.
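A minimal sketch of the MOMF step using SciPy follows. The 2 × 2 median window matches the text; the 2 × 2 opening structuring element is an assumption of ours, as the paper does not specify it:

```python
import numpy as np
from scipy import ndimage

def postprocess_momf(binary_vessels):
    """MOMF postprocessing sketch: a small median filter removes isolated
    noisy pixels and helps reconnect vessel lines, after which a
    morphological opening removes remaining speckle."""
    smoothed = ndimage.median_filter(binary_vessels.astype(np.uint8), size=2)
    opened = ndimage.binary_opening(smoothed, structure=np.ones((2, 2)))
    return opened
```

Isolated single-pixel noise is removed while connected vessel-sized regions survive both operations.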

(b) Morphological directional filtering and reconstruction: the morphological directional filtering described in [12] is used to handle the several misclassifications that still remain. Morphological openings with line structuring elements oriented in five different directions, namely, 0, 30, 60, 120, and 150 degrees, are used, with a fixed structuring element length adopted so that vessel-like structures are kept. A logical OR of the responses in the five different directions and a morphological reconstruction were performed on the image to remove a few erroneous regions before producing the final vessel network. The (MOMF) described in (a) above is combined with morphological directional filtering and morphological reconstruction for the purpose of performance investigation. This is referred to as (ATC) in Tables 4 and 5.
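The directional opening and OR combination can be sketched as follows (the structuring element length of 5 is an illustrative choice of ours, not the paper's value, and the reconstruction step is omitted):

```python
import numpy as np
from scipy import ndimage

def line_se(length, angle_deg):
    """Binary line structuring element of a given length and angle."""
    se = np.zeros((length, length), dtype=bool)
    c = (length - 1) / 2.0
    t = np.deg2rad(angle_deg)
    for r in np.linspace(-c, c, 2 * length):       # rasterize the line
        p = int(round(c + r * np.sin(t)))
        q = int(round(c + r * np.cos(t)))
        se[p, q] = True
    return se

def directional_opening(binary_image, length=5, angles=(0, 30, 60, 120, 150)):
    """Logical OR of morphological openings with line structuring elements
    in each direction: elongated (vessel-like) structures survive at
    least one orientation, while compact blobs fit none and are removed."""
    out = np.zeros_like(binary_image, dtype=bool)
    for a in angles:
        out |= ndimage.binary_opening(binary_image, structure=line_se(length, a))
    return out
```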

3. Experimental Results and Discussions

Experiments were carried out using Matlab 2010a on an Intel Core i5-2410M CPU, 2.30 GHz, with 4 GB of RAM. The proposed methods were evaluated using the retinal images of the publicly available DRIVE [41] and STARE [8] databases. The DRIVE database is made up of 40 images captured with a Canon CR5 camera at 24 bits per pixel and a spatial resolution of 768 × 584 pixels. The 40 images were divided into two groups: the first group is a training set made up of twenty images, and the second group is a testing set made up of twenty images. The DRIVE database also provides gold standard images as the ground truth for vessel segmentation, for the comparative performance evaluation of different vessel segmentation algorithms. The STARE database, on the other hand, consists of retinal images captured with a TopCon TRV-50 fundus camera at 24 bits per pixel and a spatial resolution of 700 × 605 pixels. The database provides 20 coloured retinal images and 20 hand-labeled images as the ground truth for the comparative performance evaluation of different vessel segmentation algorithms.

The outcome of retinal vessel segmentation is a pixel-based classification result: each pixel is classified as either vessel or background. Four different events, true positive (TP), true negative (TN), false positive (FP), and false negative (FN), can take place during the pixel classification. An event is a TP if a pixel is correctly segmented as vessel and a TN when a pixel is correctly segmented as background. Similarly, an event is an FN if a vessel pixel is segmented as background and an FP when a background pixel is segmented as vessel. The statistical performance measures commonly used for the evaluation of segmentation techniques are sensitivity, specificity, and accuracy. The sensitivity measure indicates the ability of a segmentation technique to detect vessel pixels, while the specificity measure indicates its ability to detect background pixels. The accuracy measure indicates the degree of conformity of the segmented retinal image to the ground truth. The measures are described as Sensitivity = TP / (TP + FN), Specificity = TN / (TN + FP), and Accuracy = (TP + TN) / (TP + TN + FP + FN), where TP is true positive, TN is true negative, FP is false positive, and FN is false negative.
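These three measures can be computed directly from the binary segmentation and the gold standard; a minimal NumPy sketch:

```python
import numpy as np

def segmentation_metrics(predicted, truth):
    """Pixel-wise sensitivity, specificity, and accuracy of a binary
    vessel map against the gold standard."""
    predicted = np.asarray(predicted, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.sum(predicted & truth)          # vessel pixels found
    tn = np.sum(~predicted & ~truth)        # background pixels found
    fp = np.sum(predicted & ~truth)         # background called vessel
    fn = np.sum(~predicted & truth)         # vessel called background
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy
```

Because vessel pixels are a small minority of the image, accuracy alone can look high even when thin vessels are missed, which is why sensitivity is reported alongside it throughout this section.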

Table 1 gives an overview of the parameter descriptions and optimum parameter values of the phase congruence technique. Similarly, different optimal values were empirically selected for the parameters used in the CLAHE global thresholding approaches, as described in Table 2.

Table 1: Phase congruence parameter description and optimum parameter values.
Table 2: Optimal parameter values for CLAHE global thresholding approaches.

Figure 4 shows retinal images and their segmentation results obtained through phase congruence using different global-based thresholding techniques on the DRIVE database. Figure 6 also shows the segmentation result obtained from a diseased retinal image from the DRIVE database using phase congruence combined with the IDM thresholding technique. Figure 12 shows the result of the phase congruence-based global thresholding approach on the STARE database.

Figure 6: (a) Colored retinal image. (b) Preprocessed image using phase congruence. (c) Segmented retinal image containing vessel network and lesions.
Figure 7: CLAHE combined with ISODATA thresholding technique. It shows the different segmentation results obtained through CLAHE with different filters using ISODATA thresholding technique. Images (g1), (h1), and (i1) are DRIVE database gold standards. Images (g2), (h2), and (i2) are images segmented using ISODATA threshold with Gaussian filter. Images (g3), (h3), and (i3) are images segmented using ISODATA threshold with average filter. Images (g4), (h4), and (i4) are images segmented using ISODATA threshold with adaptive filter. Images (g5), (h5), and (i5) are images segmented using ISODATA threshold with combination of average and Gaussian filters.

Figure 5 shows different segmentation results obtained through CLAHE combined with different filters using Otsu thresholding technique while Figure 7 shows different segmentation results obtained through CLAHE combined with different filters using ISODATA thresholding technique on DRIVE database. Figure 11 also shows the result of CLAHE-based global thresholding approaches on STARE Database.

Figure 8 describes the average sensitivities, specificities, and accuracies of the segmentation results obtained from phase congruence-based global thresholding approaches while Figures 9 and 10 show the average sensitivities, specificities, and accuracies of the segmentation results obtained from CLAHE-based global thresholding approaches on DRIVE database.

Figure 8: Measures the different phase congruence-based global thresholding approaches. It describes the average sensitivity, specificity, and accuracy of the segmentation results obtained through phase congruence using different global-based thresholding techniques. Phase congruence with IDM-based threshold, combined with (MO), gives the average accuracy of 0.94302, average sensitivity of 0.71520, and average specificity of 0.96496.
Figure 9: Measures of CLAHE combined with different filters using Otsu threshold. It describes the average sensitivity, specificity, and accuracy of the segmentation results obtained through CLAHE with Otsu thresholding using different filters. CLAHE with the Gaussian filter, combined with (ATC), gives the best performance of an average accuracy of 0.94980, average sensitivity of 0.67290, and average specificity of 0.97651.
Figure 10: Measures of CLAHE combined with different filters using ISODATA threshold. It describes the average sensitivity, specificity, and accuracy of the segmentation results obtained through CLAHE with ISODATA thresholding using different filters. CLAHE with the Gaussian filter, combined with (ATC), gives the best performance of an average accuracy of 0.94997, average sensitivity of 0.67011, and average specificity of 0.97695.
Figure 11: (a) and (e) are STARE database ground truth. (b) and (e) are images segmented using global threshold with adaptive filter. (c) and (f) are images segmented using global threshold with average filter. (d) and (g) are images segmented using global threshold with Gaussian filter.
Figure 12: (a) STARE database ground truth. (b) Preprocessed image using phase congruence. (c) Retinal image mask. (d) Segmented vessel network using phase congruence-based global thresholding approach.

Table 3 shows the performance of the different global thresholding techniques on DRIVE database. Although CLAHE-based global thresholding approaches have very good accuracies due to the accurate segmented vessels, they, however, possess lower sensitivities due to the inability to segment the thin vessels. CLAHE-based global thresholding approaches are at their best performance when all the postprocessing techniques are combined. The best average sensitivity and accuracy results of CLAHE with Gaussian filter using Otsu threshold are 0.67290 and 0.9498. The next in rank of CLAHE-based preprocessing combined with Otsu threshold is CLAHE with average filter giving the average sensitivity and accuracy results of 0.65349 and 0.9494. CLAHE with average and Gaussian filters gives the average sensitivity and accuracy results of 0.64159 and 0.94269 while CLAHE with adaptive filter gives the average sensitivity and accuracy results of 0.61596 and 0.93678. The best average sensitivity and accuracy results of CLAHE with Gaussian filter using ISODATA threshold are 0.67011 and 0.94997. CLAHE-based preprocessing combined with average filter gives the average sensitivity and accuracy results of 0.61630 and 0.95162 for the ISODATA threshold technique. CLAHE-based preprocessing combined with average and Gaussian filters gives the average sensitivity and accuracy results of 0.60265 and 0.95104 for the ISODATA threshold technique. CLAHE with adaptive filter also gives the average sensitivity and accuracy results of 0.64348 and 0.95209 for the ISODATA threshold technique.

Table 3: Performance of different segmentation methods on DRIVE database.
Table 4: Performance of different segmentation methods on DRIVE database.
Table 5: Performance of different proposed segmentation methods on STARE database.

The best results achieved using the phase congruence-based global thresholding approaches are obtained using IDM-based thresholding rather than ISODATA or Otsu thresholding. Phase congruence combined with IDM-based thresholding has very good accuracies due to the accurately segmented vessels and possesses good sensitivities due to its ability to segment some thin vessels; it is, however, still unable to segment the thinnest vessels. The best average accuracy of 0.94302 and average sensitivity of 0.71520 are achieved using the morphological opening postprocessing technique. Next in rank among the phase congruence-based global thresholding approaches is IDM-based thresholding with all postprocessing techniques combined, giving average accuracy and sensitivity results of 0.93772 and 0.73910. IDM-based thresholding combined with morphological opening and the median filter gives average accuracy and sensitivity results of 0.93596 and 0.74247.

Phase congruence combined with IDM-based thresholding generally performed better than all the CLAHE-based global thresholding approaches. Phase congruence combined with IDM-based thresholding also gives better performance compared to Otsu and ISODATA thresholding combined with phase congruence. The performances of Otsu and ISODATA thresholding coupled with phase congruence are, however, at their best when morphological opening and the median filter are combined for the postprocessing phase.

Tables 4 and 5 describe the performances of the best techniques from the different segmentation methods investigated in this paper and other previously published works using DRIVE and STARE databases.

Phase congruence combined with IDM-based thresholding using the morphological opening postprocessing technique presents higher average accuracy rates on the DRIVE and STARE databases compared to the previously proposed phase congruency based technique of Amin and Hong [37]. Tagore et al. [38] achieve a lower average accuracy rate on the DRIVE database but a higher average accuracy rate compared to the best phase congruence-based global thresholding approach presented in this paper. The technique proposed in [38], however, did not report sensitivity and specificity measures. In a related development, the phase congruency technique proposed by Zhu [39] discussed only visual performance. Tables 4 and 5 also compare the results obtained in this paper with results achieved in other literature.

4. Conclusion

The performance of different vessel segmentation approaches based on combinations of preprocessing techniques, global thresholding, and postprocessing techniques has been investigated. It has been shown that these combinations must be carefully chosen to achieve good segmentation performance. It is, however, important to state that sensitivity, specificity, and accuracy must all be high to ascertain good segmentation performance. It was also shown that phase congruency combined with IDM-based thresholding generally performs better than phase congruency combined with ISODATA or Otsu thresholding. Phase congruency combined with IDM-based thresholding is at its best on the DRIVE database but did not outperform the best of the CLAHE-based global thresholding approaches on the STARE database. The CLAHE-based global thresholding approaches were, however, shown to maintain high accuracy rates across the DRIVE and STARE databases. Although good accuracy and specificity rates were achieved, the sensitivity rates show that global thresholding is still limited in efficiently segmenting thin vessels. Our future work shall investigate more robust segmentation techniques for the detection of both large and thin vessels in retinal images.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The authors would like to thank Staal et al. [14, 41] and Hoover et al. [8] for making their databases publicly available.

References

  1. “World Health Organization: Prevention of Blindness and Visual Impairment,” http://www.who.int/blindness/causes/priority/en/index8.html.
  2. D. C. Klonoff and D. M. Schwartz, “An economic analysis of interventions for diabetes,” Diabetes Care, vol. 23, no. 3, pp. 390–404, 2000.
  3. O. Chutatape, L. Zheng, and S. M. Krishnan, “Retinal blood vessel detection and tracking by matched Gaussian and Kalman filters,” in Proceedings of the 20th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 6, pp. 3144–3149, Hong Kong, October–November 1998.
  4. L. Gagnon, M. Lalonde, M. Beaulieu, and M. C. Boucher, “Procedure to detect anatomical structures in optical fundus images,” in Medical Imaging: Image Processing, vol. 4322 of Proceedings of the SPIE, pp. 1218–1225, San Diego, Calif, USA, 2001.
  5. Y. A. Tolias and S. M. Panas, “A fuzzy vessel tracking algorithm for retinal images based on fuzzy clustering,” IEEE Transactions on Medical Imaging, vol. 17, no. 2, pp. 263–273, 1998.
  6. S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, and M. Goldbaum, “Detection of blood vessels in retinal images using two-dimensional matched filters,” IEEE Transactions on Medical Imaging, vol. 8, no. 3, pp. 263–269, 1989.
  7. L. Gang, O. Chutatape, and S. M. Krishnan, “Detection and measurement of retinal vessels in fundus images using amplitude modified second-order Gaussian filter,” IEEE Transactions on Biomedical Engineering, vol. 49, no. 2, pp. 168–172, 2002.
  8. A. Hoover, V. Kouznetsova, and M. Goldbaum, “Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response,” IEEE Transactions on Medical Imaging, vol. 19, no. 3, pp. 203–210, 2000.
  9. T. Walter and J. C. Klein, “Segmentation of color fundus images of the human retina: detection of the optic disc and the vascular tree using morphological techniques,” in Medical Data Analysis, J. Crespo, V. Maojo, and F. Martin, Eds., Lecture Notes in Computer Science, pp. 282–287, Springer, Berlin, Germany, 2001.
  10. L. Xu and S. Luo, “A novel method for blood vessel detection from retinal images,” BioMedical Engineering Online, vol. 9, article 14, 2010.
  11. X. Jiang and D. Mojon, “Adaptive local thresholding by verification-based multithreshold probing with application to vessel detection in retinal images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 1, pp. 131–137, 2003.
  12. C. Sinthanayothin, J. F. Boyce, T. H. Williamson et al., “Automated detection of diabetic retinopathy on digital fundus images,” Diabetic Medicine, vol. 19, no. 2, pp. 105–112, 2002.
  13. J. V. B. Soares, J. J. G. Leandro, R. M. Cesar Jr., H. F. Jelinek, and M. J. Cree, “Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification,” IEEE Transactions on Medical Imaging, vol. 25, no. 9, pp. 1214–1222, 2006.
  14. J. Staal, M. D. Abràmoff, M. Niemeijer, M. A. Viergever, and B. van Ginneken, “Ridge-based vessel segmentation in color images of the retina,” IEEE Transactions on Medical Imaging, vol. 23, no. 4, pp. 501–509, 2004.
  15. X. You, Q. Peng, Y. Yuan, Y.-M. Cheung, and J. Lei, “Segmentation of retinal blood vessels using the radial projection and semi-supervised approach,” Pattern Recognition, vol. 44, no. 10-11, pp. 2314–2324, 2011.
  16. B. Zhang, L. Zhang, and F. Karray, “Retinal vessel extraction by matched filter with first-order derivative of Gaussian,” Computers in Biology and Medicine, vol. 40, no. 4, pp. 438–445, 2010.
  17. M. E. Martinez-Perez, A. Hughes, A. Stanton, S. Thom, A. Bharath, and K. Parker, “Scale-space analysis for the characterisation of retinal blood vessels,” in Medical Image Computing and Computer-Assisted Intervention—MICCAI'99, C. Taylor and A. Colchester, Eds., pp. 90–97, 1999.
  18. F. Zana and J.-C. Klein, “Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation,” IEEE Transactions on Image Processing, vol. 10, no. 7, pp. 1010–1019, 2001.
  19. D. Marín, A. Aquino, M. E. Gegúndez-Arias, and J. M. Bravo, “A new supervised method for blood vessel segmentation in retinal images by using gray-level and moment invariants-based features,” IEEE Transactions on Medical Imaging, vol. 30, no. 1, pp. 146–158, 2011.
  20. M. Niemeijer, J. Staal, B. van Ginneken, M. Loog, and M. D. Abramoff, “Comparative study of retinal vessel segmentation methods on a new publicly available database,” in Medical Imaging, pp. 648–656, 2004.
  21. M. M. Fraz, P. Remagnino, A. Hoppe et al., “An ensemble classification-based approach applied to retinal blood vessel segmentation,” IEEE Transactions on Biomedical Engineering, vol. 59, no. 9, pp. 2538–2548, 2012.
  22. M. M. Fraz, A. R. Rudnicka, C. G. Owen, and S. A. Barman, “Delineation of blood vessels in pediatric retinal images using decision trees-based ensemble classification,” International Journal of Computer Assisted Radiology and Surgery, vol. 9, no. 5, pp. 795–811, 2014.
  23. C. A. Lupaşcu, D. Tegolo, and E. Trucco, “FABC: retinal vessel segmentation using AdaBoost,” IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 5, pp. 1267–1274, 2010.
  24. E. Ricci and R. Perfetti, “Retinal blood vessel segmentation using line operators and support vector classification,” IEEE Transactions on Medical Imaging, vol. 26, no. 10, pp. 1357–1365, 2007.
  25. Z. L. Szpak and J. R. Tapamo, “Automatic and interactive retinal vessel segmentation,” South African Computer Journal, vol. 40, pp. 23–30, 2008.
  26. M. Vlachos and E. Dermatas, “Multi-scale retinal vessel segmentation using line tracking,” Computerized Medical Imaging and Graphics, vol. 34, no. 3, pp. 213–227, 2010.
  27. Y. Wang, G. Ji, P. Lin, and E. Trucco, “Retinal vessel segmentation using multiwavelet kernels and multiscale hierarchical decomposition,” Pattern Recognition, vol. 46, no. 8, pp. 2117–2133, 2013.
  28. A. M. Mendonça and A. Campilho, “Segmentation of retinal blood vessels by combining the detection of centerlines and morphological reconstruction,” IEEE Transactions on Medical Imaging, vol. 25, no. 9, pp. 1200–1213, 2006.
  29. Z. Xiao, M. Adel, and S. Bourennane, “Bayesian method with spatial constraint for retinal vessel segmentation,” Computational and Mathematical Methods in Medicine, vol. 2013, Article ID 401413, 9 pages, 2013.
  30. Y. Yin, M. Adel, and S. Bourennane, “Automatic segmentation and measurement of vasculature in retinal fundus images using probabilistic formulation,” Computational and Mathematical Methods in Medicine, vol. 2013, Article ID 260410, 16 pages, 2013.
  31. C. A. Lupaşcu and D. Tegolo, “Automatic unsupervised segmentation of retinal vessels using self-organizing maps and k-means clustering,” in Computational Intelligence Methods for Bioinformatics and Biostatistics, pp. 263–274, Springer, Berlin, Germany, 2011.
  32. C. A. Lupaşcu and D. Tegolo, “Stable automatic unsupervised segmentation of retinal vessels using self-organizing maps and a modified fuzzy C-means clustering,” in Fuzzy Logic and Applications, vol. 6857 of Lecture Notes in Computer Science, pp. 244–252, Springer, Berlin, Germany, 2011.
  33. V. M. Saffarzadeh, A. Osareh, and B. Shadgar, “Vessel segmentation in retinal images using multi-scale line operator and K-means clustering,” Journal of Medical Signals and Sensors, vol. 4, no. 2, p. 122, 2014.
  34. A. W. Setiawan, T. R. Mengko, O. S. Santoso, and A. B. Suksmono, “Color retinal image enhancement using CLAHE,” in Proceedings of the International Conference on ICT for Smart Society (ICISS '13), pp. 215–217, June 2013.
  35. P. Kovesi, “Image features from phase congruency,” Videre: Journal of Computer Vision Research, vol. 1, no. 3, pp. 1–26, 1999.
  36. P. Kovesi, “Phase congruency detects corners and edges,” in Proceedings of the Australian Pattern Recognition Society Conference, pp. 309–318, DICTA, 2003.
  37. M. A. Amin and Y. Hong, “High speed detection of retinal blood vessels in fundus image using phase congruency,” Soft Computing, vol. 15, no. 6, pp. 1217–1230, 2011.
  38. M. R. N. Tagore, G. B. Kande, E. V. K. Rao, and B. P. Rao, “Segmentation of retinal vasculature using phase congruency and hierarchical clustering,” in Proceedings of the International Conference on Advances in Computing, Communications and Informatics (ICACCI '13), pp. 361–366, Mysore, India, August 2013.
  39. T. Zhu, “Fourier cross-sectional profile for vessel detection on retinal images,” Computerized Medical Imaging and Graphics, vol. 34, no. 3, pp. 203–212, 2010.
  40. N. Otsu, “A threshold selection method from gray level histograms,” IEEE Transactions on Systems, Man and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.
  41. Research Section, Digital Retinal Image for Vessel Extraction (DRIVE) Database, University Medical Center Utrecht, Image Sciences Institute, Utrecht, The Netherlands, 2013, http://www.isi.uu.nl/Research/Databases/DRIVE.
  42. M. U. Akram and S. A. Khan, “Multilayered thresholding-based blood vessel segmentation for screening of diabetic retinopathy,” Engineering with Computers, vol. 29, no. 2, pp. 165–173, 2013.