Mathematical Problems in Engineering
Volume 2016, Article ID 7906165, 15 pages
Research Article

An Automatic Cognitive Graph-Based Segmentation for Detection of Blood Vessels in Retinal Images

Masdar Institute of Science and Technology, Abu Dhabi, UAE

Received 28 February 2016; Revised 10 April 2016; Accepted 21 April 2016

Academic Editor: Daniel Zaldivar

Copyright © 2016 Rasha Al Shehhi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


This paper presents a hierarchical graph-based segmentation for blood vessel detection in digital retinal images. The segmentation employs several perceptual Gestalt principles, namely, similarity, closure, continuity, and proximity, to merge segments into coherent connected vessel-like patterns. The integration of the Gestalt principles is based on object-based features (e.g., color, black top-hat (BTH) morphology, and context) and graph-analysis algorithms (e.g., the Dijkstra path). The framework consists of two main steps: preprocessing and multiscale graph-based segmentation. Preprocessing enhances the lighting conditions, which suffer from low illumination contrast, and constructs the features needed to enhance the vessel structure, which is sensitive to multiscale/multiorientation analysis. Graph-based segmentation reduces the computational processing by collapsing the region of interest into its most semantic objects. The segmentation was evaluated on three publicly available datasets. Experimental results show that the preprocessing stage achieves better results than state-of-the-art enhancement methods. The performance of the proposed graph-based segmentation is consistent and comparable to existing methods, with improved capability of detecting small/thin vessels.

1. Introduction

Retinal vessel segmentation is a crucial step in analyzing fundus images of the eye for detection and diagnosis of many eye diseases. Some diseases such as glaucoma, diabetic retinopathy, and macular degeneration are very serious and might lead to blindness if they are not detected in time [1, 2]. The information about blood vessels, such as tortuosity and branching patterns, can not only provide information on pathological changes but also help to grade the disease severity and automatically diagnose the disease.

Although retinal vessel segmentation has been widely studied, it remains a challenging problem for three main reasons. First, the quality of retinal images is highly variable, and segmentation methods face the challenge of low contrast or highly homogeneous illumination conditions [3–6]. Second, the complexity of vascular structures (different scales and orientations) means that most existing methods find it difficult to enhance multiscale vessel-like structures with various linear orientations [5–12]. Third, finding the model or method that is most appropriate for a variety of data is very difficult [13–15].

Studies of retinal images can be classified into pattern recognition (machine learning/model-based) [6, 16, 17], mathematical morphology [6, 18], kernel-based analysis [10, 11, 17], and tracking-based/path-based (Artificial Intelligence, AI) methods [5, 19, 20]. Here, the morphological and AI methods are discussed further because they are the most closely related to the work presented in this paper.

Morphological methods examine the geometric vessel-like structure of a retinal image by probing it with small patterns called structuring elements (SE) of predefined size and shape. Because vessel-like patterns are sensitive to different scales and orientations, most methods use multiscale and/or multiorientation structuring elements [18, 21, 22], such as multistructure morphological operators [8, 12] and multiscale white top-hat with linear structuring elements [9]. One challenge is that retinal images contain several other structures, such as the optical disk, exudates, microaneurysms, and hemorrhages, which degrade the performance of vessel detection methods. To overcome this problem, a number of approaches have been proposed to decompose the components of retinal images. In [6], Morphological Component Analysis (MCA) was proposed to separate components such as lesions from vessels.

Tracking-based/path-based methods use regional information (a single vessel rather than the entire vasculature) to find the shortest/cheapest path that matches a vessel profile. The main advantage of this approach is that it provides precise vessel width, unlike other methods. Nowadays, there are many studies which follow this approach, for example, Dijkstra shortest path for vessel patterns [19], graph-cut [5], Bayesian-based tracking [20], and graph-analysis [23].

In this work, we propose a perceptual graph-based segmentation method. The complete framework consists of two stages. The first stage (preprocessing) removes noise as well as unwanted regions, such as the optical disk and the surrounding darker background, and produces a higher-contrast vessel image. The second stage (segmentation) converts the image into a connected graphical layer, where each pixel is represented as a node and its spatial/spectral properties are used to merge pixels (nodes) into more semantic objects in higher connected graphical layers. The Gestalt perceptual principles, that is, similarity, closure, continuity, and proximity of the spatial/spectral properties of nodes, are employed to assemble smaller parts that most likely represent a coherent connected vessel-like pattern.

The experimental evaluation is carried out to test the behavior of segmentation algorithm using the standard datasets such as DRIVE, ARIA, and STARE (details of these datasets are given in Section 4.1) using the following major criteria: sensitivity (Se), specificity (Sp), accuracy (Acc), and area under curve (AUC).

In the following, the idea of perceptual Gestalt principles in image segmentation is introduced in Section 2. Section 3 describes the proposed hierarchical graph-based segmentation framework in detail. First, it presents the preprocessing part, which includes filtering-based inhomogeneity correction using a Gaussian filter, followed by a morphology-based illumination enhancement method. Second, it presents the segmentation part, which integrates the perceptual Gestalt principles into object-based features to merge segments. Section 4 then presents the datasets, experimental metrics, and segmentation results. Finally, conclusions are drawn and ideas for future work are presented in Section 5.

2. Contribution of Current Work

The main contribution of this work is to introduce perceptual Gestalt (form, grouping) principles [24–28] of some middle-level image features into graph-based segmentation to discriminate the connected coherent vessel-like patterns from background.

Four Gestalt principles are employed, inspired by similarity, closure, continuity, and proximity. The theory behind Gestalt grouping is based on how human vision performs perceptual grouping to assemble the parts of an image that most likely represent a single object in the scene. Similarity groups segments into one object depending on a number of shared factors such as color, size, and shape. The proximity rule assembles different parts which are close to each other into one object. Good continuity is the tendency of elements to be grouped into smooth contours depending on factors such as the orientation of local elements, contour length, and curvature properties. The principle of closure refers to the tendency to see an element/object as a complete form or figure, ignoring gaps and incomplete contour lines; closure does not create triangles, circles, and so forth, but fills in the missing information to create familiar shapes [25, 26, 28].

From a computational view, the integration of perceptual Gestalt principles into graph-based segmentation is an important stage that reduces the visual processing required to interpret an input image, by converting a fully connected layer into a locally connected layer [29]. Moreover, such integration helps to cope with undersegmentation/oversegmentation within and between image layers.

In this work, the perceptual principles are employed as follows. The first level, the color-layer, is built by grouping pixels based on the color similarity between each pixel and its 8-connected neighborhood (Gestalt similarity in illumination characteristics). The second layer, the black top-hat (BTH) layer, is constructed by grouping adjacent objects which most likely represent a vessel-like shape after applying the BTH morphological operator (Gestalt closure in connected components of the most common label in the BTH-layer). The final Dijkstra-layer is created by joining adjacent objects which have a high probability of forming connected objects (Gestalt continuity along the Dijkstra tracking path). Gestalt proximity is employed by considering the 8-connected neighborhood as a Gestalt connectivity patch in all layers.

3. The Proposed Method

The proposed framework (Figure 1) comprises two major stages: preprocessing and segmentation. Preprocessing (green rectangles: A–C) consists of filtering-based inhomogeneity correction and morphology-based illumination enhancement (Section 3.1). Hierarchical graph-based segmentation (red rectangles: 1D–3D) constructs five layers, ordered by the number of objects they contain: the original RGB image (largest), the ROI (region of interest), the color-layer, the black top-hat (BTH) layer, and finally the Dijkstra-layer (smallest) (Section 3.2).

Figure 1: Framework of perceptual hierarchical graph-based segmentation. Preprocessing stage is presented in green rectangles. Multiscale graph-based segmentation is shown in red rectangles.
3.1. Preprocessing

Preprocessing involves two main steps to produce a more effective feature image, one that shows high contrast between vessel and nonvessel objects, to facilitate the segmentation. The first step removes the effect of background variations caused by nonuniform illumination, and the second step reduces the complexity of the vascular structure due to its multiple scales and orientations.

In this work, we choose the green channel for preprocessing, as it exhibits the best vessel/nonvessel contrast in retinal images; the red channel can be saturated and the blue channel has a poor dynamic range [16]. In addition, using only the green channel decreases the computational time compared to processing all RGB channels. We first extract the region of interest (ROI) representing the fundus, which is the circle-like, nonblack region at the center of the retinal image, excluding the surrounding black background. Masking out the background eliminates its influence and accelerates later processing stages by focusing only on pixels/objects in the ROI. To find the ROI, a simple minimum threshold is applied to all of the red, green, and blue channels to remove the unwanted background. However, this leaves some mislabeled pixels on the retinal foreground and background. This noise is eliminated by morphological erosion [30–32], which shrinks the ROI toward the center of the retinal image (Figure 1(A)).
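As a rough illustration, the ROI extraction described above (a channel-wise minimum threshold followed by morphological erosion) can be sketched in NumPy/SciPy. The threshold value and the number of erosion iterations are illustrative choices, not values from the paper:

```python
import numpy as np
from scipy import ndimage

def extract_roi(rgb, thresh=40, erode_iter=3):
    """Approximate the circular fundus ROI: threshold all three
    channels, keep the largest connected region, then erode to
    drop mislabeled border pixels (cf. Figure 1(A)).
    `thresh` and `erode_iter` are illustrative, not from the paper."""
    mask = np.all(rgb > thresh, axis=-1)          # non-black foreground
    labels, n = ndimage.label(mask)               # connected components
    if n > 1:                                     # keep the fundus disk only
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        mask = labels == (np.argmax(sizes) + 1)
    # shrink the ROI toward the image centre
    return ndimage.binary_erosion(mask, iterations=erode_iter)
```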

3.1.1. Filtering-Based Inhomogeneity Correction (Figure 1(B))

Due to inhomogeneous light conditions, retinal images may contain background (nonvessels) with high similarity to foreground (vessels), which degrades the performance of the segmentation method. Therefore, it is important to remove the effects of the varying illumination conditions. Zhao et al. [5] applied Retinex theory, adapted from the field of computer vision, to remove unwanted illumination effects: the green channel is modeled as the component-wise product of a reflectance component and an illumination component, and the reflectance is recovered by component-wise log-subtraction of a bilateral blurring of the green channel from the green channel itself. Imani et al. [6] also used the reflectance component of Retinex theory to reduce illumination differences, but with component-wise subtraction of a median blurring of the green channel. Some other studies used the Contrast-Limited Adaptive Histogram Equalization (CLAHE) method instead of global enhancement methods such as histogram equalization and gamma correction; however, its local enhancement is uniform regardless of whether a region is foreground or background [3, 4].

In this work, we use the component-wise subtraction of the background from the selected channel to remove the light variations. Let $g(x, y)$ be a pixel of the green channel. The retinal image background $I_B$ is created by applying a low-pass Gaussian blurring of the green channel with a filter large enough to suppress the brightest region of the retinal image (the optical disk). The corrected image $I_C$ is computed by subtracting $I_B$ from $g$. This is given as
$$I_C(x, y) = g(x, y) - (G_\sigma * g)(x, y). \qquad (1)$$
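A minimal sketch of this correction step, assuming an illustrative Gaussian width (the paper only requires a filter large enough to suppress the optical disk):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_inhomogeneity(green, sigma=20.0):
    """Background subtraction of Section 3.1.1: estimate the slowly
    varying background with a large low-pass Gaussian, then subtract.
    sigma=20 is an illustrative choice, not the paper's value."""
    g = green.astype(np.float64)
    background = gaussian_filter(g, sigma=sigma)   # I_B
    corrected = g - background                     # I_C = g - I_B
    # rescale to [0, 1] for display and later stages
    corrected -= corrected.min()
    if corrected.max() > 0:
        corrected /= corrected.max()
    return corrected
```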

Figure 2 illustrates a comparison between the previously mentioned correction methods using selected examples from the DRIVE, ARIA, and STARE datasets. In general, all methods enhance image contrast, but large areas of homogeneity remain. Component-wise subtraction of the Gaussian blurring is not uniformly the best method; however, it succeeds in eliminating the noisiest part (the optical disk) and enhancing the contrast between vessels and background. This is especially noticeable in the ARIA and STARE images after applying morphology enhancement, as depicted in Figure 4. As a consequence, the vessels can be easily distinguished from the background. Component-wise correction with bilateral filtering gives comparable results after morphology enhancement, because it is an edge-preserving smoothing filter that maintains the edges of vessels [33].

Figure 2: Inhomogeneity correction results on selected images. (a) shows green channel of randomly chosen images from the DRIVE training, DRIVE testing, ARIA, and STARE datasets, respectively. (b) shows CLAHE results [3, 4]. (c) shows results of Retinex theory on green channel [5]. (d) presents difference between green channel and median blurring of green channel [6]. (e) presents results of our correction method.
3.1.2. Morphology-Based Illumination Enhancement (Figure 1(C))

Mathematical morphology is a nonlinear method which uses the concepts of set, topology, and geometry to analyze geometrical structures (e.g., the shape and form of objects) in images. It examines the geometric structure of an image by probing it with structuring elements (B) [30–32].

In this work, the black top-hat (BTH) morphological operator is used, because the BTH operator is the most suitable method for extracting image structure under low illumination conditions [30], as is the case in blood vessel detection. The BTH is obtained by subtracting the corrected image $I_C$ from its morphological closing, as in (2). The morphological closing $\phi_B$, which is a dilation $\delta_B$ followed by an erosion $\varepsilon_B$, acts as a shape filter and preserves objects having relevant structure in the image (3). This is given as
$$\mathrm{BTH}(I_C) = \phi_B(I_C) - I_C, \qquad (2)$$
$$\phi_B(I_C) = \varepsilon_B(\delta_B(I_C)). \qquad (3)$$
All vessel-like patterns are components of linear-shape structures with various horizontal, vertical, and diagonal orientations. To address these differences, we suggest the following approaches.

(i) Multiscale Multiorientation BTH. The BTH is adaptively computed by probing the corrected image with multiple linear structuring elements spanning a range of angular orientations. A set of linear structuring elements is used, where each element B is a matrix representing a line of one of five increasing pixel lengths, rotated in fixed angular steps. Each isolated BTH along a given direction brightens the vessels in that direction, provided that the length of B is large enough to cover the vessel with the largest diameter. Finally, we take the average of the first three maximum BTH results, because the first three BTHs present the highest differences between vessel and nonvessel patterns:
$$\mathrm{BTH}_{avg}(I_C) = \frac{1}{3}\sum_{k=1}^{3} \mathrm{BTH}^{(k)}_{s,\theta}(I_C), \qquad (4)$$
where $\mathrm{BTH}^{(k)}_{s,\theta}$ denotes the $k$th largest BTH response over the scales $s$ and orientations $\theta$ of the linear structuring elements B.
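The multiscale multiorientation BTH can be sketched as follows. The structuring-element lengths and the angular step are placeholder values, since the paper's exact parameters are not reproduced here:

```python
import numpy as np
from scipy.ndimage import grey_closing

def line_footprint(length, angle_deg):
    """Boolean footprint of a linear SE of given pixel length/angle."""
    t = np.linspace(-(length - 1) / 2, (length - 1) / 2, length)
    a = np.deg2rad(angle_deg)
    rows = np.round(t * np.sin(a)).astype(int)
    cols = np.round(t * np.cos(a)).astype(int)
    fp = np.zeros((rows.ptp() + 1, cols.ptp() + 1), dtype=bool)
    fp[rows - rows.min(), cols - cols.min()] = True
    return fp

def multiorientation_bth(img, lengths=(5, 9, 13), step_deg=15):
    """BTH = closing(I) - I for every (length, angle) SE; keep the
    mean of the three largest responses per pixel. Lengths and the
    15-degree step are illustrative, not the paper's values."""
    responses = []
    for length in lengths:
        for ang in range(0, 180, step_deg):
            fp = line_footprint(length, ang)
            responses.append(grey_closing(img, footprint=fp) - img)
    stack = np.sort(np.stack(responses), axis=0)
    return stack[-3:].mean(axis=0)    # average of top-3 responses
```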

(ii) Multiscale BTH. Multiscale BTH is defined as the average of the morphological BTHs obtained from probing the corrected image with elliptical structuring elements B of five increasing sizes, unlike the previous method, which uses linear structuring elements in different directions:
$$\mathrm{BTH}_{ms}(I_C) = \frac{1}{5}\sum_{s=1}^{5} \mathrm{BTH}_{s}(I_C). \qquad (5)$$
In this paper, two other enhancement methods are selected for comparison: the phase-based method [5] and the wavelet-based method [10]. In order to reproduce their results, the parameters used for these filters are the same as recommended in the corresponding literature [5, 10]. Figure 3 shows the results of applying multiscale BTH, multiscale multiorientation BTH, the phase-based method, and the wavelet-based method. The illumination contrast of the wavelet-based method is clearly poor compared to the phase-based method and the proposed method, where the vessel-like structure is more distinguishable. Moreover, the multiscale BTH method produces more consistent results at the optical disk and foveal areas compared to other parts of the vessels. Figure 4 shows the result of applying multiscale BTH and multiscale multiorientation BTH to the different correction methods, that is, CLAHE [3, 4], Retinex [5], the difference of green and its median blurring [6], and the proposed correction method. In summary, applying the morphological operators over all corrected images successfully enhances the contrast of vessels. The results of multiscale BTH over the proposed correction method are efficient in removing inhomogeneity within the image, including the optical disk and fovea regions.
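A corresponding sketch of the multiscale BTH with isotropic (circular) structuring elements; the radii are again placeholders rather than the paper's sizes:

```python
import numpy as np
from scipy.ndimage import grey_closing

def disk_footprint(radius):
    """Boolean circular SE of the given radius."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x * x + y * y <= radius * radius

def multiscale_bth(img, radii=(2, 4, 6, 8, 10)):
    """Average of BTH responses over circular SEs of increasing size,
    per eq. (5); the five radii are illustrative placeholders."""
    return np.mean(
        [grey_closing(img, footprint=disk_footprint(r)) - img
         for r in radii],
        axis=0)
```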

Figure 3: Illustrative comparison of enhancement effects in selected images from the DRIVE training, DRIVE testing, ARIA, and STARE images. The green channels of selected images are presented in (a). (b) shows the results of applying multiscale BTH. Results of multiscale multiorientation BTH are presented in (c). (d) shows local-phased enhancement after applying Retinex to green channels [5]. (e) shows results of applying wavelet-based enhancement results [10].
Figure 4: Enhancement results produced by applying multiscale BTH ((a)–(e)) and BTH of multiscale multiorientation structuring elements ((f)–(j)) to CLAHE channel ((b) and (g)) of green channel [3, 4], Retinex channel ((c) and (h)) [5], median channel ((d) and (i)) [6], and proposed method ((e) and (j)).
3.2. Hierarchical Graph-Based Segmentation (Figure 1(D))

In this paper, a cognitive vision approach to graph-based image segmentation is proposed, employing perceptual knowledge of contextual features to produce semantically meaningful vessel-like patterns. Moreover, it decreases the required computational cost by processing perceptual features instead of the fully connected image.

Graph-based segmentation represents the input image as a weighted graph $G = (V, E)$, where each vertex/node corresponds to a pixel/region of the image and each edge connects a pair of neighboring vertices/nodes. Each edge $(v_i, v_j)$ has a corresponding weight $w(v_i, v_j)$ which represents the dissimilarity between the adjacent vertices/nodes $v_i$ and $v_j$. The dissimilarity is usually defined on properties relevant to the application (e.g., similarity of color and shape) [34–36].
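A sketch of this graph construction for an RGB image, using the 8-connected neighborhood and the Euclidean RGB distance as the edge weight; flat pixel indices stand in for the vertices, and the vectorized offsets are an implementation choice, not the paper's:

```python
import numpy as np

def pixel_graph_edges(rgb):
    """Build the weighted graph G = (V, E): one node per pixel, edges
    between 8-connected neighbours, weight = Euclidean RGB distance.
    Returns (i, j, w) arrays of flat pixel indices and edge weights."""
    h, w, _ = rgb.shape
    idx = np.arange(h * w).reshape(h, w)
    img = rgb.astype(np.float64)
    parts = []
    # four offsets cover all 8-neighbour pairs exactly once
    for dy, dx in ((0, 1), (1, 0), (1, 1), (1, -1)):
        ya, yb = slice(0, h - dy), slice(dy, h)
        if dx >= 0:
            xa, xb = slice(0, w - dx), slice(dx, w)
        else:
            xa, xb = slice(-dx, w), slice(0, w + dx)
        diff = img[ya, xa] - img[yb, xb]
        parts.append((idx[ya, xa].ravel(),
                      idx[yb, xb].ravel(),
                      np.sqrt((diff ** 2).sum(axis=-1)).ravel()))
    i = np.concatenate([p[0] for p in parts])
    j = np.concatenate([p[1] for p in parts])
    wt = np.concatenate([p[2] for p in parts])
    return i, j, wt
```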

In this work, we build an undirected graph with multilevel layers, where the number of vertices/nodes is hierarchically reduced from each layer to the next, because nodes are merged to construct objects with a more semantic interpretation from a visual perception view. Two questions are most important for graph-based algorithms in image segmentation. The first is the weighting function that captures the spectral and/or spatial relationship between adjacent nodes. The second is the merging/homogeneity criterion used to group adjacent vertices/nodes into connected components. The graph starts with the color-layer and the BTH-layer, followed by the Dijkstra-layer. The design of each layer is illustrated in the following sections.

3.2.1. Gestalt Similarity and Proximity in Contextual Color Features (Figure 1(1D))

In order to build the spectral level, the graph-based framework proposed in [35] is applied. It translates a Gaussian-smoothed input image into a graph, where each pixel is mapped to a vertex/node and each edge reflects the spectral relationship between adjacent pixels. We consider the 8-connected neighborhood as a Gestalt connectivity patch. The initial weighting function is the Euclidean distance between the red, green, and blue components of two adjacent vertices/nodes $v_i$ and $v_j$. This is given as
$$w(v_i, v_j) = \sqrt{(R_i - R_j)^2 + (G_i - G_j)^2 + (B_i - B_j)^2}. \qquad (6)$$
The spectral vertices/nodes are hierarchically merged based on their degree of spectral similarity (for details of the algorithm, see [35]). The output of spectral grouping is not robust enough to perceptually interpret the resulting segmented regions as semantic components, as depicted in Figure 5(c). High-level criteria can enhance the results by grouping/splitting components into more meaningful spatial structures, starting from spectral components instead of pixels. Therefore, grouping in the higher layers proceeds as in the spectral-layer, but with different weighting and merging criteria for different measures of similarity. At each stage, components are merged iteratively from small to large until convergence (no more merging). The stop condition is based on prior knowledge and is used to prevent oversegmentation/undersegmentation.
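The merging of [35] can be sketched with a union-find structure: edges are visited in order of increasing weight, and two components are merged when the edge weight does not exceed either component's internal difference plus a size-dependent tolerance k/|C|. This is a simplified rendering of that algorithm, not the authors' implementation:

```python
class DisjointSet:
    """Union-find tracking component size and internal difference."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.internal = [0.0] * n    # max edge weight inside component

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b, w):
        a, b = self.find(a), self.find(b)
        if self.size[a] < self.size[b]:
            a, b = b, a
        self.parent[b] = a
        self.size[a] += self.size[b]
        self.internal[a] = max(self.internal[a], self.internal[b], w)

def felzenszwalb_merge(n_nodes, edges, k=100.0):
    """Kruskal-style pass over edges sorted by weight: merge two
    components when the edge weight is within both components'
    internal difference plus the k/size tolerance of [35].
    `edges` is a list of (w, i, j); k is the scale parameter."""
    ds = DisjointSet(n_nodes)
    for w, i, j in sorted(edges):
        a, b = ds.find(i), ds.find(j)
        if a == b:
            continue
        if w <= min(ds.internal[a] + k / ds.size[a],
                    ds.internal[b] + k / ds.size[b]):
            ds.union(a, b, w)
    return [ds.find(i) for i in range(n_nodes)]
```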

Figure 5: Demonstrative results of each stage of applying graph-based segmentation. (a) shows some selected images from the DRIVE training, DRIVE testing, ARIA, and STARE datasets. Gold standards are shown in (b). (c) presents spectral-layer. The BTH-layer after applying graph-cut to vessel-layer (multiscale BTH) is illustrated in (d). (e) shows the final-layer after applying Dijkstra path.
3.2.2. Gestalt Closure and Proximity Based on Contextual BTH Features (Figure 1(2D))

A sequential connected component labeling approach [37, 38] is employed to separate vessel-like components from background components by scanning all pixels in the vessel-layer and labeling each connected component. For convenience, we consider the case of the 8-connected neighborhood (Gestalt proximity patch). The suggested labeling threshold is the ratio of the mean to the standard deviation of the vessel-layer, with 10 homogeneous pixels as the minimum number of pixels in one component.

We consider the labeled component with the highest number of pixels as background (label 0) and the other components as foreground components (labels 1, 2, 3, etc.). The nonmasked area is left unchanged by the labeling process. The background label is then used to cut the edges that connect spectral components (or vertices) to the background (see (7)), building the BTH-layer, whose number of nodes is lower than that of the spectral-layer, as depicted in Figure 5(d):
$$E' = \{(v_i, v_j) \in E : l(v_i) \neq 0 \ \text{and} \ l(v_j) \neq 0\}, \qquad (7)$$
where $l(\cdot)$ denotes the component label.
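One possible reading of the BTH-layer construction, using the mean/std ratio as the binarization threshold and 10 pixels as the minimum component size; the paper's exact thresholding may differ:

```python
import numpy as np
from scipy import ndimage

def bth_layer_labels(bth, min_pixels=10):
    """Binarize the BTH response and label 8-connected components.
    The mean/std threshold is one reading of the paper's description;
    components smaller than `min_pixels` are folded into the
    background (label 0)."""
    thresh = bth.mean() / max(bth.std(), 1e-12)
    binary = bth > thresh
    # 3x3 structure -> 8-connectivity (Gestalt proximity patch)
    labels, _ = ndimage.label(binary, structure=np.ones((3, 3), bool))
    sizes = np.bincount(labels.ravel())
    small = np.where(sizes < min_pixels)[0]
    labels[np.isin(labels, small)] = 0
    return labels
```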

3.2.3. Gestalt Continuity and Proximity Based on Contextual Features (Figure 1(3D))

The BTH-layer is updated by assigning a new weight for edges between BTH-spectral connected components. The weight is based on the smallest Euclidean distance between vertices of all adjacent components.

The development of our continuity feature is motivated by the need for a long-contour representation suitable for visually perceiving vessel-like patterns as continuous irregular lines. In order to apply the continuity principle, the Dijkstra algorithm [19, 39] is employed within a fixed-size window, by considering the first component in each BTH-layer block as the source point of the graph path and the furthest component in the window as the target point. All vertices on the Dijkstra path from source to target are iteratively merged into one component, until convergence of the vessel and nonvessel regions (Figure 5(e)).
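The path-finding step can be sketched with SciPy's sparse-graph Dijkstra routine: given component-level edges, it returns the cheapest source-to-target path, whose nodes would then be merged into one component. The edge representation here is an assumption for illustration:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def merge_along_dijkstra(n, edges, source, target):
    """Return the node indices on the cheapest path from `source` to
    `target`; in the Dijkstra-layer these nodes are merged into one
    component. `edges` is a tuple of (i, j, w) arrays."""
    i, j, w = edges
    graph = csr_matrix((w, (i, j)), shape=(n, n))
    dist, pred = dijkstra(graph, directed=False, indices=source,
                          return_predecessors=True)
    if np.isinf(dist[target]):
        return []                     # no connecting path
    path, node = [], target
    while node != source:             # walk predecessors back to source
        path.append(node)
        node = pred[node]
    path.append(source)
    return [int(x) for x in path[::-1]]
```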

Figure 5 shows some examples from the standard datasets. The spectral segmentation alone is not sufficient to obtain meaningful structures, because the lighting contrast of vessel-like/non-vessel-like patterns is low. The results therefore contain undersegmented/oversegmented regions, so higher-level criteria must be used to group or split components into more meaningful spatial structures (the BTH-layer). This shows the importance of using shape features to eliminate wrongly labeled vessel-like patterns. Other properties, such as asymmetry and length/width [40], help to decrease the number of nodes in the BTH-layer; however, they are not enough to find the connectedness of nonadjacent nodes in the BTH-layer.

As a result, graph-analysis characteristics (the Dijkstra algorithm) are introduced into the segmentation to determine the connectedness of nonneighboring nodes, which can then be integrated to build a complete contour. Table 1 presents the number of components in the vessel multilayer graph, starting from the initial green channel, through the ROI image, to the BTH- and Dijkstra-layers.

Table 1: Illustrative study of the number of components from fully connected to locally connected layer of selected examples from the DRIVE training, DRIVE testing, ARIA, and STARE datasets.

4. Experimental Evaluation

We have employed three public retinal image datasets to evaluate the proposed segmentation framework. In this section, a brief introduction to these datasets is provided in Section 4.1; evaluation criteria including preprocessing and segmentation criteria are defined in Section 4.2, followed by experimental results in Section 4.3.

4.1. Data

We obtained human retinal images from publicly available datasets: DRIVE, ARIA, and STARE. All datasets consist of RGB components of retinal images with their corresponding ground truth images where blood vessel-like structures are segmented. These datasets are selected because of availability of gold standard from manual annotations of retinal vessels by experts.

DRIVE (Digital Retinal Images for Vessel Extraction). It consists of training and testing sets of images with 565 × 584 pixels, obtained from a diabetic retinopathy screening program in the Netherlands. The set of 40 photographic images was randomly selected; 33 show no sign of diabetic retinopathy, and 7 show signs of mild early diabetic retinopathy. The manual segmentation of set A is used as ground truth. The DRIVE dataset is available at

ARIA (Automated Retinal Image Analysis). It consists of three groups: 92 images of age-related macular degeneration, 59 images of patients with diabetes, and 61 images of healthy eyes, collected by St. Paul’s Eye Unit and the University of Liverpool. Each image was captured at a resolution of 768 × 576 pixels. The manual segmentation from observer DGP is used as ground truth. The ARIA dataset is available at

STARE (STructured Analysis of the Retina). It consists of 20 images, 10 of which show evidence of pathology, obtained from the University of California. Each image was captured at 8 bits per color plane. Hand-labeled vessel networks, provided by Adam Hoover (1st manual) and Valentina Kouznetsova (2nd manual), are used as ground truth. The STARE dataset is available at

4.2. Evaluation Measures
4.2.1. Preprocessing Measures

In order to evaluate contrast enhancement, several objective measures are used: the Contrast Improvement Index (CII), the Peak Signal-to-Noise Ratio (PSNR), and the Mean Structural Similarity (MSSIM) [8, 45, 46]. The CII is the ratio of the contrast of the enhanced image to that of the original image:
$$\mathrm{CII} = \frac{C_{enhanced}}{C_{original}}, \quad C = \frac{|f - b|}{f + b}, \qquad (8)$$
where $C$ is the contrast between foreground (vessels) and background (retinal regions except vessels), and $f$ and $b$ are the mean gray-level values of the foreground and background, respectively. The larger $C$, and consequently the larger CII, the more obvious the difference between foreground and background (higher contrast and better enhancement). The PSNR measures intensity changes between the original and enhanced images based on the Mean Square Error (MSE):
$$\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{L^2}{\mathrm{MSE}}\right), \qquad (9)$$
where $L$ is the maximum possible intensity value. To compute the MSSIM, the image is decomposed into blocks and, for each block, the SSIM is computed as follows:
$$\mathrm{SSIM} = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}, \qquad (10)$$
where $\mu_x$, $\mu_y$, $\sigma_x$, and $\sigma_y$ are the means and standard deviations of the original and enhanced blocks, $\sigma_{xy}$ is their covariance, and $c_1$ and $c_2$ are small stabilizing constants. The MSSIM is the mean of the SSIM values over all blocks.
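The CII and PSNR measures can be computed as below; the contrast definition C = |f − b|/(f + b) is one standard reading of the measure described above:

```python
import numpy as np

def contrast(f_mean, b_mean):
    """C = |f - b| / (f + b): foreground/background contrast,
    one standard reading of the CII's contrast term."""
    return abs(f_mean - b_mean) / (f_mean + b_mean)

def cii(enhanced, original, vessel_mask):
    """Contrast Improvement Index: contrast ratio of enhanced over
    original, with foreground/background given by a vessel mask."""
    f_e, b_e = enhanced[vessel_mask].mean(), enhanced[~vessel_mask].mean()
    f_o, b_o = original[vessel_mask].mean(), original[~vessel_mask].mean()
    return contrast(f_e, b_e) / contrast(f_o, b_o)

def psnr(original, enhanced, peak=255.0):
    """Peak Signal-to-Noise Ratio based on the Mean Square Error."""
    mse = np.mean((original.astype(float) - enhanced.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)
```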

4.2.2. Segmentation Measures

Four common metrics are employed to measure the performance of the proposed segmentation: sensitivity (Se), also called the True Positive Rate (TPR); specificity (Sp), the complement of the False Positive Rate (FPR); accuracy (Acc); and area under curve (AUC). Sensitivity indicates the proportion of vessel pixels in the ground truth images that are identified as vessels by the proposed method (11), while specificity is the proportion of nonvessel pixels in the ground truth images that are correctly rejected (12). Accuracy indicates the proportion of vessel/nonvessel patterns that are correctly identified relative to the total number of pixels in the retinal images (13). The ROC curve is obtained by plotting Se/TPR versus FPR (14). The closer the curve approaches the top left corner, with AUC close to 1, the better the performance of the proposed method [47]:
$$\mathrm{Se} = \frac{TP}{TP + FN}, \qquad (11)$$
$$\mathrm{Sp} = \frac{TN}{TN + FP}, \qquad (12)$$
$$\mathrm{Acc} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad (13)$$
$$\mathrm{FPR} = 1 - \mathrm{Sp} = \frac{FP}{FP + TN}, \qquad (14)$$
where $TP$ is the number of vessel pixels correctly detected in the retinal images, $FP$ is the number of nonvessel pixels detected as vessels, $TN$ is the number of nonvessel pixels correctly detected, and $FN$ is the number of vessel pixels detected as nonvessel pixels.
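The pixel-wise metrics of this section reduce to confusion-matrix counts; a direct sketch:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Se = TP/(TP+FN), Sp = TN/(TN+FP), Acc = (TP+TN)/total,
    computed on boolean vessel masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)       # vessel pixels correctly detected
    tn = np.sum(~pred & ~truth)     # nonvessel pixels correctly rejected
    fp = np.sum(pred & ~truth)      # nonvessel pixels marked as vessel
    fn = np.sum(~pred & truth)      # vessel pixels missed
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    acc = (tp + tn) / pred.size
    return se, sp, acc
```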

4.3. Results

In order to evaluate the efficiency of the proposed segmentation method, we use two approaches. First, each individual stage of the proposed method is evaluated by its comparable stages of other methods considered in this study (i.e., inhomogeneity correction and illumination enhancement) across all three datasets. Second, the performance of the proposed segmentation is compared with other works across DRIVE and STARE datasets.

4.3.1. Inhomogeneity Correction Assessment

The optical disk and fovea area contribute most of the falsely detected vessel pixels in most blood vessel detection frameworks [9, 10, 48]. Subtracting the low-pass Gaussian blurring of the green channel from the green channel itself increases the contrast between the optical disk/foveal area and vessel pixels. Therefore, the noisy area is not enhanced after morphology enhancement, especially after multiscale BTH. Figure 6 presents an example of applying the different correction methods, mentioned in Section 3.1.1, to one of the images from the DRIVE testing dataset. Our correction method (Figure 6(f)) shows superior performance for background subtraction and is more similar to the ground truth image than the other methods. Table 2 shows that the accuracy of our correction method is the highest on all three datasets.

Table 2: Graph-based segmentation performance without and with correction methods on DRIVE, ARIA, and STARE datasets, respectively. 2nd row: CLAHE () [3, 4, 41]. 3rd row: Retinex (, spatial spread based on low-pass filter , and geometric spread of the image intensity ) [5]. 4th row: subtraction median blurring image of green () from green image [6]. 5th row: subtraction low-pass Gaussian blurred image of green () from green image. Se: sensitivity, Sp: specificity, Acc: accuracy, and AUC: area under curve.
Figure 6: Demonstrative comparison of correction methods on one of the DRIVE testing images. The first row shows the results of inhomogeneity correction method and its consequence final segmentation in the second row. (a) RGB image and 1st manual. (b) Green channel without correction. (c) CLAHE correction. (d) Retinex correction. (e) Difference between green and median blurring of green channels. (f) Difference between green and Gaussian blurring of green channel.
4.3.2. Illumination Enhancement Assessment

Figure 7 shows an example of applying the enhancement methods to an image selected randomly from the DRIVE testing dataset. The results of multiscale BTH outperform the other enhancement methods. The false detections of the segmentation method after applying multiscale multiorientation BTH occur because it fails to remove the optical disk and some exudates. The optic disk area, which appears as a bright, round, vertically slightly oval disk, is easily preserved by multiscale multiorientation BTH, which identifies line-shaped structures of varying sizes in vertical, horizontal, or diagonal orientations. Moreover, exudates appear as areas of varying size and shape, which are difficult to eliminate with multiscale multiorientation BTH. In contrast, multiscale BTH maintains only ellipse-shaped structures regardless of their orientation and then takes the magnitude of the directional blurring of the BTH. As depicted in Table 3, the accuracy of multiscale BTH is the highest in all datasets and, consequently, its AUC values are the closest to 1.

Table 3: Graph-based segmentation performance of four enhancement methods: local phase-based [5], wavelet-based [10], multiscale multiorientation BTH, and multiscale BTH methods on DRIVE, ARIA, and STARE datasets, respectively. Se: sensitivity, Sp: specificity, Acc: accuracy, and AUC: area under curve.
Figure 7: Comparison between illumination enhancement methods and their consequent graph-based segmentation results on one of the selected images from DRIVE testing. (a) Green channels of selected image and their 1st-manual images. (b) Local-phase-based enhancement after applying Retinex to green channel. (c) Wavelet-based enhancement method. (d) Multiscale multiorientation BTH. (e) Multiscale BTH.

The suggested preprocessing enhancement method is tested by combining it with the correction methods and comparing it with the other enhancement methods, that is, the local-phase-based and wavelet-based methods, in terms of criteria including CII and PSNR (Table 4). The CII and PSNR of multiscale BTH with low-pass Gaussian blurring correction are the largest. On the other hand, the results of the median and Gaussian corrections are comparable, since both filters efficiently remove noisy areas when applied with large kernel sizes.
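The two named criteria can be computed as below. PSNR follows its standard definition; the CII shown here uses one common formulation (the ratio of mean foreground/background contrast after versus before enhancement), which may differ in detail from the paper's exact definition:

```python
import numpy as np

def psnr(reference, processed, peak=255.0):
    """Peak signal-to-noise ratio in dB (standard definition)."""
    mse = np.mean((reference.astype(np.float64)
                   - processed.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def cii(enhanced, original, mask):
    """Contrast improvement index: ratio of mean vessel/background
    contrast after vs. before enhancement, with C = (f - b) / (f + b).
    `mask` marks vessel (foreground) pixels. One common formulation;
    the paper's exact definition may differ."""
    def contrast(img):
        f = img[mask].mean()
        b = img[~mask].mean()
        return (f - b) / (f + b)
    return contrast(enhanced) / contrast(original)
```

A CII above 1 means the enhancement increased vessel-to-background contrast; higher PSNR means the processed image stays closer to the reference.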

Table 4: Illustrative comparison between all possible combinations of inhomogeneity correction and illumination enhancement methods. 1st row: local phase-based method [5]. 2nd row: wavelet-based method [10]. 3rd/7th row: CLAHE correction [3, 4, 41] with multiscale multiorientation BTH/multiscale BTH. 4th/8th row: Retinex correction [5] with multiscale multiorientation BTH/multiscale BTH. 5th/9th row: median blurring correction [6] with multiscale multiorientation BTH/multiscale BTH. 6th/10th row: low-pass Gaussian blurring correction with multiscale multiorientation BTH/multiscale BTH.
4.3.3. Comparison between Proposed Segmentation and State-of-the-Art Methods

The segmentation performance of the proposed method in terms of sensitivity, specificity, accuracy, and area under curve is compared with other state-of-the-art methods (matched filtering, supervised, unsupervised, and artificial methods) on the most popular public datasets, DRIVE and STARE, as depicted in Table 5. The ARIA dataset is not used in this comparison because only a few studies have used it. The comparison shows that the proposed method is comparable to most blood vessel detection frameworks. The slightly lower value is due to the fact that the proposed segmentation detects smaller and thinner vascular structures which are generally not annotated in the ground truth images. This can be clearly seen in Figure 8, which shows examples from the STARE and ARIA datasets where the proposed segmentation method succeeds in detecting small and thin vessel-like patterns. It is also due to overestimated vessel width, which results from the initial spectral segmentation (Section 3.2.1) ignoring all spatial properties when building the initial regions.
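For reference, the pixel-wise sensitivity, specificity, and accuracy used throughout these comparisons can be computed directly from a binary vessel map and the manual ground truth (AUC additionally requires a soft vessel score and is omitted from this sketch):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise sensitivity, specificity, and accuracy of a binary
    vessel map against a manual ground-truth mask."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.sum(pred & truth)    # vessel pixels correctly detected
    tn = np.sum(~pred & ~truth)  # background correctly rejected
    fp = np.sum(pred & ~truth)   # background marked as vessel
    fn = np.sum(~pred & truth)   # missed vessel pixels
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    acc = (tp + tn) / (tp + tn + fp + fn)
    return se, sp, acc
```

Note that extra thin vessels found by a segmenter but absent from the ground truth count as false positives here, which is exactly why detecting unannotated small vessels can lower the reported scores.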

Table 5: Performance of segmentation methods based on sensitivity (Se), specificity (Sp), accuracy (Acc), and area under curve (AUC) on the DRIVE and STARE datasets. The hand-segmented images from the first manual observers are used as benchmarks (the 1st STARE manual set is selected because all works used it).
Figure 8: Illustrative comparison between results obtained from manual observers and the proposed segmentation on selected sample from STARE and ARIA datasets. (a) Selected sample. (b) 2nd manual sample (selected because it presents most small/thin blood vessels). (c) Segmented sample.

5. Discussion and Conclusion

In this paper, we proposed a new method to detect blood vessels in fundus images, which is based on three main steps: filtering-based correction, morphological-illumination enhancement, and graph-based segmentation.

Low-pass Gaussian blurring (filtering-based correction) is suggested to remove the most prominent noisy areas in retinal images, that is, the optic disk and fovea. This correction method may not be optimal; however, it succeeds in removing most of the noisy areas from retinal images.

Due to the high sensitivity of multistructure elements to edges in all directions, multiscale/multiorientation and multiscale morphological BTHs (morphological-illumination enhancement) are proposed, which are capable of detecting most blood vessel edges, including thin and small ones. The remaining thin vessels are missed because of the minimum-size threshold used in connected component labeling of the BTH layer. An appropriate thresholding method that finds thin vessels while avoiding false-edge pixels is therefore important; hence, one of our future works is to find a more suitable connected component thresholding method to increase the accuracy of the proposed method.
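The connected-component size threshold discussed above can be sketched as follows; `min_size` is an illustrative value, and the trade-off is exactly the one noted in the text: a low threshold keeps thin vessels but admits noise, while a high one loses them:

```python
import numpy as np
from scipy.ndimage import label

def remove_small_components(binary, min_size=10):
    """Drop connected components with fewer than min_size pixels from
    a binary response layer. min_size is illustrative, not the
    paper's threshold."""
    labels, n = label(binary)  # 4-connectivity by default
    if n == 0:
        return binary.astype(bool)
    sizes = np.bincount(labels.ravel())
    keep = sizes >= min_size
    keep[0] = False  # label 0 is the background
    return keep[labels]
```

Run on a thresholded BTH layer, this removes isolated speckle while keeping elongated vessel fragments, provided they exceed the size threshold.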

Hierarchical graph-based segmentation is based on applying perceptual Gestalt principles (e.g., similarity, closure, proximity, and continuity) to spectral/spatial features between nodes in the graphical multilayers. For instance, similarity of spectral characteristics between adjacent nodes is used to group small nodes. These characteristics are informative; however, semantic ambiguity still exists because of similarity in appearance, shape, or other higher-level features. Moreover, the closure principle is applied by combining nodes if their connected component labels are not part of the common connected component label of the BTH layer. In the authors' view, other laws underlie this closure principle, namely, similarity in size and similarity in orientation, because the BTH operator preserves multistructures that vary in size and orientation.
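As a toy illustration of the similarity principle (not the paper's full hierarchical algorithm), adjacent region nodes can be merged greedily when their mean spectral values are close; the region means, adjacency list, and threshold `tau` below are all hypothetical:

```python
class UnionFind:
    """Disjoint sets with path halving, for tracking merged regions."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def merge_by_similarity(means, edges, tau=10.0):
    """Gestalt 'similarity': merge adjacent regions whose mean
    intensities differ by less than tau. `means` holds one value per
    region; `edges` lists adjacent region pairs. Toy sketch only."""
    uf = UnionFind(len(means))
    # Process the most similar pairs first, as in greedy graph merging.
    for a, b in sorted(edges, key=lambda e: abs(means[e[0]] - means[e[1]])):
        if abs(means[a] - means[b]) < tau:
            uf.union(a, b)
    return [uf.find(i) for i in range(len(means))]
```

Regions that end up with the same root label belong to one merged object; dissimilar neighbors (e.g., vessel vs. background) stay separate.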

Unlike other works [28], this work addresses the question of which principle should take the highest priority. The major law in this work is proximity (connectedness): the adjacent nodes in each layer are taken into account to make a vessel-like pattern stand out from its background as a separate object, by grouping small subobjects together with their surroundings based on the other principles.

The quantitative performance of both enhancement and segmentation shows that the proposed method works well on healthy retinal images. However, a main drawback is that it tends to produce false detections in overlapping areas, which lowers the overall accuracy of the proposed method, especially on STARE images. To address this problem, a two-label graph-cut step would help to reduce incorrect detections and improve the performance of the proposed segmentation.

In this paper, the proposed method is evaluated objectively. From the viewpoint of cognitive psychology, subjective evaluation is an effective way to test the performance of perceptual segmentation. Therefore, we aim to investigate human visual perception for segmenting, and consequently detecting, blood vessels (new ground truth) and to compare it with the segmentation results.

Algorithm run-time is a concern when assessing performance. The computation time of the proposed method is 10 minutes and 3 seconds: the preprocessing part takes around 3 s, and the segmentation part takes about 10 min to analyze the graph algorithms. The proposed algorithm is quite slow because of the sequential connected component labeling and Dijkstra algorithms in the graph-based segmentation. Therefore, in the future, we aim to parallelize the connected component labeling and Dijkstra algorithms by translating our graph-based segmentation into a parallel segmentation on a massively parallel GPU [49].
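The sequential Dijkstra step referred to here is the standard priority-queue algorithm, sketched below on a toy adjacency-list graph; its inherently sequential heap operations are what motivate the planned GPU parallelization:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source` over a weighted graph
    given as {node: [(neighbour, weight), ...]}. Standard sequential
    algorithm: each heap pop depends on all previous updates, which
    is what makes a direct parallelization nontrivial."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

In the segmentation, edge weights would encode dissimilarity between nodes, so cheap paths trace continuous vessel-like chains (the continuity principle).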

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.


  1. S. J. Lee, C. A. McCarty, H. R. Taylor, and J. E. Keeffe, “Costs of mobile screening for diabetic retinopathy: a practical framework for rural populations,” The Australian Journal of Rural Health, vol. 9, no. 4, pp. 186–192, 2001.
  2. W. Tasman, A. Patz, J. A. McNamara, R. S. Kaiser, M. T. Trese, and B. T. Smith, “Retinopathy of prematurity: the life of a lifetime disease,” American Journal of Ophthalmology, vol. 141, no. 1, pp. 167–174, 2006.
  3. A. W. Setiawan, T. Mengko, O. Santoso, and A. Suksmono, “Color retinal image enhancement using CLAHE,” in Proceedings of the International Conference on ICT for Smart Society (ICISS '13), pp. 1–3, Jakarta, Indonesia, June 2013.
  4. G. Azzopardi, N. Strisciuglio, M. Vento, and N. Petkov, “Trainable COSFIRE filters for vessel delineation with application to retinal images,” Medical Image Analysis, vol. 19, no. 1, pp. 46–57, 2015.
  5. Y. Zhao, Y. Liu, X. Wu, S. P. Harding, and Y. Zheng, “Retinal vessel segmentation: an efficient graph cut approach with retinex and local phase,” PLoS ONE, vol. 10, no. 4, Article ID e0122332, 2015.
  6. E. Imani, M. Javidi, and H.-R. Pourreza, “Improvement of retinal blood vessel detection using morphological component analysis,” Computer Methods and Programs in Biomedicine, vol. 118, no. 3, pp. 263–279, 2015.
  7. G. Läthén, J. Jonasson, and M. Borga, “Blood vessel segmentation using multi-scale quadrature filtering,” Pattern Recognition Letters, vol. 31, no. 8, pp. 762–767, 2010.
  8. M. S. Miri and A. Mahloojifar, “Retinal image analysis using curvelet transform and multistructure elements morphology by reconstruction,” IEEE Transactions on Biomedical Engineering, vol. 58, no. 5, pp. 1183–1192, 2011.
  9. M. M. Fraz, S. A. Barman, P. Remagnino et al., “An approach to localize the retinal blood vessels using bit planes and centerline detection,” Computer Methods and Programs in Biomedicine, vol. 108, no. 2, pp. 600–616, 2012.
  10. P. Bankhead, C. N. Scholfield, J. G. McGeown, and T. M. Curtis, “Fast retinal vessel detection and measurement using wavelets and edge location refinement,” PLoS ONE, vol. 7, no. 3, Article ID e32435, 2012.
  11. C. Wang, N. Komodakis, and N. Paragios, “Markov Random Field modeling, inference and learning in computer vision and image understanding: a survey,” Computer Vision and Image Understanding, vol. 117, no. 11, pp. 1610–1627, 2013.
  12. M. Liao, Y.-Q. Zhao, X.-H. Wang, and P.-S. Dai, “Retinal vessel enhancement based on multi-scale top-hat transformation and histogram fitting stretching,” Optics and Laser Technology, vol. 58, pp. 56–62, 2014.
  13. A. Hoover, V. Kouznetsova, and M. Goldbaum, “Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response,” IEEE Transactions on Medical Imaging, vol. 19, no. 3, pp. 203–210, 2000.
  14. T. Kauppi, V. Kalesnykiene, J.-K. Kamarainen et al., “The DIARETDB1 diabetic retinopathy database and evaluation protocol,” in Proceedings of the 18th British Machine Vision Conference (BMVC '07), September 2007.
  15. M. Niemeijer, B. van Ginneken, M. J. Cree et al., “Retinopathy online challenge: automatic detection of microaneurysms in digital color fundus photographs,” IEEE Transactions on Medical Imaging, vol. 29, no. 1, pp. 185–195, 2010.
  16. D. Marín, A. Aquino, M. E. Gegúndez-Arias, and J. M. Bravo, “A new supervised method for blood vessel segmentation in retinal images by using gray-level and moment invariants-based features,” IEEE Transactions on Medical Imaging, vol. 30, no. 1, pp. 146–158, 2011.
  17. M. M. Fraz, P. Remagnino, A. Hoppe et al., “Blood vessel segmentation methodologies in retinal images—a survey,” Computer Methods and Programs in Biomedicine, vol. 108, no. 1, pp. 407–433, 2012.
  18. N. C. Mithun, S. Das, and S. A. Fattah, “Automated detection of optic disc and blood vessel in retinal image using morphological, edge detection and feature extraction technique,” in Proceedings of the 16th International Conference on Computer and Information Technology (ICCIT '13), pp. 98–102, Khulna, Bangladesh, March 2014.
  19. R. Estrada, C. Tomasi, M. T. Cabrera, D. K. Wallace, S. F. Freedman, and S. Farsiu, “Exploratory Dijkstra forest based automatic vessel segmentation: applications in video indirect ophthalmoscopy (VIO),” Biomedical Optics Express, vol. 3, no. 2, pp. 327–339, 2012.
  20. Y. Yin, M. Adel, and S. Bourennane, “Retinal vessel segmentation using a probabilistic tracking method,” Pattern Recognition, vol. 45, no. 4, pp. 1235–1244, 2012.
  21. F. Zana and J.-C. Klein, “Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation,” IEEE Transactions on Image Processing, vol. 10, no. 7, pp. 1010–1019, 2001.
  22. S. M. Zabihi, M. Delgir, and H. R. Pourreza, “Retinal vessel segmentation using color image morphology and local binary patterns,” in Proceedings of the 6th Iranian Conference on Machine Vision and Image Processing (MVIP '10), pp. 1–5, Isfahan, Iran, October 2010.
  23. B. Dashtbozorg, A. M. Mendonça, and A. Campilho, “An automatic graph-based approach for artery/vein classification in retinal images,” IEEE Transactions on Image Processing, vol. 23, no. 3, pp. 1073–1083, 2014.
  24. K. Koffka, Principles of Gestalt Psychology, Harcourt, New York, NY, USA, 1935.
  25. M. Wertheimer, “Laws of organization in perceptual forms,” in Psychologische Forschung, pp. 71–88, 1938.
  26. J. Wagemans, J. H. Elder, M. Kubovy et al., “A century of Gestalt psychology in visual perception: I. Perceptual grouping and figure-ground organization,” Psychological Bulletin, vol. 138, no. 6, pp. 1172–1217, 2012.
  27. A. Richtsfeld, M. Zillich, and M. Vincze, “Implementation of Gestalt principles for object segmentation,” in Proceedings of the 21st International Conference on Pattern Recognition (ICPR '12), pp. 1330–1333, Tsukuba, Japan, November 2012.
  28. R. G. Mesquita and C. A. B. Mello, “Segmentation of natural scenes based on visual attention and gestalt grouping laws,” in Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC '13), pp. 4237–4242, Manchester, UK, October 2013.
  29. D. Pei, Z. Li, R. Ji, and F. Sun, “Efficient semantic image segmentation with multi-class ranking prior,” Computer Vision and Image Understanding, vol. 120, pp. 81–90, 2014.
  30. P. Soille, Morphological Image Analysis: Principles and Applications, Springer, New York, NY, USA, 2nd edition, 2003.
  31. P. Soille, “Morphological image compositing,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 5, pp. 673–683, 2006.
  32. J. Cousty, L. Najman, and B. Perret, “Constructive links between some morphological hierarchies on edge-weighted graphs,” in Mathematical Morphology and Its Applications to Signal and Image Processing: 11th International Symposium, ISMM 2013, Uppsala, Sweden, May 27–29, 2013. Proceedings, vol. 7883 of Lecture Notes in Computer Science, pp. 86–97, Springer, Berlin, Germany, 2013.
  33. M. Elad, “Retinex by two bilateral filters,” in Proceedings of the 5th International Conference on Scale Space and PDE Methods in Computer Vision, Scale-Space, vol. 3459, pp. 217–229, April 2005.
  34. M. R. Khokher, A. Ghafoor, and A. M. Siddiqui, “Image segmentation using fuzzy rule based system and graph cuts,” in Proceedings of the 12th International Conference on Control, Automation, Robotics and Vision (ICARCV '12), pp. 1148–1153, Guangzhou, China, December 2012.
  35. P. F. Felzenszwalb and D. P. Huttenlocher, “Efficient graph-based image segmentation,” International Journal of Computer Vision, vol. 59, no. 2, pp. 167–181, 2004.
  36. A. Rezvanifar and M. Khosravifard, “Including the size of regions in image segmentation by region-based graph,” IEEE Transactions on Image Processing, vol. 23, no. 2, pp. 635–644, 2014.
  37. M. B. Dillencourt, H. Samet, and M. Tamminen, “A general approach to connected-component labeling for arbitrary image representations,” Journal of the ACM, vol. 39, no. 2, pp. 253–280, 1992.
  38. K. Suzuki, I. Horiba, and N. Sugie, “Linear-time connected-component labeling based on sequential local operations,” Computer Vision and Image Understanding, vol. 89, no. 1, pp. 1–23, 2003.
  39. S. J. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, Pearson Education, 2nd edition, 2003.
  40. Trimble-Definiens Joint Press Release, Trimble Acquires Definiens—Earth Sciences Business to Expand Its Geospatial Portfolio, 2010.
  41. M. H. A. Fadzil, H. A. Nugroho, H. Nugroho, and I. L. Iznita, “Contrast enhancement of retinal vasculature in digital fundus image,” in Proceedings of the International Conference on Digital Image Processing, pp. 137–141, IEEE, Bangkok, Thailand, March 2009.
  42. U. T. V. Nguyen, A. Bhuiyan, L. A. F. Park, and K. Ramamohanarao, “An effective retinal blood vessel segmentation method using multi-scale line detection,” Pattern Recognition, vol. 46, no. 3, pp. 703–715, 2013.
  43. J. I. Orlando and M. Blaschko, “Learning fully-connected CRFs for blood vessel segmentation in retinal images,” in Medical Image Computing and Computer-Assisted Intervention—MICCAI 2014, P. Golland, N. Hata, C. Barillot, J. Hornegger, and R. Howe, Eds., vol. 8673 of Lecture Notes in Computer Science, pp. 634–641, Springer, New York, NY, USA, 2014.
  44. A. Salazar-Gonzalez, D. Kaba, Y. Li, and X. Liu, “Segmentation of the blood vessels and optic disk in retinal images,” IEEE Journal of Biomedical and Health Informatics, vol. 18, no. 6, pp. 1874–1886, 2014.
  45. S. N. Shajahan and R. C. Roy, “An improved retinal blood vessel segmentation algorithm based on multistructure elements morphology,” International Journal of Computer Applications, vol. 57, pp. 31–36, 2012.
  46. J. George and S. P. Indu, “Fast adaptive anisotropic filtering for medical image enhancement,” in Proceedings of the 8th IEEE International Symposium on Signal Processing and Information Technology (ISSPIT '08), pp. 227–232, Sarajevo, Bosnia and Herzegovina, December 2008.
  47. D. T. Larose, Discovering Knowledge in Data: An Introduction to Data Mining, John Wiley & Sons, New York, NY, USA, 2005.
  48. A. Fathi and A. R. Naghsh-Nilchi, “Automatic wavelet-based retinal blood vessels segmentation and vessel diameter estimation,” Biomedical Signal Processing and Control, vol. 8, no. 1, pp. 71–80, 2013.
  49. NVIDIA Corporation, NVIDIA CUDA C Programming Guide, Version 3.2, 2011.