BioMed Research International

Research Article | Open Access | Volume 2020 | Article ID 8365783

Khan Bahadar Khan, Muhammad Shahbaz Siddique, Muhammad Ahmad, Manuel Mazzara, "A Hybrid Unsupervised Approach for Retinal Vessel Segmentation", BioMed Research International, vol. 2020, Article ID 8365783, 20 pages, 2020.

A Hybrid Unsupervised Approach for Retinal Vessel Segmentation

Academic Editor: Maurizio Battaglia Parodi
Received: 22 Jan 2020
Accepted: 26 Nov 2020
Published: 12 Dec 2020


Retinal vessel segmentation (RVS) is a significant source of useful information for the monitoring, diagnosis, initial treatment, and surgical planning of ophthalmic disorders. Common disorders such as stroke, diabetic retinopathy (DR), and cardiac diseases often change the normal structure of the retinal vascular network. A great deal of research has been devoted to building an automatic RVS system, but the problem remains open. In this article, a framework is proposed for RVS with fast execution and competitive results. An initial binary image is obtained by applying MISODATA to the preprocessed image. For vessel structure enhancement, B-COSFIRE filters are utilized along with thresholding to obtain another binary image. These two binary images are combined by a logical AND-type operation. The result is then fused with the B-COSFIRE-enhanced image, followed by thresholding, to obtain the vessel location map (VLM). The methodology is verified on four datasets that are publicly accessible for benchmarking and validation: DRIVE, STARE, HRF, and CHASE_DB1. The obtained results are compared with existing competing methods.

1. Introduction

The human visual system is the most essential sensory system for gathering information, navigation, and learning [1]. The retina is the light-sensitive part of the eye that contains the fovea, photoreceptors, optic disc, and macula. It is a layered tissue lining the interior of the eye, acting as the initial sensor of the visual pathway and providing the sense of sight. Moreover, it allows us to understand the colors, dimensions, and shapes of objects by processing the light they reflect or emit. The retinal image of an eye is captured with a fundus camera [2]. RGB fundus photographs are projections of the internal surface of the eye. Retinal imaging has evolved swiftly and is now one of the most common practices in healthcare for screening patients suffering from ophthalmologic or systemic diseases. To identify numerous ophthalmologic diseases, the ophthalmologist uses vessel condition as an indicator, which is a vital component of retinal fundus images.

Critical diagnostic signs of eye diseases in human retinal images are indicated by the shape, appearance, morphological features, and tortuosity of the blood vessels [3]. The structure of the retinal vasculature is also used for screening brain and heart diseases such as stroke [4, 5]. Retinal vessel structures play a significant role among the structures in fundus images, and RVS is the elementary phase of retinal image examination [6]. Vascular-related diseases are diagnosed with the help of vessel delineation, which is an important component of medical image processing. Additionally, ongoing research in deep learning has suggested multiple approaches emphasizing the separation and delineation of the vasculature.

The inadequate number of images and the low contrast of publicly available retina datasets are challenging for deep learning-based research. Training a supervised network requires a dataset with a large number of retina images captured with different imaging systems under diverse environmental conditions. Deep learning-based methods will aid in controlling blindness through timely and precise identification of diseases for successful remedy, and thus markedly increase the quality of life of patients with eye ailments [7]. RVS is a very difficult task for several reasons:
(1) The structure and formation of retinal vessels are very complex, and there is prominent dissimilarity between local regions regarding the shape, size, and intensity of vessels.
(2) Some structures, e.g., hemorrhages, have the same intensity and shape as vessels. Moreover, there are thin microvessels, whose width normally ranges from one to a few pixels and which can easily blend into the background. The images exhibit irregular illumination and low, varying contrast [7, 8]. Typically, noise is added to fundus images by the image-capturing procedure, such as artifacts on the lens or movement of the patient [9]. It is hard to differentiate vessels from similar structures or noise in the retina image; in other words, thicker vessels are more prominent than thinner ones, as shown in Figure 1.
(3) Different manual graders produce different segmentation results, and manual RVS is a hard and tedious task. Over the last two decades, automatic RVS has attracted noteworthy attention and numerous techniques have been developed, but their performance degrades when the dataset changes. Some techniques are not fully automatic, while others are incapable of handling pathological images.
Some of these methods are evaluated on datasets with a limited number of images, while others suffer from oversegmentation or undersegmentation of abnormal images [10]. Hence, the dilemma of perfect RVS is still unresolved.

Automated RVS techniques provide incredible support to the ophthalmologist for the identification and medication of numerous ophthalmological abnormalities. In this article, an automatic unsupervised approach is developed for RVS that consists of a combination of preprocessing steps, segmentation, vessel structure-based enhancement, and postprocessing steps. The preprocessing steps aim at eliminating noise and improving the contrast of the fundus image. Segmentation is performed using the Modified Iterative Self Organizing Data Analysis Technique (MISODATA) to acquire a binary image that is fused with the segmented image of the Combination Of Shifted Filter Responses (B-COSFIRE) filter. Then, the fused image is multiplied with the B-COSFIRE-enhanced image to obtain the initial vessel location map (VLM). Lastly, the VLM and the fused image are combined by a logical OR-type operator to obtain the final result. In a nutshell, the main contributions of this research are the following:
(1) A mask image is not provided with all retina datasets. Automatic mask creation is proposed for each image to extract the ROI, which suppresses the false positive rate (FPR).
(2) The proposed efficient denoising process (preprocessing steps) improves the selection of a suitable threshold.
(3) The basic ISODATA algorithm processes the retina image only once, locally and then globally, which sometimes prevents it from finding an optimal threshold. The modified ISODATA technique finds the global threshold of the entire image and compares it with the individual local threshold of each segment in order to select the optimal threshold for more precise detection of vessels.
(4) The vessel location map (VLM) is a new scheme to achieve better performance, in which background noise eradication and vessel enhancement are accomplished independently.
(5) Distinctive postprocessing steps (AND-type and OR-type operations) reject misclassified foreground pixels.
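The fusion logic described above can be sketched in a few lines of NumPy; the array names and the threshold value here are illustrative stand-ins, not the paper's exact parameters:

```python
import numpy as np

def fuse_vessel_maps(bw_isodata, bw_cosfire, enh_cosfire, thr=0.5):
    """Sketch of the described fusion (names are illustrative):
    1) AND the MISODATA binary map with the B-COSFIRE binary map,
    2) multiply the result with the B-COSFIRE enhanced image and threshold,
    3) OR the thresholded map with the fused map to get the final vessel map."""
    fused = np.logical_and(bw_isodata, bw_cosfire)   # reject pixels not confirmed by both
    located = (fused * enh_cosfire) > thr            # keep vessels at confirmed positions
    return np.logical_or(located, fused)             # final binary vessel map
```

The AND step suppresses false positives that only one detector produced, while the final OR step recovers confirmed vessel pixels that thresholding of the enhanced image would otherwise drop.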

2. Related Work

Numerous methodologies for RVS have been developed in the literature [4, 10]. These methodologies fall into two sets: supervised and unsupervised procedures. Supervised techniques utilize a trained classifier to assign pixels to the foreground or background. They employ various classifiers, for instance, adaptive boosting (AdaBoost), support vector machines (SVM), neural networks (NN), Gaussian mixture models (GMM), and k-nearest neighbors (k-NN).

An RVS method utilizing a supervised k-NN classifier to separate foreground and background pixels was recommended by Niemeijer et al. [11], with a feature vector (FV) formed from a multiscale (MS) Gaussian filter. Staal et al. [12] proposed a comparable RVS methodology using an FV generated from a ridge detector. A feed-forward NN-based classifier was applied by Marin et al. [13], using a 7-D FV generated from moment invariants.

An SVM-based approach was presented by Ricci et al. [14], utilizing FV constructed through a rotation-invariant linear operator and pixel intensity. An AdaBoost classifier was suggested by Lupascu et al. [15], utilizing a feature set. An ensemble-based RVS system applying a simple linear iterative clustering (SLIC) algorithm was presented by Wang et al. [16]. A GMM classifier-based scheme was recommended by Roychowdhury et al. [17], utilizing FV extracted from the pixel neighborhood on first and second-order gradient images.

Zhu et al. [18] offered an extreme learning machine (ELM)-based RVS scheme utilizing an FV generated from morphological and local attributes combined with attributes extracted from phase congruency, Hessian, and divergence of vector fields (DVF). Tang et al. [19] recommended an SVM-based RVS scheme utilizing an FV created from MS vessel filtering and Gabor wavelet features. A random forest classifier-based RVS system was proposed by Aslani et al. [20], utilizing an FV created from MS and multiorientation Gabor filter responses and intensity features combined with features extracted from a vesselness measure and the B-COSFIRE filter.

A directionally sensitive vessel enhancement-based scheme combined with an NN derived from the U-Net model was presented in [21]. Thangaraj et al. [22] constructed an FV from Gabor filter responses, Frangi's vesselness measure, local binary pattern features, Hu moment invariants, and grey-level cooccurrence matrix features for RVS utilizing an NN-based approach. Memari et al. [23] recommended an arrangement of various enhancement techniques with the AdaBoost classifier to segregate foreground and background pixels.

A three-stage deep learning approach (thick vessel extraction, thin vessel extraction, and vessel fusion) was proposed in [24]. Guo et al. [25] suggested an MS deeply supervised network with short connections (BTS-DSN) for RVS. Local intensities, local binary patterns, histograms of gradients, DVF, higher-order local autocorrelations, and morphological transformation features were used for RVS in [26], where random forests selected the feature sets, which were then utilized in combination with a hierarchical classification methodology to extract the vessels.

Alternatively, unsupervised systems are categorized into matched filtering (MF), mathematical morphology (MM), and multiscale-based approaches. In matched filtering approaches, thick and thin vessels are extracted by selecting large and small filter kernels, respectively. However, large kernels accurately detect major vessels but misclassify thin vessels by overestimating their width; conversely, small kernels accurately extract thin vessels but detect thick vessels with reduced width. To obtain a complete vascular network, a conventional MF technique must therefore be applied with a large number of diverse filter masks in various directions.
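The conventional MF idea just described can be sketched as follows: a zero-mean kernel with a Gaussian cross-section is rotated over several orientations and the per-pixel maximum response is kept. The kernel size, standard deviation, and number of orientations below are illustrative values, not parameters from any cited method:

```python
import numpy as np
from scipy.ndimage import convolve, rotate

def matched_filter_response(img, sigma=1.5, length=9, n_angles=12):
    """Oriented matched filtering for dark, line-like vessels.
    A 1-D Gaussian profile is replicated along the line direction,
    made zero-mean (so flat background gives ~0 response), rotated
    over n_angles orientations, and the maximum response is kept."""
    x = np.arange(-3 * int(sigma) - 1, 3 * int(sigma) + 2)
    profile = -np.exp(-x**2 / (2 * sigma**2))        # negative: vessels are darker
    kernel = np.tile(profile, (length, 1))           # extend profile along the line
    kernel -= kernel.mean()                          # zero mean suppresses background
    responses = [convolve(img, rotate(kernel, a, reshape=True))
                 for a in np.arange(0, 180, 180 / n_angles)]
    return np.max(responses, axis=0)                 # best orientation per pixel
```

Thresholding this response map yields a binary vessel estimate; the trade-off between thick- and thin-vessel detection discussed above corresponds to the choice of `sigma`.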

Similar methods were employed using MF [27–32], combined filters [33], COSFIRE filters [3, 5, 34–36], Gaussian filters [37], wavelet filters [38], and Frangi's filter [39]. The MM-based approaches are utilized for isolating retinal image segments such as the optic disc, macula, fovea, and vasculature. Morphological operators apply structuring elements (SE) to images for the extraction and representation of region contours. Morphological operations for detecting particular structures have the benefit of speed and noise elimination, but they cannot exploit the known cross-sectional shape of vessels, and extremely tortuous vessels are hard to extract when a large SE is superimposed. Morphological operations were utilized for both enhancement and RVS [2, 40–44]. On the other hand, retinal blood vessels of variable thickness were obtained at various scales by multiscale approaches [45–50].

3. Proposed Model

The complete structure of the proposed RVS framework is introduced in this section. The information and description of every stage are also presented in subsections.

3.1. Overview

The proposed framework consists of two major blocks to obtain a final binary image: (1) retina image denoising and segmentation and (2) vessel structure-based enhancement and segmentation. The key objective of this framework is to extract the vasculature accurately while eliminating noise and additional disease artifacts. The complete structure of the proposed framework is depicted in Figure 2, in which Block-I consists of the selection of a suitable retina channel, contrast enhancement, noise filtering, region of interest (ROI) extraction, thresholding, and postprocessing steps. Block-II includes the application of the B-COSFIRE filter, logical operations, and postprocessing steps. The initial binary vessel map of Block-I is fused with the B-COSFIRE filter segmented image in Block-II. Then, it is multiplied with the B-COSFIRE filter-enhanced image, which is further thresholded. This output image is combined with the initial postprocessed image by the logical OR-type operation to obtain the final binary image.

3.2. Block-I: Retina Image Denoising and Segmentation

In the first block, the retina image is passed through selected techniques to extract the initial denoised vessel map. The green band of the RGB retina image is extracted and nominated for subsequent operations due to its noticeable contrast difference between the vessels and other retina structures. RGB retina images generally have contrast variations, low resolution, and noise. To avoid such variations and produce a more appropriate image for further processing, vessel light reflex elimination and background uniformity operations are performed. Retinal vessel structures have poor reflectance compared to other retinal planes, and some vessels contain a bright stripe (light reflex) which runs down the central length of the vessel. To overcome this problem, a disc-shaped opening operator with a 3-pixel-wide SE is used on the green plane. A minimal disc width is selected to avoid merging close vessels. Background uniformity and smoothing of random salt-and-pepper noise are obtained by applying a mean filter. Additional noise flattening is achieved by applying a Gaussian kernel of suitable size and variance.
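The preprocessing chain above can be sketched with scipy.ndimage; the disc radius and the filter sizes are assumptions where the text elides the exact values:

```python
import numpy as np
from scipy import ndimage as ndi

def preprocess_green(green):
    """Preprocessing sketch: remove the central light reflex with a small
    morphological opening (disc SE of 3-pixel width), then smooth
    salt-and-pepper noise with a mean filter and a Gaussian kernel.
    Filter sizes/sigma are illustrative assumptions."""
    disc = np.hypot(*np.mgrid[-1:2, -1:2]) <= 1              # 3-pixel-wide disc SE
    no_reflex = ndi.grey_opening(green, footprint=disc)      # suppress bright central stripe
    smoothed = ndi.uniform_filter(no_reflex.astype(float), size=3)  # mean filter
    return ndi.gaussian_filter(smoothed, sigma=1.0)          # extra noise flattening
```

The opening removes thin bright structures (the light reflex) while leaving the darker vessel body intact, which is why a minimal disc width is preferred.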

CLAHE [51, 52] is applied on the preprocessed green channel to make the vessel structures prominent. The CLAHE operation divides the input image into blocks, with the contrast-enhancement limit set to 0.01; the clip limit suppresses the noise level while escalating the contrast. The effect of the CLAHE process on the green plane is displayed in Figure 3, and a histogram-based graphical demonstration of the contrast improvement is displayed in Figure 4. An averaging filter is then applied for smoothing and elimination of anatomical regions (e.g., optic disc, macula, and fovea). The difference image between the contrast-enhanced image and the output of the averaging filter is computed for all pixels.

The extra regions of the retinal image are cropped by a masking method to extract the ROI, which reduces the computational complexity. An automatic mask is created from the red band of the retinal image; the red channel is used for mask construction because it has good vessel-background dissimilarity. The automatic mask is created for all datasets because a mask image is not available with some of them. The difference image is then thresholded by the MISODATA algorithm. The procedure used to compute the threshold level with MISODATA is shown in Algorithm 1.
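A minimal sketch of the automatic mask creation is shown below. The threshold value and the cleanup steps are assumptions (the paper does not state them here); the idea is simply that the bright circular field of view separates cleanly from the dark surround in the red channel:

```python
import numpy as np
from scipy import ndimage as ndi

def make_fov_mask(red, thr=30):
    """Automatic field-of-view mask from the red channel (sketch).
    thr is a hypothetical threshold; real images may need an adaptive one."""
    mask = red > thr                        # bright FOV vs. dark background
    mask = ndi.binary_fill_holes(mask)      # fill dark vessel/lesion holes
    labels, n = ndi.label(mask)
    if n > 1:                               # keep only the largest component
        sizes = ndi.sum(mask, labels, index=np.arange(1, n + 1))
        mask = labels == (1 + np.argmax(sizes))
    return mask
```

Restricting all later thresholding to this mask is what suppresses false positives along the dark image border.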

Algorithm 1: MISODATA threshold selection.
Step 1: compute the mean intensity M of the image from its histogram and set the initial threshold T(0) = M;
Step 2: for iteration k = 1, 2, ... do
    compute MAT(k), the mean intensity above T(k-1);
    compute MBT(k), the mean intensity below T(k-1);
    T(k) = (MAT(k) + MBT(k)) / 2;
    if T(k) ≠ T(k-1) then
      go to Step 2;
    else
      take T(k) as the global threshold;
    end
Step 3: divide the image into square local regions;
  for each local region do
    compute a local threshold by repeating Steps 1-2 on the region;
    compare the local threshold with the global threshold and select the optimal of the two to binarize the region;
  end
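A minimal NumPy sketch of Algorithm 1 is given below. The global ISODATA iteration is the standard one; the rule used in Step 3 to choose between the local and global threshold (here, whichever lies closer to the tile's mean) is a hypothetical stand-in, since the paper's exact comparison criterion is not spelled out in the text:

```python
import numpy as np

def isodata_threshold(img, eps=0.5):
    """Global ISODATA step: iterate T <- (MAT + MBT) / 2 until stable."""
    t = img.mean()
    while True:
        above = img[img > t]
        below = img[img <= t]
        if above.size == 0 or below.size == 0:
            return t                        # degenerate (near-constant) region
        t_new = 0.5 * (above.mean() + below.mean())
        if abs(t_new - t) < eps:
            return t_new
        t = t_new

def misodata(img, tiles=4):
    """Modified step: per-tile local thresholds compared with the global one.
    The tile count and the selection rule are illustrative assumptions."""
    t_global = isodata_threshold(img)
    out = np.zeros(img.shape, dtype=bool)
    for rows in np.array_split(np.arange(img.shape[0]), tiles):
        region = img[rows]
        t_local = isodata_threshold(region)
        m = region.mean()
        t = t_local if abs(t_local - m) < abs(t_global - m) else t_global
        out[rows] = region > t
    return out
```

On a strongly bimodal image the global iteration converges in one or two steps; the local pass matters when illumination varies across the field of view.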

Isolated pixel groups with an area below a set number of pixels are trimmed from the binary image, which is then fused with the B-COSFIRE filter segmented image of Block-II by an AND-type operation. Shape statistics (eccentricity and area) are utilized for the rejection of nonvessel structures: vessel structures have higher area and eccentricity because their pixels are connected and form elongated structures. Figure 5 shows the corresponding graphical results.
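The area-based trimming can be sketched with connected-component labeling; the minimum area below is an assumption (the paper elides the exact value), and the eccentricity test is omitted for brevity:

```python
import numpy as np
from scipy import ndimage as ndi

def remove_small_objects(bw, min_area=30):
    """Drop connected components whose pixel area is below min_area.
    min_area is a hypothetical value; the eccentricity criterion from the
    text would be applied to the surviving components in the same way."""
    labels, n = ndi.label(bw)                                  # label components
    areas = ndi.sum(bw, labels, index=np.arange(1, n + 1))     # area per component
    keep = np.flatnonzero(areas >= min_area) + 1               # labels to keep
    return np.isin(labels, keep)
```

Because vessels form long connected networks, a modest area threshold removes speckle noise without touching the vasculature.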

3.3. Block-II: Vessel Structure-Based Enhancement and Segmentation

In Block-II, the masked image of Block-I is used as input for vessel structure-based enhancement and RVS. The B-COSFIRE filter [5] is applied for contrast improvement of vessel structures; if the image were not preprocessed, it would enhance noise along with the vessel structures, which is why the masked image is used for further processing. The B-COSFIRE filter produces two results: a binary segmented image and a vessel structure-based enhanced image, displayed in Figure 6. The AND-type operation combines the Block-I binary image with the B-COSFIRE segmented image. The effect of the AND-type operation is shown in Figure 7, which demonstrates that an alternative operator such as OR-type would introduce noise and misclassification; the advantage of the AND-type operator is further exposed in Figure 8 by visual results with and without it. The fused image is postprocessed and multiplied with the enhanced image, which is then thresholded to obtain a segmented image. Pixel-by-pixel multiplication ensures the detection of vessels at their correct positions. Finally, the logical OR-type operation couples the two binary images to produce the final result; the visual effects of the OR-type operator are presented in Figure 9.

The B-COSFIRE filter application includes convolution with difference-of-Gaussians (DoG) filters, blurring of the responses, shifting of the blurred responses, and an approximate point-wise weighted geometric mean (GM). A DoG function is given by [5]

$\mathrm{DoG}_{\sigma}(x, y) = \frac{1}{2\pi\sigma^{2}} e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}} - \frac{1}{2\pi(0.5\sigma)^{2}} e^{-\frac{x^{2}+y^{2}}{2(0.5\sigma)^{2}}}$

where $\sigma$ is the standard deviation (SD) of the outer Gaussian function (GF) that decides the range of the boundary, $0.5\sigma$ is the manually set SD of the internal GF, and $(x, y)$ symbolizes the pixel position in the image. The response $c_{\sigma}(x, y)$ of a DoG filter with kernel function $\mathrm{DoG}_{\sigma}$ is estimated by convolution with the pixel intensity distribution $I$:

$c_{\sigma}(x, y) \stackrel{\mathrm{def}}{=} \left| I \star \mathrm{DoG}_{\sigma} \right|^{+}$

where $\left|\cdot\right|^{+}$ represents the half-wave rectification process that rejects negative values.

In the B-COSFIRE filter, three parameters $(\rho_i, \phi_i, \sigma_i)$ are used to represent each point $i$, where $\sigma_i$ is the SD of the DoG filter, while $\rho_i$ and $\phi_i$ denote the polar coordinates. This set of parameters is indicated by $S = \{(\rho_i, \phi_i, \sigma_i) \mid i = 1, \ldots, n\}$, where $n$ represents the number of considered DoG responses. The blurring process computes the maximum of the weighted thresholded responses of a DoG filter:

$b_{\sigma_i}(x, y) = \max_{x', y'} \left\{ c_{\sigma_i}(x - x', y - y')\, G_{\sigma'}(x', y') \right\}, \quad \sigma' = \sigma'_0 + \alpha \rho_i$

where $\sigma'_0$ and $\alpha$ are constants. Each DoG-blurred outcome is moved in the direction opposite to $\phi_i$ by a distance $\rho_i$, so that all responses merge at the support center of the B-COSFIRE filter. The blurred and shifted response of the DoG filter for a tuple $(\rho_i, \phi_i, \sigma_i)$ in the set $S$ is defined as

$s_{\rho_i, \phi_i, \sigma_i}(x, y) = b_{\sigma_i}(x - \Delta x_i, y - \Delta y_i)$

where $\Delta x_i = -\rho_i \cos\phi_i$ and $\Delta y_i = -\rho_i \sin\phi_i$. The output of the filter is the weighted GM of all the blurred and shifted DoG responses:

$r(x, y) \stackrel{\mathrm{def}}{=} \left| \left( \prod_{i=1}^{|S|} s_{\rho_i, \phi_i, \sigma_i}(x, y)^{\omega_i} \right)^{1 / \sum_{i} \omega_i} \right|_{t}$

where $\omega_i$ are the weights and $\left|\cdot\right|_{t}$ symbolizes thresholding the response at a fraction $t$ of its maximum. This GM represents the AND-type operation attained by the B-COSFIRE filter: the response is nonzero only when all DoG filter responses are larger than zero. The overall step-by-step visual results according to the block diagram (Figure 2) are portrayed in Figure 10.
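The mechanics of the DoG response and the blur-shift-geometric-mean combination can be sketched as follows. This is a simplified illustration, not the authors' implementation: the shift pattern (points on a single circle), the Gaussian-blur stand-in for the weighted-maximum blurring, and all parameter values are assumptions:

```python
import numpy as np
from scipy import ndimage as ndi

def dog_response(img, sigma=2.0):
    """Half-wave-rectified DoG response: outer Gaussian (sigma) minus
    inner Gaussian (0.5*sigma), with negative values clipped to zero.
    Dark, line-like vessels on a bright background give positive responses."""
    dog = ndi.gaussian_filter(img, sigma) - ndi.gaussian_filter(img, 0.5 * sigma)
    return np.maximum(dog, 0.0)

def bcosfire_like(img, sigma=2.0, rho=4, n_phi=8, sigma0=1.0, alpha=0.5):
    """Illustrative combination step: blur the DoG response, shift it from
    n_phi points on a circle of radius rho back to the centre, and take the
    geometric mean (the AND-type combination). Parameters are illustrative."""
    c = dog_response(img, sigma)
    blurred = ndi.gaussian_filter(c, sigma0 + alpha * rho)   # tolerate position error
    shifted = []
    for phi in np.arange(n_phi) * 2 * np.pi / n_phi:
        dx, dy = -rho * np.cos(phi), -rho * np.sin(phi)
        shifted.append(ndi.shift(blurred, (dy, dx), order=1))
    shifted = np.maximum(np.stack(shifted), 1e-12)           # avoid log(0)
    return np.exp(np.log(shifted).mean(axis=0))              # unweighted geometric mean
```

The geometric mean is what makes the combination AND-like: if any one shifted response is (near) zero, the product collapses, so the filter fires only where all constituent responses are present.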

4. Experimental Outcomes and Deliberation

This section provides information about the datasets, performance metrics, analysis of experimental results, and time complexity of the proposed method.

4.1. Datasets

The proposed system was evaluated on the freely available online datasets DRIVE [11, 12], STARE [53], HRF [54], and CHASE_DB1 [55]. The merit of the framework is demonstrated by comparison with state-of-the-art systems. The datasets used for validation of the suggested framework are summarized in Table 1. The manually labeled results in all datasets are utilized as the gold standard for performance assessment of the proposed framework.

Dataset | Image classification | Image size | Format

DRIVE | Total 40 images: 20 test, 20 training; 7 abnormal, 33 normal | | JPEG
STARE | Total 20 images: 10 normal, 10 abnormal | | PPM
HRF | Total 45 images: 15 normal, 15 DR, 15 glaucomatous | | JPEG
CHASE_DB1 | Total 28 images: 14 left eye, 14 right eye | | JPEG

4.2. Performance Judgment Parameters

The quantitative results are obtained by comparing the proposed segmentations with the manual segmentations available for each dataset. There are numerous performance standards mentioned in the literature; the metrics used for evaluation of the proposed framework are listed in Table 2. Six performance standards (Acc, Sn, Sp, AUC, MCC, and CAL) are selected for the justification of the proposed methodology. The Acc metric gives an overall valuation of the proposed method. Sn measures the quantity of correctly classified vessel pixels, while Sp assesses the competency of differentiating nonvessel pixels. The AUC is computed from Sn and Sp. The MCC [5, 56] is a more appropriate indicator of the accuracy of binary classification in the case of unbalanced classes. For a comprehensive judgment of segmentation quality, the CAL metric [57, 58] is computed; it provides a justification based on the properties (connectivity-area-length) of the segmented structures beyond the correctly classified image pixels.


Sensitivity (Sn) = TP / (TP + FN)
Specificity (Sp) = TN / (TN + FP) = 1 - FPR
Accuracy (Acc) = (TP + TN) / (TP + TN + FP + FN)
Area under ROC curve (AUC) = (Sn + Sp) / 2
Matthews correlation coefficient (MCC) = (TP × TN - FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN))
Connectivity-area-length (CAL) = C × A × L

In Table 2, the terms TP, TN, FP, and FN denote the true positives (correctly classified vessel pixels), true negatives (correctly classified nonvessel pixels), false positives (wrongly predicted vessel pixels), and false negatives (wrongly predicted nonvessel pixels), respectively [58].
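For concreteness, the pixel-based metrics can be computed from raw confusion counts as below. The AUC line uses the (Sn + Sp)/2 convention common in this literature, which is an assumption about the exact formula the paper elides:

```python
import math

def metrics(tp, tn, fp, fn):
    """Pixel-classification metrics from confusion counts."""
    sn = tp / (tp + fn)                                  # sensitivity
    sp = tn / (tn + fp)                                  # specificity = 1 - FPR
    acc = (tp + tn) / (tp + tn + fp + fn)                # accuracy
    auc = (sn + sp) / 2                                  # assumed convention
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0  # robust to empty classes
    return {"Sn": sn, "Sp": sp, "Acc": acc, "AUC": auc, "MCC": mcc}
```

Because vessel pixels are a small minority of a fundus image, Acc alone is inflated by the easy background class; MCC balances all four confusion counts, which is why the paper reports it for unbalanced structures.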

Let S be the extracted final binary image and S_G the corresponding manually segmented image. The considered metric evaluates the following [57, 58]:
(i) Connectivity (C): it calculates the fragmentation grade of S with respect to the manual segmentation S_G and penalizes fragmented segmentations. It is computed as

C = 1 - min(1, |#C(S_G) - #C(S)| / #(S_G))

where #C(·) counts the connected segments and #(·) measures the number of vessel pixels in the considered binary image.
(ii) Area (A): it estimates the overlapping area between S and S_G, based on the Jaccard coefficient. Let δ_α(·) be a morphological dilation that uses a disc structuring element (SE) with a radius of α pixels. The magnitude is calculated as

A = #((δ_α(S) ∩ S_G) ∪ (S ∩ δ_α(S_G))) / #(S ∪ S_G)

The value of α controls the tolerance to lines of various sizes.
(iii) Length (L): it determines the degree of agreement between S and S_G by comparing the lengths of the two line networks:

L = #((ψ(S) ∩ δ_β(S_G)) ∪ (δ_β(S) ∩ ψ(S_G))) / #(ψ(S) ∪ ψ(S_G))

where ψ(·) is a skeletonization process and δ_β(·) is a morphological dilation with a disc SE of β-pixel radius. The value of β controls the tolerance to dissimilarity of the line tracing output.
The final assessment parameter, named CAL, is defined as CAL = C × A × L.
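A partial sketch of the CAL computation using scipy.ndimage is shown below; the L term requires a skeletonization step and is omitted, and both the dilation radius and the denominator used in the connectivity term are assumptions here:

```python
import numpy as np
from scipy import ndimage as ndi

def cal_C_A(seg, gt, alpha=2):
    """C and A terms of the CAL metric (sketch). The L term needs a
    skeletonization operator and is omitted; alpha and the choice of
    ground-truth pixel count as the C denominator are assumptions."""
    n_seg = ndi.label(seg)[1]                               # components in S
    n_gt = ndi.label(gt)[1]                                 # components in S_G
    C = 1 - min(1, abs(n_gt - n_seg) / gt.sum())            # penalize fragmentation
    disc = np.hypot(*np.mgrid[-alpha:alpha + 1, -alpha:alpha + 1]) <= alpha
    d_seg = ndi.binary_dilation(seg, structure=disc)
    d_gt = ndi.binary_dilation(gt, structure=disc)
    A = np.sum((d_seg & gt) | (seg & d_gt)) / np.sum(seg | gt)  # tolerant overlap
    return C, A
```

A perfect segmentation yields C = A = 1; a segmentation broken into many fragments drives C toward 0 even when most pixels are classified correctly, which is exactly the behavior Acc cannot capture.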

4.3. Experimental Results and Inspection

The success of the proposed framework is established on four freely obtainable datasets, DRIVE, STARE, HRF, and CHASE_DB1, used for testing and evaluation. The average performance results in Table 3 are computed by processing the 20 test images of the DRIVE and STARE datasets. The performance scores on the HRF dataset (15 normal, 15 DR, and 15 glaucomatous images) are presented in Tables 4 and 5, and those on CHASE_DB1 in Table 6. The best and worst results within Tables 3–6 are highlighted in italic font. The best and worst image results from each dataset are selected based on their accuracy scores; their pictorial results are shown in Figures 11–14.









The framework performs well on both healthy and pathological images of all selected datasets. The statistical results in Tables 3–6 validate that the suggested system is robust and capable of handling the bright-lesion images of the STARE dataset, the higher-resolution images of the HRF dataset, the low-resolution images of the DRIVE dataset, and the left/right eye images of the CHASE_DB1 dataset. The anatomical structures are also efficiently omitted to avoid misclassification.

The average statistical results of the proposed framework on all selected datasets are displayed in Table 7, which shows that the highest mean scores of Acc 0.997, Sn 0.814, Sp 0.997, and AUC 0.905 are achieved on the CHASE_DB1 dataset. The lowest FPR is also observed on the same dataset. The highest values of MCC 0.761 and CAL 0.699 are recorded on the HRF dataset. The highest value of each parameter is italicized in the respective column of Table 7.


Dataset (gold standard) | Images | Acc | Sn | Sp | AUC | MCC | CAL
DRIVE (1st observer) | 20 | 0.954 | 0.766 | 0.972 | 0.869 | 0.721 | 0.690
DRIVE (2nd observer) | 20 | 0.958 | 0.797 | 0.973 | 0.885 | 0.739 | 0.696
CHASE_DB1 (1st observer) | 28 | 0.997 | 0.757 | 0.97 | 0.877 | 0.629 | 0.547
CHASE_DB1 (2nd observer) | 28 | 0.996 | 0.814 | 0.996 | 0.905 | 0.569 | 0.547

The average performance parameter scores of the proposed framework on the DRIVE and STARE datasets are compared with the existing literature in Table 8, while Table 9 shows the result comparison for the HRF and CHASE_DB1 datasets. The Acc, Sn, and Sp results of all techniques in Tables 8 and 9 are taken from their respective published articles, while the AUC is calculated using the formula in Table 2.


Method | Year | DRIVE: Acc, Sn, Sp, AUC | STARE: Acc, Sn, Sp, AUC

Human observer | — | 0.947, 0.779, 0.972, 0.874 | 0.935, 0.895, 0.938, 0.917
Unsupervised techniques
Chauduri [31] | 1989 | 0.877, 0.788
Zana and Klein [42] | 2001 | 0.938, 0.697
Martinez-Perez [45] | 2007 | 0.934, 0.725, 0.965, 0.845 | 0.941, 0.751, 0.955, 0.853
Zhang [27] | 2010 | 0.938, 0.948
Bankhead [38] | 2012 | 0.937, 0.703, 0.971, 0.837 | 0.932, 0.758, 0.950, 0.854
Fraz [44] | 2012 | 0.943, 0.715, 0.976, 0.845 | 0.944, 0.731, 0.968, 0.850
Azzopardi [5] | 2015 | 0.944, 0.766, 0.970, 0.868 | 0.950, 0.772, 0.970, 0.871
Oliveira [33] | 2016 | 0.946, 0.864, 0.956, 0.910 | 0.953, 0.825, 0.965, 0.895
Khan [40] | 2016 | 0.961, 0.746, 0.980, 0.863 | 0.946, 0.758, 0.963, 0.861
Biswal [29] | 2017 | 0.950, 0.710, 0.970, 0.840 | 0.950, 0.700, 0.970, 0.835
Khan [32] | 2017 | 0.944, 0.754, 0.964, 0.859 | 0.948, 0.752, 0.956, 0.854
Soomro [63] | 2017 | 0.943, 0.752, 0.976, 0.864 | 0.961, 0.784, 0.981, 0.883
Badawi [3] | 2018 | 0.955, 0.791, 0.971, 0.881 | 0.953, 0.865, 0.961, 0.913
Yue [50] | 2018 | 0.945, 0.753, 0.973, 0.863
Soomro [9] | 2018 | 0.948, 0.745, 0.962, 0.854 | 0.951, 0.784, 0.976, 0.880
Soomro [64] | 2018 | 0.953, 0.752, 0.976, 0.864 | 0.967, 0.786, 0.982, 0.884
Fan [60] | 2018 | 0.960, 0.736, 0.981, 0.858 | 0.957, 0.791, 0.970, 0.880
Khan [65] | 2019 | 0.951, 0.770, 0.965, 0.868 | 0.951, 0.752, 0.981, 0.867
Memari [59] | 2019 | 0.961, 0.761, 0.981, 0.871 | 0.951, 0.782, 0.965, 0.873
Supervised techniques
Niemeijer [11] | 2004 | 0.942, 0.690, 0.970, 0.830
Staal [12] | 2004 | 0.944, 0.719, 0.977, 0.848 | 0.952, 0.697, 0.981, 0.839
Ricci [14] | 2007 | 0.959, 0.964
Lupascu [15] | 2010 | 0.959, 0.673, 0.987, 0.830
Marín [13] | 2011 | 0.945, 0.707, 0.980, 0.844 | 0.953, 0.694, 0.982, 0.838
Wang [16] | 2015 | 0.977, 0.817, 0.973, 0.895 | 0.981, 0.810, 0.979, 0.894
Roychowdhury [17] | 2015 | 0.952, 0.725, 0.983, 0.854 | 0.951, 0.772, 0.973, 0.873
Aslani [20] | 2016 | 0.951, 0.754, 0.980, 0.867 | 0.961, 0.755, 0.983, 0.869
Zhu [18] | 2017 | 0.961, 0.714, 0.987, 0.851
Thangaraj [22] | 2017 | 0.961, 0.801, 0.975, 0.888 | 0.943, 0.834, 0.954, 0.893
Memari [23] | 2017 | 0.972, 0.872, 0.988, 0.930 | 0.951, 0.809, 0.979, 0.894
Dharmawan [21] | 2018 | 0.831, 0.972, 0.902 | 0.792, 0.983, 0.887
Yan [24] | 2018 | 0.954, 0.763, 0.982, 0.873 | 0.964, 0.774, 0.986, 0.880
Guo [25] | 2019 | 0.955, 0.780, 0.981, 0.881 | 0.966, 0.820, 0.983, 0.902
Khowaja [26] | 2019 | 0.975, 0.818, 0.971, 0.895 | 0.975, 0.824, 0.975, 0.899
Soomro [8] | 2019 | 0.959, 0.802, 0.974, 0.948 | 0.961, 0.801, 0.969, 0.945
Soomro [62] | 2019 | 0.956, 0.870, 0.985, 0.986 | 0.968, 0.848, 0.986, 0.988
Fan [61] | 2019 | 0.966, 0.796, 0.982, 0.889 | 0.974, 0.816, 0.987, 0.901


Method | Year | HRF: Acc, Sn, Sp, AUC | CHASE_DB1: Acc, Sn, Sp, AUC

Unsupervised techniques
Odstrcilik [54] | 2013 | 0.949, 0.774, 0.967, 0.871
Azzopardi [5] | 2015 | 0.939, 0.759, 0.959, 0.859
Zhang [66] | 2016 | 0.957, 0.798, 0.974, 0.886 | 0.946, 0.763, 0.968, 0.866
Biswal [29] | 2017 | 0.940, 0.760, 0.970, 0.865
Rodrigues [67] | 2017 | 0.948, 0.722, 0.964, 0.843
Badawi [3] | 2018 | 0.953, 0.800, 0.964, 0.882
Supervised techniques
Roychowdhury [17] | 2015 | 0.953, 0.720, 0.982, 0.851
Thangaraj [22] | 2017 | 0.947, 0.629, 0.973, 0.797
Memari [23] | 2017 | 0.948, 0.819, 0.959, 0.889
Dharmawan [21] | 2018 | 0.813, 0.977, 0.895
Yan [24] | 2018 | 0.961, 0.764, 0.981, 0.873
Fan [60] | 2018 | 0.951, 0.657, 0.973, 0.815
Guo [25] | 2019 | 0.963, 0.789, 0.980, 0.885
Khowaja [26] | 2019 | 0.952, 0.756, 0.976, 0.866
Soomro [62] | 2019 | 0.962, 0.829, 0.962, 0.978 | 0.976, 0.886, 0.982, 0.985
Fan [61] | 2019 | 0.976, 0.824, 0.987, 0.905 | 0.971, 0.802, 0.985, 0.893

In Table 8, the obtained results of the framework are compared with 19 unsupervised and 18 supervised existing techniques. On the DRIVE dataset, the proposed framework achieved a higher Acc than all unsupervised methods except Khan et al. [40], Memari et al. [59] (0.003 better), and Fan et al. [60] (0.002 better). The supervised methods of Ricci and Perfetti [14], Lupascu et al. [15], Wang et al. [16], Zhu et al. [18], Thangaraj et al. [22], Memari et al. [23], Khowaja et al. [26], and Fan et al. [61] show results 0.001, 0.001, 0.019, 0.003, 0.003, 0.014, 0.017, and 0.008 better than the proposed method, respectively. However, some of these methods are validated on only one dataset, which suggests they are tuned for a single dataset; some produce a very low AUC score, reflecting the trade-off between Sn and Sp; and supervised methods are computationally very expensive. In the case of the STARE dataset, the framework produced the highest Acc score of all compared methods. Table 9 reflects that very few techniques used both the HRF and CHASE_DB1 datasets for validation. The Acc score of the framework is higher than that of both supervised and unsupervised approaches on the HRF and CHASE_DB1 datasets, except for Soomro et al. [62] and Fan et al. [61], which are slightly higher than ours on the HRF dataset only. Fan et al. [61] showed a higher Sp value than all other methods on the HRF dataset, while the highest Sp value on the CHASE_DB1 dataset is obtained by the proposed method. The other supervised and unsupervised methods attain slightly greater or equivalent Sn and AUC values on the HRF and CHASE_DB1 datasets compared to ours.

Table 10 reports the MCC and CAL values of the proposed method and of existing supervised and unsupervised methods. The MCC and CAL values of Chauduri et al. [31], Niemeijer et al. [11], Hoover et al. [53], and B-COSFIRE [5] are calculated from their publicly accessible segmented images, while the results of Fraz et al. [68, 69], RUSTICO [58], Yang et al. [70, 71], Vega et al. [72], FC-CRF [73], and UP-CRF [73] are extracted from their published articles.


Method | Year | MCC / CAL values
Unsupervised techniques
Chauduri [31] | 1989 | 0.420, 0.208
Hoover [53] | 2000 | 0.615, 0.534
Fraz [68] | 2011 | 0.733, 0.700
Fraz [69] | 2013 | 0.736, 0.691
B-COSFIRE [5] | 2015 | 0.719, 0.721, 0.698, 0.709, 0.686, 0.577, 0.656, 0.608
RUSTICO [58] | 2019 | 0.729, 0.728, 0.698, 0.709, 0.691, 0.587, 0.663, 0.620
Supervised techniques
Yang [70] | 2019 | 0.736, 0.704, 0.712
Yang [71] | 2018 | 0.725, 0.662, 0.682
FC-CRF [73] | 2016 | 0.756, 0.731, 0.727, 0.658, 0.690, 0.541, 0.704, 0.622
UP-CRF [73] | 2016 | 0.740, 0.675, 0.726, 0.665, 0.677, 0.475, 0.689, 0.571
Vega [72] | 2015 | 0.662, 0.640
Niemeijer [11] | 2004 | 0.722, 0.659

The average MCC attained by the proposed method is higher than that of all compared unsupervised approaches on the DRIVE, STARE, and HRF datasets, while it is lower than that of the supervised methods FC-CRF [73] and UP-CRF [73] on the DRIVE, STARE, and CHASE_DB1 datasets. The CAL value of the proposed method is higher than that of all supervised and unsupervised methods on the HRF dataset, while it is lower than or equivalent to the CAL values of other methods on the DRIVE, STARE, and CHASE_DB1 datasets.

4.3.1. Processing Time

The proposed framework processes a single image in a very short time compared with the other approaches listed in Table 11. The time values are computed for a single image taken from the DRIVE and STARE datasets.

Method | Time | Hardware particulars

Roychowdhury [17] | 3.11 sec | Intel Core i3 CPU 2.6 GHz, 2 GB RAM
Zhu [18] | 12.160 sec | Intel i7-4790K CPU 4.0 GHz, 32 GB RAM
Memari [23] | 8.2 mins | Intel i5-M480 CPU 2.67 GHz, 4 GB RAM
Biswal [29] | 3.3 sec | Intel i3-4010U CPU 1.7 GHz, 4 GB RAM
Badawi [3] | 8 sec | CPU 2.7 GHz, 16 GB RAM
Yue [50] | 4.6 sec | Intel i5-6200U CPU 2.3 GHz, 8 GB RAM
Khan [39] | 6.1 sec | Intel Core i3 CPU 2.53 GHz, 4 GB RAM
Khan [40] | 1.56 sec |
Azzopardi [5] | 11.83 sec |
Vlachos [47] | 9.3 sec |
Bankhead [38] | 22.45 sec |
Proposed | 5.5 sec |
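Per-image figures such as those in Table 11 are typically obtained by averaging wall-clock time over a few runs of the whole pipeline; a simple harness (illustrative only, with `segment_fn` standing in for any RVS pipeline) might look like:

```python
import time

def time_segmentation(segment_fn, image, repeats=5):
    """Average wall-clock seconds per call of segment_fn on image."""
    t0 = time.perf_counter()
    for _ in range(repeats):
        segment_fn(image)
    return (time.perf_counter() - t0) / repeats
```

Note that such numbers are only comparable across methods when the hardware is also reported, as in the table above.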

5. Conclusion

Vessel extraction is essential for inspecting abnormalities inside and around the retinal periphery. Retinal vessel segmentation is a challenging task due to the presence of pathologies, the unpredictable dimensions and contours of the vessels, nonuniform illumination, and structural variation between subjects. The proposed methodology is consistent, fast, and completely automated for extraction of the retinal vascular network. Its success is clearly demonstrated by the RVS statistics on the DRIVE, STARE, HRF, and CHASE_DB1 datasets. The removal of anomalous structures prior to enhancement boosted the efficiency of the proposed method, and the application of logical operators avoids misclassification of foreground pixels, which improves accuracy and makes the method robust. Visual inspection confirms that the framework is able to segment both healthy and pathological images. Furthermore, the method does not require any expert-annotated data for training, which keeps it computationally fast.

Data Availability

All the data are fully available within the manuscript without any restriction.

Conflicts of Interest

The authors declare no conflict of interest.


  1. M. Hashemzadeh and B. A. Azar, “Retinal blood vessel extraction employing effective image features and combination of supervised and unsupervised machine learning methods,” Artificial Intelligence in Medicine, vol. 95, pp. 1–15, 2019.
  2. P. Bibiloni, M. González-Hidalgo, and S. Massanet, “A real-time fuzzy morphological algorithm for retinal vessel segmentation,” Journal of Real-Time Image Processing, vol. 16, pp. 2337–2350, 2019.
  3. S. A. Badawi and M. M. Fraz, “Optimizing the trainable b-cosfire filter for retinal blood vessel segmentation,” PeerJ, vol. 6, article e5855, 2018.
  4. J. Almotiri, K. Elleithy, and A. Elleithy, “Retinal vessels segmentation techniques and algorithms: a survey,” Applied Sciences, vol. 8, no. 2, p. 155, 2018.
  5. G. Azzopardi, N. Strisciuglio, M. Vento, and N. Petkov, “Trainable COSFIRE filters for vessel delineation with application to retinal images,” Medical Image Analysis, vol. 19, no. 1, pp. 46–57, 2015.
  6. Z. Yan, X. Yang, and K.-T. Cheng, “Joint segment-level and pixelwise losses for deep learning based retinal vessel segmentation,” IEEE Transactions on Biomedical Engineering, vol. 65, no. 9, pp. 1912–1923, 2018.
  7. T. A. Soomro, A. J. Afifi, L. Zheng et al., “Deep learning models for retinal blood vessels segmentation: a review,” IEEE Access, vol. 7, pp. 71696–71717, 2019.
  8. T. A. Soomro, A. J. Afifi, A. A. Shah et al., “Impact of image enhancement technique on cnn model for retinal blood vessels segmentation,” IEEE Access, vol. 7, pp. 158183–158197, 2019.
  9. T. A. Soomro, J. Gao, Z. Lihong, A. J. Afifi, S. Soomro, and M. Paul, “Retinal blood vessels extraction of challenging images,” in Data Mining. AusDM 2018. Communications in Computer and Information Science, vol 996, R. Islam, Y. S. Koh, Y. Zhao et al., Eds., pp. 347–359, Springer, Singapore, 2018.
  10. K. B. Khan, A. A. Khaliq, A. Jalil et al., “A review of retinal blood vessels extraction techniques: challenges, taxonomy, and future trends,” Pattern Analysis and Applications, vol. 22, no. 3, pp. 767–802, 2019.
  11. M. Niemeijer, J. Staal, B. van Ginneken, M. Loog, and M. D. Abramoff, “Comparative study of retinal vessel segmentation methods on a new publicly available database,” in Proceedings Volume 5370, Medical Imaging 2004: Image Processing, pp. 648–656, San Diego, CA, USA, May 2004.
  12. J. Staal, M. D. Abramoff, M. Niemeijer, M. A. Viergever, and B. van Ginneken, “Ridge-based vessel segmentation in color images of the retina,” IEEE Transactions on Medical Imaging, vol. 23, no. 4, pp. 501–509, 2004.
  13. D. Marín, A. Aquino, M. E. Gegundez-Arias, and J. M. Bravo, “A new supervised method for blood vessel segmentation in retinal images by using gray-level and moment invariants-based features,” IEEE Transactions on Medical Imaging, vol. 30, no. 1, pp. 146–158, 2010.
  14. E. Ricci and R. Perfetti, “Retinal blood vessel segmentation using line operators and support vector classification,” IEEE Transactions on Medical Imaging, vol. 26, no. 10, pp. 1357–1365, 2007.
  15. C. A. Lupascu, D. Tegolo, and E. Trucco, “FABC: retinal vessel segmentation using AdaBoost,” IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 5, pp. 1267–1274, 2010.
  16. S. Wang, Y. Yin, G. Cao, B. Wei, Y. Zheng, and G. Yang, “Hierarchical retinal blood vessel segmentation based on feature and ensemble learning,” Neurocomputing, vol. 149, pp. 708–717, 2015.
  17. S. Roychowdhury, D. D. Koozekanani, and K. K. Parhi, “Blood vessel segmentation of fundus images by major vessel extraction and subimage classification,” IEEE Journal of Biomedical and Health Informatics, vol. 19, no. 3, pp. 1118–1128, 2014.
  18. C. Zhu, B. Zou, R. Zhao et al., “Retinal vessel segmentation in colour fundus images using extreme learning machine,” Computerized Medical Imaging and Graphics, vol. 55, pp. 68–77, 2017.
  19. S. Tang, T. Lin, J. Yang, J. Fan, D. Ai, and Y. Wang, “Retinal vessel segmentation using supervised classification based on multi-scale vessel filtering and Gabor wavelet,” Journal of Medical Imaging and Health Informatics, vol. 5, no. 7, pp. 1571–1574, 2015.
  20. S. Aslani and H. Sarnel, “A new supervised retinal vessel segmentation method based on robust hybrid features,” Biomedical Signal Processing and Control, vol. 30, pp. 1–12, 2016.
  21. D. A. Dharmawan, D. Li, B. P. Ng, and S. Rahardja, “A new hybrid algorithm for retinal vessels segmentation on fundus images,” IEEE Access, vol. 7, pp. 41885–41896, 2019.
  22. S. Thangaraj, V. Periyasamy, and R. Balaji, “Retinal vessel segmentation using neural network,” IET Image Processing, vol. 12, no. 5, pp. 669–678, 2017.
  23. N. Memari, A. R. Ramli, M. I. B. Saripan, S. Mashohor, and M. Moghbel, “Supervised retinal vessel segmentation from color fundus images based on matched filtering and AdaBoost classifier,” PLoS One, vol. 12, no. 12, article e0188939, 2017.
  24. Z. Yan, X. Yang, and K.-T. T. Cheng, “A three-stage deep learning model for accurate retinal vessel segmentation,” IEEE Journal of Biomedical and Health Informatics, vol. 23, no. 4, pp. 1427–1436, 2019.
  25. S. Guo, K. Wang, H. Kang, Y. Zhang, Y. Gao, and T. Li, “BTS-DSN: deeply supervised neural network with short connections for retinal vessel segmentation,” International Journal of Medical Informatics, vol. 126, pp. 105–113, 2019.
  26. S. A. Khowaja, P. Khuwaja, and I. A. Ismaili, “A framework for retinal vessel segmentation from fundus images using hybrid feature set and hierarchical classification,” Signal, Image and Video Processing, vol. 13, no. 2, pp. 379–387, 2019.
  27. B. Zhang, L. Zhang, L. Zhang, and F. Karray, “Retinal vessel extraction by matched filter with first-order derivative of gaussian,” Computers in Biology and Medicine, vol. 40, no. 4, pp. 438–445, 2010.
  28. A. A. Mudassar and S. Butt, “Extraction of blood vessels in retinal images using four different techniques,” Journal of Medical Engineering, vol. 2013, Article ID 408120, 21 pages, 2013.
  29. B. Biswal, T. Pooja, and N. B. Subrahmanyam, “Robust retinal blood vessel segmentation using line detectors with multiple masks,” IET Image Processing, vol. 12, no. 3, pp. 389–399, 2017.
  30. D. A. Dharmawan and B. P. Ng, “A new two-dimensional matched filter based on the modified Chebyshev type I function for retinal vessels detection,” in 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 369–372, Seogwipo, South Korea, July 2017.
  31. S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, and M. Goldbaum, “Detection of blood vessels in retinal images using two-dimensional matched filters,” IEEE Transactions on Medical Imaging, vol. 8, no. 3, pp. 263–269, 1989.
  32. M. A. Khan, T. M. Khan, T. A. Soomro, N. Mir, and J. Gao, “Boosting sensitivity of a retinal vessel segmentation algorithm,” Pattern Analysis and Applications, vol. 22, no. 2, pp. 583–599, 2019.
  33. W. S. Oliveira, J. V. Teixeira, T. I. Ren, G. D. Cavalcanti, and J. Sijbers, “Unsupervised retinal vessel segmentation using combined filters,” PLoS One, vol. 11, no. 2, article e0149943, 2016.
  34. N. Strisciuglio, G. Azzopardi, M. Vento, and N. Petkov, “Supervised vessel delineation in retinal fundus images with the automatic selection of B-COSFIRE filters,” Machine Vision and Applications, vol. 27, no. 8, pp. 1137–1149, 2016.
  35. K. B. Khan, A. A. Khaliq, and M. Shahid, “B-COSFIRE filter and VLM based retinal blood vessels segmentation and denoising,” in 2016 International Conference on Computing, Electronic and Electrical Engineering (ICE Cube), pp. 132–137, Quetta, Pakistan, April 2016.
  36. G. Azzopardi and N. Petkov, “Automatic detection of vascular bifurcations in segmented retinal images using trainable COSFIRE filters,” Pattern Recognition Letters, vol. 34, no. 8, pp. 922–933, 2013.
  37. L. Gang, O. Chutatape, and S. M. Krishnan, “Detection and measurement of retinal vessels in fundus images using amplitude modified second-order Gaussian filter,” IEEE Transactions on Biomedical Engineering, vol. 49, no. 2, pp. 168–172, 2002.
  38. P. Bankhead, C. N. Scholfield, J. G. McGeown, and T. M. Curtis, “Fast retinal vessel detection and measurement using wavelets and edge location refinement,” PLoS One, vol. 7, no. 3, article e32435, 2012.
  39. K. B. Khan, A. A. Khaliq, and M. Shahid, “A novel fast GLM approach for retinal vascular segmentation and denoising,” Journal of Information Science and Engineering, vol. 33, no. 6, pp. 1611–1627, 2017.
  40. K. BahadarKhan, A. A. Khaliq, and M. Shahid, “A morphological hessian based approach for retinal blood vessels segmentation and denoising using region based otsu thresholding,” PLoS One, vol. 11, no. 7, article e0158996, 2016.
  41. L. C. Neto, G. L. Ramalho, J. F. R. Neto, R. M. Veras, and F. N. Medeiros, “An unsupervised coarse-to-fine algorithm for blood vessel segmentation in fundus images,” Expert Systems with Applications, vol. 78, pp. 182–192, 2017.
  42. F. Zana and J.-C. Klein, “Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation,” IEEE Transactions on Image Processing, vol. 10, no. 7, pp. 1010–1019, 2001.
  43. G. Ayala, T. Leon, and V. Zapater, “Different averages of a fuzzy set with an application to vessel segmentation,” IEEE Transactions on Fuzzy Systems, vol. 13, no. 3, pp. 384–393, 2005.
  44. M. M. Fraz, S. A. Barman, P. Remagnino et al., “An approach to localize the retinal blood vessels using bit planes and centerline detection,” Computer Methods and Programs in Biomedicine, vol. 108, no. 2, pp. 600–616, 2012.
  45. M. E. Martinez-Perez, A. D. Hughes, S. A. Thom, A. A. Bharath, and K. H. Parker, “Segmentation of blood vessels from red-free and fluorescein retinal images,” Medical Image Analysis, vol. 11, no. 1, pp. 47–61, 2007.
  46. D. J. Farnell, F. Hatfield, P. Knox et al., “Enhancement of blood vessels in digital fundus photographs via the application of multiscale line operators,” Journal of the Franklin Institute, vol. 345, no. 7, pp. 748–765, 2008.
  47. M. Vlachos and E. Dermatas, “Multi-scale retinal vessel segmentation using line tracking,” Computerized Medical Imaging and Graphics, vol. 34, no. 3, pp. 213–227, 2010.
  48. R. Annunziata, A. Garzelli, L. Ballerini, A. Mecocci, and E. Trucco, “Leveraging multiscale hessian-based enhancement with a novel exudate inpainting technique for retinal vessel segmentation,” IEEE Journal of Biomedical and Health Informatics, vol. 20, no. 4, pp. 1129–1138, 2016.
  49. D. Gou, Y. Wei, H. Fu, and N. Yan, “Retinal vessel extraction using dynamic multi-scale matched filtering and dynamic threshold processing based on histogram fitting,” Machine Vision and Applications, vol. 29, no. 4, pp. 655–666, 2018.
  50. K. Yue, B. Zou, Z. Chen, and Q. Liu, “Improved multi-scale line detection method for retinal blood vessel segmentation,” IET Image Processing, vol. 12, no. 8, pp. 1450–1457, 2018.
  51. S. M. Pizer, E. P. Amburn, J. D. Austin et al., “Adaptive histogram equalization and its variations,” Computer Vision, Graphics, and Image Processing, vol. 39, no. 3, pp. 355–368, 1987.
  52. A. M. Reza, “Realization of the contrast limited adaptive histogram equalization (clahe) for real-time image enhancement,” Journal of VLSI Signal Processing Systems for Signal, Image and Video Technology, vol. 38, no. 1, pp. 35–44, 2004.
  53. A. Hoover, V. Kouznetsova, and M. Goldbaum, “Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response,” IEEE Transactions on Medical Imaging, vol. 19, no. 3, pp. 203–210, 2000.
  54. J. Odstrcilik, R. Kolar, A. Budai et al., “Retinal vessel segmentation by improved matched filtering: evaluation on a new high-resolution fundus image database,” IET Image Processing, vol. 7, no. 4, pp. 373–383, 2013.
  55. C. G. Owen, A. R. Rudnicka, R. Mullen et al., “Measuring retinal vessel tortuosity in 10-year-old children: validation of the computer-assisted image analysis of the retina (CAIAR) program,” Investigative Ophthalmology & Visual Science, vol. 50, no. 5, pp. 2004–2010, 2009.
  56. S. Boughorbel, F. Jarray, and M. El-Anbari, “Optimal classifier for imbalanced data using Matthews correlation coefficient metric,” PLoS One, vol. 12, no. 6, article e0177678, 2017.
  57. M. E. Gegundez-Arias, A. Aquino, J. M. Bravo, and D. Marin, “A function for quality evaluation of retinal vessel segmentations,” IEEE Transactions on Medical Imaging, vol. 31, no. 2, pp. 231–239, 2011.
  58. N. Strisciuglio, G. Azzopardi, and N. Petkov, “Robust inhibition-augmented operator for delineation of curvilinear structures,” IEEE Transactions on Image Processing, vol. 28, no. 12, pp. 5852–5866, 2019.
  59. N. Memari, A. R. Ramli, M. I. B. Saripan, S. Mashohor, and M. Moghbel, “Retinal blood vessel segmentation by using matched filtering and fuzzy c-means clustering with integrated level set method for diabetic retinopathy assessment,” Journal of Medical and Biological Engineering, vol. 39, no. 5, pp. 713–731, 2019.
  60. Z. Fan, J. Lu, C. Wei, H. Huang, X. Cai, and X. Chen, “A hierarchical image matting model for blood vessel segmentation in fundus images,” IEEE Transactions on Image Processing, vol. 28, no. 5, pp. 2367–2377, 2018.
  61. Z. Fan, J. Mo, and B. Qiu, “Accurate retinal vessel segmentation via octave convolution neural network,” 2019.
  62. T. A. Soomro, A. J. Afifi, J. Gao, O. Hellwich, L. Zheng, and M. Paul, “Strided fully convolutional neural network for boosting the sensitivity of retinal blood vessels segmentation,” Expert Systems with Applications, vol. 134, pp. 36–52, 2019.
  63. T. A. Soomro, M. A. Khan, J. Gao, T. M. Khan, and M. Paul, “Contrast normalization steps for increased sensitivity of a retinal image segmentation method,” Signal, Image and Video Processing, vol. 11, no. 8, pp. 1509–1517, 2017.
  64. T. A. Soomro, T. M. Khan, M. A. Khan, J. Gao, M. Paul, and L. Zheng, “Impact of ICA-based image enhancement technique on retinal blood vessels segmentation,” IEEE Access, vol. 6, pp. 3524–3538, 2018.
  65. M. A. Khan, T. M. Khan, D. Bailey, and T. A. Soomro, “A generalized multi-scale line-detection method to boost retinal vessel segmentation sensitivity,” Pattern Analysis and Applications, vol. 22, no. 3, pp. 1177–1196, 2019.
  66. J. Zhang, B. Dashtbozorg, E. Bekkers, J. P. Pluim, R. Duits, and B. M. ter Haar Romeny, “Robust retinal vessel segmentation via locally adaptive derivative frames in orientation scores,” IEEE Transactions on Medical Imaging, vol. 35, no. 12, pp. 2631–2644, 2016.
  67. L. C. Rodrigues and M. Marengoni, “Segmentation of optic disc and blood vessels in retinal images using wavelets, mathematical morphology and hessian-based multi-scale filtering,” Biomedical Signal Processing and Control, vol. 36, pp. 39–49, 2017.
  68. M. M. Fraz, P. Remagnino, A. Hoppe et al., “Retinal vessel extraction using first-order derivative of Gaussian and morphological processing,” in Advances in Visual Computing. ISVC 2011. Lecture Notes in Computer Science, vol 6938, G. Bebis, R. Boyle, B. Parvin et al., Eds., pp. 410–420, Springer, Berlin, Heidelberg, 2011.
  69. M. M. Fraz, A. Basit, and S. Barman, “Application of morphological bit planes in retinal blood vessel extraction,” Journal of Digital Imaging, vol. 26, no. 2, pp. 274–286, 2013.
  70. Y. Yang, F. Shao, Z. Fu, and R. Fu, “Discriminative dictionary learning for retinal vessel segmentation using fusion of multiple features,” Signal, Image and Video Processing, vol. 13, pp. 1529–1537, 2019.
  71. Y. Yang, F. Shao, Z. Fu, and R. Fu, “Blood vessel segmentation of fundus images via cross-modality dictionary learning,” Applied Optics, vol. 57, no. 25, pp. 7287–7295, 2018.
  72. R. Vega, G. Sanchez-Ante, L. E. Falcon-Morales, H. Sossa, and E. Guevara, “Retinal vessel extraction using lattice neural networks with dendritic processing,” Computers in Biology and Medicine, vol. 58, pp. 20–30, 2015.
  73. J. I. Orlando, E. Prokofyeva, and M. B. Blaschko, “A discriminatively trained fully connected conditional random field model for blood vessel segmentation in fundus images,” IEEE Transactions on Biomedical Engineering, vol. 64, no. 1, pp. 16–27, 2017.

Copyright © 2020 Khan Bahadar Khan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
