Research Article | Open Access
Khan Bahadar Khan, Muhammad Shahbaz Siddique, Muhammad Ahmad, Manuel Mazzara, "A Hybrid Unsupervised Approach for Retinal Vessel Segmentation", BioMed Research International, vol. 2020, Article ID 8365783, 20 pages, 2020. https://doi.org/10.1155/2020/8365783
A Hybrid Unsupervised Approach for Retinal Vessel Segmentation
Retinal vessel segmentation (RVS) is a significant source of useful information for the monitoring, identification, initial treatment, and surgical planning of ophthalmic disorders. Common disorders, i.e., stroke, diabetic retinopathy (DR), and cardiac diseases, often change the normal structure of the retinal vascular network. Considerable research has been devoted to building automatic RVS systems, but the problem remains open. In this article, a framework is proposed for RVS with fast execution and competitive outcomes. An initial binary image is obtained by applying MISODATA to the preprocessed image. For vessel structure enhancement, B-COSFIRE filters are utilized along with thresholding to obtain another binary image. These two binary images are combined by a logical AND-type operation. The result is then fused with the enhanced image of the B-COSFIRE filters, followed by thresholding, to obtain the vessel location map (VLM). The methodology is verified on four different datasets: DRIVE, STARE, HRF, and CHASE_DB1, which are publicly accessible for benchmarking and validation. The obtained results are compared with existing competing methods.
The human visual system is the most essential sensory system for gathering information, navigation, and learning. The retina is the light-sensitive part of the eye that contains the fovea, light receptors, optic disk, and macula. It is a layered tissue coating the interior of the eye, acting as the initial sensor of the visual pathway and giving the sense of sight. Moreover, it allows understanding the colors, dimensions, and shapes of objects by processing the amount of light they reflect or emit. A retinal image of an eye is captured with a fundus camera. RGB photographs of the fundus are projections of the internal surface of the eye. Retinal imaging has developed swiftly and is now one of the most common practices in healthcare for screening patients suffering from ophthalmologic or systemic diseases. For identifying numerous ophthalmologic diseases, the ophthalmologist uses vessel condition as an indicator, which is a vital component of retinal fundus images.
Critical diagnostic information about eye diseases in human retinal images is indicated by the shape, appearance, morphological features, and tortuosity of the blood vessels. The structure of the retinal vasculature is also used for screening of brain stroke and cardiac diseases [4, 5]. Retinal vessel structures play a significant role among other structures in fundus images. RVS is the elementary phase of the examination of retina images. Vascular-related diseases are diagnosed with the help of vessel delineation, which is an important component of medical image processing. Additionally, ongoing research in the area of deep learning has suggested multiple approaches emphasizing the separation and delineation of the vasculature.
The inadequate number of images and the low contrast in publicly available retina datasets are challenging for deep learning-based research. A dataset with a large number of retina images, captured with different imaging systems and under diverse environmental conditions, is required to train a supervised network. Deep learning-based methods will aid in controlling blindness through timely and precise identification of diseases for successful remedy, and thus vividly increase the quality of life of patients with eye ailments. RVS is a very difficult task for several reasons:
(1) The structure and formation of retinal vessels are very complex, and there is prominent dissimilarity across local parts regarding the shape, size, and intensity of vessels.
(2) Some structures have the same intensity and shape as vessels, e.g., hemorrhages. Moreover, there are thin microvessels, whose width normally ranges from one to a few pixels and which can easily blend into the background. Illumination in the images is irregular, and contrast is low and slowly varying [7, 8]. Typically, noise in fundus images is introduced by the image-capturing procedure, such as artifacts on the lens or movement of the patient. It is hard to differentiate vessels from similar structures or noise in the retina image; thicker vessels are far more prominent than thinner ones, as shown in Figure 1.
(3) Different manual graders produce different segmentation results, and manual RVS is a hard and tedious task. Over the recent two decades, automatic RVS has attracted noteworthy attention and numerous techniques have been developed, but their performance degrades across datasets. Some techniques are not fully automatic, while others are incapable of handling pathological images.
Some of these methods are evaluated on datasets with a limited number of images, while others suffer from oversegmentation or undersegmentation on abnormal images. Hence, the problem of accurate RVS is still unsolved.
Automated RVS techniques provide incredible support to the ophthalmologist in the identification and treatment of numerous ophthalmological abnormalities. In this article, an automatic unsupervised approach is developed for RVS that consists of a combination of preprocessing steps, segmentation, vessel structure-based enhancement, and postprocessing steps. The preprocessing steps aim at exterminating noise and improving the contrast of the fundus image. Segmentation is performed using the Modified Iterative Self Organizing Data Analysis Technique (MISODATA) to acquire a binary image, which is fused with the segmented image of the Combination Of Shifted Filter Responses (B-COSFIRE) filter. Then, the fused image is multiplied with the enhanced image of the B-COSFIRE filter to obtain the initial vessel location map (VLM). Lastly, the VLM and the fused image are combined by a logical OR-type operator to obtain the final result. In a nutshell, the main contributions of this research are the following:
(1) A mask image is not provided with all retina datasets. Automatic mask creation is proposed for each image to extract the ROI, which suppresses the false positive rate (FPR).
(2) The proposed efficient denoising process (preprocessing steps) improves the selection of a suitable threshold.
(3) The basic ISODATA algorithm processes the retina image only once, locally and then globally, which sometimes makes it unable to find an optimal threshold. The modified ISODATA technique finds the global threshold of the entire image, which is compared against the individual local threshold of each segment in order to select the optimal threshold for more precise detection of vessels.
(4) The vessel location map (VLM) is a new scheme to achieve better performance, in which background noise eradication and vessel enhancement are accomplished independently.
(5) Distinctive postprocessing steps (AND-type and OR-type operations) reject misclassified foreground pixels.
2. Related Works
Numerous methodologies for RVS have been developed in the literature [4, 10]. These methodologies are arranged into two sets: supervised and unsupervised procedures. Supervised techniques utilize a trained classifier to assign pixels to the foreground or background. They have employed various classifiers, for instance, adaptive boosting (AdaBoost), support vector machines (SVM), neural networks (NN), Gaussian mixture models (GMM), and k-nearest neighbors (k-NN).
An RVS method utilizing a supervised k-NN classifier for separating foreground and background pixels was recommended by Niemeijer et al., with a feature vector (FV) formed from a multiscale (MS) Gaussian filter. Staal et al. proposed an equivalent RVS methodology using an FV generated by a ridge detector. A feed-forward NN-based classifier was applied by Marin et al., using a 7-D FV generated from moment-invariant features.
An SVM-based approach was presented by Ricci and Perfetti, utilizing an FV constructed through a rotation-invariant line operator and pixel intensity. An AdaBoost classifier was suggested by Lupascu et al., utilizing a feature set. An ensemble-based RVS system applying the simple linear iterative clustering (SLIC) algorithm was presented by Wang et al. A GMM classifier-based scheme was recommended by Roychowdhury et al., utilizing an FV extracted from the pixel neighborhood of first- and second-order gradient images.
Zhu et al. offered an extreme learning machine (ELM)-based RVS scheme utilizing an FV generated from morphological and local attributes combined with attributes extracted from phase congruency, the Hessian, and the divergence of vector fields (DVF). Tang et al. recommended an SVM-based RVS scheme utilizing an FV created from MS vessel filtering and Gabor wavelet features. A random forest classifier-based RVS system was proposed by Aslani and Sarnel, utilizing an FV created from MS and multiorientation Gabor filter responses and intensity features, combined with features extracted from a vesselness measure and the B-COSFIRE filter.
A directionally sensitive vessel enhancement-based scheme combined with an NN derived from the U-Net model was presented in . Thangaraj et al. constructed an FV from Gabor filter responses, Frangi’s vesselness measure, local binary pattern features, Hu moment invariants, and grey-level cooccurrence matrix features for an NN-based RVS approach. Memari et al. recommended a combination of various enhancement techniques with the AdaBoost classifier to segregate foreground and background pixels.
A three-stage (thick vessel extraction, thin vessel extraction, and vessel fusion-based) deep learning approach was proposed in . Guo et al. suggested an MS deeply supervised network with short connections (BTS-DSN) for RVS. Local intensities, local binary patterns, a histogram of gradients, DVF, higher-order local autocorrelations, and morphological transformation features were used for RVS in . Random forests were used to select the feature sets, which were utilized in combination with a hierarchical classification methodology to extract the vessels.
Alternatively, unsupervised systems are categorized into matched filtering (MF), mathematical morphology (MM), and multiscale-based approaches. In matched filtering approaches, thick and thin vessels are extracted by selecting large and small filter kernels, respectively. However, large kernels accurately detect major vessels but misclassify thin vessels by overestimating their width; similarly, small kernels accurately extract thin vessels but recover thick vessels at reduced widths. To obtain a complete vascular network, a conventional MF technique must therefore be applied with a large number of diverse filter masks in various directions.
Similar methods were employed using MF [27–32], combined filters , COSFIRE filters [3, 5, 34–36], Gaussian filters , wavelet filters , and Frangi’s filter . MM-based approaches are utilized for isolating retinal image segments such as the optic disk, macula, fovea, and vasculature. Morphological operators apply structuring elements (SE) to images for the extraction and representation of region contours. A morphological operation for detecting particular structures has the benefits of speed and noise elimination, but it does not exploit the known vessel cross-sectional shape. Moreover, extremely tortuous vessels are difficult to extract when a large SE is superimposed. Morphological operations were utilized for both enhancement and RVS [2, 40–44]. On the other hand, retinal blood vessels of variable thickness at various scales were obtained by multiscale approaches [45–50].
3. Proposed Model
The complete structure of the proposed RVS framework is introduced in this section. The information and description of every stage are also presented in subsections.
The proposed framework consists of two major blocks to obtain a final binary image: (1) retina image denoising and segmentation and (2) vessel structure-based enhancement and segmentation. The key objective of this framework is to extract the vasculature accurately while eliminating noise and disease-related artifacts. The complete structure of the proposed framework is depicted in Figure 2, in which Block-I consists of the selection of a suitable retina channel, contrast enhancement, noise filtering, region of interest (ROI) extraction, thresholding, and postprocessing steps, and Block-II includes the application of the B-COSFIRE filter, logical operations, and postprocessing steps. The initial binary vessel map of Block-I is fused with the B-COSFIRE filter segmented image in Block-II. The fused map is then multiplied with the B-COSFIRE filter-enhanced image, which is further thresholded. This output image is combined with the initial postprocessed image by the logical OR-type operation to obtain the final binary image.
3.2. Block-I: Retina Image Denoising and Segmentation
In the first block, the retina image is passed through selected techniques to extract the initial denoised vessel map. The green band of the RGB retina image is extracted and selected for subsequent operations because of its noticeable contrast difference between the vessels and other retina structures. RGB retina images generally exhibit contrast variations, low resolution, and noise. To counter such variations and produce a more suitable image for further processing, vessel light reflex elimination and background uniformity operations are performed. Retinal vessel structures have poor reflectance compared to other retinal planes, and some vessels contain a bright stripe (light reflex) that runs down the central length of the vessel. To overcome this problem, a disc-shaped opening operator with a 3-pixel-wide SE is applied to the green plane; a minimal disc width is selected to avoid merging close vessels. Background uniformity and smoothing of random salt-and-pepper noise are obtained by applying a mean filter. Additional noise flattening is achieved by applying a Gaussian kernel of fixed size and variance.
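As a sketch of this preprocessing chain (the exact SE, mean-filter, and Gaussian parameters were lost from the text above, so the sizes below are illustrative assumptions):

```python
import numpy as np
from scipy import ndimage

def preprocess_green(rgb):
    """Block-I preprocessing sketch: extract the green channel,
    suppress the central light reflex with a grey opening (small
    SE), flatten salt-and-pepper-like noise with a mean filter,
    then smooth with a Gaussian kernel. All filter sizes here are
    illustrative assumptions, not the paper's values."""
    g = rgb[..., 1].astype(np.float64)
    opened = ndimage.grey_opening(g, size=(3, 3))      # light-reflex removal
    meaned = ndimage.uniform_filter(opened, size=3)    # background smoothing
    return ndimage.gaussian_filter(meaned, sigma=1.0)  # extra noise flattening

rng = np.random.default_rng(0)
fundus = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
pre = preprocess_green(fundus)
```

The grey opening only shrinks bright structures narrower than the SE, so close vessels survive while the thin light reflex is absorbed.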
CLAHE [51, 52] is applied to the preprocessed green channel to make vessel structures prominent. The CLAHE operation divides the input image into blocks of fixed size with a contrast-improvement constraint set to 0.01. The clip limit suppresses the noise level and escalates the contrast. The effect of the CLAHE process alongside the green plane is displayed in Figure 3, and a histogram-based graphical demonstration of the contrast improvement operations is displayed in Figure 4. An averaging filter is then applied for smoothing and elimination of anatomical regions (e.g., optic disk, macula, and fovea). The difference image is computed for all pixels from the CLAHE-enhanced image and the averaging-filter output.
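The background-subtraction step above can be sketched as follows; the averaging window size and the sign convention (vessels darker than background on the green channel, so the difference is inverted to make them bright) are assumptions, since the paper's values were lost in extraction:

```python
import numpy as np
from scipy import ndimage

def difference_image(enhanced, avg_size=11):
    """Difference-image sketch: a large averaging filter estimates
    the slowly varying background (optic disk, macula, fovea), and
    subtracting it keeps the thin, high-frequency vessel detail.
    avg_size is an illustrative assumption."""
    enhanced = enhanced.astype(np.float64)
    background = ndimage.uniform_filter(enhanced, size=avg_size)
    return np.clip(background - enhanced, 0.0, None)  # vessels become bright

flat = np.full((32, 32), 200.0)
flat[16, :] = 50.0                 # a dark vessel-like line
diff = difference_image(flat)
```

On this toy image the response is large only along the dark line, which is exactly what the subsequent thresholding step needs.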
The extra regions of the retinal image are cropped by a masking method to extract the ROI, which reduces the computational complexity. An automatic mask is created from the red band of the retinal image; the red channel is used for mask construction because it has good vessel-background dissimilarity. The automatic mask is created for all datasets because a mask image is not provided with some of them. The difference image is then thresholded by the MISODATA algorithm. The following procedure is used to compute the threshold level, and the application of MISODATA is shown in Algorithm 1.
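A hedged sketch of ISODATA thresholding, plus a stand-in for the local/global modification described in the contributions (the paper's exact local-versus-global comparison rule in Algorithm 1 may differ from the fallback used here):

```python
import numpy as np

def isodata_threshold(x, eps=0.5):
    """Classic iterative ISODATA threshold: split the data at T,
    move T to the midpoint of the two class means, and repeat
    until T stabilizes."""
    x = np.asarray(x, dtype=np.float64).ravel()
    t = x.mean()
    while True:
        lo, hi = x[x <= t], x[x > t]
        if lo.size == 0 or hi.size == 0:
            return t
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < eps:
            return t_new
        t = t_new

def misodata_like(img, tiles=2, min_spread=5.0):
    """Stand-in for the modified scheme: each tile gets a local
    ISODATA threshold, but near-uniform tiles fall back to the
    global threshold (an assumption about the comparison rule)."""
    g = isodata_threshold(img)
    out = np.zeros(img.shape, dtype=bool)
    th, tw = img.shape[0] // tiles, img.shape[1] // tiles
    for i in range(tiles):
        for j in range(tiles):
            tile = img[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
            t = isodata_threshold(tile) if tile.std() > min_spread else g
            out[i * th:(i + 1) * th, j * tw:(j + 1) * tw] = tile > t
    return out
```

On a bimodal image the iteration converges in a few steps to a threshold between the two modes.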
Isolated regions whose area falls below a fixed pixel count in the thresholded image are trimmed, and the result is fused with the B-COSFIRE filter segmented image of Block-II by an AND-type operation. The shape statistics (eccentricity and area) are utilized for the rejection of nonvessel structures: vessel structures have higher area and eccentricity because their pixels are connected and form elongated structures. Figure 5 shows the corresponding graphical results.
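The area-based pruning can be sketched with connected-component labeling; the minimum-area value is illustrative (the paper's value was lost in extraction), and the paper's additional eccentricity test is not reproduced here:

```python
import numpy as np
from scipy import ndimage

def prune_small_components(binary, min_area=30):
    """Drop connected components smaller than min_area pixels.
    True vessel segments are elongated and well connected, so
    tiny isolated blobs are almost always noise. min_area is an
    illustrative assumption."""
    labels, n = ndimage.label(binary)
    if n == 0:
        return binary.copy()
    areas = np.bincount(labels.ravel())   # pixel count per label
    keep = areas >= min_area
    keep[0] = False                       # label 0 is background
    return keep[labels]
```

Indexing the boolean `keep` table by the label image maps every pixel to "kept" or "dropped" in one vectorized step.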
3.3. Block-II: Vessel Structure-Based Enhancement and Segmentation
In Block-II, the masked image of Block-I is used as the input for vessel structure-based enhancement and RVS. The B-COSFIRE filter is applied for contrast improvement of vessel structures; it would also amplify noise along with the vessels if the image were not preprocessed, which is why the masked image is used for further processing. The B-COSFIRE filter produces two results: a binary segmented image and a vessel structure-based enhanced image, displayed in Figure 6. An AND-type operation is used to combine the binary image of Block-I with the B-COSFIRE segmented image. The effect of the AND-type operation is shown in Figure 7, which demonstrates that an alternative operator such as OR-type would introduce noise and misclassification; the advantage of the AND-type operator is further exposed in Figure 8 by displaying the visual results with and without it. The AND-fused image is postprocessed and multiplied with the enhanced image, which is further thresholded to obtain a segmented image. Pixel-by-pixel multiplication ensures the detection of vessels at their correct positions. The logical OR-type operation couples the postprocessed and segmented images to produce the final result; the visual effects of the OR-type operator are presented in Figure 9.
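The fusion logic just described can be condensed into a few array operations; the threshold value is illustrative, and the intermediate postprocessing steps from the text are not shown:

```python
import numpy as np

def fuse_vessel_maps(bw_block1, bw_bcosfire, enhanced, thresh=0.5):
    """Block-II fusion sketch: AND the two binary maps, gate the
    B-COSFIRE enhanced image with the result (pixel-wise
    multiplication keeps responses only where both paths agree),
    re-threshold, then OR so vessels confirmed by either branch
    survive. thresh is an illustrative assumption."""
    anded = np.logical_and(bw_block1, bw_bcosfire)
    gated = anded.astype(np.float64) * np.asarray(enhanced, dtype=np.float64)
    return np.logical_or(anded, gated > thresh)

a = np.array([[True, True], [False, False]])
b = np.array([[True, False], [False, True]])
e = np.full((2, 2), 0.9)
final = fuse_vessel_maps(a, b, e)
```

The AND stage is deliberately conservative (it suppresses pixels only one branch proposes), and the closing OR restores detections that the gating-and-rethresholding path confirms.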
The B-COSFIRE filter application includes convolution with difference-of-Gaussians (DoG) filters, blurring of their responses, shifting of the blurred responses, and an approximate point-wise weighted geometric mean (GM). A DoG function is defined as the difference of an outer Gaussian function (GF), whose standard deviation (SD) decides the range of the boundary, and an inner GF with a manually set SD, evaluated at each pixel position of the image. The response of the DoG filter is estimated by convolving its kernel function with the pixel intensity distribution, followed by half-wave rectification to reject negative values.
In the B-COSFIRE filter, three parameters are used to represent each point of interest: the SD of the DoG filter and the two polar coordinates of the point. The set of such parameter tuples characterizes the measured DoG responses. The blurring process computes the maximum of the weighted, thresholded responses of a DoG filter in a local neighborhood. Each DoG-blurred outcome is then shifted in the reverse direction of its polar position so that all responses merge at the support center of the B-COSFIRE filter; this yields a blurred and shifted response for every tuple in the set. The output of the filter is the weighted GM of all the blurred and shifted DoG responses, with thresholding applied to the result. The weighted GM realizes an AND-type operation: the B-COSFIRE filter responds only when all DoG filter responses are larger than zero. The overall step-by-step visual results according to the block diagram (Figure 2) are portrayed in Figure 10.
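The DoG building block with half-wave rectification can be sketched as below; the inner/outer SD ratio of 0.5 is the convention of the B-COSFIRE paper and is stated here as an assumption, since the exact values were lost from the text:

```python
import numpy as np
from scipy import ndimage

def dog_response(img, sigma):
    """Center-surround DoG response sketch: the difference of two
    Gaussian-blurred copies of the image (inner SD 0.5*sigma, an
    assumed ratio), followed by half-wave rectification to reject
    negative values, as described in the text."""
    img = img.astype(np.float64)
    inner = ndimage.gaussian_filter(img, 0.5 * sigma)
    outer = ndimage.gaussian_filter(img, sigma)
    return np.maximum(inner - outer, 0.0)  # half-wave rectification

line = np.zeros((21, 21))
line[10, :] = 1.0          # a bright line-like structure
resp = dog_response(line, sigma=2.0)
```

The narrower inner Gaussian preserves more of the line than the wider outer one, so the rectified difference peaks along the line and vanishes in flat regions.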
4. Experimental Outcomes and Deliberation
This section provides information about the datasets, performance metrics, analysis of experimental results, and time complexity of the proposed method.
The proposed system obtained remarkable results on the freely available online datasets: DRIVE [11, 12], STARE , HRF , and CHASE_DB1 . The merit of the framework is justified through assessment against state-of-the-art systems. The datasets used for validation of the suggested framework are summarized in Table 1. The manually labeled results in all datasets are utilized as the gold standard for performance assessment of the proposed framework.
4.2. Performance Judgment Parameters
The quantitative results are obtained by comparing the proposed segmentations with the manual segmentations available for each dataset. There are numerous performance standards mentioned in the literature; the metrics used for evaluation of the proposed framework are listed in Table 2. Six performance standards (Acc, Sn, Sp, AUC, MCC, and CAL) are selected for the justification of the proposed methodology. The Acc metric gives an overall valuation of the proposed method. Sn measures the quantity of correctly classified vessel pixels, while Sp assesses the competency of differentiating nonvessel pixels. The AUC is computed from Sn and Sp. The MCC [5, 56] is a more appropriate indicator of the accuracy of binary classification in the case of unbalanced classes. For a comprehensive judgment of segmentation quality, the CAL metric [57, 58] is computed; it provides justification based on the properties (connectivity, area, length) of the segmented structures beyond the correctly classified image pixels.
In Table 2, the terms TP, TN, FP, and FN denote true positives (correctly matched vessel pixels), true negatives (correctly matched nonvessel pixels), false positives (pixels invalidly predicted as vessels), and false negatives (vessel pixels invalidly predicted as background), respectively.
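The pixel-level metrics follow directly from these counts. Since Table 2's formulas were lost in extraction, the AUC below is taken as the mean of Sn and Sp, the usual convention in RVS papers when no continuous score is available; this is an assumption:

```python
import math

def segmentation_metrics(tp, tn, fp, fn):
    """Standard pixel-level metrics from confusion counts.
    AUC = (Sn + Sp) / 2 is an assumed convention here."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    sn = tp / (tp + fn)   # sensitivity: recall on vessel pixels
    sp = tn / (tn + fp)   # specificity: recall on background pixels
    auc = 0.5 * (sn + sp)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {"Acc": acc, "Sn": sn, "Sp": sp, "AUC": auc, "MCC": mcc}

m = segmentation_metrics(40, 50, 5, 5)
```

MCC stays informative even when vessel pixels are a small minority of the image, which is why it is preferred for unbalanced classes.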
Let S be the extracted final binary image and G the corresponding manual segmentation. The considered metric evaluates the following [57, 58]:
(i) Connectivity (C): it calculates the fragmentation grade of S with respect to the manual segmentation G and penalizes fragmented segmentations. It is computed from the number of linked segments and the number of vessel pixels in the considered binary image.
(ii) Area (A): it estimates the overlapping area between S and G, based on the Jaccard coefficient, using a morphological dilation with a disc structuring element (SE) of fixed pixel radius; the radius controls the tolerance to lines of various sizes.
(iii) Length (L): it determines the degree of agreement between S and G by computing the length of the two line networks through a skeletonization process followed by a morphological dilation with a disc SE of fixed pixel radius; this radius controls the tolerance to dissimilarity of the line-tracing output.
The final assessment parameter, named CAL, is defined as the product of C, A, and L.
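The C and A components can be sketched with labeling and binary dilation; the L component additionally needs a skeletonization step (e.g. `skimage.morphology.skeletonize`) and is omitted to keep the sketch dependency-light. The tolerance radius is an illustrative assumption, since the paper's values were lost in extraction:

```python
import numpy as np
from scipy import ndimage

def connectivity_area(seg, gold, alpha=2):
    """C and A components of the CAL metric (sketch). C compares
    the connected-component counts of segmentation and gold
    standard, normalized by the gold vessel-pixel count; A is a
    dilation-tolerant Jaccard overlap with tolerance radius alpha
    (an assumed value)."""
    n_seg = ndimage.label(seg)[1]
    n_gold = ndimage.label(gold)[1]
    c = 1.0 - min(abs(n_seg - n_gold), gold.sum()) / gold.sum()
    ball = ndimage.generate_binary_structure(2, 1)
    d_seg = ndimage.binary_dilation(seg, ball, iterations=alpha)
    d_gold = ndimage.binary_dilation(gold, ball, iterations=alpha)
    inter = (d_seg & gold) | (seg & d_gold)
    a = inter.sum() / (seg | gold).sum()
    return c, a
```

Dilating before intersecting is what makes A forgiving of small boundary disagreements on vessels of different calibers.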
4.3. Experimental Results and Inspection
The success of the proposed framework is established on four freely available datasets used for testing and evaluation: DRIVE, STARE, HRF, and CHASE_DB1. The average performance parameter results in Table 3 are computed by processing the 20 test images of the DRIVE and STARE datasets. The performance scores of the HRF dataset (15 normal, 15 DR, and 15 glaucomatous images) and CHASE_DB1 are presented in Tables 4 and 5 and Table 6, respectively. The best and worst results within Tables 3–6 are highlighted in italic font. The best and worst image results from each dataset are selected based on their accuracy scores, and their pictorial results are shown in Figures 11–14.
The framework performs well on both healthy and pathological images of all selected datasets. The statistical results in Tables 3–6 validate that the suggested system is robust and has the capability to handle the bright-lesion images of the STARE dataset, the high-resolution images of the HRF dataset, the low-resolution images of the DRIVE dataset, and the left/right eye images of the CHASE_DB1 dataset. Anatomical structures are also efficiently omitted to avoid misclassification.
The average statistical results of the proposed framework on all selected datasets are displayed in Table 7, which reflects that the highest mean scores of Acc 0.997, Sn 0.814, Sp 0.997, and AUC 0.905 are achieved on the CHASE_DB1 dataset. The lowest FPR is also observed on the same dataset. The highest values of MCC 0.761 and CAL 0.699 are recorded on the HRF dataset. The highest value of each parameter is italicized in the respective column of Table 7.
The average performance parameter scores of the proposed framework on the DRIVE and STARE datasets are compared with the existing literature in Table 8, while Table 9 shows the result comparison of the HRF and CHASE_DB1 datasets. The Acc, Sn, and Sp results of all techniques in Tables 8 and 9 are acquired from their respective published articles while the AUC result is calculated by using the formula in Table 2.
In Table 8, the obtained results of the framework are compared with 19 unsupervised and 18 supervised existing techniques. The proposed framework achieved a higher Acc than all unsupervised methods on the DRIVE dataset except Khan et al. and Memari et al., which are 0.003% better, and Fan et al., which is 0.002% better than ours. The supervised methods of Ricci and Perfetti, Lupascu et al., Wang et al., Zhu et al., Thangaraj et al., Memari et al., Khowaja et al., and Fan et al. show results 0.001%, 0.001%, 0.019%, 0.003%, 0.003%, 0.014%, 0.017%, and 0.008% better than the proposed method, respectively. However, some of these methods are validated on only one dataset, which suggests that they are tuned for a single dataset; some produce a very low AUC score, reflecting the trade-off between Sn and Sp; and supervised methods are computationally very expensive. In the case of the STARE dataset, the framework produced the highest Acc scores among all methods. Table 9 reflects that very few techniques used both the HRF and CHASE_DB1 datasets for validation. The Acc score of the framework is higher than that of both supervised and unsupervised approaches on the HRF and CHASE_DB1 datasets, except for Soomro et al. and Fan et al., which are slightly higher than ours on the HRF dataset only. Fan et al. showed a higher Sp value than all other methods on the HRF dataset, while the highest Sp value on the CHASE_DB1 dataset is obtained by the proposed method. The other supervised and unsupervised methods acquired slightly greater or equivalent values of the Sn and AUC metrics on the HRF and CHASE_DB1 datasets compared to ours.
In Table 10, the MCC and CAL values recorded by the proposed method and by existing supervised and unsupervised methods are presented. The MCC and CAL values of Chaudhuri et al. , Niemeijer et al. , Hoover et al. , and B-COSFIRE  are calculated by utilizing their publicly accessible segmented images. The results of Fraz et al. [68, 69], RUSTICO , Yang et al. [70, 71], Vega et al. , FC-CRF , and UP-CRF  are extracted from their published articles.
The average MCC value attained by the proposed method is higher than that of all compared unsupervised approaches on the DRIVE, STARE, and HRF datasets, while it is statistically lower than that of the supervised methods (i.e., FC-CRF  and UP-CRF ) on the DRIVE, STARE, and CHASE_DB1 datasets. The CAL value of the proposed method is higher than that of all supervised and unsupervised methods on the HRF dataset, while it is statistically lower than or equivalent to the CAL values of the other methods on the DRIVE, STARE, and CHASE_DB1 datasets.
4.3.1. Processing Time
The proposed framework processes a single image in a very short time compared to the other approaches in Table 11. The time values are computed on a single image taken from each of the DRIVE and STARE datasets.
5. Conclusion
Vessel extraction is crucial for inspecting abnormalities inside and around the retinal periphery. Retinal vessel segmentation is a challenging task due to the existence of pathologies, unpredictable dimensions and contours of the vessels, nonuniform illumination, and structural inconsistency between subjects. The proposed methodology is consistent, fast, and completely automated for isolating the retinal vascular network. The success of the proposed framework is evidently revealed by the RVS statistics on the DRIVE, STARE, HRF, and CHASE_DB1 datasets. The eradication of anomalous structures prior to enhancement boosted the efficiency of the proposed method. The application of logical operators avoids misclassification of foreground pixels, which enhances the accuracy and makes the method robust. The pictorial representations validate that the framework is able to segment both healthy and unhealthy images. Furthermore, the method does not require any hand-marked training data from experts, which keeps it computationally fast.
Data Availability
All the data are fully available within the manuscript without any restriction.
Conflicts of Interest
The authors declare no conflict of interest.
- M. Hashemzadeh and B. A. Azar, “Retinal blood vessel extraction employing effective image features and combination of supervised and unsupervised machine learning methods,” Artificial Intelligence in Medicine, vol. 95, pp. 1–15, 2019.
- P. Bibiloni, M. González-Hidalgo, and S. Massanet, “A real-time fuzzy morphological algorithm for retinal vessel segmentation,” Journal of Real-Time Image Processing, vol. 16, pp. 2337–2350, 2019.
- S. A. Badawi and M. M. Fraz, “Optimizing the trainable b-cosfire filter for retinal blood vessel segmentation,” PeerJ, vol. 6, article e5855, 2018.
- J. Almotiri, K. Elleithy, and A. Elleithy, “Retinal vessels segmentation techniques and algorithms: a survey,” Applied Sciences, vol. 8, no. 2, p. 155, 2018.
- G. Azzopardi, N. Strisciuglio, M. Vento, and N. Petkov, “Trainable COSFIRE filters for vessel delineation with application to retinal images,” Medical Image Analysis, vol. 19, no. 1, pp. 46–57, 2015.
- Z. Yan, X. Yang, and K.-T. Cheng, “Joint segment-level and pixelwise losses for deep learning based retinal vessel segmentation,” IEEE Transactions on Biomedical Engineering, vol. 65, no. 9, pp. 1912–1923, 2018.
- T. A. Soomro, A. J. Afifi, L. Zheng et al., “Deep learning models for retinal blood vessels segmentation: a review,” IEEE Access, vol. 7, pp. 71696–71717, 2019.
- T. A. Soomro, A. J. Afifi, A. A. Shah et al., “Impact of image enhancement technique on cnn model for retinal blood vessels segmentation,” IEEE Access, vol. 7, pp. 158183–158197, 2019.
- T. A. Soomro, J. Gao, Z. Lihong, A. J. Afifi, S. Soomro, and M. Paul, “Retinal blood vessels extraction of challenging images,” in Data Mining. AusDM 2018. Communications in Computer and Information Science, vol 996, R. Islam, Y. S. Koh, Y. Zhao et al., Eds., pp. 347–359, Springer, Singapore, 2018.
- K. B. Khan, A. A. Khaliq, A. Jalil et al., “A review of retinal blood vessels extraction techniques: challenges, taxonomy, and future trends,” Pattern Analysis and Applications, vol. 22, no. 3, pp. 767–802, 2019.
- M. Niemeijer, J. Staal, B. van Ginneken, M. Loog, and M. D. Abramoff, “Comparative study of retinal vessel segmentation methods on a new publicly available database,” in Proceedings Volume 5370, Medical Imaging 2004: Image Processing, pp. 648–656, San Diego, CA, USA, May 2004.
- J. Staal, M. D. Abramoff, M. Niemeijer, M. A. Viergever, and B. van Ginneken, “Ridge-based vessel segmentation in color images of the retina,” IEEE Transactions on Medical Imaging, vol. 23, no. 4, pp. 501–509, 2004.
- D. Marín, A. Aquino, M. E. Gegundez-Arias, and J. M. Bravo, “A new supervised method for blood vessel segmentation in retinal images by using gray-level and moment invariants-based features,” IEEE Transactions on Medical Imaging, vol. 30, no. 1, pp. 146–158, 2010.
- E. Ricci and R. Perfetti, “Retinal blood vessel segmentation using line operators and support vector classification,” IEEE Transactions on Medical Imaging, vol. 26, no. 10, pp. 1357–1365, 2007.
- C. A. Lupascu, D. Tegolo, and E. Trucco, “FABC: retinal vessel segmentation using AdaBoost,” IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 5, pp. 1267–1274, 2010.
- S. Wang, Y. Yin, G. Cao, B. Wei, Y. Zheng, and G. Yang, “Hierarchical retinal blood vessel segmentation based on feature and ensemble learning,” Neurocomputing, vol. 149, pp. 708–717, 2015.
- S. Roychowdhury, D. D. Koozekanani, and K. K. Parhi, “Blood vessel segmentation of fundus images by major vessel extraction and subimage classification,” IEEE Journal of Biomedical and Health Informatics, vol. 19, no. 3, pp. 1118–1128, 2014.
- C. Zhu, B. Zou, R. Zhao et al., “Retinal vessel segmentation in colour fundus images using extreme learning machine,” Computerized Medical Imaging and Graphics, vol. 55, pp. 68–77, 2017.
- S. Tang, T. Lin, J. Yang, J. Fan, D. Ai, and Y. Wang, “Retinal vessel segmentation using supervised classification based on multi-scale vessel filtering and Gabor wavelet,” Journal of Medical Imaging and Health Informatics, vol. 5, no. 7, pp. 1571–1574, 2015.
- S. Aslani and H. Sarnel, “A new supervised retinal vessel segmentation method based on robust hybrid features,” Biomedical Signal Processing and Control, vol. 30, pp. 1–12, 2016.
- D. A. Dharmawan, D. Li, B. P. Ng, and S. Rahardja, “A new hybrid algorithm for retinal vessels segmentation on fundus images,” IEEE Access, vol. 7, pp. 41885–41896, 2019.
- S. Thangaraj, V. Periyasamy, and R. Balaji, “Retinal vessel segmentation using neural network,” IET Image Processing, vol. 12, no. 5, pp. 669–678, 2017.
- N. Memari, A. R. Ramli, M. I. B. Saripan, S. Mashohor, and M. Moghbel, “Supervised retinal vessel segmentation from color fundus images based on matched filtering and AdaBoost classifier,” PLoS One, vol. 12, no. 12, article e0188939, 2017.
- Z. Yan, X. Yang, and K.-T. T. Cheng, “A three-stage deep learning model for accurate retinal vessel segmentation,” IEEE Journal of Biomedical and Health Informatics, vol. 23, no. 4, pp. 1427–1436, 2019.
- S. Guo, K. Wang, H. Kang, Y. Zhang, Y. Gao, and T. Li, “BTS-DSN: deeply supervised neural network with short connections for retinal vessel segmentation,” International Journal of Medical Informatics, vol. 126, pp. 105–113, 2019.
- S. A. Khowaja, P. Khuwaja, and I. A. Ismaili, “A framework for retinal vessel segmentation from fundus images using hybrid feature set and hierarchical classification,” Signal, Image and Video Processing, vol. 13, no. 2, pp. 379–387, 2019.
- B. Zhang, L. Zhang, L. Zhang, and F. Karray, “Retinal vessel extraction by matched filter with first-order derivative of Gaussian,” Computers in Biology and Medicine, vol. 40, no. 4, pp. 438–445, 2010.
- A. A. Mudassar and S. Butt, “Extraction of blood vessels in retinal images using four different techniques,” Journal of Medical Engineering, vol. 2013, Article ID 408120, 21 pages, 2013.
- B. Biswal, T. Pooja, and N. B. Subrahmanyam, “Robust retinal blood vessel segmentation using line detectors with multiple masks,” IET Image Processing, vol. 12, no. 3, pp. 389–399, 2017.
- D. A. Dharmawan and B. P. Ng, “A new two-dimensional matched filter based on the modified Chebyshev type I function for retinal vessels detection,” in 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 369–372, Seogwipo, South Korea, July 2017.
- S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, and M. Goldbaum, “Detection of blood vessels in retinal images using two-dimensional matched filters,” IEEE Transactions on Medical Imaging, vol. 8, no. 3, pp. 263–269, 1989.
- M. A. Khan, T. M. Khan, T. A. Soomro, N. Mir, and J. Gao, “Boosting sensitivity of a retinal vessel segmentation algorithm,” Pattern Analysis and Applications, vol. 22, no. 2, pp. 583–599, 2019.
- W. S. Oliveira, J. V. Teixeira, T. I. Ren, G. D. Cavalcanti, and J. Sijbers, “Unsupervised retinal vessel segmentation using combined filters,” PLoS One, vol. 11, no. 2, article e0149943, 2016.
- N. Strisciuglio, G. Azzopardi, M. Vento, and N. Petkov, “Supervised vessel delineation in retinal fundus images with the automatic selection of B-COSFIRE filters,” Machine Vision and Applications, vol. 27, no. 8, pp. 1137–1149, 2016.
- K. B. Khan, A. A. Khaliq, and M. Shahid, “B-COSFIRE filter and VLM based retinal blood vessels segmentation and denoising,” in 2016 International Conference on Computing, Electronic and Electrical Engineering (ICE Cube), pp. 132–137, Quetta, Pakistan, April 2016.
- G. Azzopardi and N. Petkov, “Automatic detection of vascular bifurcations in segmented retinal images using trainable COSFIRE filters,” Pattern Recognition Letters, vol. 34, no. 8, pp. 922–933, 2013.
- L. Gang, O. Chutatape, and S. M. Krishnan, “Detection and measurement of retinal vessels in fundus images using amplitude modified second-order Gaussian filter,” IEEE Transactions on Biomedical Engineering, vol. 49, no. 2, pp. 168–172, 2002.
- P. Bankhead, C. N. Scholfield, J. G. McGeown, and T. M. Curtis, “Fast retinal vessel detection and measurement using wavelets and edge location refinement,” PLoS One, vol. 7, no. 3, article e32435, 2012.
- K. B. Khan, A. A. Khaliq, and M. Shahid, “A novel fast GLM approach for retinal vascular segmentation and denoising,” Journal of Information Science and Engineering, vol. 33, no. 6, pp. 1611–1627, 2017.
- K. BahadarKhan, A. A. Khaliq, and M. Shahid, “A morphological Hessian-based approach for retinal blood vessels segmentation and denoising using region-based Otsu thresholding,” PLoS One, vol. 11, no. 7, article e0158996, 2016.
- L. C. Neto, G. L. Ramalho, J. F. R. Neto, R. M. Veras, and F. N. Medeiros, “An unsupervised coarse-to-fine algorithm for blood vessel segmentation in fundus images,” Expert Systems with Applications, vol. 78, pp. 182–192, 2017.
- F. Zana and J.-C. Klein, “Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation,” IEEE Transactions on Image Processing, vol. 10, no. 7, pp. 1010–1019, 2001.
- G. Ayala, T. Leon, and V. Zapater, “Different averages of a fuzzy set with an application to vessel segmentation,” IEEE Transactions on Fuzzy Systems, vol. 13, no. 3, pp. 384–393, 2005.
- M. M. Fraz, S. A. Barman, P. Remagnino et al., “An approach to localize the retinal blood vessels using bit planes and centerline detection,” Computer Methods and Programs in Biomedicine, vol. 108, no. 2, pp. 600–616, 2012.
- M. E. Martinez-Perez, A. D. Hughes, S. A. Thom, A. A. Bharath, and K. H. Parker, “Segmentation of blood vessels from red-free and fluorescein retinal images,” Medical Image Analysis, vol. 11, no. 1, pp. 47–61, 2007.
- D. J. Farnell, F. Hatfield, P. Knox et al., “Enhancement of blood vessels in digital fundus photographs via the application of multiscale line operators,” Journal of the Franklin Institute, vol. 345, no. 7, pp. 748–765, 2008.
- M. Vlachos and E. Dermatas, “Multi-scale retinal vessel segmentation using line tracking,” Computerized Medical Imaging and Graphics, vol. 34, no. 3, pp. 213–227, 2010.
- R. Annunziata, A. Garzelli, L. Ballerini, A. Mecocci, and E. Trucco, “Leveraging multiscale Hessian-based enhancement with a novel exudate inpainting technique for retinal vessel segmentation,” IEEE Journal of Biomedical and Health Informatics, vol. 20, no. 4, pp. 1129–1138, 2016.
- D. Gou, Y. Wei, H. Fu, and N. Yan, “Retinal vessel extraction using dynamic multi-scale matched filtering and dynamic threshold processing based on histogram fitting,” Machine Vision and Applications, vol. 29, no. 4, pp. 655–666, 2018.
- K. Yue, B. Zou, Z. Chen, and Q. Liu, “Improved multi-scale line detection method for retinal blood vessel segmentation,” IET Image Processing, vol. 12, no. 8, pp. 1450–1457, 2018.
- S. M. Pizer, E. P. Amburn, J. D. Austin et al., “Adaptive histogram equalization and its variations,” Computer Vision, Graphics, and Image Processing, vol. 39, no. 3, pp. 355–368, 1987.
- A. M. Reza, “Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement,” Journal of VLSI Signal Processing Systems for Signal, Image and Video Technology, vol. 38, no. 1, pp. 35–44, 2004.
- A. Hoover, V. Kouznetsova, and M. Goldbaum, “Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response,” IEEE Transactions on Medical Imaging, vol. 19, no. 3, pp. 203–210, 2000.
- J. Odstrcilik, R. Kolar, A. Budai et al., “Retinal vessel segmentation by improved matched filtering: evaluation on a new high-resolution fundus image database,” IET Image Processing, vol. 7, no. 4, pp. 373–383, 2013.
- C. G. Owen, A. R. Rudnicka, R. Mullen et al., “Measuring retinal vessel tortuosity in 10-year-old children: validation of the computer-assisted image analysis of the retina (CAIAR) program,” Investigative Ophthalmology & Visual Science, vol. 50, no. 5, pp. 2004–2010, 2009.
- S. Boughorbel, F. Jarray, and M. El-Anbari, “Optimal classifier for imbalanced data using Matthews correlation coefficient metric,” PLoS One, vol. 12, no. 6, article e0177678, 2017.
- M. E. Gegundez-Arias, A. Aquino, J. M. Bravo, and D. Marin, “A function for quality evaluation of retinal vessel segmentations,” IEEE Transactions on Medical Imaging, vol. 31, no. 2, pp. 231–239, 2011.
- N. Strisciuglio, G. Azzopardi, and N. Petkov, “Robust inhibition-augmented operator for delineation of curvilinear structures,” IEEE Transactions on Image Processing, vol. 28, no. 12, pp. 5852–5866, 2019.
- N. Memari, A. R. Ramli, M. I. B. Saripan, S. Mashohor, and M. Moghbel, “Retinal blood vessel segmentation by using matched filtering and fuzzy c-means clustering with integrated level set method for diabetic retinopathy assessment,” Journal of Medical and Biological Engineering, vol. 39, no. 5, pp. 713–731, 2019.
- Z. Fan, J. Lu, C. Wei, H. Huang, X. Cai, and X. Chen, “A hierarchical image matting model for blood vessel segmentation in fundus images,” IEEE Transactions on Image Processing, vol. 28, no. 5, pp. 2367–2377, 2018.
- Z. Fan, J. Mo, and B. Qiu, “Accurate retinal vessel segmentation via octave convolution neural network,” 2019, arXiv preprint, http://arxiv.org/abs/1906.12193.
- T. A. Soomro, A. J. Afifi, J. Gao, O. Hellwich, L. Zheng, and M. Paul, “Strided fully convolutional neural network for boosting the sensitivity of retinal blood vessels segmentation,” Expert Systems with Applications, vol. 134, pp. 36–52, 2019.
- T. A. Soomro, M. A. Khan, J. Gao, T. M. Khan, and M. Paul, “Contrast normalization steps for increased sensitivity of a retinal image segmentation method,” Signal, Image and Video Processing, vol. 11, no. 8, pp. 1509–1517, 2017.
- T. A. Soomro, T. M. Khan, M. A. Khan, J. Gao, M. Paul, and L. Zheng, “Impact of ICA-based image enhancement technique on retinal blood vessels segmentation,” IEEE Access, vol. 6, pp. 3524–3538, 2018.
- M. A. Khan, T. M. Khan, D. Bailey, and T. A. Soomro, “A generalized multi-scale line-detection method to boost retinal vessel segmentation sensitivity,” Pattern Analysis and Applications, vol. 22, no. 3, pp. 1177–1196, 2019.
- J. Zhang, B. Dashtbozorg, E. Bekkers, J. P. Pluim, R. Duits, and B. M. ter Haar Romeny, “Robust retinal vessel segmentation via locally adaptive derivative frames in orientation scores,” IEEE Transactions on Medical Imaging, vol. 35, no. 12, pp. 2631–2644, 2016.
- L. C. Rodrigues and M. Marengoni, “Segmentation of optic disc and blood vessels in retinal images using wavelets, mathematical morphology and Hessian-based multi-scale filtering,” Biomedical Signal Processing and Control, vol. 36, pp. 39–49, 2017.
- M. M. Fraz, P. Remagnino, A. Hoppe et al., “Retinal vessel extraction using first-order derivative of Gaussian and morphological processing,” in Advances in Visual Computing. ISVC 2011. Lecture Notes in Computer Science, vol 6938, G. Bebis, R. Boyle, B. Parvin et al., Eds., pp. 410–420, Springer, Berlin, Heidelberg, 2011.
- M. M. Fraz, A. Basit, and S. Barman, “Application of morphological bit planes in retinal blood vessel extraction,” Journal of Digital Imaging, vol. 26, no. 2, pp. 274–286, 2013.
- Y. Yang, F. Shao, Z. Fu, and R. Fu, “Discriminative dictionary learning for retinal vessel segmentation using fusion of multiple features,” Signal, Image and Video Processing, vol. 13, pp. 1529–1537, 2019.
- Y. Yang, F. Shao, Z. Fu, and R. Fu, “Blood vessel segmentation of fundus images via cross-modality dictionary learning,” Applied Optics, vol. 57, no. 25, pp. 7287–7295, 2018.
- R. Vega, G. Sanchez-Ante, L. E. Falcon-Morales, H. Sossa, and E. Guevara, “Retinal vessel extraction using lattice neural networks with dendritic processing,” Computers in Biology and Medicine, vol. 58, pp. 20–30, 2015.
- J. I. Orlando, E. Prokofyeva, and M. B. Blaschko, “A discriminatively trained fully connected conditional random field model for blood vessel segmentation in fundus images,” IEEE Transactions on Biomedical Engineering, vol. 64, no. 1, pp. 16–27, 2017.
Copyright © 2020 Khan Bahadar Khan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.