International Journal of Biomedical Imaging
Volume 2015 (2015), Article ID 519024, 16 pages
Research Article

Automatic Extraction of Blood Vessels in the Retinal Vascular Tree Using Multiscale Medialness

1Faculty of Sciences, Electronics and Microelectronics Laboratory, Monastir University, 5019 Monastir, Tunisia
2Faculty of Computers and Information, Benha University, Benha 13511, Egypt
3Institute of Mines and Ales, Laboratory of Computer and Production Engineering, 30319 Alès, France
4Imaging Technology Center (CTIM), Las Palmas-Gran Canaria University, 35017 Las Palmas de Gran Canaria, Spain

Received 11 April 2014; Accepted 12 November 2014

Academic Editor: Karen Panetta

Copyright © 2015 Mariem Ben Abdallah et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


We propose an algorithm for vessel extraction in retinal images. The first step applies anisotropic diffusion filtering to the initial vessel network in order to restore disconnected vessel lines and eliminate noisy lines. In the second step, a multiscale line-tracking procedure detects all vessels of similar dimensions at a chosen scale. Computing the individual image maps requires several steps. First, a number of points are preselected using the eigenvalues of the Hessian matrix; these points are expected to lie near a vessel axis. Then, for each preselected point, the response map is computed from the gradient information of the image at the current scale. Finally, the multiscale image map is derived by combining the individual image maps at different scales (sizes). Two publicly available datasets have been used to test the performance of the suggested method: the main one is the STARE project's dataset and the second is the DRIVE dataset. The experimental results on the STARE dataset show a maximum average accuracy of around 94.02%; on the DRIVE database, the maximum average accuracy reaches 91.55%.

1. Introduction

For decades, retinal images have been widely used by ophthalmologists for the detection and follow-up of several pathological states [1–5]. Fundus photographs, also called retinal photographs, are captured using special charge-coupled device (CCD) cameras that image the interior surface of the eye [6–10]. These images directly provide information about the normal and abnormal features in the retina. The normal features include the optic disk, fovea, and vascular network. There are different kinds of abnormal features caused by diabetic retinopathy (DR), such as microaneurysms, hard exudates, soft exudates, hemorrhages, and neovascularization. An example of retinal images obtained by fundus photography is given in Figure 1, where two retinal images are shown: the first shows no DR sign (Figure 1(a)), while the second demonstrates advanced-DR signs indicated by color arrows (Figure 1(b)). However, the manual detection of blood vessels is very difficult since the blood vessels in these images are complex and have low contrast [11]. Also, not all the images show signs of diabetic retinopathy. Hence, manual measurement of blood vessel properties, such as length, width, tortuosity, and branching pattern, becomes tedious. As a result, it increases the time of diagnosis and decreases the efficiency of ophthalmologists. Therefore, automatic methods for extracting and measuring the vessels in retinal images are needed to reduce the workload of ophthalmologists and to assist in characterizing the detected lesions and identifying the false positives [12].

Figure 1: Retinal images [32].

Several works have been proposed for detecting the 2D complex vessel network, such as single-scale matched filters [13–15], multiscale matched filters [16], adaptive local thresholding [17], single-scale Gabor filters [18], and multiscale Gabor filters [19]. Cinsdikici and Aydin [20] put forward a blood vessel segmentation method based on a novel hybrid model of the matched filter and the ant colony algorithm, which extracts vessels well, although pathological areas can affect the result. In [21–23], the authors adopted another approach based on mathematical morphological operators. The method suggested in [21] proved to be a valuable tool for the segmentation of the vascular network in retinal images: it obtains a final image with the segmented vessels by iteratively combining the centerline image with the set of images resulting from the vessel segments' reconstruction phase using a morphological operator. The inconvenience of this method is that when a vessel centerline is missing, the corresponding segmented vessel is normally not included in the final segmentation result. In [22], the authors proved that it is possible to select vessels using shape properties and connectivity, as well as differential properties like curvature; the robustness of the algorithm was evaluated on eye fundus images and on other images. Gang et al. [24] showed that the Gaussian curve is suitable for modeling the intensity profile of the cross section of retinal vessels in color fundus images. Based on this observation, they proposed the amplitude-modified second-order Gaussian filter for retinal vessel detection, which optimized the matched filter and improved detection success. Staal et al. [25] described a method for automated segmentation of vessels in two-dimensional color images.
The system was based on extracting image ridges that coincide approximately with vessel centerlines, and the evaluation used the accuracy of hard classifications and the values of soft ones. In [26], the authors presented a hybrid method for efficient segmentation of multiple oriented blood vessels in colour retinal images; the robustness and accuracy of the method suggest that it may be useful in a wide range of retinal images, even in the presence of lesions in abnormal images. Dua et al. [27] presented a method for detecting blood vessels that employs a hierarchical decomposition based on a quadtree; the algorithm was faster than existing approaches. In recent years, alternative approaches to automated vessel segmentation have used the Hessian-based multiscale detection of curvilinear structures, which has been effective in discerning both large and small vessels [28–31].

In this paper, we propose a multiscale response to detect linear structures in 2D images, using the formulation suggested in [36, 37]. The presented detection algorithm is divided into two steps. First, we present a flux-based anisotropic diffusion method and apply it to denoise images corrupted by additive Gaussian noise. In order to extract only the pixels belonging to a vessel region, we use a Gaussian model of the vessels for interpreting the eigenvalues and eigenvectors of the Hessian matrix. Then, we compute the multiscale response from responses computed at a discrete set of scales. The method has been evaluated on the images of two publicly available databases, the DRIVE database [34] and the STARE database [33]. Prior to analysing the fundus images, we have used the green channel alone, since it gives the highest contrast between the vessels and the background.

2. Methodology

2.1. Preprocessing Technique

In the ocular fundus image, edges and local details between heterogeneous regions are the most interesting parts for clinicians. Therefore, it is very important to preserve and enhance edges and local fine structures while simultaneously reducing the noise. Several approaches using linear and nonlinear filtering have been proposed to reduce image noise. In linear spatial filtering, such as Gaussian filtering, the content of a pixel is given by a weighted average of its immediate neighbors. This filtering reduces the amplitude of noise fluctuations but also degrades sharp details such as lines or edges, so the resulting images appear blurred and diffused [24, 38]. This undesirable effect can be reduced or avoided by designing nonlinear filters, the most common technique being median filtering, in which the value of an output pixel is the median of the neighborhood pixels. Median filtering retains edges but results in a loss of resolution by suppressing fine details [39]. To perform this task, Perona and Malik (PM) [18] developed an anisotropic diffusion method, a multiscale smoothing and edge detection scheme, which became a powerful concept in image processing. The anisotropic diffusion was inspired by the heat diffusion equation, introducing a diffusion function c that depends on the norm of the gradient of the image:

∂I(x, t)/∂t = div(c(‖∇I(x, t)‖) ∇I(x, t)),

where ∇ and I denote the gradient operator and the image intensity, respectively, div is the divergence operator, and ‖·‖ denotes the magnitude. The variable x represents the spatial coordinate, while the variable t enumerates the iteration steps in the discrete implementation. Perona and Malik suggested the following diffusion functions:

c₁(s) = exp(−(s/k)²),    c₂(s) = 1 / (1 + (s/k)²),

where k is a parameter on the gradient norm. In this method of anisotropic diffusion, the gradient norm is used to detect edges or frontiers in the image as a step of intensity discontinuity.
To understand the relation between the parameter k and the discontinuity value s = ‖∇I‖, we can define the flow diffusion Φ(s) = c(s)·s. (i) If c(s) = 0, then Φ(s) = 0 and we have a pass-all filter (no smoothing). (ii) If c(s) = 1, then Φ(s) = s and we obtain an isotropic diffusion filter (like a Gaussian filter), which is a low-pass filter that attenuates high frequencies.

The one-dimensional discrete implementation of the PM equation is given by

Iᵗ⁺¹(x) = Iᵗ(x) + λ [Φ(Iᵗ(x+1) − Iᵗ(x)) − Φ(Iᵗ(x) − Iᵗ(x−1))],

where Φ(s) = c(s)·s and 0 ≤ λ ≤ 1/2 for stability.

The above result generalizes to N dimensions:

Iᵗ⁺¹(x) = Iᵗ(x) + λ Σ_d Φ(∇I_dᵗ(x)),

where the sum runs over the 2N one-sided differences ∇I_dᵗ(x) toward the nearest neighbors of x, and 0 ≤ λ ≤ 1/(2N) for stability.
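As an illustration, the discrete PM scheme above can be sketched in a few lines of NumPy. This is a hedged sketch, not the authors' MATLAB implementation; the function and parameter names (`perona_malik`, `n_iter`, `k`, `lam`) are our own, and the wrap-around boundary handling of `np.roll` is an illustrative simplification.

```python
import numpy as np

def perona_malik(img, n_iter=20, k=15.0, lam=0.2):
    """Perona-Malik anisotropic diffusion (sketch).

    Uses the exponential diffusivity c(s) = exp(-(s/k)^2) and the
    4-neighbour discrete update with step lam (lam <= 1/4 in 2D).
    """
    I = img.astype(float).copy()
    for _ in range(n_iter):
        # one-sided differences toward the four neighbours
        dN = np.roll(I, -1, axis=0) - I
        dS = np.roll(I, 1, axis=0) - I
        dE = np.roll(I, -1, axis=1) - I
        dW = np.roll(I, 1, axis=1) - I
        # flux in each direction: phi(s) = c(s) * s
        c = lambda d: np.exp(-(d / k) ** 2)
        I += lam * (c(dN) * dN + c(dS) * dS + c(dE) * dE + c(dW) * dW)
    return I
```

With k well above the noise amplitude but well below the edge step, the scheme smooths homogeneous regions while the near-zero diffusivity across strong edges leaves them intact.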

Up to now, the anisotropic diffusion has been defined as the case where the diffusivity is a scalar function varying with the location in the image. As described earlier, the PM diffusion (Figure 2) limits the smoothing of an image near the pixels with a high gradient magnitude (edge pixels). As the diffusion near an edge is very weak, the noise smoothing near the edge is also small. To address this, diffusions using matrices instead of scalars have been put forward [36, 40, 41], where the anisotropic diffusion allows the diffusion to be different along various directions defined by the local geometry of the structures in the image (Figure 3). Thus, the diffusion on both sides of an edge can be prevented while allowing the diffusion along the edge. This prevents the edge from being smoothed and then being removed during denoising.

Figure 2: PM anisotropic diffusion.
Figure 3: Directional anisotropic diffusion.

The flux of the matrix diffusion (MD) form can be written as

∂I/∂t = div(D ∇I),

where D is a positive definite symmetric matrix that may be adapted to the local image structure and can be written in terms of its eigenvectors v₁ and v₂ and eigenvalues λ₁ and λ₂ as follows:

D = λ₁ v₁ v₁ᵀ + λ₂ v₂ v₂ᵀ.

Subsequently, the gradient vector field can be written as

∇I = (∇I · v₁) v₁ + (∇I · v₂) v₂.

Depending on the eigenvalues and eigenvectors chosen, different matrix diffusions can be obtained [36, 41]. The diffusion matrix proposed by Weickert et al. [41] has the same eigenvectors as the structure tensor, with eigenvalues that are a function of the norm of the gradient [41, 42]. In our work, we have used a 2D basis (v₁, v₂) corresponding, respectively, to unit vectors in the direction of the gradient and in the direction of minimal curvature of the regularized (smoothed) version of the image, that is, the image convolved with a Gaussian filter of standard deviation σ. This basis is of particular interest in the context of small, elongated structures such as blood vessels, where the minimal-curvature direction holds for the axis direction, orthogonal to the gradient. These directions are obtained as two of the eigenvectors of the Hessian matrix of the smoothed image I_σ (further details are described in Section 2.3). Therefore, the eigenvectors are defined as follows:

v₁ = ∇I_σ / ‖∇I_σ‖,    v₂ ⊥ v₁,

where ∇I_σ is the gradient of the image convolved with a Gaussian filter of standard deviation σ; v₂ gives an estimation of the vessel direction and v₁ is its orthogonal. Also, as the eigenvalues in the decomposition of D, we have used a diffusion function associated with each vector of the basis, depending on the first-order derivative of the intensity in that direction, instead of the traditional norm of the smoothed gradient. Furthermore, the diffusion can be decomposed as a sum of diffusions in each direction of the orthogonal basis, and the divergence term can be written as [36]

div(D ∇I) = Σᵢ ∂/∂vᵢ (cᵢ(uᵢ) uᵢ),

where uᵢ and cᵢ indicate the first-order derivative of the intensity in the direction vᵢ and the i-th diffusion function, respectively.
Also, cᵢ can be chosen to be any of the diffusivity functions from the traditional nonhomogeneous isotropic diffusion equation, depending on the first-order derivative of the intensity in the corresponding direction, for instance a PM-type function in the gradient direction and a constant in the vessel direction, the latter being only a diffusing function that allows smoothing along that direction. For further details, the reader may refer to [36, 43].

As in [36], we use a data-attachment term with a coefficient β, which allows better control of the extent to which the restored image differs from the original image I₀ (at t = 0) and of the result of the diffusion process at convergence. The anisotropic diffusion equation becomes

∂I/∂t = Σᵢ ∂/∂vᵢ (cᵢ(uᵢ) uᵢ) + β (I₀ − I).
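The effect of the data-attachment term can be sketched with a scalar-diffusivity iteration (our illustrative simplification, not the authors' exact directional scheme; `beta`, `lam`, and the 4-neighbour stencil are assumed parameters):

```python
import numpy as np

def diffuse_attach(img, n_iter=50, k=10.0, lam=0.15, beta=0.1):
    """Diffusion plus a data-attachment term beta*(I0 - I).

    The attachment pulls the evolving image back toward the original I0,
    so the steady state is a compromise between smoothing and fidelity.
    """
    I0 = img.astype(float)
    I = I0.copy()
    for _ in range(n_iter):
        dN = np.roll(I, -1, axis=0) - I
        dS = np.roll(I, 1, axis=0) - I
        dE = np.roll(I, -1, axis=1) - I
        dW = np.roll(I, 1, axis=1) - I
        c = lambda d: 1.0 / (1.0 + (d / k) ** 2)
        I += lam * (c(dN) * dN + c(dS) * dS + c(dE) * dE + c(dW) * dW) \
             + beta * (I0 - I)
    return I
```

A larger beta keeps the converged result closer to the data; beta = 0 recovers pure diffusion.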

In order to evaluate the denoising effects of the directional anisotropic diffusion (DAD), we have added Gaussian white noise to each of the images in Figure 4. Once the diffusion method is applied to these noisy images, its effectiveness in reducing the noise is assessed by calculating the peak signal-to-noise ratio (PSNR) relative to the original image:

PSNR = 10 log₁₀(d² / MSE),

where d is the maximal possible intensity value (255 for 8-bit images) and MSE is the mean-squared error,

MSE = (1/MN) Σₓ,ᵧ (I₀(x, y) − I_d(x, y))²,

where I₀ refers to the original M × N image without noise and I_d is the image after the denoising process.
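The PSNR/MSE computation used for Tables 1–3 can be written directly (a short sketch; `peak=255` assumes 8-bit images):

```python
import numpy as np

def mse(original, denoised):
    """Mean-squared error between the clean image and the denoised one."""
    a = np.asarray(original, float)
    b = np.asarray(denoised, float)
    return float(np.mean((a - b) ** 2))

def psnr(original, denoised, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means better denoising."""
    return 10.0 * np.log10(peak ** 2 / mse(original, denoised))
```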

Figure 4: Original images (a) and the corresponding images with additive Gaussian noise (b); denoised images: best result with GF (c), best result with MF (d), best result with PM filter (e), and best result with directional anisotropic diffusion filter (f).

The higher the PSNR, the better the denoising effect. Note that this measure does not necessarily imply that an image with a higher PSNR is also more visually gratifying. However, based on our experiments using the three test images with additive white Gaussian noise, we can draw some observations. First, all the techniques we have tried have several parameters that must be selected carefully to obtain the best results. Since we have a "clean" original image as well as a noisy one, we can use the increase in the PSNR value to guide our choice of parameters. These parameters and the obtained results are given in Tables 1, 2, and 3, where we can observe that, for the images corrupted with additive Gaussian noise, the DAD method performs better than the three aforementioned methods, gaining a higher PSNR and a smaller MSE on all three test images.

Table 1: Parameters and results of different filters for vessel image.
Table 2: Parameters and results of different filters for phantom image.
Table 3: Parameters and results of different filters for Lena image.

Figure 4 presents some of the best results for the different methods (GF, MF, PM, and DAD) on the three test images (vessels, phantom, and Lena). The results recorded after applying the DAD method show that it improves the visual rendering of the image much more than the other methods. As shown in the images of the first row, a DAD filter can effectively improve the quality of a noisy image; it also enhances edges and preserves more details than the other filters. Indeed, the Gaussian filter smooths planar areas very strongly, which causes a loss of information about the fine structures of the image and blurs it. The median filter, compared to the Gaussian filter, preserves edges but loses details. Comparing the results of the DAD method to those obtained by the PM diffusion in Figures 5 and 6, we can derive several observations. The denoising of the PM diffusion model is sensitive to the value of the conductance parameter k; smoothing is performed along ridges but not across a ridge line, which enhances the desired ridges as well as the noise. For the DAD filter, in contrast, the diffusivity is a tensor-valued function varying with the location and orientation of edges in the image, so when this filter is applied to a ridge line, smoothing is performed both along and across the ridge while preserving the details.

Figure 5: PM anisotropic diffusion.
Figure 6: Directional anisotropic diffusion.
2.2. Multiscale Medialness

The general approach of multiscale methods is to choose a range of scales between s_min and s_max (corresponding to the radii r_min and r_max), discretized using a logarithmic scale in order to have more accuracy at low scales, and to compute a response for each scale from the initial image [36, 43, 47]. The user specifies the minimal and maximal radii of the vessels to extract. The computation of the single-scale response requires several steps. First, a number of points are preselected using the eigenvalues of the Hessian matrix; these points are expected to be near a vessel axis. Then, for each preselected point P, the response is computed at the current scale s. The response function uses the eigenvectors of the Hessian matrix of the image to define at each point P an orientation d orthogonal to the axis of a potential vessel that goes through P. From this direction, two points, noted P_s⁺ and P_s⁻, are located at an equal distance s from P along d (Figure 7). The response at P is taken as the maximum absolute value, among these two points, of the first derivative of the intensity in the direction d:

R_s(P) = max(|s ∇I_s(P_s⁺) · d|, |s ∇I_s(P_s⁻) · d|),

where d is the unitary vector of the direction and ∇I_s is the gradient of the image at the scale s, obtained by convolution with the first derivative of a Gaussian function of standard deviation s. Multiplying the derivatives by s ensures the scale-invariance property and allows comparing the responses obtained at different scales. The gradient vector can be computed by bilinear interpolation for better accuracy, which is especially needed when looking at small vessels [37, 39].

Figure 7: Representation of the vesselness measure calculation (from the point P on the central line, d is the unit vector perpendicular to the main direction of the vessel and s is the current scale).

A vessel of radius r is detected at a scale s ≈ r, so we use the scales corresponding to each radius for the multiscale processing. For a fixed scale s, we calculate a response image R_s(I), where I is the initial image. Then we calculate the multiscale response for the image as the maximum of the responses over scales, for each point P and a range of scales [s_min, s_max]:

R(P) = max over s ∈ [s_min, s_max] of R_s(P).

This response can be interpreted as an indicator that the point P belongs to the center line of a vessel, and R_s(P) can be interpreted as an indicator that P belongs to the center line of a vessel of radius s. Finally, this response is normalized to give a multiscale response that combines interesting features of each single-scale response.
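The single-scale response and its maximum over scales can be sketched with SciPy. This is our reading of the formulas above, not the authors' code: the eigenvector of the largest-magnitude Hessian eigenvalue plays the role of the cross-section direction d, and first-order interpolation stands in for the bilinear sampling.

```python
import numpy as np
from scipy import ndimage

def medialness(img, scales):
    """For each scale s, project the scale-normalized gradient sampled at
    P +/- s*d onto the cross-section direction d, keep the larger magnitude,
    then take the pixelwise maximum over scales."""
    img = img.astype(float)
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    best = np.zeros((h, w))
    for s in scales:
        # scale-normalized first derivatives (the factor s gives scale invariance)
        gy = s * ndimage.gaussian_filter(img, s, order=(1, 0))
        gx = s * ndimage.gaussian_filter(img, s, order=(0, 1))
        # Hessian entries of the smoothed image
        hyy = ndimage.gaussian_filter(img, s, order=(2, 0))
        hxx = ndimage.gaussian_filter(img, s, order=(0, 2))
        hxy = ndimage.gaussian_filter(img, s, order=(1, 1))
        # eigenvector (dx, dy) of the largest-|eigenvalue| of [[hxx,hxy],[hxy,hyy]]
        mean = (hxx + hyy) / 2.0
        root = np.sqrt(((hxx - hyy) / 2.0) ** 2 + hxy ** 2)
        lam = np.where(mean >= 0, mean + root, mean - root)
        dx, dy = hxy.copy(), lam - hxx
        flat = np.abs(hxy) < 1e-12          # hxy ~ 0: eigenvectors are the axes
        dx[flat] = (hxx >= hyy)[flat].astype(float)
        dy[flat] = (hxx < hyy)[flat].astype(float)
        n = np.hypot(dx, dy) + 1e-12
        dx, dy = dx / n, dy / n
        resp = np.zeros((h, w))
        for sign in (1.0, -1.0):
            py, px = yy + sign * s * dy, xx + sign * s * dx
            gyp = ndimage.map_coordinates(gy, [py, px], order=1, mode='nearest')
            gxp = ndimage.map_coordinates(gx, [py, px], order=1, mode='nearest')
            resp = np.maximum(resp, np.abs(gxp * dx + gyp * dy))
        best = np.maximum(best, resp)
    return best
```

On a synthetic dark line, the response peaks on the centerline, where the sampled points fall on the two high-gradient edges.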

One difficulty with the multiscale approach is that we want to compare the results of the response function at different scales, whereas the intensity and its derivatives are decreasing functions of scale. So far, all considerations have been made at a single scale defined by the scale parameter s. In their work on scale-space theory, Lindeberg and Fagerström [48] showed the need for a multiscale analysis that takes the varying size of objects into account, as well as the necessity of normalizing the spatial derivatives between different scales. Thus, the normalized vesselness response is obtained as the product of a normalization term s^γ and the single-scale vesselness:

R_norm(P) = s^γ · R_s(P).

The parameter γ can be used to indicate a preference for a particular scale (Figure 8); if it is set to one, no scale is preferred. The multiscale response is then obtained by selecting the maximum normalized response over a set of scales between s_min and s_max.

Figure 8: Influence of the normalization parameter γ on the multiscale response: (a) neutral; (b) favoring large scales; (c) favoring small scales.
2.3. Extraction of Local Orientations

The proposed model assumes that the intensity profile of a vessel cross section is Gaussian (Figure 9), a common assumption employed in numerous algorithms [28, 35, 49]. It is also commonly assumed that the intensity does not change much along vessels [49–51]. The Hessian matrix can be used to describe the local shape characteristics and orientation of elongated structures [35, 52]. When the gradient is weak, the eigenvalues of this matrix express the local variation of the intensity in the directions of the associated eigenvectors. In the following, we assume that we want to characterize dark vessels (low intensity) on a bright background (high intensity).

Figure 9: Example of a cross-sectional profile of a blood vessel from a gray-scale 2D image (the gray intensities are plotted in a 3D view: the x and y axes give the position of the pixel in the 2D plane of the image, whereas the z-axis is the gray value or intensity of the pixel).

Let us denote by λ₁ and λ₂ the eigenvalues of the Hessian matrix, with |λ₁| ≤ |λ₂|, and by v₁ and v₂ their associated eigenvectors (Figure 10). For a linear model with a Gaussian cross section, the vessel direction is defined by the eigenvector with the smallest eigenvalue magnitude at the center of the vessel, but it is less well determined at the contours because there both eigenvalues of the Hessian matrix are zero.

Figure 10: Eigenvalue analysis. (a) vessel cross section; (b) intensity distribution vessel cross section; (c) corresponding eigenvalues.

To summarize, for an ideal linear structure (a dark vessel on a bright background) in a 2D image, λ₁ ≈ 0 and λ₂ ≫ |λ₁|, with λ₂ > 0 across the vessel.
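This eigenvalue test translates directly into a preselection mask (a sketch; `sigma` and the thresholds are illustrative choices, and the closed-form eigenvalues are those of the symmetric 2x2 Hessian):

```python
import numpy as np
from scipy import ndimage

def preselect_vessel_pixels(img, sigma=2.0, thresh=1.0):
    """Keep pixels where lambda2 is strongly positive (dark line on bright
    background) while |lambda1| stays small, i.e. an elongated structure."""
    img = img.astype(float)
    hyy = ndimage.gaussian_filter(img, sigma, order=(2, 0))
    hxx = ndimage.gaussian_filter(img, sigma, order=(0, 2))
    hxy = ndimage.gaussian_filter(img, sigma, order=(1, 1))
    mean = (hxx + hyy) / 2.0
    root = np.sqrt(((hxx - hyy) / 2.0) ** 2 + hxy ** 2)
    lam1, lam2 = mean - root, mean + root        # lam1 <= lam2
    return (lam2 > thresh) & (np.abs(lam1) < 0.5 * lam2)
```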

In retinal images, some large vessels may have a white line in their center and appear as elongated and disjoint spots (Figures 11(a), 11(b), and 11(c)); accordingly, such vessels deviate from the Gaussian profile assumption. These lines are usually lost after the preselection of vessel pixels using the Hessian eigenvalue analysis and are classified as background pixels. Therefore, the response based on the gradient magnitude is of particular importance in improving vessel detection (Figure 11). The experimental results are demonstrated in Figure 11, which shows hand-labeled "truth" images and segmented images obtained, respectively, by the Hessian eigenvalue analysis and by the gradient magnitude. From these results we can deduce that responses based on the gradient magnitude can reliably detect white lines as vessel pixels and remove some noise spots.

Figure 11: Retinal blood vessel detection. (a, b, and c) original images [33]; (d–g, e–h, and f–i) subimage of hand labeled image, vessel detection based Hessian eigenvalue analysis, and improved vessel detection with gradient magnitude.

3. Results

In this section, the proposed method has been evaluated on two publicly available retinal image databases, the STARE database [33] and the DRIVE database [25]. The STARE dataset contains twenty colour fundus retinal images, ten of which are from healthy ocular fundi and the other ten from unhealthy ones. These images were captured by a Topcon TRV-50 fundus camera at a 35° field of view (FOV) and digitized with a 24-bit gray-scale resolution. The dataset provides two sets of standard hand-labeled segmentations, manually segmented by two eye specialists. For this dataset, we create a binary mask of the gray channel of the image using a simple threshold technique (Figure 12), and we adopt the first eye specialist's hand-labeled segmentation as the ground truth to evaluate our vessel detection technique. The DRIVE dataset consists of 40 fundus ocular images, which have been divided into a training set and a test set by the authors of the database. These images were captured by a Canon CR5 camera at a 45° FOV and digitized at 24 bits. The dataset also provides two sets of standard hand-labeled segmentations by two human experts as ground truth.

Figure 12: Binary mask of STARE project retinal image [33].

The first expert's hand-labeled segmentation has been adopted as the ground truth to evaluate segmentation techniques on both the STARE and DRIVE datasets. It is common practice to evaluate the performance of retinal vessel segmentation algorithms using receiver operating characteristic (ROC) curves [25, 35]. An ROC curve plots the fraction of pixels correctly classified as vessels, namely the true positive rate (TPR), versus the fraction of pixels wrongly classified as vessels, namely the false positive rate (FPR), obtained by varying the rounding threshold from 0 to 1 (Figure 13). The closer the curve approaches the top left corner, the better the performance of the system. In order to facilitate the comparison with other retinal vessel detection algorithms, we have selected the value of the area under the curve (AUC), which is 1 for an ideal system.
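The TPR/FPR sweep and the trapezoidal AUC can be computed as follows (a sketch of the standard construction, not tied to the authors' evaluation scripts):

```python
import numpy as np

def roc_curve(score, truth, thresholds):
    """One (FPR, TPR) point per threshold: a pixel is called a vessel
    when its soft response is at least the threshold."""
    truth = np.asarray(truth, bool)
    pos, neg = truth.sum(), (~truth).sum()
    pts = []
    for t in thresholds:
        pred = np.asarray(score, float) >= t
        tpr = (pred & truth).sum() / max(pos, 1)
        fpr = (pred & ~truth).sum() / max(neg, 1)
        pts.append((fpr, tpr))
    return sorted(pts)

def auc(pts):
    """Trapezoidal area under the ROC curve; 1.0 for an ideal system."""
    return sum((x2 - x1) * (y1 + y2) / 2.0
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
```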

Figure 13: ROC curve of retinal image (06_test.tif) downloaded from DRIVE dataset [34]; (a) original image; (b) segmented image; (c) Roc curve.

To measure the performance of the proposed enhancement filter, we ran our multiscale analysis filter with the following set of parameters: (i) the minimal and maximal radii of the vessels to extract, discretized over a set of scales; (ii) the normalization parameter γ, set to one so that no scale is preferred; (iii) a constant threshold on the norm of the gradient of the image; (iv) the number of iterations of the anisotropic diffusion filter.

The computing times of our algorithm, including the anisotropic diffusion filtering, are similar for images of the STARE and DRIVE databases. The implementation of the filter has been done in MATLAB, on a personal computer with an Intel Core Duo processor and 4 GB of memory. In the first experiment, we apply a preprocessing step, the anisotropic diffusion filtering described above, in order to remove or at least reduce noise. The DAD filter denoises the original image while preserving edges and details. To show that the segmentation works better with anisotropic diffusion, Figure 14 presents a segmentation result before and after the application of the anisotropic diffusion scheme. This figure shows the improvements provided by the DAD model, which tends to remove noise effects and, unfortunately, some smaller objects; it efficiently preserves the vessels while making the background more homogeneous.

Figure 14: Effect of anisotropic diffusion. (a) Green channel of the original image downloaded from the STARE project dataset [33]. (b) Subimage of the original image, rescaled for better visualization. (c) Segmentation without anisotropic diffusion. (d) Segmentation with anisotropic diffusion.

On the other hand, for computing the response, it is possible to retain the mean of the two calculated values (the gradients at the two points located at an equal distance from the current point), as in the 3D case proposed by [36], or the minimal calculated value, as in the 2D case [37]. We prefer retaining the maximum of these two values. Figure 15 shows a synthetic image of 100 × 100 pixels with an 8-bit resolution. We have chosen this image because it contains an object close to the vessel form. The figure shows the segmentation results obtained with the maximum, average, and minimum response functions. We note that with the minimum or average responses the ring is not completely detected as in the original image, since pixels belonging to the edges are missing, in contrast to the maximum case, where the extraction of the ring is complete. Table 4 presents the results calculated with our method for the test set of the STARE database, using the green channel images. As given in the table, the maximum model performs much better than the average or minimum model.

Table 4: STARE project database [33].
Figure 15: Original synthetic image, maximum response, average response, and minimum response (left to right-top to bottom).

Figure 16 presents the response image obtained for a real retinal image, where four scales have been used for vessel radii of up to 7 pixels. This figure shows that small and large vessels can be better distinguished in the maximum case than in the minimum or average ones.

Figure 16: Real angiography image downloaded from DRIVE dataset [34], average response, maximum response, and minimum response (left to right-top to bottom).

Although the contrast is not very high in the original figure (Figure 14(a)), the method detects most vessels over a large size range. For example, in Figure 17, an image of the retinal vascular tree is presented, where the different responses recorded at increasing scales are shown. The last image shows quite good performance of the vessel extraction. Figure 18 further suggests that it is possible to design a system that approaches the performance of human observers.

Figure 17: Different responses for different scales of Figure 14(a) (top to bottom); the first four images show the vesselness obtained at increasing scales. The last image is the result after the scale selection procedure (normalized image).
Figure 18: An image of a retina [35], the segmented image, and the hand labeled “truth” images (im0077.vk and im0077.ah) (left to right-top to bottom) [33].

In order to evaluate the suggested method, the experimental results for the 20-image set of the STARE database are shown in Table 5. In Table 6, our method is compared to the most recent methods in terms of TPR, FPR, and maximum average accuracy, where the maximal accuracy indicates the degree to which the extracted binary image matches the vessel image. The accuracy is estimated as the ratio of the sum of the numbers of correctly classified foreground and background pixels to the total number of pixels in the image. In this table, the performance measures of Staal et al. [25], Zhang et al. [14], Mendonça and Campilho [21], Chaudhuri et al. [13], Martinez-Perez et al. [45], and Hoover et al. [35] are as reported in their original papers. In addition, these performance results are average values for the whole set of images, except for the method of Staal [25], which used 19 of the 20 STARE images, among which ten were healthy and nine were unhealthy. Table 5 presents our results on all 20 images of the STARE database, estimated using the hand-labeled image set of the first human expert designated as the ground truth. The estimated experimental results correspond to a maximum average accuracy of around 94.02%. The results show that our method has a competitive maximum average accuracy value: it performs better than the matched filter [13] and remains close to the others.
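The maximal-accuracy measure described above is just the fraction of pixels on which the binary segmentation and the ground truth agree (a minimal sketch):

```python
import numpy as np

def accuracy(pred, truth):
    """(TP + TN) / total pixels for a binary segmentation vs. ground truth."""
    pred = np.asarray(pred, bool)
    truth = np.asarray(truth, bool)
    return float(np.mean(pred == truth))
```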

Table 5: ROC curve analysis of STARE project database [33].
Table 6: Comparison of vessel segmentation results on STARE project database [33].

The results of the proposed method are also compared with those on twenty images from the DRIVE database, as depicted in Table 7. The hand-labeled images by the first human expert have been used as the ground truth. The experimental results show a maximum average accuracy of around 91.55%. Also, we have compared the performance of the suggested technique with the sensitivities and specificities of the methods cited in Table 7, where the corresponding values for the DRIVE database are reported. We have shown that the proposed method performs well, with a lower specificity, even in the presence of lesions in the abnormal images.

Table 7: Comparison of vessel segmentation results on DRIVE database [34].

4. Conclusion

The purpose of this work is to detect linear structures in real retinal images in order to help the interpretation of the vascular network. We propose combining an anisotropic diffusion filter, which reduces image noise, with a multiscale response based on the eigenvectors of the Hessian matrix and on gradient information to extract vessels from retinal images. The main advantage of this technique is its ability to extract both large and fine vessels at various image resolutions. Furthermore, directional anisotropic diffusion plays a vital role in denoising the images and in reducing the difficulty of vessel extraction, especially for thin vessels. Our first results show the robustness of the method against noise as well as its applicability to detecting blood vessels. The maximum average accuracy is used as a performance measure, and the values achieved with our algorithm are competitive with those of existing methods. From the experimental results, it can be seen that the number of correctly classified pixels is slightly lower than with other methods on the same databases, mainly because weak blood vessels lead to missing vessels and lesions produce detection errors. In addition, retinal images suffer from nonuniform illumination and poor contrast. Thus, to avoid wrongly classified or missed pixels caused by occasional false measurements, this system could be improved in the future by adding, for instance, postprocessing steps to reach a more accurate measurement of blood vessels.
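The Hessian-based preselection step summarized above exploits the fact that, near a vessel axis, the Hessian has one eigenvalue of large magnitude (across the vessel) and one near zero (along it). The following is a minimal single-scale sketch of this idea, assuming dark vessels on a bright background; the function name and the thresholds `ratio` and `thresh` are illustrative, not the paper's exact criterion:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_candidates(image, sigma=2.0, ratio=2.0, thresh=0.1):
    """Preselect points likely to lie near a vessel axis at scale sigma."""
    # Scale-space second derivatives, with the usual sigma**2
    # normalization of multiscale analysis.
    ixx = sigma**2 * gaussian_filter(image, sigma, order=(0, 2))
    iyy = sigma**2 * gaussian_filter(image, sigma, order=(2, 0))
    ixy = sigma**2 * gaussian_filter(image, sigma, order=(1, 1))
    # Eigenvalues of [[ixx, ixy], [ixy, iyy]] in closed form.
    half_diff = np.sqrt(((ixx - iyy) / 2.0) ** 2 + ixy**2)
    mean = (ixx + iyy) / 2.0
    lam1 = mean + half_diff          # larger eigenvalue (cross-section)
    lam2 = mean - half_diff          # smaller eigenvalue (along vessel)
    # A dark tube on a bright background gives a large positive
    # cross-sectional eigenvalue dominating the axial one.
    return (lam1 > thresh) & (np.abs(lam1) > ratio * np.abs(lam2))
```

Repeating this at several values of `sigma` and combining the per-scale responses gives the multiscale map described in the paper.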

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

  1. R. Williams, M. Airey, H. Baxter, J. Forrester, T. Kennedy-Martin, and A. Girach, “Epidemiology of diabetic retinopathy and macular oedema: a systematic review,” Eye, vol. 18, no. 10, pp. 963–983, 2004.
  2. R. Gupta and P. Kumar, “Global diabetes landscape-type 2 diabetes mellitus in South Asia: epidemiology, risk factors, and control,” Insulin, vol. 3, no. 2, pp. 78–94, 2008.
  3. J. Malek, M. Ben Abdallah, A. Mansour, and R. Tourki, “Automated optic disc detection in retinal images by applying region-based active contour model in a variational level set formulation,” in Proceedings of the International Conference on Computer Vision in Remote Sensing (CVRS '12), pp. 39–44, Xiamen, China, December 2012.
  4. J. Malek and R. Tourki, “Blood vessels extraction and classification into arteries and veins in retinal images,” in Proceedings of the 10th International Multi-Conference on Systems, Signals & Devices (SSD '13), pp. 1–6, Hammamet, Tunisia, March 2013.
  5. J. Malek and R. Tourki, “Inertia-based vessel centerline extraction in retinal image,” in Proceedings of the International Conference on Control, Decision and Information Technologies (CoDIT '13), pp. 378–381, Hammamet, Tunisia, May 2013.
  6. M. Al-Rawi, M. Qutaishat, and M. Arrar, “An improved matched filter for blood vessel detection of digital retinal images,” Computers in Biology and Medicine, vol. 37, no. 2, pp. 262–267, 2007.
  7. M. E. Tyler, L. D. Hubbard, K. Boydston, and A. J. Pugliese, “Characteristics of digital fundus camera systems affecting tonal resolution in color retinal images,” The Journal of Ophthalmic Photography, vol. 31, no. 1, pp. 1–9, 2009.
  8. T. W. Hansen, J. Jeppesen, S. Rasmussen, H. Ibsen, and C. Torp-Pedersen, “Ambulatory blood pressure and mortality: a population-based study,” Hypertension, vol. 45, no. 4, pp. 499–504, 2005.
  9. T. Teng, M. Lefley, and D. Claremont, “Progress towards automated diabetic ocular screening: a review of image analysis and intelligent systems for diabetic retinopathy,” Medical and Biological Engineering and Computing, vol. 40, no. 1, pp. 2–13, 2002.
  10. M. Ben Abdallah, J. Malek, R. Tourki, J. E. Monreal, and K. Krissian, “Automatic estimation of the noise model in fundus images,” in Proceedings of the 10th International Multi-Conference on Systems, Signals & Devices (SSD '13), pp. 1–5, Hammamet, Tunisia, March 2013.
  11. N. Patton, T. M. Aslam, T. MacGillivray et al., “Retinal image analysis: concepts, applications and potential,” Progress in Retinal and Eye Research, vol. 25, pp. 99–127, 2006.
  12. T. Walter, J.-C. Klein, P. Massin, and A. Erginay, “A contribution of image processing to the diagnosis of diabetic retinopathy-detection of exudates in color fundus images of the human retina,” IEEE Transactions on Medical Imaging, vol. 21, no. 10, pp. 1236–1243, 2002.
  13. S. Chaudhuri, S. Chatterjee, N. Katz, M. Nelson, and M. Goldbaum, “Detection of blood vessels in retinal images using two-dimensional matched filters,” IEEE Transactions on Medical Imaging, vol. 8, no. 3, pp. 263–269, 1989.
  14. B. Zhang, L. Zhang, L. Zhang, and F. Karray, “Retinal vessel extraction by matched filter with first-order derivative of Gaussian,” Computers in Biology and Medicine, vol. 40, pp. 438–445, 2010.
  15. J. Malek, A. T. Azar, and R. Tourki, “Impact of retinal vascular tortuosity on retinal circulation,” Neural Computing and Applications, 2014.
  16. M. Sofka and C. V. Stewart, “Retinal vessel extraction using multiscale matched filters confidence and edge measures,” Tech. Rep., Department of Computer Science, Rensselaer Polytechnic Institute, 2005.
  17. Y. Sato, S. Nakajima, N. Shiraga et al., “Three-dimensional multi-scale line filter for segmentation and visualization of curvilinear structures in medical images,” Medical Image Analysis, vol. 2, no. 2, pp. 143–168, 1998.
  18. R. M. Rangayyan, F. Oloumi, F. Oloumi, P. Eshghzadeh-Zanjani, and F. J. Ayres, “Detection of blood vessels in the retina using Gabor filters,” in Proceedings of the 20th Canadian Conference on Electrical and Computer Engineering (CCECE '07), pp. 717–720, Vancouver, Canada, April 2007.
  19. T. Pock, C. Janko, R. Beichel, and H. Bischof, “Multiscale medialness for robust segmentation of 3D tubular structures,” in Proceedings of the 10th Computer Vision Winter Workshop, February 2005.
  20. M. G. Cinsdikici and D. Aydin, “Detection of blood vessels in ophthalmoscope images using MF/ant (matched filter/ant colony) algorithm,” Computer Methods and Programs in Biomedicine, vol. 96, no. 2, pp. 85–95, 2009.
  21. A. M. Mendonça and A. Campilho, “Segmentation of retinal blood vessels by combining the detection of centerlines and morphological reconstruction,” IEEE Transactions on Medical Imaging, vol. 25, no. 9, pp. 1200–1213, 2006.
  22. F. Zana and J.-C. Klein, “Segmentation of vessel-like patterns using mathematical morphology and curvature evaluation,” IEEE Transactions on Image Processing, vol. 10, no. 7, pp. 1010–1019, 2001.
  23. M. M. Fraz, M. Y. Javed, and A. Basit, “Evaluation of retinal vessel segmentation methodologies based on combination of vessel centerlines and morphological processing,” in Proceedings of the 4th IEEE International Conference on Emerging Technologies (ICET '08), pp. 232–236, Rawalpindi, Pakistan, October 2008.
  24. L. Gang, O. Chutatape, and S. M. Krishnan, “Detection and measurement of retinal vessels in fundus images using amplitude modified second-order Gaussian filter,” IEEE Transactions on Biomedical Engineering, vol. 49, no. 2, pp. 168–172, 2002.
  25. J. Staal, M. D. Abràmoff, M. Niemeijer, M. A. Viergever, and B. van Ginneken, “Ridge-based vessel segmentation in color images of the retina,” IEEE Transactions on Medical Imaging, vol. 23, no. 4, pp. 501–509, 2004.
  26. P. C. Siddalingaswamy, “Automatic detection of multiple oriented blood vessels in retinal images,” Journal of Biomedical Science and Engineering, vol. 3, pp. 101–107, 2010.
  27. S. Dua, N. Kandiraju, and H. W. Thompson, “Design and implementation of a unique blood-vessel detection algorithm towards early diagnosis of diabetic retinopathy,” in Proceedings of the IEEE International Conference on Information Technology: Coding and Computing, vol. 1, pp. 26–31, April 2005.
  28. A. Frangi, Three-dimensional model-based analysis of vascular and cardiac images [Ph.D. thesis], Utrecht University, Utrecht, The Netherlands, 2001.
  29. C. Lorenz, I.-C. Carlsen, T. M. Buzug, C. Fassnacht, and J. Weese, “Multi-scale line segmentation with automatic estimation of width, contrast and tangential direction in 2D and 3D medical images,” in Proceedings of the 1st Joint Conference on Computer Vision, Virtual Reality and Robotics in Medicine and Medical Robotics and Computer-Assisted Surgery (CVRMed-MRCAS '97), pp. 233–242, Springer, London, UK, 1997.
  30. J. L. Federman, P. Gouras, H. Schubert et al., “Systemic diseases,” in Retina and Vitreous: Textbook of Ophthalmology, S. M. Podos and M. Yano, Eds., vol. 9, pp. 7–24, Mosby, St. Louis, Mo, USA, 1994.
  31. W. Huang, Automatic Detection and Quantification of Blood Vessels in the Vicinity of the Optic Disc in Digital Retinal Images, University of Waikato, 2006.
  33. A. Hoover, “STARE database.”
  34. M. Niemeijer and B. van Ginneken, 2002.
  35. A. Hoover, V. Kouznetsova, and M. Goldbaum, “Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response,” IEEE Transactions on Medical Imaging, vol. 19, no. 3, pp. 203–210, 2000.
  36. K. Krissian, “Flux-based anisotropic diffusion applied to enhancement of 3-D angiogram,” IEEE Transactions on Medical Imaging, vol. 21, no. 11, pp. 1440–1442, 2002.
  37. C. Blondel, Modélisation 3D et 3D+t des artères coronaires à partir de séquences rotationnelles de projections rayons X [Ph.D. thesis], University of Nice Sophia Antipolis, Nice, France, 2004.
  38. M. Ben Abdallah, J. Malek, R. Tourki, and K. Krissian, “Restoration of retinal images using anisotropic diffusion like algorithms,” in Proceedings of the International Conference on Computer Vision in Remote Sensing (CVRS '12), pp. 116–121, Xiamen, China, December 2012.
  39. K. Krissian, G. Malandain, N. Ayache, R. Vaillant, and Y. Trousset, “Model-based detection of tubular structures in 3D images,” Computer Vision and Image Understanding, vol. 80, no. 2, pp. 130–171, 2000.
  40. G.-H. Cottet and L. Germain, “Image processing through reaction combined with nonlinear diffusion,” Mathematics of Computation, vol. 61, no. 204, pp. 659–673, 1993.
  41. J. Weickert, “Scale-space properties of nonlinear diffusion filtering with a diffusion tensor,” Tech. Rep. 110, University of Kaiserslautern, Kaiserslautern, Germany, 1994.
  42. J. Bigun, G. H. Granlund, and J. Wiklund, “Multidimensional orientation estimation with applications to texture analysis and optical flow,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 8, pp. 775–790, 1991.
  43. M. Ben Abdallah, M. Jihene, K. Krissian, and R. Tourki, “An automated vessel segmentation of retinal images using multi-scale vesselness,” in Proceedings of the 8th International Multi-Conference on Systems, Signals & Devices, IEEE, 2011.
  44. J. V. B. Soares, J. J. G. Leandro, R. M. Cesar Jr., H. F. Jelinek, and M. J. Cree, “Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification,” IEEE Transactions on Medical Imaging, vol. 25, no. 9, pp. 1214–1222, 2006.
  45. M. E. Martinez-Perez, A. D. Hughes, S. A. Thom, A. A. Bharath, and K. H. Parker, “Segmentation of blood vessels from red-free and fluorescein retinal images,” Medical Image Analysis, vol. 11, no. 1, pp. 47–61, 2007.
  46. M. Niemeijer, J. Staal, B. van Ginneken, M. Loog, and M. D. Abramoff, “Comparative study of retinal vessel segmentation methods on a new publicly available database,” in Proceedings of the SPIE Medical Imaging, M. Fitzpatrick and M. Sonka, Eds., vol. 5370, pp. 648–656, 2004.
  47. X. Jiang and D. Mojon, “Adaptive local thresholding by verification-based multithreshold probing with application to vessel detection in retinal images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 1, pp. 131–137, 2003.
  48. T. Lindeberg and D. Fagerström, “Scale-space with causal time direction,” in Proceedings of the 4th European Conference on Computer Vision (ECCV '96), pp. 229–240, Cambridge, UK, April 1996.
  49. W. Changhua, G. Agam, and P. Stanchev, “A general framework for vessel segmentation in retinal images,” in Proceedings of the International Symposium on Computational Intelligence in Robotics and Automation (CIRA '07), June 2007.
  50. X. Qian, M. P. Brennan, D. P. Dione et al., “A non-parametric vessel detection method for complex vascular structures,” Medical Image Analysis, vol. 13, no. 1, pp. 49–61, 2008.
  51. L. Wang, A. Bhalerao, and R. Wilson, “Analysis of retinal vasculature using a multiresolution Hermite model,” IEEE Transactions on Medical Imaging, vol. 26, no. 2, pp. 137–152, 2007.
  52. A. P. Witkin, “Scale-space filtering: a new approach to multi-scale description,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '84), pp. 150–153, March 1984.