Journal of Healthcare Engineering
Volume 2017, Article ID 5953621, 14 pages
https://doi.org/10.1155/2017/5953621
Research Article

Automatic CDR Estimation for Early Glaucoma Diagnosis

1Biomedical Engineering and Telemedicine Research Group, University of Cádiz, Puerto Real, Cádiz, Spain
2Signal Theory and Communication Department, University of Seville, Seville, Spain
3Ophthalmology Unit, Puerta del Mar Hospital, Cádiz, Spain

Correspondence should be addressed to M. A. Fernandez-Granero; ma.fernandez@uca.es and I. Fondón; irenef@us.es

Received 1 June 2017; Revised 9 September 2017; Accepted 24 September 2017; Published 27 November 2017

Academic Editor: Andreas Maier

Copyright © 2017 M. A. Fernandez-Granero et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Glaucoma is a degenerative disease that constitutes the second cause of blindness in developed countries. Although it cannot be cured, its progression can be prevented through early diagnosis. In this paper, we propose a new algorithm for automatic glaucoma diagnosis based on retinal colour images. We focus on capturing the inherent colour changes of optic disc (OD) and cup borders by computing several colour derivatives in the CIE Lab colour space with the CIE94 colour distance. In addition, we incorporate spatial information by retaining these colour derivatives and the original CIE Lab values of the pixel and adding other characteristics such as its distance to the OD centre. The proposed strategy is robust due to a simple structure that requires neither initial segmentation nor removal of the vascular tree nor detection of vessel bends. The method has been extensively validated with two datasets (one public and one private), each comprising 60 images with highly variable appearance. The achieved class-wise-averaged accuracies of 95.02% and 81.19% demonstrate that this automated approach could support physicians in the diagnosis of glaucoma in its early stage, and therefore, it could be seen as an opportunity for developing low-cost solutions for mass screening programs.

1. Introduction

The World Health Organization (WHO) has reported an increase in the number of patients suffering from eye diseases due to the aging of the world population [1]. Among these diseases, glaucoma is the second leading cause of blindness in developed countries. It is considered a major public health concern, and its prevalence will probably continue to increase as life expectancy continues to rise [2].

Glaucoma describes a group of ocular disorders with a common characteristic: the progressive loss of nerve fibers in the retina. Although it cannot be cured, its associated blindness may be prevented through early diagnosis. However, glaucoma is known as the “silent theft of sight” in the sense that it presents no symptoms until vision is already lost. Glaucoma should be diagnosed early in the disease course in order to identify patients that require treatment to maintain quality of life [2].

The loss of optic nerve fibers due to glaucoma progression is associated with a corresponding change in the optic disc (OD): the empty space within the OD, the so-called cup, is progressively enlarged. That is the reason why the cup-to-disc ratio (CDR), defined as the ratio between the cup and OD areas, increases with the progression of the disease. OD appearance is, therefore, critical in glaucoma diagnosis, and images of the retina are mandatory for a correct disease assessment.

Several eye imaging technologies have been developed during the last 160 years [3]. Heidelberg retina tomograph (HRT) and optical coherence tomography (OCT) along with angiography are widely used in the diagnosis and follow-up of patients with different ocular diseases such as diabetic retinopathy or macular degeneration [3, 4]. Although OCT provides the best representation of the retina, devices based on this technique are highly expensive and they cannot be afforded by local medical centres [5]. As fundus imaging is the most established way of retinal imaging in primary care settings, an automatic glaucoma diagnosis system based on fundus images could be deployed having the potential for early disease diagnosis [6].

Nevertheless, the use of fundus imaging techniques alone may not be enough for mass screening programs. The lack of specialists in local health centres makes the inspection of every patient’s retinal image unaffordable. Moreover, the amount of information would exceed the limit of clinicians’ ability to fully utilize it [3]. Under these circumstances, the use of automated image classification as a triage test may prove to be cost effective [7].

Image-based glaucoma diagnosis relies mainly on CDR measurement, that is, the computation of the ratio of the cup and OD region areas. Currently, this calculation is performed on the basis of manually delineated areas over the retinal fundus image. The skilled human grader must carefully draw the regions with image editing software, and afterwards, the ratio of the areas is calculated. This method is time consuming and exhausting. A small time saving is provided by some acquisition devices that offer the possibility of extracting the OD and cup regions by adjusting an ellipse to four points introduced by the expert. Instead of carefully marking the whole region, the physician only has to mark four reference points. However, assuming that the area is elliptical and basing the adjustment on four points makes the system somewhat faster, about eight minutes per eye under the Klein protocol [8], but less accurate. It seems clear that the medical community needs an automatic method for CDR computation. A computer-aided diagnosis (CAD) tool integrating such an algorithm could avoid inaccurate results while saving time and costs.

For automatic CDR estimation, the OD and cup regions have to be segmented based on their characteristic appearance (Figure 1). However, it must be noticed that the shape, size, and colour variations of retinal images across a population are expected to be high [3], making OD and cup segmentation a challenging task (Figure 2). Generally speaking, the OD is a particularly bright region inside the fundus image and can be identified from features such as the following [9]:
(i) Shape: the OD is roughly circular.
(ii) Colour: the OD usually presents hues ranging from orange to yellow.
(iii) Brightness: the OD presents a brightness value that is usually higher than the rest of the retinal image.
(iv) Size: the OD area is usually less than 1/7 of the total eye.

Figure 1: The OD and cup as seen on a typical retinal fundus image. The OD is presented as an almost circular region with a colour ranging from orange to yellow. The cup is the brightest region within it, with a diffuse border only distinguishable by vessel bends.
Figure 2: The OD presents a general appearance making it suitable for automatic detection. However, there is a high variability among the population: (a) clear colour change and red hue, well-defined border; (b) subtle colour change and red hue, diffuse border; (c) subtle colour change, pale yellow, fuzzy border; and (d) bright yellow, diffuse border, and presence of peripapillary atrophy.

The optic cup is immersed within the OD region. It usually presents a roughly circular shape and a bright yellowish colour, as can be appreciated in Figure 1. However, it is well known that its segmentation from retinal fundus images is arduous due to the lack of depth information in 2D images. Furthermore, the presence of ill-defined and inhomogeneous optic cup boundaries (see Figure 2) makes the problem even more difficult [10].

Of the abovementioned characteristics of the OD and cup, colour is the most relevant when trying to isolate both areas [4]. Consequently, the proper colour space selection is crucial for the eventual success of the algorithm. Nevertheless, the general trend is to consider only the luminance information of pixels [11–21]. Most papers present two facts to support this choice:
(1) The use of colour images involves higher complexity due to their three-dimensional nature.
(2) Grey level images allow using well-known algorithms [22].

Some of the methods using grey scale images select only one colour plane of the three available in any colour image representation (RGB, HSI, etc.). Most of these articles claim that the OD can be easily discriminated from the G channel when analysing the RGB components of the image [9, 23–33].

Frequently, blood vessels need to be inpainted beforehand to prevent interference with the OD segmentation algorithm [23, 27, 33]. Likewise, there is a common trend of using basic image processing techniques such as histogram thresholding, either alone [30] or combined with other methods [23–25, 31–34].

Equally important is the use of the R colour plane [35–38], the V channel from HSV [5, 39, 40], or the M colour channel of CMY [41]. Only a minority of methods relies on the luminance coordinate (L) of the CIE Lab colour space [42].

Several authors prefer to use more than one colour plane, usually from the RGB colour space. The planes are processed separately, as if they were independent grey level images [8, 10, 43, 44].

The abovementioned techniques are based on grey level image processing. From a computational point of view, the use of scalar values may reduce processing time. Nevertheless, the correlation among different colour planes is neglected by these approaches, and hence, some useful features may be lost. Only through the integration of the information in all channels can a colour image be effectively segmented.

The use of the full colour representation in automatic diagnosis assessment is important. However, equally significant is the role of colour perception in object recognition and scene understanding, both for humans and for intelligent vision systems [45, 46]. Among all the possibilities for colour image representation, the use of a uniform colour space, that is, a representation of the image where colour distances are correlated with perceptual differences, could benefit the quality of the results. Some authors have used the CIE Lab colour space due to its uniformity and the possibility of using advanced colour metrics [47–49]. The authors in [50] used the JCh colour space from the CIECAM02 colour appearance model for OD extraction. Although these methods intend to take advantage of the complete colour information available while staying as close to human perception as possible, OD segmentation is a problem yet to be solved. For instance, the authors in [50] process only the grey level plane J. The methods illustrated in [34] and [48] would not work in images where colour differences between the OD and the background are not significant. The computation of colour derivatives, only in certain pixels located along radial lines centred on the OD, was presented in [47]. In such an approach, the final obtained border depends on the separation of the radial lines.

Regarding cup segmentation, the same limitations about colour spaces and human perception apply. It is important to note that the cup area is more difficult to segment than the OD due to vessels, border asymmetry, and colour variability. Images often present no bright yellowish area at all, although the cup is still there. In these cases, the cup edge is dictated by vessel bends. For this reason, the majority of approaches presented in the past may not give accurate results when dealing with complex image databases [5, 8, 10, 26–28, 30–34, 40, 41, 44, 49, 51–54].

Although the abovementioned techniques present relevant results, there are still some weak points that should be addressed:
(1) The use of colour information is usually limited to separately processing each colour plane. However, retinal fundus images are vector-valued colour images and, therefore, their analysis in a scalar fashion could add errors to the process.
(2) Medical image perception is not addressed by the majority of the approaches. The use of uniform colour spaces or advanced colour distances is limited.
(3) The complexity of the proposed techniques is high, making the tools unconnected and the methods inelegant.
(4) The proposed methods frequently rely on vessel detection and inpainting. In many approaches, vessel bends must be computed as well. The errors in this initial stage propagate to the rest of the algorithm.
(5) The methods are designed and tested on the same image databases, with a limited number of images. These databases are private in most cases, and the gold standard is usually not available. Therefore, the real quality of the tool cannot be assessed.

To address these issues, the present technique has the following key points:
(1) The method is simple: it has only three stages.
(2) It does not rely on the segmentation, inpainting, or detection of vessel bends or other retinal image structures.
(3) The method makes use of a uniform colour model along with a perception-adapted colour distance.
(4) The technique has been extensively validated. It was designed on public image databases, and the result of the test on these databases is presented. Once the tool has been trained, a second experiment is performed using a completely different database.
(5) As glaucoma diagnosis on retinal fundus images is currently performed mainly by manual inspection, either freehand or ellipse fitted, we present not only the segmentations of these areas but also the automatically calculated CDR measurements. We compare the results of the technique with the gold standards provided by experts with both of the annotation methods generally used.

2. Materials and Methods

2.1. Image Database

We constructed two image databases, namely, Dataset1 and Dataset2, each containing 60 retinal fundus images. These 120 images spanned a great diversity of retinal content. The key point in selecting the images was that they needed to be representative of the content that the algorithm will encounter in practical use.

Therefore, we explored seven publicly available databases [55–61] to create Dataset1. Sixty images that offer a wide range of appearances, illumination, and colours were selected, as shown in Figure 3 (a detailed list of images can be accessed in the supplementary material available online at https://doi.org/10.1155/2017/5953621). The image database comprised images of healthy subjects and of patients suffering from glaucoma and, in some cases, diabetic retinopathy. Two experts performed manual annotation of all of the retinographies since a public gold standard was not available.

Figure 3: Dataset1 comprises a wide range of OD and cup appearances due to their different nature, population, and acquisition devices.

Dataset2 included 60 images from the Surgery Department and Glaucoma Unit of the University Hospital Puerta del Mar of Cadiz (Spain). Images were annotated by two experts and were used as an independent test set. The complexity of the images of Dataset2 was high, as can be appreciated in Figure 4, including challenging cases with no visible cup, presence of abnormalities, or diffuse borders. In Dataset2, two gold standards were used:
(a) The first gold standard consisted of a freehand drawing on the retinal image. It was a tedious and time-consuming task due to the difficulty of selecting the precise border of the OD and cup regions.
(b) As a second gold standard, the OCT software performed ellipse fitting. Experts marked four points for each region. This must not be confused with image processing-based ellipse fitting: the OCT software only computes the equation of an ellipse based on the four manually marked points, and no image information is taken into account.

Figure 4: Dataset2 is a private database composed of retinal fundus images acquired with the same device. The overall complexity is high due to the presence of many different appearances: fuzzy edges, subtle colour changes, atrophies, and so forth.

Once the databases were built, a region of interest (ROI) was automatically selected in order to reduce computational time [11, 17, 18, 21, 27]. The ROI corresponded to a region with the following characteristics:
(i) Square shape
(ii) Centred on the OD
(iii) With an area equal to 1/7 of the retina size
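For illustration, a minimal sketch of this ROI cropping step; the function name, the assumption that the OD centre and the retina area (in pixels) are already known, and the synthetic example values are illustrative and not taken from the paper.

```python
import numpy as np

def extract_roi(image, od_center, retina_area, roi_fraction=1.0 / 7.0):
    """Crop a square ROI centred on the OD whose area is a fraction of the retina area."""
    side = int(np.sqrt(retina_area * roi_fraction))        # side length of the square ROI
    half = side // 2
    row, col = od_center                                    # manually provided OD centre
    r0, r1 = max(0, row - half), min(image.shape[0], row + half)
    c0, c1 = max(0, col - half), min(image.shape[1], col + half)
    return image[r0:r1, c0:c1]

# Example with a synthetic image and a hypothetical OD centre and retina area.
rgb = np.zeros((1000, 1500, 3), dtype=np.uint8)
roi = extract_roi(rgb, od_center=(480, 900), retina_area=np.pi * 500 ** 2)
print(roi.shape)
```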

It must be noticed that any of the OD location methods presented in the literature could be used in this step [9–47]. However, the contribution of the proposed technique relates to OD and cup segmentation, not to OD localization. In order to evaluate the performance of the technique without disturbing it with possible error propagation, we manually input the OD centres for all of the images.

2.2. Vector-Based Colour Derivatives

Image derivatives were used in order to identify OD and cup boundaries, due to their capability of capturing changes in a certain pixel neighbourhood. Derivatives can be computed in several directions by rotating the kernel before performing the convolution.

Retinographies are colour images. Consequently, the edges should be found by looking for colour changes. Edge detection in colour images is usually performed by applying the derivative kernels to the three colour channels independently and then combining the results. These kinds of methods do not take into account the correlation among colour channels, and, therefore, they tend to miss edges that have the same strength but opposite directions in two of their colour components [62]. In an attempt to avoid this issue, we have adopted the technique proposed in [62], where colour images are treated as two-dimensional (pixel location), three-channel (colour planes) vector fields. Then, they can be characterized by a discrete integer function $\mathbf{I}(x, y)$ that can be written as follows:
$$\mathbf{I}(x, y) = \left[I_1(x, y),\ I_2(x, y),\ I_3(x, y)\right]^T,$$
where $I_1$, $I_2$, and $I_3$ correspond to colour channels and $(x, y)$ to pixels' locations. For instance, in RGB colour space,
$$\mathbf{I}(x, y) = \left[R(x, y),\ G(x, y),\ B(x, y)\right]^T.$$

The magnitude of maximum variation at pixel $(x, y)$ with an orientation of 0° is defined as follows [62]:
$$V_{0^{\circ}}(x, y) = d\!\left(h_1 * \mathbf{I}(x, y),\ h_2 * \mathbf{I}(x, y)\right),$$
where, if the Euclidean distance ($\Delta E$) is used, $d$ is defined as follows:
$$d(\mathbf{u}, \mathbf{v}) = \Delta E(\mathbf{u}, \mathbf{v}) = \sqrt{\sum_{i=1}^{3}\left(u_i - v_i\right)^{2}}.$$

The quantities $h_1$ and $h_2$ are the convolution kernels whose outputs are vectors corresponding to the local average colours. Let the edge masks ($k$) be
$$k_1 = \frac{1}{4}\begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \qquad k_2 = \frac{1}{4}\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix},$$
and the image neighbourhood ($W$)
$$W = \begin{bmatrix} \mathbf{X}_1 & \mathbf{X}_2 & \mathbf{X}_3 \\ \mathbf{X}_4 & \mathbf{X}_5 & \mathbf{X}_6 \\ \mathbf{X}_7 & \mathbf{X}_8 & \mathbf{X}_9 \end{bmatrix}.$$

Each $\mathbf{X}_i$, $i = 1, \ldots, 9$, is a vector with three components corresponding to each colour plane. Then,
$$h_1 * \mathbf{I}(x, y) = \frac{1}{4}\left(\mathbf{X}_1 + 2\mathbf{X}_2 + \mathbf{X}_3\right), \qquad h_2 * \mathbf{I}(x, y) = \frac{1}{4}\left(\mathbf{X}_7 + 2\mathbf{X}_8 + \mathbf{X}_9\right).$$

To improve the adaptation of the method to human perception, instead of using the Euclidean distance formula above, the technique presented in [63] was followed. The CIE Lab colour space was selected due to its uniformity. As the colour distance, CIE94 was adopted instead of the Euclidean distance, due to its better performance and lower computational time when compared to other perception-adapted colour differences such as CIEDE2000. Then,
$$d(\mathbf{u}, \mathbf{v}) = \Delta E_{94}(\mathbf{u}, \mathbf{v}).$$

The local average colours can be rewritten in CIE Lab as
$$h_1 * \mathbf{I}(x, y) = \left[\bar{L}_1,\ \bar{a}_1,\ \bar{b}_1\right]^T, \qquad h_2 * \mathbf{I}(x, y) = \left[\bar{L}_2,\ \bar{a}_2,\ \bar{b}_2\right]^T,$$
that is, each local average colour has its three colour components.

Following the same procedure, the magnitude of maximum variation can be rewritten as follows:
$$V_{0^{\circ}}(x, y) = \Delta E_{94}\!\left(h_1 * \mathbf{I}(x, y),\ h_2 * \mathbf{I}(x, y)\right).$$
Subsequently,
$$V_{0^{\circ}}(x, y) = \Delta E_{94}\!\left(\left[\bar{L}_1, \bar{a}_1, \bar{b}_1\right]^T, \left[\bar{L}_2, \bar{a}_2, \bar{b}_2\right]^T\right),$$
where $\Delta E_{94}$ is the CIE94 colour distance between the corresponding vectors:
$$\Delta E_{94} = \sqrt{\left(\frac{\Delta L}{k_L S_L}\right)^{2} + \left(\frac{\Delta C_{ab}}{k_C S_C}\right)^{2} + \left(\frac{\Delta H_{ab}}{k_H S_H}\right)^{2}}.$$

For the case of the weighting functions, $S_L = 1$, $S_C = 1 + 0.045\,C_{ab,1}$, and $S_H = 1 + 0.015\,C_{ab,1}$, with the parametric factors $k_L = k_C = k_H = 1$ under reference conditions.
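As an illustration of this distance, a possible Python implementation of the CIE94 colour difference between two CIE Lab vectors with the graphic-arts weighting is sketched below; it is an assumption-laden sketch, not the authors' code, and the reference conditions (kL = kC = kH = 1) are assumed.

```python
import numpy as np

def cie94(lab1, lab2, kL=1.0, kC=1.0, kH=1.0):
    """CIE94 colour difference between two CIE Lab colours (graphic-arts weighting)."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    dL = L1 - L2
    C1, C2 = np.hypot(a1, b1), np.hypot(a2, b2)
    dC = C1 - C2
    # Hue difference derived from the component, lightness, and chroma differences.
    dH2 = max((a1 - a2) ** 2 + (b1 - b2) ** 2 - dC ** 2, 0.0)
    SL, SC, SH = 1.0, 1.0 + 0.045 * C1, 1.0 + 0.015 * C1
    return np.sqrt((dL / (kL * SL)) ** 2 + (dC / (kC * SC)) ** 2 + dH2 / (kH * SH) ** 2)

print(cie94((50.0, 2.6, -79.7), (50.0, 0.0, -82.7)))
```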

The magnitude of maximum variation can then be computed with the previous expression. If we want to calculate colour changes in directions other than the vertical (0°), we only need to rotate the edge mask to the desired orientation.

As stated in the introduction, the OD and cup regions present characteristic appearances directly related to colour. However, the absolute colour value of a pixel should not be trusted. Figures 2–5 show how colour variability is too high to determine a specific colour range for every OD and cup area. On the contrary, every OD and cup border presents a change of colour when compared to its surrounding pixels. In other words, absolute colour values are not discriminative, but their relative changes on the retina can be (Figure 6).

Figure 5: Cup region segmentation is a challenging task due to its wide range of appearances: (a) bright yellow, well-defined border and small size, (b) no perceptible colour change, (c) pale yellow, well-defined border and medium size, and (d) pale yellow, diffuse border and large size.
Figure 6: Colour changes represented by gradient arrows marked in blue offer the necessary information for OD and cup segmentation.

In the present approach, we have taken advantage of this relative change of colour to detect pixels belonging to OD and cup edges. We computed Sobel vector-based colour derivatives in 25 orientations (from 0° to 360° with a separation interval of 15°) for every pixel within the image. To implement the Sobel operator, the mask at 0° was
$$\begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix},$$
while for 45°, the mask was
$$\begin{bmatrix} 2 & 1 & 0 \\ 1 & 0 & -1 \\ 0 & -1 & -2 \end{bmatrix}.$$

The magnitude of maximum colour variation was evaluated for every pixel and orientation. Figure 7 shows some examples where each pixel value corresponds to the magnitude of variation in that direction.
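The following sketch puts the previous pieces together: for each orientation, the positive and negative halves of the rotated Sobel mask yield two local average colours whose CIE94 distance gives the colour variation at each pixel. Rotating the 3 × 3 half-masks by interpolation, the 1/4 normalisation, and the stand-in ROI are assumptions of this sketch, not details confirmed by the paper.

```python
import numpy as np
from scipy import ndimage

def cie94(lab1, lab2):
    """CIE94 colour difference (graphic-arts weights), repeated from the earlier sketch."""
    dL = lab1[0] - lab2[0]
    C1, C2 = np.hypot(lab1[1], lab1[2]), np.hypot(lab2[1], lab2[2])
    dC = C1 - C2
    dH2 = max((lab1[1] - lab2[1]) ** 2 + (lab1[2] - lab2[2]) ** 2 - dC ** 2, 0.0)
    return np.sqrt(dL ** 2 + (dC / (1 + 0.045 * C1)) ** 2 + dH2 / (1 + 0.015 * C1) ** 2)

# Positive and negative halves of the 0-degree Sobel mask, normalised so that each
# convolution output is a local average colour on one side of the pixel.
K_POS = np.array([[1, 2, 1], [0, 0, 0], [0, 0, 0]], dtype=float) / 4.0
K_NEG = np.array([[0, 0, 0], [0, 0, 0], [1, 2, 1]], dtype=float) / 4.0

def colour_derivative(lab, angle_deg):
    """Magnitude of colour variation per pixel for one orientation of the Sobel mask."""
    # Rotating the half-masks with bilinear interpolation only approximates a rotated kernel.
    k_pos = ndimage.rotate(K_POS, angle_deg, reshape=False, order=1)
    k_neg = ndimage.rotate(K_NEG, angle_deg, reshape=False, order=1)
    mean_pos = np.stack([ndimage.convolve(lab[..., c], k_pos) for c in range(3)], axis=-1)
    mean_neg = np.stack([ndimage.convolve(lab[..., c], k_neg) for c in range(3)], axis=-1)
    out = np.zeros(lab.shape[:2])
    for i in range(lab.shape[0]):
        for j in range(lab.shape[1]):
            out[i, j] = cie94(mean_pos[i, j], mean_neg[i, j])
    return out

lab_roi = np.random.rand(32, 32, 3) * 100.0            # stand-in for a CIE Lab ROI
feature_maps = [colour_derivative(lab_roi, a) for a in range(0, 361, 15)]
print(len(feature_maps))                                # 25 orientations
```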

Figure 7: Vector-based colour derivatives for three of the computed orientations: 0°, 45°, and 270°.
2.3. Classification Based on Bagged Trees

OD and cup detection were achieved by classifying each pixel in the image according to its vector-based colour derivatives and its distance to the OD centre.

Several classifiers were evaluated. The best performance was obtained with a bagged trees classifier, as will be explained in the Results section. Therefore, the bagged trees classifier is described briefly in this section.

The idea of bagging is to obtain the best model by combining the results of multiple weak classifiers into a single, stronger one [64]. In a bagged tree, the basic classifier is a decision tree.

T bootstrap training sets of size n are drawn from the data, and T decision trees are trained with those sets, each one trying to fit the model. The T decisions are finally combined with a majority voting rule. Bagging leads to improvements for unstable procedures [65].
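A minimal illustration of bagged decision trees with scikit-learn; the synthetic data, the number of trees, and the other parameter values are placeholders rather than the authors' settings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic 3-class data standing in for the per-pixel feature vectors.
X, y = make_classification(n_samples=2000, n_features=32, n_informative=10,
                           n_classes=3, random_state=0)

# T decision trees, each fitted on a bootstrap sample of size n; predictions are
# combined by majority voting, the default behaviour of BaggingClassifier.
bagged_trees = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                                 max_samples=1.0, bootstrap=True, random_state=0)
bagged_trees.fit(X, y)
print(bagged_trees.predict(X[:5]))
```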

3. Results and Discussion

The discriminative power of colour derivatives for OD and cup detection has been tested. To that purpose, a feature vector is built for every pixel of each ROI. This feature vector is the input to the classifier, which assigns to the pixel a probability of belonging to the OD, the cup, or the background. The feature vector contains the colour variation values for the 25 orientations, the original CIE Lab values of the pixel, its distance and angle with respect to the OD centre, and its position. These 32 features combine the a priori colour and spatial knowledge about the OD and the cup.
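A sketch of how such a 32-element per-pixel feature vector could be assembled; the feature ordering and the helper names (`derivative_maps`, `od_center`) are illustrative assumptions.

```python
import numpy as np

def pixel_features(row, col, lab_roi, derivative_maps, od_center):
    """Assemble the 32 features of one pixel: 25 colour-change values, L, a, b,
    distance and angle to the OD centre, and the pixel position."""
    dy, dx = row - od_center[0], col - od_center[1]
    colour_changes = [m[row, col] for m in derivative_maps]   # 25 orientation derivatives
    lab = list(lab_roi[row, col])                             # original CIE Lab values (3)
    distance = np.hypot(dy, dx)                               # distance to the OD centre (1)
    angle = np.arctan2(dy, dx)                                # angle w.r.t. the OD centre (1)
    return np.array(colour_changes + lab + [distance, angle, row, col])  # 25 + 3 + 2 + 2 = 32
```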

The method has been extensively validated and tested with two experiments carried out on both of the databases detailed in Section 2.

3.1. Image Database Dataset1

This database is composed of 60 retinal fundus images from six different public databases. Dataset1 images present a variety of appearances, illumination conditions, retinal structures, and so forth. Consequently, it is expected that an algorithm developed using this database will be highly robust. A total of six classifiers were trained and validated:
(i) Simple tree (ST).
(ii) Bagged trees (BT). This classifier was introduced in Section 2.3.
(iii) Complex tree (CT). This is a decision tree with many leaves that makes many fine distinctions between classes.
(iv) Linear discriminant (LD). Decisions are made by estimating, with Bayes theorem, the probability that a new set of samples belongs to each class.
(v) Quadratic discriminant (QD). This classifier is an extension of LD where heterogeneous variance-covariance matrices are considered.
(vi) kNN Euclidean. kNN does not use a model to fit the training data and subsequently classify the new samples [66].

Model selection was performed using cross-validation. For each classifier, the accuracy of the best parameter setting was compared. Dataset1 was used as the cross-validation set, which was repeatedly divided into training and validation sets. For this internal validation, 10-fold cross-validation was performed, and in each of the ten folds, the classifier was rebuilt from scratch. This entire procedure was repeated 10 times. The reported accuracy was the average over the accuracies of the folds.
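A sketch of this model-selection loop with stratified 10-fold cross-validation in scikit-learn; the candidate list, synthetic data, and hyperparameters are placeholders (the paper's simple, complex, and quadratic-discriminant variants are omitted for brevity).

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=3000, n_features=32, n_informative=12,
                           n_classes=3, random_state=0)
candidates = {
    "bagged trees": BaggingClassifier(DecisionTreeClassifier(), n_estimators=50),
    "kNN (Euclidean)": KNeighborsClassifier(n_neighbors=5),
    "linear discriminant": LinearDiscriminantAnalysis(),
}
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, clf in candidates.items():
    scores = cross_val_score(clf, X, y, cv=cv)   # classifier rebuilt from scratch in each fold
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```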

The ability of the algorithm to perform an accurate classification of OD, cup, and background pixels was measured by its sensitivity, while its ability to determine the pixels that do not belong to each of the three classes was expressed by its specificity. The positive predictive value (PPV) and negative predictive value (NPV) were also computed to give an idea of the proportions of true positives and true negatives. Table 1 shows the performance metrics that were used in this study to evaluate the use of vector-based colour derivatives in combination with each of the six classifiers.
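For reference, a one-vs-rest computation of these per-class metrics from a confusion matrix; this is a sketch, not the evaluation code used in the study.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def per_class_metrics(y_true, y_pred, labels=(0, 1, 2)):
    """Sensitivity, specificity, PPV, and NPV for each class (one vs. rest)."""
    cm = confusion_matrix(y_true, y_pred, labels=list(labels))
    out = {}
    for i, lab in enumerate(labels):
        tp = cm[i, i]
        fn = cm[i].sum() - tp
        fp = cm[:, i].sum() - tp
        tn = cm.sum() - tp - fn - fp
        out[lab] = {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
                    "PPV": tp / (tp + fp), "NPV": tn / (tn + fn)}
    return out

print(per_class_metrics([0, 1, 2, 2, 1, 0], [0, 1, 2, 1, 1, 0]))
```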

Table 1: Model performance evaluation under Dataset1 database. The three classes are background (class 1), optic disc (class 2), and cup (class 3). (a) Simple tree, (b) bagged trees, (c) KNN, (d) complex tree, (e) linear discriminant, and (f) quadratic discriminant.

The analysis of Table 1 reveals that BT classifier, which showed a class-wise-averaged accuracy of 95.02%, provides the best combination of classifier and vector-based colour derivatives. This classifier showed a specificity of 99.23% for the OD class and 99.80% for the cup class, while preserving a sensitivity of 91.75% for the OD and 90.63% for the cup. PPV values were 90.74% for the OD class and 94.83% for the cup class. NPV was 99.32% and 99.62% for the OD and cup classes, respectively.

3.2. Image Database Dataset2

This image database comprised 60 images provided by the University Hospital Puerta del Mar, Cadiz, Spain. Images were acquired in a routine screening process with the same acquisition device.

External validation establishes a model's transportability and generalizability [67]. In this study, the independent Dataset2 was used to externally validate the fully trained classifier previously selected using cross-validation on Dataset1. All samples in Dataset1 were used to train the BT model. Two experts made two annotations for each of the images in the database:
(i) Freehand: two glaucoma specialists meticulously annotated the exact border of the OD and the cup. This process was time consuming, although it constitutes the most exact reference for error calculation. This annotation was considered the gold standard for our experiments.
(ii) Ellipse based: the OD and cup edges were obtained by fitting an ellipse to four points manually marked by the experts. This strategy required less annotation time. However, the obtained border was not as accurate as that of the freehand approach.

3.2.1. Quality of the Segmentation

A first experiment was performed to determine whether a postprocessing step would improve the quality of the results. To this purpose, we took the probabilities for each pixel of belonging to each class and, from this information, we built two probability images (see Figure 8) that are the basis for the final postprocessing. This last step was performed in two ways:
(i) Active contour (AC) based: the probability images corresponding to the OD and the cup were thresholded to obtain an initial mask. The threshold was automatically obtained with Otsu's technique. The final OD and cup were segmented starting from this binary image and evolving the contour on the corresponding probability values. The Chan-Vese model [68], with the number of iterations experimentally fixed to 20, was used for the AC.
(ii) AC and ellipse fitting: this postprocessing consisted of automatically fitting an ellipse to the boundary pixels obtained with the previous postprocessing.
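A sketch of the two postprocessing variants using scikit-image; `od_probability` is a hypothetical probability image, the number of Chan-Vese iterations follows the text (20), and the ellipse rasterisation details (axis and rotation mapping) are approximations rather than the authors' implementation.

```python
import numpy as np
from skimage.draw import ellipse
from skimage.filters import threshold_otsu
from skimage.measure import EllipseModel, find_contours
from skimage.segmentation import morphological_chan_vese

def postprocess(od_probability, n_iter=20, fit_ellipse=False):
    """Smooth a class-probability image with Otsu thresholding followed by a
    morphological Chan-Vese active contour, optionally replaced by a fitted ellipse."""
    init_mask = od_probability > threshold_otsu(od_probability)   # initial binary mask
    region = morphological_chan_vese(od_probability, n_iter, init_level_set=init_mask)
    if not fit_ellipse:
        return region.astype(bool)
    # Fit an ellipse to the boundary pixels of the active-contour result.
    contour = max(find_contours(region.astype(float), 0.5), key=len)
    model = EllipseModel()
    model.estimate(contour[:, ::-1])                 # (row, col) -> (x, y)
    xc, yc, a, b, theta = model.params
    mask = np.zeros_like(region, dtype=bool)
    # Axis/rotation mapping between EllipseModel and draw.ellipse is approximate here.
    rr, cc = ellipse(yc, xc, b, a, shape=mask.shape, rotation=theta)
    mask[rr, cc] = True
    return mask

# Demo with a hypothetical blob-shaped probability image.
yy, xx = np.mgrid[:64, :64]
od_probability = np.exp(-((yy - 32) ** 2 + (xx - 30) ** 2) / 200.0)
print(postprocess(od_probability, fit_ellipse=True).sum())
```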

Figure 8: (a) Original ROI image, (b) OD probability image, and (c) cup probability image.

These two postprocessing steps were added to emulate the experts' segmentations, which were in general smoother than our algorithm's raw results (Figure 9(a)). The AC provides the required smoothness while preserving the shapes (Figure 9(b)). The ellipse-fitted result was intended to allow a better comparison with the manually marked ellipses provided with the database (Figure 9(c)).

Figure 9: Result images for the original ROI of Figure 8. White colour corresponds to the cup, grey colour to the OD, and black colour to the background. Labels assigned by the classifier: (a) without postprocessing, (b) with AC, and (c) with AC and ellipse fitting.

As shown in Table 2, adding the postprocessing step after the classifier output improved class-wise-averaged accuracy from 79.66% to 81.95% and 81.19% using AC and AC together with ellipse fitting, respectively.

Table 2: Bagged tree model performance evaluation under Dataset2 database. The three classes are background (class 1), optic disc (class 2), and cup (class 3). (a) Without postprocessing, (b) smoothed with AC, and (c) smoothed and ellipse fitting.

Visually, Figure 10 illustrates that, taking freehand annotations as the gold standard, the proposed method was able to detect the OD and cup regions even when colour differences were subtle, blood vessels were disturbing the edges, or peripapillary atrophy was present in the retinal images. Ellipse-based manual annotation did not extract the precise shape of the areas, and therefore, the results were not as accurate as expected.

Figure 10: OD and cup segmentation results. The second (automatic result without ellipse fitting) and third columns (automatic result with ellipse fitting) present the OD and the cup in green.
3.2.2. Quality of CDR Measurement

The CDR is widely adopted as the standard measure for glaucoma detection. Three methods for CDR calculation have been proposed [28]. The first two are based on the vertical and horizontal diameters of the cup and disc regions (VCDR and HCDR, respectively). The third strategy is based on the areas of the cup and disc (ACDR). The latter is considered the best approximation because, as the cup may be oriented at different angles, ACDR measures will not be skewed, unlike VCDR and HCDR, which could reflect direction influences [28]. Figure 11 shows the quantities involved in VCDR, HCDR, and ACDR.

Figure 11: Quantities involved in CDR measurements. (a) VCDR, (b) HCDR, and (c) ACDR. Although, in (c), a rounded area is marked, accurate OD and cup borders will provide accurate ACDR results.

The equations for these parameters are
$$\mathrm{VCDR} = \frac{C_v}{D_v}, \qquad \mathrm{HCDR} = \frac{C_h}{D_h}, \qquad \mathrm{ACDR} = \frac{A_C}{A_D},$$
where $D_v$ is the vertical OD diameter, $D_h$ is the horizontal OD diameter, $C_v$ is the vertical cup diameter, $C_h$ is the horizontal cup diameter, $A_D$ is the area of the OD, and $A_C$ is the area of the cup.
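These ratios can be computed directly from binary disc and cup masks, for example as in the following sketch (names and conventions are illustrative).

```python
import numpy as np

def cdr_measures(disc_mask, cup_mask):
    """VCDR, HCDR, and ACDR from binary OD and cup masks (True inside each region)."""
    def extents(mask):
        rows, cols = np.nonzero(mask)
        # Vertical and horizontal diameters as bounding-box extents.
        return rows.max() - rows.min() + 1, cols.max() - cols.min() + 1
    dv, dh = extents(disc_mask)
    cv, ch = extents(cup_mask)
    return {"VCDR": cv / dv, "HCDR": ch / dh,
            "ACDR": cup_mask.sum() / disc_mask.sum()}
```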

Following these definitions, we computed VCDR, HCDR, and ACDR for the 60 images in Dataset2 using the three proposed methods: (1) without postprocessing, (2) with AC postprocessing, and (3) with AC and ellipse fitting. In addition, we calculated these measures on the manually freehand-marked retinographies and on the manual ellipse-fitted images. The results have been compared using the absolute vertical ($E_{\mathrm{abs}v}$), the absolute horizontal ($E_{\mathrm{abs}h}$), and the absolute area ($E_{\mathrm{abs}A}$) errors:
$$E_{\mathrm{abs}v} = \left|\mathrm{VCDR}_{\mathrm{alg}} - \mathrm{VCDR}_{\mathrm{GT}}\right|, \qquad E_{\mathrm{abs}h} = \left|\mathrm{HCDR}_{\mathrm{alg}} - \mathrm{HCDR}_{\mathrm{GT}}\right|, \qquad E_{\mathrm{abs}A} = \left|\mathrm{ACDR}_{\mathrm{alg}} - \mathrm{ACDR}_{\mathrm{GT}}\right|.$$

The subindex alg refers to a value calculated with the proposed algorithm, with either of its two postprocessing versions, or with the manual ellipse fitting technique. The subindex GT refers to the value calculated from the ground truth images, that is, the freehand manually annotated set.

Additional error measurements have been calculated: the relative vertical ($E_{\mathrm{rel}v}$), relative horizontal ($E_{\mathrm{rel}h}$), and relative area ($E_{\mathrm{rel}A}$) errors:
$$E_{\mathrm{rel}v} = \frac{\left|\mathrm{VCDR}_{\mathrm{alg}} - \mathrm{VCDR}_{\mathrm{GT}}\right|}{\mathrm{VCDR}_{\mathrm{GT}}}, \qquad E_{\mathrm{rel}h} = \frac{\left|\mathrm{HCDR}_{\mathrm{alg}} - \mathrm{HCDR}_{\mathrm{GT}}\right|}{\mathrm{HCDR}_{\mathrm{GT}}}, \qquad E_{\mathrm{rel}A} = \frac{\left|\mathrm{ACDR}_{\mathrm{alg}} - \mathrm{ACDR}_{\mathrm{GT}}\right|}{\mathrm{ACDR}_{\mathrm{GT}}}.$$
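The per-image errors could then be aggregated into the means and standard deviations of Table 3 as sketched below, assuming per-image CDR dictionaries such as those returned by the `cdr_measures` sketch above.

```python
import numpy as np

def error_summary(alg_cdrs, gt_cdrs, keys=("VCDR", "HCDR", "ACDR")):
    """Mean and standard deviation of absolute and relative CDR errors over a dataset."""
    summary = {}
    for k in keys:
        abs_err = np.array([abs(a[k] - g[k]) for a, g in zip(alg_cdrs, gt_cdrs)])
        rel_err = abs_err / np.array([g[k] for g in gt_cdrs])
        summary[k] = {"absolute": (abs_err.mean(), abs_err.std()),
                      "relative": (rel_err.mean(), rel_err.std())}
    return summary
```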

Mean and standard deviation of errors are presented in Table 3.

Table 3: VCDR, HCDR, and ACDR computation performances using Dataset2. The values are differences to the gold standard. (a) Smoothed with AC, (b) smoothed and ellipse fitted, and (c) manually ellipse fitted.

The results of Table 3 show that the proposed automatic algorithm with AC and ellipse fitting is the best approach regarding VCDR, HCDR, and ACDR.

3.2.3. Comparison with State-of-the-Art Techniques

It is difficult to compare the performance of the proposed strategy with other state-of-the-art approaches, mainly because each method uses different image databases that are, on many occasions, private and unavailable. However, we present the mean absolute error of four reported methods in Table 4 in order to obtain an overall quality comparison [35].

Table 4: Mean absolute error of four contributions to obtain an overall quality comparison.

Table 4 indicates that the proposed method outperforms the other reported strategies by one order of magnitude.

4. Conclusion

Glaucoma is a silent disease that needs to be diagnosed early to prevent the associated blindness. One of the main indicators of the disease is the CDR, computed as the ratio of the cup and OD regions. These regions must be segmented from the retinal image. Most of the state-of-the-art techniques are devoted to the segmentation of the OD because the cup is a difficult area from the point of view of image processing: its absolute colour may differ from one patient to another, and its border could be diffuse or even imperceptible. Most authors agree that the OD and cup are the brightest regions in the retinal fundus image and resort to grey level processing to segment both of them. The majority of the methods comprise several complex steps that usually rely on experimentally fixed parameters. In addition, blood vessels must be detected and inpainted prior to OD and cup detection. Therefore, complexity is added and errors are propagated. Additionally, vector colour information is not taken into account and human perception is generally forgotten.

In this paper, we have addressed the problem of CDR computation on retinal fundus images from the point of view of colour science. Characteristic colour changes of OD and cup edges were calculated in a uniform colour space with a perception-adapted distance metric, allowing an additional level of correlation with the human visual system. We have tested six different classifiers with 60 images selected from seven different public databases to build a robust and precise model for CDR computation. As a result, bagged decision trees were found to produce accurate classification results (95.02%). The model was then validated on a completely different database that included 60 images of high complexity. Again, the method showed accurate results (81.19%), proving that it generalizes well regardless of the database used. CDR measurements based on this automatic method are accurate in light of the obtained mean absolute and relative errors. To sum up, we have presented an accurate, robust method, based on the kind of images available in primary healthcare settings, that calculates glaucoma indicators using colour information. Future work will address the use of this system in mobile applications.

Conflicts of Interest

The authors declare no conflict of interest.

Acknowledgments

This work was supported by the Government of Spain (Grant no. TEC2014-53103-P).

References

  1. WHO, Bulletin of the World Health Organization, vol. 82, no. 11, pp. 811–890, November 2004.
  2. M. Fallon, O. Valero, M. Pazos, and A. Antón, “Diagnostic accuracy of imaging devices in glaucoma: a meta-analysis,” Survey of Ophthalmology, vol. 62, no. 4, pp. 446–461, 2017.
  3. M. D. Abràmoff, M. K. Garvin, and M. Sonka, “Retinal imaging and image analysis,” IEEE Reviews in Biomedical Engineering, vol. 3, pp. 169–208, 2010.
  4. E. Ng, U. Acharya, J. Suri, and A. Campilho, Image Analysis and Modeling in Ophthalmology, CRC Press, 2014.
  5. M. Lotankar, K. Noronha, and J. Koti, “Detection of optic disc and cup from color retinal images for automated diagnosis of glaucoma,” in 2015 IEEE UP Section Conference on Electrical Computer and Electronics (UPCON), Allahabad, India, 2016.
  6. J. Liu, F. S. Yin, D. W. K. Wong et al., “Automatic glaucoma diagnosis from fundus image,” in 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 3383–3386, Boston, MA, USA, 2011.
  7. K. Banister, C. Boachie, R. Bourne et al., “Can automated imaging for optic disc and retinal nerve fiber layer analysis aid glaucoma detection?” Ophthalmology, vol. 123, no. 5, pp. 930–938, 2016.
  8. G. Lim, Y. Cheng, W. Hsu, and M. L. Lee, “Integrated optic disc and cup segmentation with deep learning,” in 2015 IEEE 27th International Conference on Tools with Artificial Intelligence (ICTAI), pp. 162–169, Vietri sul Mare, Italy, 2016.
  9. S. Bharkad, “Automatic segmentation of optic disk in retinal images,” Biomedical Signal Processing and Control, vol. 31, pp. 483–498, 2017.
  10. S. Sedai, P. K. Roy, D. Mahapatra, and R. Garnavi, “Segmentation of optic disc and optic cup in retinal fundus images using shape regression,” in 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 3260–3264, Orlando, FL, USA, 2016.
  11. D. Díaz-Pernil, I. Fondón, F. Peña-Cantillana, and M. A. Gutiérrez-Naranjo, “Fully automatized parallel segmentation of the optic disc in retinal fundus images,” Pattern Recognition Letters, vol. 83, pp. 99–107, 2016.
  12. M. Naser Langroudi and H. Sadjedi, “A new method for automatic detection and diagnosis of retinopathy diseases in colour fundus images based on morphology,” in 2010 International Conference on Bioinformatics and Biomedical Technology, pp. 134–138, Chengdu, China, 2010.
  13. X. Xu, M. Niemeijer, Q. Song et al., “Vessel boundary delineation on fundus images using graph-based approach,” IEEE Transactions on Medical Imaging, vol. 30, no. 6, pp. 1184–1191, 2011.
  14. M. Madhusudhan, N. Malay, S. R. Nirmala, and D. Samerendra, “Image processing techniques for glaucoma detection,” in Advances in Computing and Communications, pp. 365–373, Springer, Berlin, Heidelberg, 2011.
  15. J. R. H. Kumar, A. K. Pediredla, and C. S. Seelamantula, “Active discs for automated optic disc segmentation,” in 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP), pp. 225–229, Orlando, FL, USA, 2015.
  16. K. Akyol, B. Şen, and Ş. Bayir, “Automatic detection of optic disc in retinal image by using keypoint detection, texture analysis, and visual dictionary techniques,” Computational and Mathematical Methods in Medicine, vol. 2016, Article ID 6814791, 10 pages, 2016.
  17. J. A. de Sousa, A. C. de Paiva, J. D. Sousa de Almeida, A. C. Silva, G. B. Junior, and M. Gattass, “Texture based on geostatistic for glaucoma diagnosis from fundus eye image,” Multimedia Tools and Applications, vol. 76, no. 18, pp. 19173–19190, 2017.
  18. B. Dai, X. Wu, and W. Bu, “Optic disc segmentation based on variational model with multiple energies,” Pattern Recognition, vol. 64, pp. 226–235, 2017.
  19. M. P. Sarathi, M. K. Dutta, A. Singh, and C. M. Travieso, “Blood vessel inpainting based technique for efficient localization and segmentation of optic disc in digital fundus images,” Biomedical Signal Processing and Control, vol. 25, pp. 108–117, 2016.
  20. N. P. Singh and R. Srivastava, “Retinal blood vessels segmentation by using Gumbel probability distribution function based matched filter,” Computer Methods and Programs in Biomedicine, vol. 129, pp. 40–50, 2016.
  21. B. Dashtbozorg, A. M. Mendonça, and A. Campilho, “Optic disc segmentation using the sliding band filter,” Computers in Biology and Medicine, vol. 56, pp. 1–12, 2015.
  22. I. Fondon, J. F. Valverde, A. Sarmiento, Q. Abbas, S. Jimenez, and P. Alemany, “Automatic optic cup segmentation algorithm for retinal fundus images based on random forest classifier,” in IEEE EUROCON 2015 - International Conference on Computer as a Tool (EUROCON), pp. 1–6, Salamanca, Spain, 2015.
  23. A. Almazroa, W. Sun, S. Alodhayb, K. Raahemifar, and V. Lakshminarayanan, “Optic disc segmentation: level set methods and blood vessels inpainting,” in Medical Imaging 2017: Imaging Informatics for Healthcare, Research, and Applications, SPIE Medical Imaging, article 1013806, Orlando, Florida, USA, 2017.
  24. N. D. Salih, M. D. Saleh, C. Eswaran, and J. Abdullah, “Fast optic disc segmentation using FFT-based template-matching and region-growing techniques,” Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, pp. 1–12, 2017.
  25. F. Ortuño and I. Rojas, “Optic disc segmentation with Kapur-ScPSO based cascade multithresholding,” in IWBBIO 2016: Bioinformatics and Biomedical Engineering, pp. 206–215, 2016.
  26. A. Septiarini, A. Harjoko, R. Pulungan, and R. Ekantini, “Optic disc and cup segmentation by automatic thresholding with morphological operation for glaucoma evaluation,” Signal, Image and Video Processing, vol. 11, no. 5, pp. 945–952, 2016.
  27. A. A. Salam, T. Khalil, M. U. Akram, A. Jameel, and I. Basit, “Automated detection of glaucoma using structural and non structural features,” SpringerPlus, vol. 5, no. 1, p. 1519, 2016.
  28. P. S. Mittapalli and G. B. Kande, “Segmentation of optic disk and optic cup from digital fundus images for the assessment of glaucoma,” Biomedical Signal Processing and Control, vol. 24, pp. 34–46, 2016.
  29. U. R. Acharya, M. R. K. Mookiah, J. E. W. Koh et al., “Automated screening system for retinal health using bi-dimensional empirical mode decomposition and integrated index,” Computers in Biology and Medicine, vol. 75, pp. 54–62, 2016.
  30. A. Agarwal, S. Gulia, S. Chaudhary, M. K. Dutta, R. Burget, and K. Riha, “Automatic glaucoma detection using adaptive threshold based technique in fundus image,” in 2015 38th International Conference on Telecommunications and Signal Processing (TSP), pp. 416–420, Prague, Czech Republic, 2015.
  31. S. Mohammad and D. T. Morris, “Texture analysis for glaucoma classification,” in 2015 International Conference on BioSignal Analysis, Processing and Systems (ICBAPS), vol. 1, pp. 98–103, Kuala Lumpur, Malaysia, 2015.
  32. D. Marin, M. E. Gegundez-Arias, A. Suero, and J. M. Bravo, “Obtaining optic disc center and pixel region by automatic thresholding methods on morphologically processed fundus images,” Computer Methods and Programs in Biomedicine, vol. 118, no. 2, pp. 173–185, 2015.
  33. N. E. A. Khalid, N. M. Noor, and N. M. Ariff, “Fuzzy c-means (FCM) for optic cup and disc segmentation with morphological operation,” Procedia Computer Science, vol. 42, no. C, pp. 255–262, 2014.
  34. I. Fondón, F. Núñez, M. Tirado et al., “Automatic cup-to-disc ratio estimation using active contours and color clustering in fundus images for glaucoma diagnosis,” in ICIAR 2012: Image Analysis and Recognition, vol. 7325 of Lecture Notes in Computer Science, Part 2, pp. 390–399, 2012.
  35. A. Singh, M. K. Dutta, M. ParthaSarathi, V. Uher, and R. Burget, “Image processing based automatic diagnosis of glaucoma using wavelet features of segmented optic disc from fundus image,” Computer Methods and Programs in Biomedicine, vol. 124, pp. 108–120, 2016.
  36. C.-K. Lu, T. B. Tang, A. Laude, B. Dhillon, and A. F. Murray, “Parapapillary atrophy and optic disc region assessment (PANDORA): retinal imaging tool for assessment of the optic disc and parapapillary atrophy,” Journal of Biomedical Optics, vol. 17, no. 10, article 1060101, 2012.
  37. A. Gopalakrishnan, A. Almazroa, K. Raahemifar, and V. Lakshminarayanan, “Optic disc segmentation using circular Hough transform and curve fitting,” in 2015 2nd International Conference on Opto-Electronics and Applied Optics (IEM OPTRONIX), vol. 1, 2015.
  38. M. C. V. S. Mary, E. B. Rajsingh, J. K. K. Jacob, D. Anandhi, U. Amato, and S. E. Selvan, “An empirical study on optic disc segmentation using an active contour model,” Biomedical Signal Processing and Control, vol. 18, pp. 19–29, 2015.
  39. A. Issac, M. Partha Sarathi, and M. K. Dutta, “An adaptive threshold based image processing technique for improved glaucoma detection and classification,” Computer Methods and Programs in Biomedicine, vol. 122, no. 2, pp. 229–244, 2015.
  40. A. Issac, M. Parthasarthi, and M. K. Dutta, “An adaptive threshold based algorithm for optic disc and cup segmentation in fundus images,” in 2015 2nd International Conference on Signal Processing and Integrated Networks (SPIN), pp. 143–147, Noida, India, 2015.
  41. A. A. Salam, M. U. Akram, K. Wazir, S. M. Anwar, and M. Majid, “Autonomous glaucoma detection from fundus image using cup to disc ratio and hybrid features,” in 2015 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), pp. 370–374, Abu Dhabi, United Arab Emirates, 2015.
  42. Q. Abbas, I. Fondón, S. Jiménez, and P. Alemany, Automatic Detection of Optic Disc from Retinal Fundus Images Using Dynamic Programming, vol. 7325 of Lecture Notes in Computer Science, Part 2, Springer, Berlin, Heidelberg, 2012.
  43. S. Morales, V. Naranjo, J. Angulo, and M. Alcaniz, “Automatic detection of optic disc based on PCA and mathematical morphology,” IEEE Transactions on Medical Imaging, vol. 32, no. 4, pp. 786–796, 2013.
  44. A. Chakravarty and J. Sivaswamy, “Glaucoma classification with a fusion of segmentation and image-based features,” in 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), pp. 689–692, Prague, Czech Republic, 2016.
  45. M. Emre Celebi and S. Bogdan, Advances in Low-Level Color Image Processing, vol. 11 of Lecture Notes in Computational Vision and Biomechanics, Springer, Dordrecht, Netherlands, 2014.
  46. E. A. Krupinski, “Current perspectives in medical image perception,” Attention, Perception & Psychophysics, vol. 72, no. 5, pp. 1205–1217, 2010.
  47. I. Fondon, M. J. J. P. Van Grinsven, C. I. Sanchez, and A. Saez, “Perceptually adapted method for optic disc detection on retinal fundus images,” in Proceedings of the 26th IEEE International Symposium on Computer-Based Medical Systems, pp. 279–284, Porto, Portugal, 2013.
  48. M. E. A. Bechar, N. Settouti, V. Barra, and M. A. Chikh, “Semi-supervised superpixel classification for medical images segmentation: application to detection of glaucoma disease,” Multidimensional Systems and Signal Processing, pp. 1–20, 2017.
  49. J. Zilly, J. M. Buhmann, and D. Mahapatra, “Glaucoma detection using entropy sampling and ensemble learning for automatic optic cup and disc segmentation,” Computerized Medical Imaging and Graphics, vol. 55, pp. 28–41, 2017.
  50. I. Fondon, J. F. Valverde, A. Sarmiento, Q. Abbas, S. Jimenez, and P. Alemany, “Automatic optic cup segmentation algorithm for retinal fundus images based on random forest classifier,” in IEEE EUROCON 2015 - International Conference on Computer as a Tool (EUROCON), Salamanca, Spain, 2015.
  51. A. Diaz, S. Morales, V. Naranjo, P. Alcocer, and A. Lanzagorta, “Glaucoma diagnosis by means of optic cup feature analysis in color fundus images,” in 2016 24th European Signal Processing Conference (EUSIPCO), pp. 2055–2059, Budapest, Hungary, 2016.
  52. V. Naranjo, C. J. Saez, S. Morales, K. Engan, and G. Soledad, “Optic cup characterization through sparse representation and dictionary learning,” in 2016 24th European Signal Processing Conference (EUSIPCO), pp. 1688–1692, Budapest, Hungary, 2016.
  53. J. Ayub, J. Ahmad, J. Muhammad et al., “Glaucoma detection through optic disc and cup segmentation using K-mean clustering,” in 2016 International Conference on Computing, Electronic and Electrical Engineering (ICE Cube), pp. 143–147, Quetta, Pakistan, 2016.
  54. N. M. Tan, Y. Xu, W. B. Goh, and J. Liu, “Robust multi-scale superpixel classification for optic cup localization,” Computerized Medical Imaging and Graphics, vol. 40, pp. 182–193, 2015.
  55. E. Decencière, X. Zhang, G. Cazuguel et al., “Feedback on a publicly distributed image database: the Messidor database,” Image Analysis & Stereology, vol. 33, no. 3, p. 231, 2014.
  56. J. Staal, M. D. Abramoff, M. Niemeijer, M. A. Viergever, and B. van Ginneken, “Ridge-based vessel segmentation in color images of the retina,” IEEE Transactions on Medical Imaging, vol. 23, no. 4, pp. 501–509, 2004.
  57. J. Odstrčilík, J. Jan, J. Gazárek, and R. Kolář, “Improvement of vessel segmentation by matched filtering in colour retinal images,” in World Congress on Medical Physics and Biomedical Engineering, Munich, Germany, 2009.
  58. A. Hoover and M. Goldbaum, “Locating the optic nerve in a retinal image using the fuzzy convergence of the blood vessels,” IEEE Transactions on Medical Imaging, vol. 22, no. 8, pp. 951–958, 2003.
  59. L. Giancardo, F. Meriaudeau, T. P. Karnowski et al., “Exudate-based diabetic macular edema detection in fundus images using publicly available datasets,” Medical Image Analysis, vol. 16, no. 1, pp. 216–226, 2012.
  60. T. Kauppi, V. Kalesnykiene, J.-K. Kamarainen et al., DIARETDB0: Evaluation Database and Methodology for Diabetic Retinopathy Algorithms, 2006.
  61. P. Bankhead, C. N. Scholfield, J. G. McGeown, and T. M. Curtis, “Fast retinal vessel detection and measurement using wavelets and edge location refinement,” PLoS One, vol. 7, no. 3, article e32435, 2012.
  62. S.-Y. Zhu, K. N. Plataniotis, and A. N. Venetsanopoulos, “Comprehensive analysis of edge detection in color image processing,” Optical Engineering, vol. 38, pp. 612–625, 1999.
  63. C. Serrano, A. Sáez, B. Acha, and C. S. Mendoza, “Development and evaluation of perceptually adapted colour gradients,” IET Image Processing, vol. 7, no. 4, pp. 355–363, 2013.
  64. D. Bales, P. A. Tarazaga, M. Kasarda et al., “Gender classification of walkers via underfloor accelerometer measurements,” IEEE Internet of Things Journal, vol. 3, no. 6, pp. 1259–1266, 2016.
  65. P.-N. Tan, M. Steinbach, and V. Kumar, Introduction to Data Mining, Pearson Addison Wesley, 2005.
  66. Y. Freund and R. E. Schapire, “A decision-theoretic generalization of on-line learning and an application to boosting,” Journal of Computer and System Sciences, vol. 55, no. 1, pp. 119–139, 1997.
  67. A. Thrift, F. Kanwal, and H. El-Serag, “Prediction models for gastrointestinal and liver diseases: too many developed, too few validated,” Clinics in Gastroenterology, vol. 14, no. 12, pp. 1678–1680, 2016.
  68. T. F. Chan and L. A. Vese, “Active contours without edges,” IEEE Transactions on Image Processing, vol. 10, no. 2, pp. 266–277, 2001.