International Journal of Biomedical Imaging
Volume 2016 (2016), Article ID 7468953, 9 pages
Research Article

Automated Fovea Detection in Spectral Domain Optical Coherence Tomography Scans of Exudative Macular Disease

1Christian Doppler Laboratory for Ophthalmic Image Analysis (OPTIMA), Vienna Reading Center, Department of Ophthalmology, Medical University of Vienna, Vienna, Austria
2Christian Doppler Laboratory for Ophthalmic Image Analysis (OPTIMA), Computational Imaging Research Lab, Department of Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Vienna, Austria

Received 10 May 2016; Revised 29 July 2016; Accepted 2 August 2016

Academic Editor: Chunhui Li

Copyright © 2016 Jing Wu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


In macular spectral domain optical coherence tomography (SD-OCT) volumes, detection of the foveal center is required for accurate and reproducible follow-up studies, structure-function correlation, and measurement grid positioning. However, disease can severely obscure or deform the fovea, presenting a major challenge to automated detection. We propose a fully automated fovea detection algorithm to extract the fovea position in SD-OCT volumes of eyes with exudative maculopathy. The fovea is classified into three main appearances, both to specify the detection algorithm used and to reduce computational complexity. Based on this foveal type classification, the fovea position is computed from retinal nerve fiber layer thickness. The mean absolute distance between system and clinical expert annotated fovea positions in a dataset of 240 SD-OCT volumes was 162.3 µm in cystoid macular edema and 262 µm in nAMD. The presented method offers cross-vendor functionality while demonstrating accurate and reliable performance close to typical expert interobserver agreement. The automatically detected fovea positions may be used as landmarks for intra- and cross-patient registration and to create a joint reference frame for extraction of spatiotemporal features in “big data.” Furthermore, reliable analyses of retinal thickness, as well as retinal structure-function correlation, may be facilitated.

1. Introduction

Spectral domain optical coherence tomography (SD-OCT) is a noninvasive modality for acquiring high resolution, 3D cross-sectional images of the retina and is today the most important ancillary test for the diagnosis and management of macular diseases [1]. Usually, serial OCT acquisitions are compared over time to determine disease progression and/or treatment response. However, as a result of acquisitions at multiple time-points, motion- or imaging-related registration errors commonly occur; that is, scans of the same eye at different time-points may be aligned incorrectly. Such registration artefacts may have a severe effect on the ability to perform accurate and reproducible analysis of subtle changes over time [2–5]. This problem can be overcome by computationally aligning sequential OCT scans using automated registration [2] or by labour-intensive manual alignment of the OCT scans or measurement grids.

In both cases, the fundamental requirement for registration of OCT volumes is the use of adequate anatomical landmarks. Further to the retinal vasculature, which has been used previously to register OCT volumes [2], the fovea centralis is a particularly important registration landmark. For example, correct identification of the foveal position is key for automated retinal thickness assessment using fovea-centered measurement grids such as the early treatment diabetic retinopathy study (ETDRS) grid and rotational alignment of the measurement grid in circumpapillary retinal nerve fiber layer measurements for glaucoma and multiple sclerosis. Due to its relevance for visual acuity, knowledge of the exact foveal position is also critical for studies of structure–function correlation [6]. However, given the immensely large scale of imaging data both in modern clinical practice and research, fully automated analysis methods are required for efficient OCT analysis.

To our knowledge, computational detection of the fovea in SD-OCT has been limited to healthy or dry-AMD cases [7–13]. Thus, the major challenge of this work is to develop a detection method for the fovea in SD-OCT scans of exudative macular disease, where the retina has been heavily deformed by the presence of fluid, severely altering the retinal anatomy.

In this article, we present a fully automated fovea detection method that is capable of accurately and reproducibly identifying the position of the fovea in cross-vendor longitudinal OCT scans of both normal and patients suffering from exudative macular disease, that is, cystoid macular edema secondary to retinal vein occlusion (RVO) and neovascular age-related macular degeneration (nAMD). In our method, we consider specific disease morphology to account for the differences between disease types. Our goal is to demonstrate the accuracy of the presented fully automated detection system and thus the systems’ feasibility for detection of the foveal position in “big data.”

2. Methods

2.1. Imaging Data

For this study, 704 clinical SD-OCT imaging datasets from the Vienna Reading Center’s (VRC) image database were used. 494 scans were selected from multicenter trials evaluating ranibizumab for central or branch RVO (ClinicalTrials.gov identifiers NCT01535261 and NCT01599650), and 210 scans from a multicenter trial evaluating ranibizumab for nAMD (ClinicalTrials.gov identifier NCT01780935). The study was conducted in compliance with the tenets set forth in the Declaration of Helsinki. The randomized clinical trials from which the scans were obtained were approved by the institutional review board of each participating center. All patients gave written consent for participation in the respective trial and all data was appropriately anonymized.

For method development, distinct training and testing image datasets were constructed. The training set, used to optimize the detection algorithms, consisted of 180 scans divided into three equally sized groups representing branch RVO, central RVO, and nAMD, randomly selected from the baseline time-point where disease is most prevalent. The unseen testing set, used to validate the final algorithms, comprised 240 scans, 80 per disease group (all distinct from the training set), randomly selected to be inclusive of various acquisition time-points: branch RVO (42 Heidelberg Spectralis, 33 Zeiss Cirrus, 5 Topcon), central RVO (53 Heidelberg Spectralis, 23 Zeiss Cirrus, 4 Topcon), and neovascular AMD (48 Heidelberg Spectralis, 32 Zeiss Cirrus). In both the training and testing datasets, each scan was acquired from a distinct patient.

For all 420 scans, to provide an objective and standardized ground truth, the position of the fovea was manually annotated by expertly trained graders from the VRC using validated custom software [14]. The diagnostic criteria for the foveal center included (1) minimization of the retinal nerve fiber layer (RNFL) thickness; (2) presence of a foveal depression; (3) focal elongation of the photoreceptor outer segment signal, as described previously [15].

Inclusion criteria for dataset construction included acquisition from multiple devices (Zeiss Cirrus, Heidelberg Spectralis, and Topcon 3D OCT 2000) as well as baseline and nonbaseline time-points. Each OCT volume has average physical dimensions of 6 × 6 × 2 mm3, with slice thickness ranging from 11.72 μm to 122.2 μm. Dependent on device, this physical dimension may equate to 200 × 200 × 1024 (Zeiss Cirrus), 256 × 256 × 885 (Topcon 3D OCT 2000), 512 × 128 × 885 (Topcon 3D OCT 2000), 512 × 128 × 1024 (Zeiss Cirrus), or 512 × 49 × 496 (Heidelberg Spectralis) pixels.

2.2. Fovea Detection Preprocessing

The flow diagram in Figure 1 illustrates the three major stages comprising the automated fovea detection algorithm presented here, that is, image preprocessing, fovea type distinction, and fovea type based fovea position detection, which are discussed in detail below.

Figure 1: Flow diagram showing the major stages of the fovea detection algorithm. The three major components are comprised of image preprocessing, fovea type distinction, and fovea type based fovea position detection.

The preprocessing stage is shown in Figure 2. Firstly, motion correction is performed on the entire image volume using the local-symmetry estimation method described in [16] to remove motion artefacts caused by microsaccadic eye movement; the left image in Figure 2 shows the resulting flattened retinal orientation. Secondly, tilt correction, also described in [16], is performed in the B-scan plane, reducing the horizontal tilt of the retina. The result is a motion-corrected volume.

Figure 2: Flow diagram showing overview of fovea detection preprocessing stage. From left to right, the stages, motion correction, Kernel graph-cut segmentation, fovea region masking, and finally ILM delineation, are illustrated. The delineated ILM surface in the far right image is illustrated as a white curve.

Thirdly, denoising is performed on the motion-corrected volume to reduce speckle noise using the block matching collaborative filtering approach described in [17], yielding a denoised volume.

On the denoised volume, a kernel graph-cut segmentation algorithm [18] is applied to delineate the inner limiting membrane (ILM), as well as intraretinal cystoid fluid (IRF) regions, resulting in a segmented volume. Finally, computational complexity is reduced by masking. A 2D elliptical mask, sized proportionally to the X and Y image dimensions, is constructed in the XY plane centered at a statistically generalised mean fovea position. This center point was computed by averaging the manually annotated fovea positions of the training dataset, annotated by expert graders at the VRC, and was found to lie within 80 μm (X) and 140 μm (Y) of the scan center, based on their relative distances from the respective scan centers. As a result, the scan center is used as the mask center, and all image information outside the mask area is removed from the plane and propagated into 3D, resulting in a cylindrical region containing the fovea. This cylindrical region is exemplified in the third image of Figure 2, showing the masked B-scan.
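The masking step can be sketched as follows. This is an illustrative Python/NumPy sketch, not the authors' implementation; the 1500 μm semi-axes and the 30 μm pixel spacing passed in the example are assumed placeholder values (the paper derives the mask geometry from the training data).

```python
import numpy as np

def cylindrical_fovea_mask(n_x, n_y, n_z, spacing_x_um, spacing_y_um,
                           semi_axis_x_um=1500.0, semi_axis_y_um=1500.0):
    """Boolean mask keeping only a cylindrical region around the scan
    center, where the fovea is expected to lie.

    The semi-axis defaults are illustrative, not the paper's exact
    mask dimensions.
    """
    cx, cy = (n_x - 1) / 2.0, (n_y - 1) / 2.0      # scan center in the XY plane
    x = (np.arange(n_x) - cx) * spacing_x_um        # en-face offsets in microns
    y = (np.arange(n_y) - cy) * spacing_y_um
    xx, yy = np.meshgrid(x, y, indexing="ij")
    ellipse = (xx / semi_axis_x_um) ** 2 + (yy / semi_axis_y_um) ** 2 <= 1.0
    # Propagate the 2D ellipse along z to obtain a cylindrical ROI.
    return np.repeat(ellipse[:, :, None], n_z, axis=2)

# Example for a Cirrus-like 200 x 200 x 1024 grid with 30 um en-face spacing.
mask = cylindrical_fovea_mask(200, 200, 1024, 30.0, 30.0)
```

Applying `volume[~mask] = 0` would then blank everything outside the cylindrical foveal region before further processing.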

After this preprocessing stage, ILM delineation is performed again on the masked volume to extract the cropped ILM surface for fovea appearance analysis.

2.3. Appearance Based Fovea Detection
2.3.1. Fovea Appearance Classification

Given the distinct appearance of the fovea between healthy and diseased eyes, as well as disease-specific variability, different methods for fovea position detection are required. In exudative maculopathy, pathologies that affect the fovea may be categorised into intraretinal cystoid fluid (IRF) resulting in foveal edema, IRF resulting in asymmetrical foveal edema, subretinal fluid (SRF), and pigment epithelial detachment (PED). Thus, to further simplify the detection problem, we can generalise the fovea into a small finite number of types based on its appearance in normal and pathological cases. Three main appearances of the fovea caused by the above described pathologic lesions have been reported previously [19] (Figure 3). In the first case (Figure 3(a)), the fovea appears as a depression, which we denote as a normal foveal depression (NFD). This is the appearance of the fovea when little to no disease is present. Cases with more severe pathology may be categorised into two major appearances, further exemplified in [20], where the fovea has been deformed by pathology such as IRF.

Figure 3: Exemplar SD-OCT B-scan fovea appearances (fovea region outlined in red) (a) normal, (b) minor depression, and (c) absent depression. (d–f) Respective fovea appearances from region outlined in red (a–c) visualized as a 3D mesh.

In minor foveal depression (MFD, Figure 3(b)), the fovea appears as a depression smaller than in a NFD case that has been raised in the axial direction by macular edema. Finally, in absent foveal depression (AFD, Figure 3(c)), the fovea is not visible as a depression; instead, the ILM appears as a parabola, again elevated in the axial dimension by retinal edema below.

Automated Distinction of Foveal Appearance Types. Automated computational diagnosis of the NFD type examines RNFL thickness, following the clinical description of the fovea in normal cases as the point of minimum RNFL thickness. This method has been previously described in detail in [19].

Distinction of the MFD type focuses on the unique morphology of this fovea appearance, a minor depression that has been elevated along the z-axis primarily by IRF regions (Figure 3(b)). From the ILM segmentation computed in the preprocessing stage (Figure 3(e)), maxima examination of this surface section is performed on a B-scan-by-B-scan basis. For the scan to feature a MFD, the masked surface must be comprised of 2D surface segments, treated as curve functions, featuring a global maximum and a further local maximum representing the two peaks seen in Figure 3(b). A peak is defined as a data point in the curve that is larger than its two neighbouring data points. Confirmation of maxima presence is performed by extension into the third dimension, where the required maxima configuration must be present across a contiguous distance of 150 μm, given a mean fovea centralis diameter of 1.5 mm. A physical distance is used rather than a number of B-scans due to cross-vendor variable slice thickness. This is further exemplified in Figure 3(e) as a mesh representation of an exemplar MFD showing the two computed peaks.
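The two-peak criterion with the 150 μm contiguity check might be sketched as below. The function names and the synthetic height curves are illustrative; in practice the per-B-scan curves would come from the ILM segmentation described above.

```python
import numpy as np

def curve_peaks(curve):
    """Indices of strict local maxima: points larger than both neighbours."""
    c = np.asarray(curve, dtype=float)
    return [i for i in range(1, len(c) - 1)
            if c[i] > c[i - 1] and c[i] > c[i + 1]]

def bscan_has_mfd(ilm_heights):
    """A B-scan suggests a minor foveal depression (MFD) when its ILM
    curve has at least two peaks (a global maximum plus a further local
    maximum flanking the residual depression)."""
    return len(curve_peaks(ilm_heights)) >= 2

def volume_has_mfd(ilm_surface, bscan_spacing_um, min_extent_um=150.0):
    """Confirm the two-peak configuration over a contiguous run of
    B-scans spanning at least `min_extent_um` (cross-vendor spacing is
    handled by working in physical units)."""
    flags = [bscan_has_mfd(row) for row in ilm_surface]
    needed = int(np.ceil(min_extent_um / bscan_spacing_um))
    run = 0
    for f in flags:
        run = run + 1 if f else 0
        if run >= needed:
            return True
    return False
```

A single-peak parabolic curve (the AFD appearance) fails `bscan_has_mfd`, which is what separates the two pathological classes in this sketch.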

Distinction of the AFD is based on the identification of a global maximum, across a contiguous 150 μm distance, similar in appearance to a conical shape, as seen in Figure 3(f).

2.3.2. Fovea Position Detection

From the fovea appearance classification stage, a given retinal SD-OCT volume is classified as featuring one of the three fovea appearances described in Section 2.3.1. In the fovea position detection stage, appearance specific detection functions have been developed to compute the fovea position, as described in this section.

NFD Fovea Detection. In regard to anatomical features, we know that the fovea is the point at which the RNFL thickness is zero. Thus, in the NFD case, we can delineate and extract the two required surfaces (ILM and RNFL) using the graph-cut based retinal surface segmentation algorithm described in [21]. In the NFD detection method, we are only interested in zero thickness in the fovea region, which was masked during preprocessing. NFD fovea position detection is illustrated in Figure 4.

Figure 4: (a) Exemplar ILM (red) and RNFL (yellow) surface segmentations with zero thickness region at the red arrow. (b) RNFL thickness map showing the dark blue zero thickness region in the center. Foveal masking excludes zero thickness regions in the temporal retina.

In the event that multiple zero thickness points are identified, the center of mass is computed and taken as the fovea position. This method is universally applicable to NFD scans.
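The zero-thickness center-of-mass rule can be written as a minimal sketch, assuming the ILM and the lower RNFL boundary are available as en-face depth maps; the function name and array layout are hypothetical, not the paper's implementation.

```python
import numpy as np

def nfd_fovea_position(ilm_depth, rnfl_depth, mask):
    """Locate the fovea as the center of mass of zero RNFL-thickness
    points inside the masked foveal region.

    `ilm_depth` and `rnfl_depth` are en-face (x, y) maps of the axial
    position of the ILM and of the lower RNFL boundary; their
    difference is the RNFL thickness.
    """
    thickness = rnfl_depth - ilm_depth
    zero = (thickness <= 0) & mask            # zero-thickness points in the ROI
    xs, ys = np.nonzero(zero)
    if xs.size == 0:
        return None                           # no NFD-style zero-thickness region
    return float(xs.mean()), float(ys.mean())  # center of mass
```

When only a single zero-thickness point exists, the center of mass reduces to that point, so the same function covers both cases.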

MFD Fovea Detection. The minimum thickness method for computing fovea position described in the NFD case can no longer be relied upon in the MFD case, as this foveal appearance features a retina that has been deformed by pathology such as IRF. As a result, accurate RNFL segmentation is no longer reliable, specifically in the region of interest around the fovea where retinal edema is most prevalent. Thus, to locate the position where the RNFL is thinnest, we compute the distance between the ILM surface and the IRF causing the disruption in the retinal anatomy, as illustrated in Figure 5. Analysis of the manual fovea positions in the training dataset showed that in all cases where the normal foveal depression was elevated by the presence of IRF, creating the MFD, the B-scan on which the fovea annotation was performed corresponded to the B-scan where the distance between the ILM and the IRF boundary was thinnest.

Figure 5: Exemplar B-scan showing MFD (a). Foveal region outlined in red and magnified to show the minimum distance between the ILM surface and IRF (b), used as an indicating feature of fovea position.

However, due to the appearance and morphology of the deformed pathological fovea region, accurately computing the distance may be challenging. Thus, an additional preprocessing step is required, that is, ILM delineation of in pathological scans.

Because the graph-search based retinal surface segmentation algorithm described in [21] performs less adequately in pathological scans than in normal scans, a proprietary ILM segmentation algorithm was developed based on the kernel graph-cut method described in [18]. In this multiregion approach, image partitioning is achieved using kernel mapping of the SD-OCT B-scan. Each B-scan is transformed implicitly by a kernel function in order to apply the graph-cut formulation to the problem. In this case, we specified the number of regions to segment and applied a relaxed smoothing constraint to ensure that the ILM surface boundary is delineated as accurately as possible, as opposed to the smoothed surface from [21]. This ensures that the ILM surface used for RNFL thickness measurement in pathological cases is accurate. The resulting set of surface points describes the ILM surface for the subsequent distance computation.

Given the ILM delineated from the pathological scan, and with the IRF below the fovea labelled as the lowest-intensity region within the retina, the distance between the ILM and the IRF is computed to calculate the fovea position.

A vector of candidate points is computed from the boundary of the segmented IRF region. The minimum distance between the ILM (upper surface, Figure 6) and the IRF boundary points (lower surface, Figure 6) is then obtained by pairwise Euclidean distance computation (1) between the two point sets (arrows, Figure 6), yielding a vector of distances between the ILM surface points and the IRF boundary points.

Figure 6: (a) Graphical representation of minimum distance computation between ILM surface (upper) and IRF boundary (lower) in MFD. (b) Exemplar minimum distance computation between ILM and IRF in AFD.

The resulting pairwise distances are sorted in ascending order (ignoring the anatomically impossible zero thickness), choosing the shortest distance (thickest arrows, Figure 6) and the corresponding ILM point. However, the volumetric characteristic is also important to consider, as the retinal SD-OCT scan is a volume. Thus, similar to the appearance classification phase, the fovea detection algorithm is performed on every B-scan in the masked fovea region. In the event that multiple B-scans have identical minimum distances, the center of mass is computed as the en-face fovea position, with the axial position obtained from the segmented ILM.
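The pairwise distance computation (1) with the exclusion of zero distances can be sketched as below; the function is illustrative, and points are assumed to be (x, z) index pairs with per-axis micron scales supplied via `spacing`.

```python
import numpy as np

def ilm_irf_min_distance(ilm_points, irf_points, spacing=(1.0, 1.0)):
    """Pairwise Euclidean distances between ILM surface points and IRF
    boundary points within one B-scan; returns the smallest nonzero
    distance and the corresponding ILM point (a zero distance would be
    anatomically impossible and is ignored).
    """
    p = np.asarray(ilm_points, dtype=float) * spacing
    q = np.asarray(irf_points, dtype=float) * spacing
    # d[i, j] = ||p_i - q_j||_2 for every ILM/IRF point pair.
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=2)
    d[d == 0] = np.inf                      # discard anatomically impossible zeros
    i, j = np.unravel_index(np.argmin(d), d.shape)
    return d[i, j], tuple(ilm_points[i])
```

Running this per B-scan and taking the B-scan(s) with the overall smallest distance mirrors the volumetric search described above.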

AFD Fovea Detection. The major characteristic of the AFD is the parabolic appearance of the ILM, again due to deformation by IRF, and, as such, the same method used to compute the fovea position in MFD cases is applied. This is illustrated in Figure 6(a). Furthermore, as in the MFD case, a similar correlation was found between the manually annotated fovea positions in AFD cases within the test dataset and the distance between the ILM and the IRF boundary.

2.4. Statistical Analysis

The performance of the developed algorithms was evaluated on the unseen validation dataset. For categorical variables (i.e., fovea appearance type), accuracy was descriptively analysed using confusion matrices and the area under the receiver operating characteristics (ROC) curve, and the agreement between automated and manual diagnosis was evaluated using Cohen’s kappa coefficient. For continuous variables (i.e., fovea position), the distance between manual and automated fovea positions was described as mean with 95% confidence intervals, and the correspondence between manual and automated fovea positions was characterized using Pearson’s correlation coefficient. The formal significance level was prespecified.
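The two continuous-variable summaries used here reduce to standard formulas; the following plain-Python sketch uses a normal-approximation 95% confidence interval rather than any particular statistics package, which is an assumption on our part.

```python
import math

def mean_ci95(values):
    """Mean with a normal-approximation 95% confidence interval."""
    n = len(values)
    m = sum(values) / n
    sd = math.sqrt(sum((v - m) ** 2 for v in values) / (n - 1))
    half = 1.96 * sd / math.sqrt(n)         # 1.96 ~ two-sided 95% normal quantile
    return m, (m - half, m + half)

def pearson_r(a, b):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)
```

With paired automated/manual coordinate lists, `mean_ci95` of the per-scan distances and `pearson_r` of the coordinates reproduce the quantities reported in Section 3.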

3. Results

Implementation of the proposed method was carried out using MATLAB (Version R2012b, The Mathworks Inc.) on an Intel Core i7, 3.5 GHz, with 32 GB RAM.

Of the testing dataset described in Section 2.1, 4 central RVO (cRVO) and 6 branch RVO (bRVO) scans were excluded due to poor signal and image quality at acquisition. Thus, the validation dataset comprised 230 SD-OCT scans (76 cRVO, 74 bRVO, and 80 nAMD).

3.1. Fovea Appearance Classification

Table 1 presents the fovea appearance classification validation, comparing system results with ground truth fovea appearances labelled as NFD, MFD, or AFD. Automated fovea appearance classification resulted in 84%, 89%, and 88% correct appearance distinction for bRVO, cRVO, and nAMD, respectively, based on comparison with expert annotation. The area under the ROC curve (AUROC) was computed as 0.956, 0.949, and 0.938 for bRVO, cRVO, and nAMD, respectively. Agreement between grader and automated appearance classification was additionally assessed using Cohen’s kappa coefficient for bRVO, cRVO, and nAMD.

Table 1: Result of fovea appearance validation, comparing system results with manual expert ground truth fovea appearance classification. The values in bold are scans where the automated fovea appearance classification has identified the fovea appearance correctly based on manual expert ground truth comparison.
3.2. Fovea Position Detection

Validation of fovea position detection is presented in Table 2 for bRVO, cRVO, and nAMD, showing the distances between system results and expert-annotated manual fovea positions in each en-face coordinate and as absolute distance, both overall and per device. For bRVO, the two en-face coordinates of the fovea position showed mean ± standard deviation (SD) differences of 92.62 ± 16.48 μm (95% confidence interval (CI), 88.87 to 96.37) and 129.1 ± 10.64 μm (95% CI, 126.7 to 131.5). For cRVO, the corresponding differences were 130.6 ± 61.4 μm (95% CI, 116.8 to 144.4) and 125.3 ± 31.45 μm (95% CI, 118.2 to 132.4). For nAMD, they were 160.1 ± 50.09 μm (95% CI, 149.1 to 171.1) and 146.4 ± 43.66 μm (95% CI, 136.8 to 155.9).

Table 2: Results of fovea position validation against expert grader ground truth positions, reporting each en-face coordinate and the absolute distance. Examined are overall mean distance and device-specific distances. The lowest distances are highlighted in bold.

Correlation between automated and manual fovea positions in each en-face coordinate was tested using the Pearson correlation coefficient for bRVO, cRVO, and nAMD.

4. Discussion

In this article, we present a fully automated system for classification of the foveal shape and detection of the foveal position in SD-OCT volume scans with exudative maculopathy. Validation against manual ground truth provided by an experienced reading center demonstrated excellent accuracy of the automated system. Furthermore, our method showed applicability across nAMD and RVO diseases as well as across several prevalent SD-OCT devices.

Examination of the results of the first major contribution of this work, fovea appearance classification, shows a correct classification of 84%, 89%, and 88% for bRVO, cRVO, and nAMD cases, respectively, against manually annotated fovea positions. This is further corroborated by receiver operating characteristic analysis, where the AUROC was 0.956, 0.949, and 0.938 for bRVO, cRVO, and nAMD, respectively. Thus, our method shows a high degree of accuracy for fovea appearance classification when validated using a dataset comprised of variable anatomical fovea appearances (bRVO: NFD = 19, MFD = 4, AFD = 29; cRVO: NFD = 17, MFD = 14, AFD = 39; nAMD: NFD = 53, MFD = 10, AFD = 7 scans).

Analysis of failure cases for bRVO and cRVO scans attributes incorrect appearance classification to incorrect delineation of the ILM surface as a result of image/signal quality. As can be seen in Table 1, the largest proportion of error cases was MFD classified as AFD in both RVO types. Examination of these cases shows poor signal and image quality in the B-scan plane within the retina as a result of acquisition/scanning artefacts, and the inability of either the layer segmentation described in [21] or our proprietary ILM segmentation to delineate an accurate ILM surface. As a result, the delineated ILM surface is not representative of the actual surface appearance, causing the incorrect classification of MFD as AFD. In nAMD cases, however, the majority of failures occurred when the system identified MFD as NFD. This is attributed to zero-thickness computation by the system where a human grader has identified an elevation of the foveal depression as dictated by the MFD appearance guidelines.

In the second major contribution of this work, examination of the results of fovea position detection indicates a low absolute distance between the automatically computed fovea positions and manual ground truth for bRVO, cRVO, and nAMD, as well as on a per device basis. Overall, the most accurate automated fovea position detection was seen in bRVO, followed by cRVO and then nAMD. Examination on a per device basis shows the fovea to be detected most accurately in Spectralis images for bRVO and in Cirrus images for both cRVO and nAMD. Based on image quality, it would be expected that detecting the fovea within Spectralis images results in the highest accuracy; however, fovea appearance classification and position detection also rely heavily on the delineation of pathology, in this case IRF. As a result, the quality of the imaged pathology, which varies from acquisition to acquisition and is affected by other factors such as imaging artefacts and patient motion, also plays a role in the accuracy of the resulting detected fovea position. Comparisons of the automatically detected fovea positions against their manually annotated counterparts show high correlation (>0.9) for both en-face coordinates in all disease types, except for one coordinate in nAMD. This poorer correlation is likely due to the more variable disease features of AMD in comparison to RVO, resulting in a more variable fovea position as detected by the automated method. Nevertheless, relative distances between the automatically detected fovea positions and their manually annotated ground truth counterparts remain low.

The evaluation of automatically detected fovea positions uses manually annotated positions as reference; thus the accuracy of the ground truth must also be taken into consideration when interpreting the automated results. Not only is this task time consuming for human graders to perform precisely, particularly in a “big data” setting, but the criteria used to determine the position are also affected by human subjectivity. Previous studies revealed a mean interobserver variability of 71.63 μm for the foveal position, but also a mean distance between the true and device center point of 290.9 μm [15]. Thus, the agreement between human experts is higher than that of our automated system; however, our system accuracy, at a mean of 195.5 μm, is greater than that obtained by the devices specified in [15], illustrating the applicability of our method in comparison to other center point detection algorithms and as a more practical alternative to manual center point plotting in a “big data” environment. Furthermore, B-scan spacing must also be considered, as the number of B-scans varies between Spectralis, Cirrus, and Topcon by a factor of up to 1:5. As a result, the computed distance between system and ground truth will be affected. For example, a fovea position misaligned by a single B-scan in Cirrus (200 B-scans) would result in an error of ~30 μm, whereas the same misalignment in Spectralis (49 B-scans) would result in an error of ~125 μm. Such disparity affects human grader ground truth and, by extension, system result validation; given that Heidelberg Spectralis scans form the majority of the test dataset used here (~60%, 143 of 240), this explains the higher distance error reported by the presented system.
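The spacing arithmetic above can be reproduced directly. In this sketch the B-scan spacing is taken as scan width divided by the number of inter-B-scan intervals (an assumption about how spacing is defined over a 6 mm scan).

```python
def bscan_misalignment_um(scan_width_mm, n_bscans, n_offset=1):
    """Positional error (in microns) caused by being off by `n_offset`
    B-scans, assuming `n_bscans` evenly spaced B-scans span the scan
    width (n_bscans - 1 intervals)."""
    spacing_um = scan_width_mm * 1000.0 / (n_bscans - 1)
    return n_offset * spacing_um

cirrus = bscan_misalignment_um(6.0, 200)      # ~30 um per B-scan
spectralis = bscan_misalignment_um(6.0, 49)   # 125 um per B-scan
```

A one-B-scan error is thus roughly four times larger in a 49-B-scan Spectralis raster than in a 200-B-scan Cirrus cube, matching the ~30 μm versus ~125 μm figures quoted above.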

To the best of our knowledge, this is the first fully automated method to locate the position of the fovea directly within retinal SD-OCT scans that are highly pathological due to the presence of cystoid macular edema secondary to central and branch RVO, or neovascular AMD; previous methods addressed only the healthy retina [7] or operated on fundus photographs [8–13]. Future work will concentrate on extension of the IRF segmentation to incorporate a machine learning approach that distinguishes IRF from SRF, allowing for targeted IRF delineation as a feature for computing the fovea position. In addition, the relationship between the fovea region and the thickness of other retinal layers may be explored and used as further anatomical features for detection.

In summary, we have presented a fully automated approach to detect the fovea within healthy and diseased SD-OCT scans of the macula. This enables the use of the fovea as a key landmark in the construction of a population reference frame to identify and extract key spatiotemporal features from a large patient population comprised of different time-points, devices, and imaging modalities. Furthermore, being the functional center of vision, the fovea is crucial for performing analyses of retinal structure/function correlation [22, 23].

Competing Interests

The authors declare that they have no competing interests.


  1. W. Geitzenauer, C. K. Hitzenberger, and U. M. Schmidt-Erfurth, “Retinal optical coherence tomography: past, present and future perspectives,” British Journal of Ophthalmology, vol. 95, no. 2, pp. 171–177, 2011.
  2. J. Wu, B. Gerendas, S. Waldstein, G. Langs, C. Simader, and U. Schmidt-Erfurth, “Stable registration of pathological 3D SD-OCT scans using retinal vessels,” in Proceedings of the Ophthalmic Medical Image Analysis First International Workshop (OMIA '14), Held in Conjunction with MICCAI 2014, Boston, Mass, USA, 2014.
  3. R. Kolar and P. Tasevsky, “Registration of 3D retinal optical coherence tomography data and 2D fundus images,” in Biomedical Image Registration, pp. 72–82, Springer, Berlin, Germany, 2010.
  4. M. Golabbakhsh and H. Rabbani, “Vessel-based registration of fundus and optical coherence tomography projection images of retina using a quadratic registration model,” IET Image Processing, vol. 7, no. 8, pp. 768–776, 2013.
  5. Y. Li and G. Gregori, “Registration error analysis of the ridge-based retinal image registration algorithm for OCT fundus images and color fundus photographs,” Investigative Ophthalmology & Visual Science, vol. 51, no. 13, p. 3862, 2010.
  6. S. M. Waldstein, A.-M. Philip, R. Leitner et al., “Correlation of 3-dimensionally quantified intraretinal and subretinal fluid with visual acuity in neovascular age-related macular degeneration,” JAMA Ophthalmology, vol. 134, no. 2, pp. 182–190, 2016.
  7. F. Wang, G. Gregori, P. Rosenfeld, B. J. Lujan, M. K. Durbin, and H. Bagherinia, “Automated detection of the foveal center improves SD-OCT measurements of central retinal thickness,” Ophthalmic Surgery, Lasers & Imaging, vol. 43, supplement 6, pp. S32–S37, 2012.
  8. M. Niemeijer, M. D. Abràmoff, and B. van Ginneken, “Fast detection of the optic disc and fovea in color fundus photographs,” Medical Image Analysis, vol. 13, no. 6, pp. 859–870, 2009.
  9. M. V. Ibañez and A. Simó, “Bayesian detection of the fovea in eye fundus angiographies,” Pattern Recognition Letters, vol. 20, no. 2, pp. 229–240, 1999.
  10. E.-F. Kao, P.-C. Lin, M.-C. Chou, T.-S. Jaw, and G.-C. Liu, “Automated detection of fovea in fundus images based on vessel-free zone and adaptive Gaussian template,” Computer Methods and Programs in Biomedicine, vol. 117, no. 2, pp. 92–103, 2014.
  11. C. Sinthanayothin, J. F. Boyce, H. L. Cook, and T. H. Williamson, “Automated localisation of the optic disc, fovea, and retinal blood vessels from digital colour fundus images,” British Journal of Ophthalmology, vol. 83, no. 8, pp. 902–910, 1999.
  12. Z. Liang, D. W. K. Wong, J. Liu et al., “Automatic fovea detection in retinal fundus images,” in Proceedings of the 7th IEEE Conference on Industrial Electronics and Applications (ICIEA '12), pp. 1746–1750, Singapore, July 2012.
  13. H. Narasimha-Iyer, T. Schmoll, U. Sharma, S. Srivastava, and A. Tumlinson, “Systems and methods for improved acquisition of ophthalmic optical coherence tomography data,” US Patent US20140268046, September 2014.
  14. S. M. Waldstein, B. S. Gerendas, A. Montuoro, C. Simader, and U. Schmidt-Erfurth, “Quantitative comparison of macular segmentation performance using identical retinal regions across multiple spectral-domain optical coherence tomography instruments,” British Journal of Ophthalmology, vol. 99, no. 6, pp. 794–800, 2015.
  15. B. Gerendas, S. Waldstein, J. Lammer et al., “Centerpoint replotting and its effects on central retinal thickness in four prevalent SD-OCT devices,” Investigative Ophthalmology & Visual Science, vol. 53, p. 4114, 2012.
  16. A. Montuoro, S. M. Waldstein, B. S. Gerendas, G. Langs, C. Simader, and U. Schmidt-Erfurth, “Motion artefact correction in retinal optical coherence tomography using local symmetry,” in Proceedings of the 17th International Conference on Medical Image Computing and Computer Assisted Interventions (MICCAI '14), Boston, Mass, USA, September 2014.
  17. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Transactions on Image Processing, vol. 16, no. 8, pp. 2080–2095, 2007. View at Publisher · View at Google Scholar · View at Scopus
  18. M. B. Salah, A. Mitiche, and I. B. Ayed, “Multiregion image segmentation by parametric kernel graph cuts,” IEEE Transactions on Image Processing, vol. 20, no. 2, pp. 545–557, 2011. View at Publisher · View at Google Scholar · View at Scopus
  19. J. Wu, S. M. Waldstein, B. S. Gerendas, G. Langs, C. Simader, and U. Schmidt-Erfurth, “Automated retinal fovea type distinction in spectral-domain optical coherence tomography of retinal vein occlusion,” in Proceedings of the SPIE Medical Imaging, Orlando, Fla, USA, 2015.
  20. C. A. Kiire, S. Broadgate, S. Halford, and V. Chong, “Diabetic macular edema with foveal eversion shows a distinct cytokine profile to other forms of diabetic macular edema in patients with type 2 diabetes,” Investigative Ophthalmology & Visual Science, vol. 55, no. 13, p. 4408, 2014. View at Google Scholar
  21. M. K. Garvin, M. D. Abràmoff, X. Wu, S. R. Russell, T. L. Burns, and M. Sonka, “Automated 3-D intraretinal layer segmentation of macular spectral-domain optical coherence tomography images,” IEEE Transactions on Medical Imaging, vol. 28, no. 9, pp. 1436–1447, 2009. View at Publisher · View at Google Scholar · View at Scopus
  22. L. Pelosini, C. C. Hull, J. F. Boyce, D. McHugh, M. R. Stanford, and J. Marshall, “Optical coherence tomography may be used to predict visual acuity in patients with macular edema,” Investigative Ophthalmology & Visual Science, vol. 52, no. 5, pp. 2741–2748, 2011. View at Publisher · View at Google Scholar · View at Scopus
  23. P. Roberts, T. J. A. Mittermueller, A. Montuoro et al., “A quantitative approach to identify morphological features relevant for visual function in ranibizumab therapy of neovascular AMD,” Investigative Ophthalmology & Visual Science, vol. 55, no. 10, pp. 6623–6630, 2014. View at Publisher · View at Google Scholar · View at Scopus