International Journal of Biomedical Imaging


Research Article | Open Access

Volume 2012 |Article ID 531319 | https://doi.org/10.1155/2012/531319

Pierrick Coupé, Pierre Hellier, Xavier Morandi, and Christian Barillot

3D Rigid Registration of Intraoperative Ultrasound and Preoperative MR Brain Images Based on Hyperechogenic Structures

Academic Editor: Habib Zaidi
Received: 18 Dec 2010
Revised: 10 Oct 2011
Accepted: 13 Oct 2011
Published: 19 Jan 2012

Abstract

The registration of intraoperative ultrasound (US) images with preoperative magnetic resonance (MR) images is a challenging problem due to the difference in information contained in each image modality. To overcome this difficulty, we introduce a new probabilistic function based on the matching of cerebral hyperechogenic structures. In brain imaging, these structures are the liquid interfaces such as the cerebral falx and the sulci, and the lesions when the corresponding tissue is hyperechogenic. The registration procedure is achieved by maximizing the joint probability for a voxel to be included in hyperechogenic structures in both modalities. Experiments were carried out on real datasets acquired during neurosurgical procedures. The proposed validation framework is based on (i) visual assessment, (ii) manual expert estimations, and (iii) a robustness study. Results show that the proposed method (i) is visually efficient, (ii) produces no statistically different registration accuracy compared to manual expert registration, and (iii) converges robustly. Finally, the computation time required by our method is compatible with intraoperative use.

1. Introduction

Due to its low cost, its real-time imaging capabilities, and its noninvasive nature, ultrasound (US) imaging has become a popular modality. These attributes have been used for a large number of clinical applications. In neurosurgery, ultrasound imaging has been employed in many cases of brain examination over the last two decades [1]. Several studies demonstrated that ultrasonography can be used for locating tumors, defining their margins, differentiating their internal characteristics, and for detecting brain shift and residual tumoral tissues [2]. At present, 3D US imaging is integrated within neuronavigation systems to provide a useful and efficient intraoperative tool [3]. Ultrasound imaging has also been shown to be a promising method for quantifying and for correcting brain shift in Image-Guided Neurosurgery (IGNS) [4–14].

During a neurosurgical procedure, the ultrasound probe is tracked by the neuronavigation system, which computes the 3D positions and orientations of the B-scans. Matching between the intraoperative US images and the preoperative MR image is ensured by a rigid registration. In phantom [6] and animal studies [11], the matching accuracy between intraoperative B-scans and preoperative images has been quantified at between 1.5 mm and 3 mm. Nevertheless, in a clinical context, the matching error can reach 10 mm (see Table 1). This error includes tool calibration errors (the position localizer and the US probe), tool localization errors (tracking system error), and registration errors from the neuronavigation system.


Table 1: A priori estimation of the registration: initial error in mm.

Mean (std)   Expert 1      Expert 2      Expert 3      p value
Patient 1    5.52 (1.15)   4.31 (1.55)   5.00 (1.50)   0.30
Patient 2    8.64 (0.89)   8.31 (1.24)   8.76 (1.04)   0.69
Patient 3    3.56 (1.09)   4.61 (1.39)   4.38 (1.13)   0.13

Registration approaches based on classical image similarity measures such as the Sum of Squared Differences (SSD), Mutual Information (MI), or the Correlation Ratio (CR) are known to fail to robustly register MR and US images [15]. Therefore, other options have been investigated.
(a) Landmark-based registration represents the majority of the approaches [6, 7, 9, 13, 14, 16]. The motivation stems from the difficulty of finding a function matching US image intensities with MR image intensities. These methods are based on the matching of manually defined points [7], lines representing the vascular system [6, 13, 14, 16], or the cortical surface [9].
(b) Intensity-based approaches using histogram-based similarity measures tend to overcome the problem by preprocessing the images in order to register more similar images [4, 17].
(c) By introducing the Bivariate Correlation Ratio (BCR), Roche et al. [15] incorporated the transformation of the MR image into a pseudo-US image as a function into the similarity measure.

In this paper, we propose a new objective function based on the matching of the cerebral hyperechogenic structures such as the sulci and the cerebral falx, and the lesion when the corresponding tissue is hyperechogenic. The registration is achieved by maximizing the correlation value between the US image and the probabilistic map of hyperechogenic structures estimated from the MR image. The proposed method is thus a compromise between landmark- and intensity-based approaches.
(i) As with landmark-based approaches, only regions considered relevant are used to drive the registration procedure. In our method, these regions are the hyperechogenic structures of the brain.
(ii) As with intensity-based methods, the proposed approach does not require segmentation of the US image, which is a challenging problem.

2. Materials and Methods

2.1. Method Overview

The scheme of the overall workflow is presented in Figure 1. First, the "hyperechogenic" structures present in the MR image (i.e., the structures visible in the MR image that are expected to be hyperechogenic in intraoperative US) are detected with the Lvv operator [18, 19]. In brain imaging, these structures are the liquid interfaces such as the cerebral falx and the sulci, in addition to the lesions when the corresponding tissue is hyperechogenic (e.g., cavernoma or glioma). This curvature-based operator was first introduced in [18, 19] before being used to detect the sulci and the cerebral falx in [20–22]. The US image and the probability map of the hyperechogenic structures extracted from the MR image are then registered by maximizing the probability for a voxel to be included in hyperechogenic structures in both modalities.

Contrary to histogram-based approaches that match all the information in both images, the proposed approach consists of matching only hyperechogenic structures [23], which makes it more robust to artefacts such as acoustic shadows. Indeed, in US imaging, the bright areas provide information on the underlying structures whereas the dark areas can correspond to the underlying anatomical structure or acoustic shadows [24]. Moreover, the accuracy of sulci matching is an important issue since these structures are used by the neurosurgeon during the neurosurgical procedure [25]. Finally, by using the natural property of US imaging to detect the hyperechogenic structures, the method does not require segmentation of the US image. This way, the method is less sensitive to error of US image segmentation and is less time consuming during the intraoperative stage.

2.2. Probabilistic Objective Function

The proposed registration process is based on the estimation of the transformation T maximizing the joint probability for a voxel x to be included in hyperechogenic structures in both modalities:

T̂ = arg max_T Σ_{x ∈ Ω} p(x ∈ H_US, T(x) ∈ H_MR), (1)

where p_US(x) = p(x ∈ H_US) is the probability for x to be included in a hyperechogenic structure from the US image and p_MR(x) = p(x ∈ H_MR) is the probability for x to be included in a hyperechogenic structure (in the sense of the ultrasound image) from the T1-w MR image. Assuming that the probabilities are independent, we can write

T̂ = arg max_T Σ_{x ∈ Ω} p_US(x) · p_MR(T(x)). (2)

Our objective function can be viewed as the maximization of the correlation value between the two probability maps of hyperechogenic structures extracted from both modalities.
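As a minimal sketch (not the authors' implementation), the objective can be evaluated as a voxel-wise product of the two probability maps, assuming the MR map has already been resampled under the candidate transformation T:

```python
import numpy as np

def joint_probability(p_us, p_mr_warped):
    """Sum over voxels of p_US(x) * p_MR(T(x)): the quantity the
    registration maximizes.  Both inputs are probability maps in [0, 1]
    defined on the same grid, the MR map already resampled by the
    candidate rigid transform T."""
    return float(np.sum(p_us * p_mr_warped))
```

Maximizing this sum over rigid transformations is then equivalent to maximizing the correlation between the two maps, since each map is fixed up to the resampling.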

2.3. Construction of the Probability Maps

In order to construct the probability maps, we define a function f_I matching the intensity of both the US image and the MR image with the probability for x to be included in hyperechogenic structures:

p_I(x) = f_I(I(x)), (3)

where I is an image defined on Ω.

2.3.1. Intraoperative US Image

For the intraoperative US image U, f_US is by definition the identity function:

p_US(x) = U(x). (4)

The intensity of U is only scaled between 0 and 1 during surgery to fit with our probabilistic framework.
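This rescaling can be sketched as a simple min-max normalization (an assumption on our part; the paper does not specify the exact scaling used):

```python
import numpy as np

def to_probability(us):
    """Map the US volume linearly onto [0, 1], so that bright
    (hyperechogenic) voxels receive a high probability.  f_US is
    otherwise the identity."""
    us = us.astype(np.float64)
    lo, hi = us.min(), us.max()
    return (us - lo) / (hi - lo) if hi > lo else np.zeros_like(us)
```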

2.3.2. Preoperative MR Image

For the preoperative MR image M, the evaluation of f_MR is done prior to surgery and is based on both the detection of the liquid interfaces with the Lvv operator and the segmentation of the pathological tissues.

The Lvv operator is a robust intensity-based curvature detector [18] based on the first and second derivatives of the image intensities. The first and second derivatives are combined to obtain an operator that is less sensitive to flat areas with low gradients. This kind of operator is used to detect ridge-like features in images, with negative values for crests in the intensity domain and positive values for valleys in the intensity domain. In [19], the Lvv has been proposed for multimodal registration of CT and MR images. In our case, as in [20–22], the Lvv is used to extract the hyperechogenic structures (sulci, cerebral falx) from the T1-w MR image. Overall, curvature information has been used by several other authors to characterize cortical features [26–28]. Most of these methods are based on geodesic curvature computed on cortical surfaces.

In T1-w MR images, the sulci are valleys (negative ridges) in the intensity domain. By using the positive values of the Lvv, the sulci and the cerebral falx can be efficiently detected [20–22]. Figures 2, 3, and 4 show the positive values of the Lvv operator.

Finally, our function f_MR is defined as

p_MR(x) = f_MR(M(x)) = Lvv+(x) · 1_{Ω∖S}(x) + α(x) · 1_S(x), (5)

where 1_S is the indicator function for the set S:
(i) 1_S(x) = 1 if x ∈ S, and 0 otherwise,
(ii) S = {x belonging to the lesional tissue}.

As for the US intensities, the positive values of the Lvv are scaled between 0 and 1. The Lvv operator is defined in 3D as the second derivative of the image intensity L taken in the direction of the gradient:

Lvv(x) = v(x)^T H(x) v(x), with v = ∇L/‖∇L‖,

where H is the Hessian matrix of L. α(x) is the probability given to x in the segmentation of pathological tissue S. α is used to incorporate a priori knowledge on the pathology. For pathological tissue such as cavernoma or low-grade glioma, α is high since these tissues are hyperechogenic.
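An illustrative implementation of such a curvature operator, assuming Gaussian-derivative estimation of the gradient and Hessian (via scipy's gaussian_filter) and the second-derivative-along-the-gradient formulation; this is a sketch, not the authors' code:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lvv(volume, scale=2.0, eps=1e-8):
    """Second derivative of the intensity along the gradient direction,
    v^T H v with v = grad(L)/|grad(L)|, computed with Gaussian
    derivatives at the given image scale (sigma, in voxels)."""
    d = lambda o: gaussian_filter(volume.astype(np.float64), scale, order=o)
    gx, gy, gz = d((1, 0, 0)), d((0, 1, 0)), d((0, 0, 1))
    gxx, gyy, gzz = d((2, 0, 0)), d((0, 2, 0)), d((0, 0, 2))
    gxy, gxz, gyz = d((1, 1, 0)), d((1, 0, 1)), d((0, 1, 1))
    g2 = gx ** 2 + gy ** 2 + gz ** 2 + eps  # squared gradient magnitude
    num = (gx ** 2 * gxx + gy ** 2 * gyy + gz ** 2 * gzz
           + 2 * (gx * gy * gxy + gx * gz * gxz + gy * gz * gyz))
    return num / g2
```

On a quadratic ramp L(x) = x², the operator returns the constant second derivative 2 away from the volume boundaries.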

2.4. Preprocessing of the MR Data before Surgery

First, skull stripping is performed from the T1-w MRI sequence [29]. We choose to remove the skull prior to computation because this structure does not appear in the area of the craniotomy. The raw MR images are then denoised using an optimized Non-Local Means filter (https://www.irisa.fr/visages/benchmarks/) [30] before applying the Lvv operator to the brain tissues. The use of a denoising stage makes the computation of the Lvv more stable. Indeed, the presence of noise may create false positive or false negative curvatures, which could bias the registration framework. After applying the Lvv operator, only the positive values (i.e., the sulci and the falx) are kept in the processing stream. Finally, the Lvv map and the segmentation S are merged together (see Figures 2, 3, and 4). In our experiments, the segmentation of pathology was manually performed by the neuroanatomist before the surgical procedure (see Figure 1). The computational time required by the preprocessing steps performed during the preoperative stage was 4 minutes for skull stripping and 3 minutes for denoising on a Pentium M 2 GHz. In addition, 3–8 minutes were required for the manual segmentation of the lesion, according to its size, on a Stealth Station TREON (Medtronic Inc., Minneapolis, USA). Since these steps are performed before surgery, there is no impact on the practical value of the proposed method.
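The merging step can be sketched as follows; the function name, the clipping of negative curvatures, and the in-place override of lesion voxels are our reading of the pipeline, with alpha = 1 for a homogeneous hyperechogenic lesion as in Section 2.7:

```python
import numpy as np

def build_p_mr(lvv_map, lesion_mask, alpha=1.0):
    """Keep only the positive Lvv values (sulci, falx), rescale them to
    [0, 1], and override the lesion voxels with the a priori
    echogenicity alpha."""
    pos = np.clip(lvv_map, 0.0, None)    # discard negative curvatures (crests)
    hi = pos.max()
    p = pos / hi if hi > 0 else pos
    p[lesion_mask.astype(bool)] = alpha  # lesion assumed hyperechogenic
    return p
```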

2.5. Data Acquisition

T1-w SENSE 3D sequences were used to acquire preoperative T1-weighted MR images on a 3T Philips Gyroscan scanner (Best, the Netherlands). During the neurosurgical procedure, the US probe (Sonosite Inc., Bothell, WA, USA, cranial 7–4 MHz probe) was tracked by the Polaris cameras of the Stealth Station TREON (Medtronic Inc., Minneapolis, USA). The SonoNav software of the neuronavigation system was used to acquire the 2D B-scans and the probe positions. From the 2D B-scans and their positions, a 3D volume was reconstructed with the Probe Trajectory method [31]. The experiments were carried out on 3 patients. For each patient, a sequence of images was acquired before opening the dura. Some studies have considered quantitative measurement of brain shift during surgical procedures and showed that nonsignificant displacement occurred before dura opening [32, 33]. Thus, we assumed that the transformation between intraoperative US and preoperative MR was rigid. The characteristics of the reconstructed volumes are:
(i) for patient 1, a 3D volume of voxels with a resolution of mm3,
(ii) for patient 2, a 3D volume of voxels with a resolution of mm3,
(iii) for patient 3, a 3D volume of voxels with a resolution of mm3.

2.6. MR-US Registration of the Neuronavigation System

During the entire neurosurgical procedure, the coordinate system of the preoperative MR image and the coordinate system of the intraoperative field are related by a rigid registration. The rigid registration of the neuronavigation system is based on surface matching between the preoperative MR image and the positions of points acquired on the patient's head with the position localizer. First, the skin is extracted from the MR image by manual thresholding. A cloud of points is then continuously acquired on the patient's head close to the eye region by moving the position localizer. Following this, one point is acquired on each ear, with another point on the extremity of the patient's nose. Finally, the neuronavigation system performs a point-to-surface matching.

According to phantom and animal studies, the errors in probe calibration, 3D localization of the probe, and the rigid registration performed by the neuronavigation system lead to a global error of less than 3 mm [6, 10, 11]. The error due to the 3D localization of the probe is estimated by the manufacturer at 0.35 mm for each marker on a tool [34]. The error due to the calibration is generally estimated at around 1.5 mm [6, 11]. In our case, the probe was calibrated with a Z-wire phantom by the manufacturer. Finally, the error due to the rigid registration performed by the neuronavigation system has been estimated at around 1.5 mm in [11].

2.7. Pathology of the Patients

In this study, hyperechogenic pathologies such as cavernoma (patient 1, see Figure 5, and patient 2, see Figure 6) and low-grade glioma (patient 3, see Figure 7) were considered. In T1-w MR images, the central part of a cavernoma is usually heterogeneous (hyper- and hyposignal) and the outlying area appears in hyposignal. Low-grade gliomas are more homogeneous and appear in hyposignal in T1-w MR images. In US images, numerous studies showed that all solid brain tumors, metastatic brain lesions, and cavernomas exhibit echogenicity [35–39]. For brain gliomas, the higher the grade (i.e., the more malignant the tumor), the more echogenic and the less homogeneous it appears in US. In our study, the corresponding lesional tissues were considered both homogeneous and hyperechogenic in US images. As such, α was set to 1 for the whole segmentation of pathological tissue (see (5)). Typical examples of intraoperative images and probability maps are presented in Figures 2, 3, and 4.

2.8. Parameter Settings

The maximization of the joint probability (see (2)) is performed within a multiresolution procedure using the simplex algorithm [40]. During the experiments, the parameters of the simplex algorithm were tolerance = 0.1, step size = 1.5, and maximum number of iterations = 100. The coarsest resolution corresponded to the original volumes downsampled by a factor of 3, and the finest resolution was that of the original volumes. The registration procedure takes less than two minutes on an Intel Pentium M at 2 GHz.
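The multiresolution simplex optimization can be sketched as below, restricted to translations for brevity (the paper optimizes all 6 rigid parameters); the pyramid factors and Nelder-Mead settings here are illustrative, not the exact values above:

```python
import numpy as np
from scipy.ndimage import shift, zoom
from scipy.optimize import minimize

def register_translation(p_us, p_mr, levels=(2, 1)):
    """Coarse-to-fine maximization of the joint probability: each level
    downsamples both probability maps by the given factor, then the
    simplex (Nelder-Mead) minimizes the negative of the objective."""
    t = np.zeros(p_us.ndim)
    for f in levels:
        us = zoom(p_us, 1.0 / f, order=1)
        mr = zoom(p_mr, 1.0 / f, order=1)
        cost = lambda tt: -np.sum(us * shift(mr, tt, order=1))
        res = minimize(cost, t / f, method="Nelder-Mead",
                       options={"xatol": 0.1, "maxiter": 100})
        t = res.x * f  # propagate the estimate to the finer level
    return t
```

Applied to a probability map and a translated copy of it, the routine recovers the inverse of the applied translation.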

As with most derivative-based operators, the Lvv operator uses a Gaussian kernel to compute the image derivatives. In [18], the authors showed that the convolution of the image with a Gaussian derivative kernel provides a well-posed approach to the differentiation problem. The standard deviation of the Gaussian kernel is called the image scale. This parameter has been shown to be very stable for MR image sulci segmentation in numerous works [20–22], and thus no tuning has been done for this parameter throughout our study. In our experiments, an image scale of 2 voxels has been used to compute the Lvv values. This value is consistent with other works [20–22] conducted on brain cortical segmentation, where the scale parameter was always kept in this range.

2.9. Evaluation Framework

In order to evaluate our method, a validation framework with different approaches is proposed.
(i) First, a visual assessment is proposed.
(ii) Second, a manual validation by experts is presented. This validation is divided into two parts: a point-based estimation of the rigid registration by 3 experts for the 3 patients and an evaluation of the residual error by all experts for 1 patient (postregistration error).
(iii) Third, a study on convergence robustness was carried out.

The expert manual validation was difficult because of the time required: each expert needed 4 hours to perform the a priori estimation of the transformation for the 3 patients.

2.9.1. Visual Assessment

The visual assessment remains a valuable indicator of the registration accuracy. In [41], the observer discernibility of registration errors has been estimated around 0.2 mm. A study on visual inspection for image registration assessment can be found in [42]. In our paper, we propose an overlay of US and MR images before and after registration to assess the registration accuracy.

2.9.2. Validation by Experts

First, the experts manually evaluate the rigid transformation between the intraoperative US and the MR image resliced with the rigid transformation given by the neuronavigation system. This estimation is denoted as a priori estimation of the registration. From this a priori estimation, the initial error (i.e., after the registration performed by the neuronavigation system) and the Target Registration Error (TRE) can be computed. The a priori estimation of the registration is used to show that there are no statistical differences between the expert-based transformations and the transformation estimated with our method in terms of the TRE.

The experts estimate the residual error after rigid registration based on a given transformation (obtained either with our method or with the point-based expert registrations). This estimation is called the a posteriori evaluation of the residual error and is designed to show that the experts do not detect significant differences when they inspect the volumes registered with our method or with their own manually defined transformations.

A Priori Estimation of the Registration
Point Picking
The a priori estimation of the registration is based on the location of ten points in the US image and the ten corresponding points in the MR image: each expert defines a set of points in the 3D reconstruction of the intraoperative ultrasound and their corresponding landmarks in the resliced MR image. The resliced MR image is obtained with the rigid registration given by the neuronavigation system and has the same resolution, dimensions, and field of view as the reconstructed US image. During the experiments, the experts used three orthogonal 2D views to define the homologous points in the 3D volumes. For each volume, the visualization software was run independently, with the cursors in the two volumes unlinked. Each expert was allowed to choose their own set of homologous points.
Initial Error
The initial error is computed by using the mean Euclidean distance between the homologous points defined by the experts in both modalities. The three samples (one per expert) containing the ten error values (one per point) are compared by using a Kruskal-Wallis test.
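The per-point initial error and the inter-expert comparison can be sketched as below (array layouts and function names are our assumptions; the test statistic is scipy's Kruskal-Wallis):

```python
import numpy as np
from scipy.stats import kruskal

def initial_error(us_pts, mr_pts):
    """Per-point initial error: Euclidean distance between the (N, 3)
    homologous points an expert placed in the US and resliced MR volumes."""
    return np.linalg.norm(us_pts - mr_pts, axis=1)

def compare_experts(err1, err2, err3):
    """Kruskal-Wallis test across the three ten-point error samples; a
    large p value means no significant inter-expert difference."""
    _, p = kruskal(err1, err2, err3)
    return p
```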
Target Registration Error
A leave-one-out procedure is used to compute the TRE of each point (i.e., Euclidean distance between homologous point after rigid transformation). First, one of the ten homologous points is removed from the set of points. The nine remaining homologous points are then used to compute a rigid transformation in the least squares sense. Finally, this rigid transformation is used to compute the TRE of the initially removed point. This procedure is repeated for all the ten points. The final TRE is the mean TRE over all the points. For each patient, the expert-based TRE and the TRE obtained with our method are compared by using a nonparametric Kruskal-Wallis test.
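A sketch of the leave-one-out TRE; the rigid fit uses a standard least-squares (Kabsch) solution, which is an assumption, as the paper does not specify the solver:

```python
import numpy as np

def rigid_fit(A, B):
    """Least-squares rigid transform (R, t) mapping points A onto B
    (Kabsch algorithm on (N, 3) arrays)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cb - R @ ca

def loo_tre(us_pts, mr_pts):
    """Leave-one-out TRE: fit the rigid transform on nine of the ten
    homologous pairs, measure the Euclidean error on the held-out pair,
    and average over all points."""
    errs = []
    for k in range(len(us_pts)):
        keep = np.arange(len(us_pts)) != k
        R, t = rigid_fit(us_pts[keep], mr_pts[keep])
        errs.append(np.linalg.norm(R @ us_pts[k] + t - mr_pts[k]))
    return float(np.mean(errs))
```

With exactly corresponding point sets (a rigid transform plus no noise), the leave-one-out TRE is zero up to floating-point error.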

A Posteriori Evaluation of the Residual Error
Point Picking
First, the patient images are registered using several transformations. These transformations are (i) the three expert-based transformations, (ii) the rigid transformation obtained with our method, and (iii) the transformation computed using all thirty points defined by the three experts. The experts then define ten homologous points on the registered volumes. This procedure is performed for the five studied registrations on the patient 2 dataset. The positions of the points are fixed for all the experts.
Residual Error
As for the initial error, the final error, or residual error, is simply obtained by computing the mean Euclidean distance between the homologous points defined by the experts in both modalities. The statistical comparison of the residual errors is performed on the five samples (one per transformation) containing ten error values (one per point) with a Kruskal-Wallis test.

2.9.3. Robustness Study

First, the US and resliced MR images of patient 2 are registered with the transformation computed from all thirty expert-defined points. Then, 100 rigid transformations are randomly generated with a translation along each axis uniformly distributed between 0 and 5 mm and a rotation around each axis uniformly distributed between 0 and 5 degrees. Finally, each transformation is applied to the resliced MR image before performing registrations with the proposed method. The warping index [43] is used to compute the distance between the transformation T_est estimated by the registration process and the true transformation T_true:

ϖ = (1/|Ω|) Σ_{x ∈ Ω} ‖T_est(x) − T_true(x)‖_2,

where ‖·‖_2 is the L2-norm. The success rate is estimated by considering as a success a registration with a warping index below 3.5 mm. This threshold has been chosen close to the upper bound of the TRE estimated by the experts (see the distribution for patient 2 in Figure 9). Contrary to the TRE estimated over selected points, the warping index is computed as the average error between the volumes over all the voxels.
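Assuming the transformations are represented as 4x4 homogeneous matrices (a representation choice of ours), the warping index and the success criterion can be sketched as:

```python
import numpy as np

def warping_index(T_est, T_true, points):
    """Mean L2 distance between the estimated and true rigid transforms
    applied to the voxel positions; `points` is an (N, 3) array."""
    P = np.c_[points, np.ones(len(points))]    # homogeneous coordinates
    diff = (P @ T_est.T - P @ T_true.T)[:, :3]
    return float(np.mean(np.linalg.norm(diff, axis=1)))

def is_success(T_est, T_true, points, threshold=3.5):
    """A registration counts as a success when the warping index is
    below the 3.5 mm threshold of the robustness study."""
    return warping_index(T_est, T_true, points) < threshold
```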

3. Results

3.1. Visual Assessment

The registration results are first displayed for visual assessment. The results obtained with our method are presented in Figures 5, 6, and 7. For patient 1 (see Figure 5), even if the lesion was not entirely included in the US volume, the proposed registration procedure converged efficiently. For patient 2 (see Figure 6), acoustic shadows are present on the US image. The signal below the lesion tends to zero. The proposed approach overcomes these artifacts without specific detection of the shadows. For patient 3 (see Figure 7), despite the large size of the low-grade glioma and the limited field of view, our approach performed well.

3.2. Validation by Experts
3.2.1. A Priori Estimation of the Registration

Table 1 presents the initial error estimated for the three patients by the three experts. The p value of the Kruskal-Wallis test showed that there was no significant difference between the expert estimations. Table 1 also shows the interindividual variability of this measure between the experts. Figure 8 summarizes the distribution of the error.

The estimated initial errors are significantly higher than the values given in [6, 11] (<3 mm) or by the manufacturer (<1.5 mm). It is important to note that the ultrasound images used in our experiments were acquired in a clinical context during a neurosurgical operation. The real neurosurgical context is likely more difficult than phantom and animal studies.

Table 2 shows the TRE estimated by each expert, for each patient dataset. In all the cases, there were no statistically significant differences between the TRE obtained with expert-based estimations and the TRE obtained with our method. Figure 9 shows the result of the Kruskal-Wallis test. In all the cases, the experts and our method provided consistent results.


Table 2: A priori estimation of the registration: target registration error in mm.

p value      Expert 1 vs. method   Expert 2 vs. method   Expert 3 vs. method
Patient 1    0.28                  0.50                  0.50
Patient 2    0.68                  0.76                  0.71
Patient 3    0.20                  0.97                  0.88

3.2.2. A Posteriori Evaluation of the Residual Error

Table 3 shows the expert-based estimation of the a posteriori residual error of the different registrations (manual and automatic) proposed for patient 2. The Kruskal-Wallis test shows that the errors associated with the five transformations are not significantly different. Figure 10 shows the statistical distribution of the residual error for each compared transformation. Finally, the experts failed to detect significant differences between the manual registrations and our automatic registration. The residual error estimated by the experts is around 1–1.5 mm for all the transformations.


Table 3: A posteriori evaluation of the residual error in mm.

             p value
Expert 1     0.88
Expert 2     0.95
Expert 3     0.58

3.3. Robustness Study

Table 4 shows the robustness and the warping index results obtained during the experiment. The proposed method obtained a success rate of 92% with a mean warping index of 2.38 mm. This value is relative to the TRE of the gold standard used. Thus, it gives information about the distance between the transformation built from all the experts' points and the final transformation provided by our method. This value is close to the TRE estimated for patient 2 in Table 2. Figure 11 shows the distribution of the warping index.


Table 4: Robustness study results.

Success rate (%)    Mean warping index (mm)
92                  2.38

4. Discussion and Conclusion

This paper presents a new framework for the 3D rigid registration of US and T1-w MR brain images. In order to address this challenging problem, we propose an innovative probabilistic objective function that maximizes the joint probability of (i) the a priori most probable locations of hyperechogenic structures in the preoperative MR image and (ii) the highest intensities of the intraoperative US image. We show that the proposed method enables a robust registration of MR and US images in a computational time compatible with clinical use. All our experiments were carried out on real intraoperative data. The expert-based quantitative study shows that our method produces registrations that are not statistically different from the a priori estimations of the registration by the experts. Moreover, the a posteriori estimation of the residual registration error shows that the experts failed to detect differences between the manual registrations and our automatic registration.

During our experiments, manual segmentation was used to build the probability map. This segmentation is always available, since the neurosurgeon performs it before the surgery. In this paper, the segmentations used were those prepared for the neurosurgery. However, the segmentation of the lesion could be automated [44, 45], and the different parts of the pathologies (lesion, coagulated blood, cyst, necrotic tissue, etc.) could be delineated. In this way, the simple model of a homogeneous hyperechogenic lesion used in our experiments could be improved by assigning different hyperechogenicity levels to the different pathological tissues. To evaluate the robustness of our method to heterogeneous lesions, more datasets are needed, although this situation was present in the case of patient 1.

The accuracy of the proposed method is related to the segmentation accuracy of the tumor in the preoperative MR images. Although the segmentation of the MR image is not considered difficult, this step may introduce some errors. Our experiments showed that the proposed method produced consistent results with the manual segmentation used in clinical routine.

The presented clinical datasets showed that our method is robust to some discrepancies between the features present in the US and MRI probability maps. Based on the correlation of maps in which only regions considered relevant are used to drive the registration procedure, our method is able to deal with partially missing information resulting from a limited field of view or acoustic shadows. In the case of patient 1 (see Figures 2 and 5), only a subpart of the lesion was visible in the reconstructed US image. In the case of patient 2 (see Figures 3 and 6), the acoustic shadow below the lesion reduced the information around the ventricle in US. Moreover, the information derived from the sulci was much more present in the MR map than in the US map. Finally, in the case of patient 3 (see Figures 4 and 7), the limited field of view and the large size of the glioma reduced the importance of the sulcal information. However, experiments using only the segmentation of the lesion or only the sulcal information derived from the Lvv operator failed to provide satisfactory registration. This seems to indicate that a certain amount of homologous features has to be present in both probability maps for the method to work.

Finally, in our opinion, the proposed approach relies on an idea similar and complementary to the vessel-based method proposed by Reinertsen et al. [13, 14]. Indeed, in both cases, an implicit segmentation of salient features in US images (hyperechogenic structures in B-mode or vessels in Doppler) is matched with corresponding structures detected in MR images. Only the selected salient features differ between the methods. In [13, 14], the method utilizes vessels extracted from Doppler US images and their segmentation from MR images. Therefore, both methods have the advantage of not requiring segmentation of the US image while also being robust to US artefacts. However, the extraction of the vessel centerlines from MR images is a challenging problem and requires extensive processing.

Our method is dedicated to brain US imaging, since the Lvv operator is relevant for sulci and cerebral falx detection. As such, the application of the proposed framework to another body part requires adaptation of the hyperechogenic structure detection. Moreover, if a T2-w MR image or another sequence is used as the preoperative MR image, the selected values of the Lvv need to be adapted. Since the final aim of this US/MR registration method is to compensate for the brain shift, further work will investigate the extension of our probabilistic objective function to non-rigid deformations.

Acknowledgment

The authors are grateful to Sean Jy-Shyang Chen for proofreading the paper.

References

  1. J. M. Rubin, M. Mirfakhraee, and E. E. Duda, “Intraoperative ultrasound examination of the brain,” Radiology, vol. 137, no. 3, pp. 831–832, 1980.
  2. G. J. Dohrmann and J. M. Rubin, “History of intraoperative ultrasound in neurosurgery,” Neurosurgery Clinics of North America, vol. 12, no. 1, pp. 155–166, 2001.
  3. G. Unsgaard, O. M. Rygh, T. Selbekk et al., “Intra-operative 3D ultrasound in neurosurgery,” Acta Neurochirurgica, vol. 148, no. 3, pp. 235–253, 2006.
  4. T. Arbel, X. Morandi, M. Comeau, and D. L. Collins, “Automatic nonlinear MRI-ultrasound registration for the correction of intra-operative brain deformations,” in Proceedings of the 4th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI '01), vol. 2208 of Lecture Notes in Computer Science, pp. 913–922, Springer, Utrecht, The Netherlands, 2001.
  5. R. D. Bucholz, D. D. Yeh, J. Trobaugh et al., “The correction of stereotactic inaccuracy caused by brain shift using an intraoperative ultrasound device,” in Proceedings of the 1st Joint Conference on Computer Vision, Virtual Reality and Robotics in Medicine and Medical Robotics and Computer-Assisted Surgery (CVRMed and MRCAS '97), vol. 1205 of Lecture Notes in Computer Science, pp. 459–466, Grenoble, France, 1997.
  6. R. M. Comeau, A. F. Sadikot, A. Fenster, and T. M. Peters, “Intraoperative ultrasound for guidance and tissue shift correction in image-guided neurosurgery,” Medical Physics, vol. 27, no. 4, pp. 787–800, 2000.
  7. D. G. Gobbi, R. M. Comeau, and T. M. Peters, “Ultrasound/MRI overlay with image warping for neurosurgery,” in Proceedings of the 3rd International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI '00), vol. 1935 of Lecture Notes in Computer Science, pp. 106–114, Springer, Pittsburgh, Pa, USA, 2000.
  8. N. Hata, M. Suzuki, T. Dohi, H. Iseki, K. Takakura, and D. Hashimoto, “Registration of ultrasound echography for intraoperative use: a newly developed multiproperty method,” in Visualization in Biomedical Computing, vol. 2359 of Proceedings of SPIE, pp. 251–259, Rochester, MN, USA, 1994.
  9. A. P. King, J. M. Blackall, G. P. Penney, P. J. Edwards, D. L. G. Hill, and D. J. Hawkes, “Bayesian estimation of intra-operative deformation for image-guided surgery using 3-D ultrasound,” in Proceedings of the 3rd International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI '00), vol. 1935 of Lecture Notes in Computer Science, pp. 588–597, Springer, Pittsburgh, Pa, USA, 2000.
  10. M. M. J. Letteboer, P. W. A. Willems, M. A. Viergever, and W. J. Niessen, “Brain shift estimation in image-guided neurosurgery using 3-D ultrasound,” IEEE Transactions on Biomedical Engineering, vol. 52, no. 2, pp. 268–276, 2005.
  11. K. E. Lunn, K. D. Paulsen, D. W. Roberts, F. E. Kennedy, A. Hartov, and J. D. West, “Displacement estimation with co-registered ultrasound for image guided neurosurgery: a quantitative in vivo porcine study,” IEEE Transactions on Medical Imaging, vol. 22, no. 11, pp. 1358–1368, 2003.
  12. X. Pennec, P. Cachier, and N. Ayache, “Tracking brain deformations in time sequences of 3D US images,” Pattern Recognition Letters, vol. 24, no. 4-5, pp. 801–813, 2003.
  13. I. Reinertsen, M. Descoteaux, K. Siddiqi, and D. L. Collins, “Validation of vessel-based registration for correction of brain shift,” Medical Image Analysis, vol. 11, no. 4, pp. 374–388, 2007.
  14. I. Reinertsen, F. Lindseth, G. Unsgaard, and D. L. Collins, “Clinical validation of vessel-based registration for correction of brain-shift,” Medical Image Analysis, vol. 11, no. 6, pp. 673–684, 2007.
  15. A. Roche, X. Pennec, G. Malandain, and N. Ayache, “Rigid registration of 3-D ultrasound with MR images: a new approach combining intensity and gradient information,” IEEE Transactions on Medical Imaging, vol. 20, no. 10, pp. 1038–1049, 2001.
  16. B. C. Porter, D. J. Rubens, J. G. Strang, J. Smith, S. Totterman, and K. J. Parker, “Three-dimensional registration and fusion of ultrasound and MRI using major vessels as fiducial markers,” IEEE Transactions on Medical Imaging, vol. 20, no. 4, pp. 354–359, 2001.
  17. M. M. J. Letteboer, P. W. A. Willems, M. A. Viergever, and W. J. Niessen, “Non-rigid registration of 3D ultrasound images of brain tumours acquired during neurosurgery,” in Proceedings of the 6th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI '03), pp. 408–415, Montreal, Canada, November 2003.
  18. L. M. J. Florack, B. M. ter Haar Romeny, J. J. Koenderink, and M. A. Viergever, “Scale and the differential structure of images,” Image and Vision Computing, vol. 10, no. 6, pp. 376–388, 1992.
  19. J. B. Antoine Maintz, P. A. van den Elsen, and M. A. Viergever, “Evaluation of ridge seeking operators for multimodality medical image matching,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 4, pp. 353–365, 1996.
  20. G. Le Goualher, A. M. Argenti, M. Duyme et al., “Statistical sulcal shape comparisons: application to the detection of genetic encoding of the central sulcus shape,” NeuroImage, vol. 11, no. 5, pp. 564–574, 2000.
  21. G. Le Goualher, C. Barillot, and Y. Bizais, “Three-dimensional segmentation and representation of cortical sulci using active ribbons,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 11, no. 8, pp. 1295–1315, 1997.
  22. G. Le Goualher, E. Procyk, D. L. Collins, R. Venugopal, C. Barillot, and A. C. Evans, “Automated extraction and variability analysis of sulcal neuroanatomy,” IEEE Transactions on Medical Imaging, vol. 18, no. 3, pp. 206–217, 1999.
  23. P. Coupé, P. Hellier, X. Morandi, and C. Barillot, “A probabilistic objective function for 3D rigid registration of intraoperative US and preoperative MR brain images,” in Proceedings of the 4th IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI '07), pp. 1320–1323, April 2007.
  24. P. Hellier, P. Coupé, P. Meyer, X. Morandi, and D. L. Collins, “Acoustic shadows detection, application to accurate reconstruction of 3D intraoperative ultrasound,” in Proceedings of the 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI '08), pp. 1569–1572, May 2008.
  25. P. Jannin, X. Morandi, O. J. Fleig et al., “Integration of sulcal and functional information for multimodal neuronavigation,” Journal of Neurosurgery, vol. 96, no. 4, pp. 713–723, 2002.
  26. A. Cachia, J.-F. Mangin, D. Riviere et al., “A generic framework for parcellation of the cortical surface into gyri using geodesic Voronoi diagrams,” Medical Image Analysis, vol. 7, no. 4, pp. 403–416, 2003.
  27. M. E. Rettmann, X. Han, C. Xu, and J. L. Prince, “Automated sulcal segmentation using watersheds on the cortical surface,” NeuroImage, vol. 15, no. 2, pp. 329–344, 2002.
  28. D. Tosun, M. E. Rettmann, D. Q. Naiman, S. M. Resnick, M. A. Kraut, and J. L. Prince, “Cortical reconstruction using implicit surface evolution: accuracy and precision analysis,” NeuroImage, vol. 29, no. 3, pp. 838–852, 2006.
  29. J.-F. Mangin, O. Coulon, and V. Frouin, “Robust brain segmentation using histogram scale-space analysis and mathematical morphology,” in Proceedings of the 1st International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI '98), vol. 1496 of Lecture Notes in Computer Science, pp. 1230–1241, Springer, Cambridge, Mass, USA, 1998.
  30. P. Coupé, P. Yger, S. Prima, P. Hellier, C. Kervrann, and C. Barillot, “An optimized blockwise nonlocal means denoising filter for 3-D magnetic resonance images,” IEEE Transactions on Medical Imaging, vol. 27, no. 4, pp. 425–441, 2008.
  31. P. Coupé, P. Hellier, X. Morandi, and C. Barillot, “Probe trajectory interpolation for 3D reconstruction of freehand ultrasound,” Medical Image Analysis, vol. 11, no. 6, pp. 604–615, 2007.
  32. D. L. G. Hill, C. R. Maurer, R. J. Maciunas, J. A. Barwise, J. M. Fitzpatrick, and M. Y. Wang, “Measurement of intraoperative brain surface deformation under a craniotomy,” Neurosurgery, vol. 43, no. 3, pp. 514–526, 1998.
  33. D. W. Roberts, A. Hartov, F. E. Kennedy, M. I. Miga, and K. D. Paulsen, “Intraoperative brain shift and deformation: a quantitative analysis of cortical displacement in 28 cases,” Neurosurgery, vol. 43, no. 4, pp. 749–758, 1998.
  34. Polaris, Northern Digital Inc., Waterloo, Canada, 2000, http://www.ndigital.com/polaris.html.
  35. D. R. Enzmann, R. Wheat, W. H. Marshall et al., “Tumors of the central nervous system studied by computed tomography and ultrasound,” Radiology, vol. 154, no. 2, pp. 393–399, 1985.
  36. M. A. Hammoud, B. L. Ligon, R. Elsouki, W. M. Shi, D. F. Schomer, and R. Sawaya, “Use of intraoperative ultrasound for localizing tumors and determining the extent of resection: a comparative study with magnetic resonance imaging,” Journal of Neurosurgery, vol. 84, no. 5, pp. 737–741, 1996.
  37. P. D. Le Roux, M. S. Berger, K. Wang, L. A. Mack, and G. A. Ojemann, “Low grade gliomas: comparison of intraoperative ultrasound characteristics with preoperative imaging studies,” Journal of Neuro-Oncology, vol. 13, no. 2, pp. 189–198, 1992.
  38. J. P. McGahan, W. G. Ellis, and R. W. Budenz, “Brain gliomas: sonographic characterization,” Radiology, vol. 159, no. 2, pp. 485–492, 1986.
  39. G. Unsgaard, T. Selbekk, T. Brostrup Müller et al., “Ability of navigated 3D ultrasound to delineate gliomas and metastases—comparison of image interpretations with histopathology,” Acta Neurochirurgica, vol. 147, no. 12, pp. 1259–1269, 2005.
  40. J. A. Nelder and R. Mead, “A simplex method for function minimization,” The Computer Journal, vol. 7, no. 4, pp. 308–313, 1965.
  41. M. Holden, D. L. G. Hill, E. R. E. Denton et al., “Voxel similarity measures for 3-D serial MR brain image registration,” IEEE Transactions on Medical Imaging, vol. 19, no. 2, pp. 94–102, 2000.
  42. J. M. Fitzpatrick, D. L. G. Hill, Y. Shyr, J. West, C. Studholme, and C. R. Maurer Jr., “Visual assessment of the accuracy of retrospective registration of MR and CT images of the brain,” IEEE Transactions on Medical Imaging, vol. 17, no. 4, pp. 571–585, 1998.
  43. P. Thévenaz and M. Unser, “Optimization of mutual information for multiresolution image registration,” IEEE Transactions on Image Processing, vol. 9, no. 12, pp. 2083–2099, 2000.
  44. M. R. Kaus, S. K. Warfield, A. Nabavi, P. M. Black, F. A. Jolesz, and R. Kikinis, “Automated segmentation of MR images of brain tumors,” Radiology, vol. 218, no. 2, pp. 586–591, 2001.
  45. M. Prastawa, E. Bullitt, N. Moon, K. van Leemput, and G. Gerig, “Automatic brain tumor segmentation by subject specific modification of atlas priors,” Academic Radiology, vol. 10, no. 12, pp. 1341–1348, 2003.

Copyright © 2012 Pierrick Coupé et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
