The Scientific World Journal
Volume 2012, Article ID 913693, 8 pages
http://dx.doi.org/10.1100/2012/913693
Research Article

Deformable Image Registration with Inclusion of Autodetected Homologous Tissue Features

1Key Laboratory for Health Informatics, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
2Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA 94305-5847, USA

Received 31 October 2011; Accepted 11 December 2011

Academic Editor: Constantin Kappas

Copyright © 2012 Qingsong Zhu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A novel deformable registration algorithm is proposed for application in radiation therapy. The algorithm starts with autodetection of a number of points with distinct tissue features. The feature points are then matched using the scale-invariant feature transform (SIFT) method, and the associated feature point pairs serve as landmarks for subsequent thin-plate spline (TPS) interpolation. Several registration experiments using both digital phantom and clinical data demonstrate the accuracy and efficiency of the method. For the 3D phantom case, over 85% of the test markers show errors of less than 2 mm, and 3D feature point association takes only 2-3 minutes. The proposed method provides a clinically practical solution and should be valuable for various image-guided radiation therapy (IGRT) applications.

1. Introduction

Deformable image registration is playing an increasingly important role in radiation therapy, especially in image-guided radiation therapy (IGRT). It can be used for automatic contour delineation, dose accumulation, and so forth. Many deformable image registration methods have been proposed, such as B-spline models [1], the finite element method (FEM) [2], optical flow [3], free-form surface-based registration [4], the multiresolution optical flow technique [5], the regional narrow shell model [6], and the demons algorithm with an "active force" [7]. All these investigations are based on image intensity information.

On the other hand, the use of image features has been shown to substantially improve the quality of image registration [8]. For example, in landmark-based registration, feature points are manually selected and associated to construct the optimal transformation between images. Lian et al. [9] used such associated feature points to register CT and MRI images by applying thin-plate spline (TPS) interpolation. Schreibmann and Xing [10] developed a deformable image registration algorithm in which feature points were selected manually on the template image and then detected automatically on the target image. Although the mapping is automatic, feature points on the template image must still be selected manually, which is a tedious and time-consuming process. In this work, we propose a tissue feature-based deformable registration algorithm that autodetects feature points on both images. Thousands of feature points are autodetected and automatically matched, which significantly improves the accuracy and efficiency of deformable registration.

2. Methods

2.1. Software Platform

The proposed contour mapping algorithm was implemented using the Insight Toolkit (ITK) [11, 12], which is an open-source cross-platform C++ software toolkit sponsored by the National Library of Medicine (NLM). It is freely available for research purposes (http://www.itk.org/). ITK provides various basic algorithms to perform registration and segmentation for medical images. The programs contained in ITK are highly extendable, making it an ideal platform for development of image registration algorithms.

2.2. Overview of the Feature-Based Deformable Registration Process

The proposed feature-based deformable registration algorithm can be divided into three steps. First, feature points are selected on the template and target images based on the gradient vectors in a neighborhood of each point. Then, the scale-invariant feature transform (SIFT) method is used to associate the corresponding feature point pairs in the two images; a bidirectional mapping strategy is applied to further increase the accuracy of feature point association. Finally, the displacement vector of an arbitrary point on the target image is obtained by interpolating the displacement vectors of the feature point pairs using the TPS transformation [13].

2.3. Tissue Feature Association by Using SIFT Method

Detection and association of feature points with distinct tissue features are implemented using the SIFT method [14, 15], which uses an orientation distribution of intensity gradient vectors in eight quadrants of the neighborhood of a point (containing 8 × 8 × 8 voxels). For each quadrant (4 × 4 × 4 voxels), as illustrated in Figure 1, the gradient components in three orthogonal directions ($x$-, $y$-, $z$-axis) are computed for each of the 64 voxels. Let $I$ and $\nabla I$ denote the image intensity and its gradient, respectively. The gradient components along the $x$-, $y$-, and $z$-axes for a voxel $(i,j,k)$ are $\frac{1}{2}(I_{i+1,j,k}-I_{i-1,j,k})$, $\frac{1}{2}(I_{i,j+1,k}-I_{i,j-1,k})$, and $\frac{1}{2}(I_{i,j,k+1}-I_{i,j,k-1})$, respectively. For each of the three planes ($xy$, $yz$, and $zx$), an eight-bin histogram of gradient orientation with 45° intervals between 0° and 360° is then constructed. Three eight-bin histograms for one of the quadrants are sketched in Figure 1. A total of 192 values are obtained for a given point, since each quadrant yields 24 (8 per plane). This set of 192 values characterizes the inherent features and serves as a signature of the point. The SIFT descriptor is considered one of the most effective descriptors currently available [16].

Figure 1: Orientation histogram of SIFT method.
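The descriptor construction described above can be sketched as follows. This is an illustrative NumPy implementation, not the authors' ITK code; the function name and the gradient-magnitude weighting of the histogram bins are assumptions.

```python
import numpy as np

def sift3d_descriptor(vol, center):
    """Sketch of the 192-element descriptor described above.

    vol    : 3D numpy array of image intensities
    center : (i, j, k) voxel index; an 8x8x8 neighborhood is used
    (function name and magnitude weighting are illustrative assumptions)
    """
    i, j, k = center
    # take a region one voxel larger than the patch so central differences fit
    region = vol[i - 5:i + 5, j - 5:j + 5, k - 5:k + 5].astype(float)
    gx = 0.5 * (region[2:, 1:-1, 1:-1] - region[:-2, 1:-1, 1:-1])
    gy = 0.5 * (region[1:-1, 2:, 1:-1] - region[1:-1, :-2, 1:-1])
    gz = 0.5 * (region[1:-1, 1:-1, 2:] - region[1:-1, 1:-1, :-2])
    desc = []
    for qi in (0, 4):                      # 8 quadrants of 4x4x4 voxels
        for qj in (0, 4):
            for qk in (0, 4):
                q = np.s_[qi:qi + 4, qj:qj + 4, qk:qk + 4]
                # one 8-bin orientation histogram per plane (xy, yz, zx)
                for a, b in ((gx[q], gy[q]), (gy[q], gz[q]), (gz[q], gx[q])):
                    ang = np.degrees(np.arctan2(b, a)) % 360.0
                    hist, _ = np.histogram(ang, bins=8, range=(0.0, 360.0),
                                           weights=np.hypot(a, b))
                    desc.extend(hist)
    return np.asarray(desc)  # 8 quadrants x 3 planes x 8 bins = 192 values
```

The quadrant loop makes the 192-element layout explicit: 24 histogram bins per quadrant, 8 per plane.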

For a given point, indexed by $(i,j,k)$ in the template image, the least-squares difference $S$ between the SIFT descriptor of the point and that of a potential corresponding point $(i',j',k')$ in the target image is computed according to

$$S = \sum_{\alpha=1}^{192} \left| I^{\alpha}_{i,j,k} - I^{\alpha}_{i',j',k'} \right|^{2}, \qquad (1)$$

where $\alpha$ indexes the bins of the SIFT histogram. After the least-squares difference $S$ has been calculated for all candidate points in the target image, the two points having the least differences $S_1$ and $S_2$ with point $(i,j,k)$ in the template image are identified. If the ratio $\kappa = S_1/S_2$ is less than 50%, the point $(i_1,j_1,k_1)$ with difference $S_1$ is chosen as the correspondence of point $(i,j,k)$; otherwise, no association is made, to avoid unphysical matching. More detailed discussion of the $\kappa$-ratio can be found in [17].
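The matching rule of (1) and the κ-ratio test can be sketched as follows, assuming descriptors have already been computed for both images. The function and argument names are illustrative, not from the paper's implementation.

```python
import numpy as np

def match_features(tmpl_desc, targ_desc, kappa=0.5):
    """Match template descriptors to target descriptors with the kappa-ratio test.

    tmpl_desc, targ_desc : arrays of shape (n_points, 192)
    Returns {template_index: target_index} for accepted matches.
    (names are illustrative; kappa = 0.5 is the 50% threshold from the text)
    """
    matches = {}
    for p, d in enumerate(tmpl_desc):
        # least-squares differences S to every target descriptor, as in Eq. (1)
        s = np.sum((targ_desc - d) ** 2, axis=1)
        order = np.argsort(s)
        s1, s2 = s[order[0]], s[order[1]]
        # accept only if the best match is clearly better than the runner-up
        if s2 > 0 and s1 / s2 < kappa:
            matches[p] = int(order[0])
    return matches
```

Candidates whose two best differences are comparable are discarded, which is why only a small fraction of detected feature points end up associated.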

2.4. Bidirectional Mapping Strategy

To further increase the accuracy of feature point association, a bidirectional mapping strategy is developed, based on the fact that if a point in the template image is mapped correctly to the target image, an inverse map applied to the corresponding point in the target image should map it back to the original point in the template image. Therefore, after the original association of feature points as described above, the mapped points in the target image are inversely coregistered to the template image. If the correspondence still holds, the associated point pair is labeled a match; otherwise, it is considered a mismatch and deleted from the list of correspondence points. Upon association of the feature points, the associated points are employed as control points.
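The round-trip consistency check can be sketched in a few lines, given a forward (template→target) and a backward (target→template) matching. The function name is illustrative.

```python
def bidirectional_matches(forward, backward):
    """Keep only point pairs that survive the round trip described above.

    forward  : {template_index: target_index} from template -> target matching
    backward : {target_index: template_index} from target -> template matching
    (a sketch; the dict-based representation is an assumption)
    """
    # a pair (p, q) is a match only if q maps back to p under the inverse map
    return {p: q for p, q in forward.items() if backward.get(q) == p}
```

Pairs that fail the round trip are simply dropped rather than corrected, matching the deletion step described in the text.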

2.5. TPS Deformable Transformation

The process of deformable registration is to warp the template image in such a way that it best matches the target image on a voxel-to-voxel basis. Mathematically, this is an optimization problem, in which a set of transformation parameters transform the voxels in the template image to their corresponding voxels in the target image.

To find the transformation $\mathbf{T}(\mathbf{X})$ that maps an arbitrary voxel in the template image to the corresponding voxel in the target image (or vice versa), a TPS deformable model [18] is employed in this study. In brief, a weighting vector $W = (w_1, w_2, \ldots, w_n)$ and the coefficients $a_1$, $a_u$, $a_v$, and $a_w$ are computed from a series of matrices constructed from $n$ pairs of selected control points in the template image $(x_i, y_i, z_i)$ and the target image $(u_i, v_i, w_i)$, respectively. The function transforming a coordinate in the template image to the corresponding coordinate in the target image is defined as

$$f(u,v,w) = a_1 + a_u u + a_v v + a_w w + \sum_{i=0}^{n-1} w_i\, U\!\left( \left| p_i - (u,v,w) \right| \right), \qquad (2)$$

where $p_i$ are the control-point coordinates in the template image and $U$ is a basis function measuring distance. Major steps of the TPS calculation include the following.

(1) Let $P_1=(x_1,y_1,z_1), P_2=(x_2,y_2,z_2), \ldots, P_n=(x_n,y_n,z_n)$ be the $n$ control points in the template image, and let the distance between points $i$ and $j$ be $r_{ij} = |P_i - P_j|$. Define the matrices

$$P = \begin{pmatrix} 1 & x_1 & y_1 & z_1 \\ 1 & x_2 & y_2 & z_2 \\ \vdots & \vdots & \vdots & \vdots \\ 1 & x_n & y_n & z_n \end{pmatrix}, \quad K = \begin{pmatrix} 0 & U(r_{12}) & \cdots & U(r_{1n}) \\ U(r_{21}) & 0 & \cdots & U(r_{2n}) \\ \vdots & \vdots & \ddots & \vdots \\ U(r_{n1}) & U(r_{n2}) & \cdots & 0 \end{pmatrix}, \quad L = \begin{pmatrix} K & P \\ P^{T} & O \end{pmatrix}, \qquad (3)$$

where $O$ is a 4 × 4 matrix of zeros and $U$ is the basis function $U(r) = r^{2} \log r^{2}$.

(2) Let $Q_1=(u_1,v_1,w_1), Q_2=(u_2,v_2,w_2), \ldots, Q_n=(u_n,v_n,w_n)$ be the $n$ corresponding control points in the target image, and construct the matrices

$$V = \begin{pmatrix} u_1 & u_2 & \cdots & u_n \\ v_1 & v_2 & \cdots & v_n \\ w_1 & w_2 & \cdots & w_n \end{pmatrix}, \quad Y = \begin{pmatrix} V & 0 & 0 & 0 & 0 \end{pmatrix}^{T}. \qquad (4)$$

The weighting vector $W = (w_1, w_2, \ldots, w_n)$ and the coefficients $a_1$, $a_u$, $a_v$, and $a_w$ can then be computed from

$$L^{-1} Y = \begin{pmatrix} W & a_1 & a_u & a_v & a_w \end{pmatrix}^{T}. \qquad (5)$$

(3) The elements of $L^{-1}Y$ define the function $f(u,v,w)$ everywhere, as given in (2).
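Steps (1)-(3) can be sketched as a small linear solve. This is an illustrative NumPy version of the TPS fit, not the paper's implementation; the function names are assumptions.

```python
import numpy as np

def tps_fit(src, dst):
    """Solve the TPS system of steps (1)-(3) above (a sketch).

    src, dst : (n, 3) arrays of corresponding control points
               (template and target coordinates, respectively)
    Returns a function mapping template coordinates to target coordinates.
    """
    n = len(src)

    def U(r):
        # basis function U(r) = r^2 log r^2, with U(0) = 0 by convention
        with np.errstate(divide="ignore", invalid="ignore"):
            u = r ** 2 * np.log(r ** 2)
        return np.nan_to_num(u)

    r = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=2)
    K = U(r)                                # n x n basis matrix, Eq. (3)
    P = np.hstack([np.ones((n, 1)), src])   # n x 4 affine part
    L = np.zeros((n + 4, n + 4))
    L[:n, :n] = K
    L[:n, n:] = P
    L[n:, :n] = P.T
    Y = np.vstack([dst, np.zeros((4, 3))])  # Eq. (4), one column per axis
    params = np.linalg.solve(L, Y)          # Eq. (5): rows w_1..w_n, a_1, a_u, a_v, a_w
    W, A = params[:n], params[n:]

    def transform(pts):
        # evaluate f of Eq. (2) at arbitrary points
        pts = np.atleast_2d(pts)
        d = U(np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=2))
        return d @ W + np.hstack([np.ones((len(pts), 1)), pts]) @ A

    return transform
```

Because the system is solved exactly, the fitted transformation interpolates the control points: every source control point is mapped onto its target counterpart.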

2.6. Evaluation of the Method Using Digital Phantom and Clinical Patient Data

The performance of the method was evaluated using a number of 2D digital phantoms and archived clinical cases. In the digital phantom experiments, deformation was introduced using a harmonic formula [19]:

$$x'(x,y) = (1 + b \cos m q)\, x, \qquad (6)$$

where $q = \tan^{-1}(y/x)$. Two parameters, $m$ and $b$, were used to characterize a deformation; they describe its complexity and magnitude, respectively. The accuracy of the proposed algorithm was assessed by direct comparison with the image produced by the known transformation. A virtue of this approach is that "ground truth" solutions exist and the transformation matrices are known, making the evaluation straightforward. A 3D deformable registration phantom developed by the University of Michigan was also used to verify the accuracy of the algorithm.
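The harmonic deformation of (6) is straightforward to implement; a minimal sketch follows. The default values of m and b are illustrative, not taken from the paper's experiments.

```python
import numpy as np

def harmonic_deform_x(x, y, m=4, b=0.1):
    """Harmonic deformation of Eq. (6): x' = (1 + b*cos(m*q)) * x,
    where q = arctan(y/x), m sets the complexity, and b the magnitude.
    (a sketch; the default m and b values are illustrative)
    """
    q = np.arctan2(y, x)
    return (1.0 + b * np.cos(m * q)) * x
```

Applying this to every pixel coordinate of the template image yields a target image with an analytically known displacement field, the "ground truth" used in the evaluation.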

To test the proposed algorithm, 4D CT images were acquired using a GE Discovery-ST CT scanner (GE Medical System, Milwaukee, WI) approximately two weeks prior to the initiation of radiotherapy. The images were transferred via DICOM to a high-performance personal computer (PC) with a Xeon (3.6 GHz) processor for image processing. In general, quantitative validation of a deformable registration algorithm for a clinical case is difficult due to the lack of ground truth. For the cases studied here, visual inspection was employed to assess the success of the algorithm.

3. Results

3.1. Registration of 2D Digital Phantom

A 2D CT image (slice size 170 × 170) was used in the evaluation, as shown in Figure 2. The target image was generated by deforming the template image along the anterior/posterior (AP) direction using (6).

Figure 2: Registration result of a 2D digital phantom.

The displacement vector field (DVF) of this experiment is shown in Figure 3. DVFs along the AP direction are in the left column, and DVFs along the left/right (LR) direction are in the right column. The first row is the DVF calculated by the harmonic formula as the "ground truth." Since the deformation is along the AP direction, the displacement vector along the LR direction is uniform at 1.5 cm. The second row is the DVF after registration. It appears from Figures 3(a) and 3(c) that the difference between the analytical and numerical solutions is quite large. However, when the subtraction field was calculated, large errors between the analytical and numerical solutions existed only outside the phantom, as shown in Figure 3(e), since no control points lie outside the phantom. The largest error is about 3.0 cm. The fusion of the subtraction field and the template image is shown in the fourth row of Figure 3; it illustrates that displacement errors are small inside the phantom.

Figure 3: Displacement vectors fields of the 2D digital phantom.

Figure 4 shows the error distribution of the lengths of the displacement vectors in the phantom case. A total of 13000 pixels were evaluated; the maximum and mean errors are 1.7 cm and 0.26 cm, respectively. Errors larger than 1.0 cm occur in only 512 pixels, less than 4% of the total.

Figure 4: Histogram of the displacement error distribution of the 2D phantom.
3.2. Registration of 3D Deformable Phantom

A 3D deformable phantom [20, 21] was used to verify the proposed method. The phantom consists of a skeleton and a lung-equivalent insert. Tumor-simulating inserts of varying density and size were embedded in the foam. The structures selected were rigid objects of known shape (balls) and various compositions. The insert was evaluated for relative attenuation using a commercial CT scanner. An actuator-driven diaphragm can compress/decompress the foam to generate 4D CT images.

Figure 5 shows the fusion images between the inspiration and the expiration phase. The inspiration phase, the expiration phase, and the overlapped region between these two phases are displayed in red, green, and yellow, respectively. Large position differences were observed before registration. The distance of a tumor (indicated by red arrows) between two phases was 1.8 cm. These differences almost disappeared after registration.

Figure 5: Fusion image between the inspiration and the expiration phase of a 3D deformable phantom.

The proposed algorithm was compared to other deformable registration methods using the same phantom. These methods include TPS with manual selection of feature points [22], single-resolution B-splines [23], multiresolution B-splines [24–27], the "demons" algorithm [7, 28], fluid flow [29, 30], and free-form deformation with calculus of variations [19]. Our method was the only one for which over 85% of the tested markers had errors of less than 2 mm, demonstrating its high accuracy [21].

3.3. Registration of 4D CT Images

The proposed method was applied to clinical 4D CT images, as shown in Figure 6. The fusion images before registration are in the left column, and those after registration are in the right column. The inspiration phase, the expiration phase, and the overlapping region between the two phases are displayed in red, green, and yellow, respectively. On the axial view in Figure 6(a), a large difference appears in the diaphragm region, while the difference almost disappears after registration in Figure 6(b). On the frontal and sagittal views in Figures 6(c) and 6(e), the respiratory movement can be clearly observed from the position changes of the diaphragm and trachea, while these structures overlap after registration in Figures 6(d) and 6(f).

Figure 6: Image registration results of 4D CT images.

Figure 7 shows an example of feature point detection and association. A total of 7581 feature points were detected in the inspiration phase and 7625 in the expiration phase. After SIFT mapping, 468 control point pairs were associated, about 6% of the total feature points. The small percentage is due to the strict $\kappa$-ratio threshold of less than 50%; more detailed discussion of the $\kappa$-ratio can be found in [17].

Figure 7: An example of feature points detection and association.

4. Discussion

Most, if not all, registration algorithms ignore the underlying tissue features and rely on the similarity of image intensity. In contrast to intensity-based image registration, feature-based registration extracts information about image structure, including shape and texture, and is therefore generally more effective at detecting feature points. Features contained in a small control volume around a point can be used as a signature of the point [10]. The use of SIFT descriptors [15, 31, 32] is an alternative and potentially more advantageous way to associate two input images before deformable registration. The fully automatic feature point detection and association makes the approach ideally suited for deformable registration.

It is important to note that the proposed registration method is based not only on individual point information but also on information about the relationship between points (e.g., continuity). In practice, although the control points are discrete, the gradient vector of each point encodes continuity information.

For different purposes, control points are positioned in different regions. For instance, if we seek to separate bone and soft tissue in deformable registration, the control points are positioned in the bone region; if the objective is to track respiratory movement, the control points should be positioned in the lung. Therefore, different intensity thresholds are defined for different tissues: for bone, the threshold is above 100, while for the pancreas it is between 0 and 100.
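The tissue-dependent thresholding above can be sketched as a simple mask selection. The threshold values come from the text; the function name and the dict-free interface are illustrative assumptions.

```python
import numpy as np

def feature_region_mask(ct, tissue="bone"):
    """Select the region in which control points are allowed, per the
    intensity thresholds discussed above (bone: > 100; pancreas: 0..100).
    (a sketch; the function name and interface are illustrative)
    """
    if tissue == "bone":
        return ct > 100
    if tissue == "pancreas":
        return (ct > 0) & (ct < 100)
    raise ValueError(f"no threshold defined for {tissue!r}")
```

Feature points falling outside the mask for the tissue of interest would simply be excluded before matching.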

Compared to B-spline deformable registration, no iterative procedure is needed in this method, and the calculation is at least ten times faster. Typically, control point matching takes only 2-3 minutes, whereas B-spline registration with the same accuracy may take an hour.

5. Conclusion

The novelties of this work are (1) the seamless incorporation of autodetected tissue feature information into an accurate and robust registration, achieving accuracy within one voxel, and (2) the absence of an iterative procedure, which makes the calculation much faster than B-spline image registration.

Inclusion of a prior anatomical knowledge is a key step in bringing the currently available deformable registration to the next level. The proposed method provides a clinically practical solution and should be valuable for various IGRT applications.

Conflict of Interests

No actual or potential conflict of interests exists.

Acknowledgments

This work is supported in part and sponsored by National Natural Science Foundation of China (81171402), NSFC Joint Research Fund for Overseas Research Chinese, Hong Kong and Macao Young Scholars (30928030), Guangdong Innovation Research Team Fund for Low-Cost Healthcare Technologies (GIRTF-LCHT, Y07101l001), National Basic Research Program 973 (2010CB732606) from Ministry of Science and Technology, China, and Shenzhen Fundamental Research (JC201005270312A).

References

  1. E. Schreibmann and L. Xing, “Narrow band deformable registration of prostate magnetic resonance imaging, magnetic resonance spectroscopic imaging, and computed tomography studies,” International Journal of Radiation Oncology Biology Physics, vol. 62, no. 2, pp. 595–605, 2005.
  2. T. Zhang, N. P. Orton, T. R. Mackie, and B. R. Paliwal, “Technical note: a novel boundary condition using contact elements for finite element deformable image registration,” Medical Physics, vol. 31, no. 9, pp. 2412–2415, 2004.
  3. T. Guerrero, G. Zhang, T.-C. Huang, and K.-P. Lin, “Intrathoracic tumour motion estimation from CT imaging using the 3D optical flow method,” Physics in Medicine and Biology, vol. 49, no. 17, pp. 4147–4161, 2004.
  4. W. Lu, G. H. Olivera, Q. Chen, M. L. Chen, and K. J. Ruchala, “Automatic re-contouring in 4D radiotherapy,” Physics in Medicine and Biology, vol. 51, no. 5, pp. 1077–1099, 2006.
  5. G. G. Zhang, T. C. Huang, T. Guerrero et al., “Use of three-dimensional (3D) optical flow method in mapping 3D anatomic structure and tumor contours across four-dimensional computed tomography data,” Journal of Applied Clinical Medical Physics, vol. 9, no. 1, article 2738, 2008.
  6. M. Chao, T. Li, E. Schreibmann, A. Koong, and L. Xing, “Automated contour mapping with a regional deformable model,” International Journal of Radiation Oncology Biology Physics, vol. 70, no. 2, pp. 599–608, 2008.
  7. H. Wang, L. Dong, J. O'Daniel et al., “Validation of an accelerated 'demons' algorithm for deformable image registration in radiation therapy,” Physics in Medicine and Biology, vol. 50, no. 12, pp. 2887–2905, 2005.
  8. L. Xing, B. Thorndyke, E. Schreibmann et al., “Overview of image-guided radiation therapy,” Medical Dosimetry, vol. 31, no. 2, pp. 91–112, 2006.
  9. J. Lian, L. Xing, S. Hunjan et al., “Mapping of the prostate in endorectal coil-based MRI/MRSI and CT: a deformable registration and validation study,” Medical Physics, vol. 31, no. 11, pp. 3087–3094, 2004.
  10. E. Schreibmann and L. Xing, “Image registration with auto-mapped control volumes,” Medical Physics, vol. 33, no. 4, pp. 1165–1179, 2006.
  11. M. Kass, A. Witkin, and D. Terzopoulos, “Snakes: active contour models,” International Journal of Computer Vision, vol. 1, no. 4, pp. 321–331, 1988.
  12. L. Ibanez, W. Schroeder, and L. Ng, The ITK Software Guide, Kitware Inc., 2003.
  13. K. Rohr, “Image registration based on thin plate splines and local estimates of anisotropic landmark localization uncertainties,” in Proceedings of the Medical Image Computing and Computer-Assisted Intervention (MICCAI '98), 1998.
  14. D. G. Lowe, “Object recognition from local scale-invariant features,” in Proceedings of the 7th IEEE International Conference on Computer Vision (ICCV '99), pp. 1150–1157, Corfu, Greece, September 1999.
  15. D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
  16. K. Mikolajczyk and C. Schmid, “A performance evaluation of local descriptors,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 10, pp. 1615–1630, 2005.
  17. Y. Xie, M. Chao, P. Lee, and L. Xing, “Feature-based rectal contour propagation from planning CT to cone beam CT,” Medical Physics, vol. 35, no. 10, pp. 4450–4459, 2008.
  18. F. L. Bookstein, “Principal warps: thin-plate splines and the decomposition of deformations,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 6, pp. 567–585, 1989.
  19. W. Lu, M. L. Chen, G. H. Olivera, K. J. Ruchala, and T. R. Mackie, “Fast free-form deformable registration via calculus of variations,” Physics in Medicine and Biology, vol. 49, no. 14, pp. 3067–3087, 2004.
  20. R. Kashani, K. Lam, D. Litzenberg, and J. Balter, “Technical note: a deformable phantom for dynamic modeling in radiation therapy,” Medical Physics, vol. 34, no. 1, pp. 199–201, 2007.
  21. R. Kashani, M. Hub, J. M. Balter et al., “Objective assessment of deformable image registration in radiotherapy: a multi-institution study,” Medical Physics, vol. 35, no. 12, pp. 5944–5953, 2008.
  22. M. M. Coselmon, J. M. Balter, D. L. McShan, and M. L. Kessler, “Mutual information based CT registration of the lung at exhale and inhale breathing states using thin-plate splines,” Medical Physics, vol. 31, no. 11, pp. 2942–2948, 2004.
  23. T. Hartkens, D. Rueckert, J. A. Schnabel, D. J. Hawkes, and D. L. G. Hill, “VTK CISG Registration Toolkit: an open source software package for affine and non-rigid registration of single- and multimodal 3D images,” in Proceedings of the Workshop on Bildverarbeitung für die Medizin, Informatik Aktuell (BVM '02), 2002.
  24. M. L. Kessler, “Image registration and data fusion in radiation therapy,” British Journal of Radiology, vol. 79, pp. S99–S108, 2006.
  25. J. A. Schnabel, D. Rueckert, M. Quist et al., “A generic framework for non-rigid registration based on non-uniform multi-level free-form deformations,” in Medical Image Computing and Computer-Assisted Intervention (MICCAI '01), vol. 2208 of Lecture Notes in Computer Science, pp. 573–581, 2001.
  26. A. Roche, X. Pennec, M. Rudolph et al., “Generalized correlation ratio for rigid registration of 3D ultrasound with MR images,” in Proceedings of the 3rd International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI '00), vol. 1935 of Lecture Notes in Computer Science, pp. 567–577, 2000.
  27. J. Kybic and M. Unser, “Fast parametric elastic image registration,” IEEE Transactions on Image Processing, vol. 12, no. 11, pp. 1427–1442, 2003.
  28. J. P. Thirion, “Image matching as a diffusion process: an analogy with Maxwell's demons,” Medical Image Analysis, vol. 2, no. 3, pp. 243–260, 1998.
  29. B. C. Davis, M. Foskey, J. Rosenman, L. Goyal, S. Chang, and S. Joshi, “Automatic segmentation of intra-treatment CT images for adaptive radiation therapy of the prostate,” in Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI '05), vol. 3749 of Lecture Notes in Computer Science, pp. 442–450, Palm Springs, Calif, USA, October 2005.
  30. M. Foskey, B. Davis, L. Goyal et al., “Large deformation three-dimensional image registration in image-guided radiation therapy,” Physics in Medicine and Biology, vol. 50, no. 24, pp. 5869–5892, 2005.
  31. T. Lindeberg, “Feature detection with automatic scale selection,” International Journal of Computer Vision, vol. 30, no. 2, pp. 79–116, 1998.
  32. T. Lindeberg, “Edge detection and ridge detection with automatic scale selection,” International Journal of Computer Vision, vol. 30, no. 2, pp. 117–154, 1998.