BioMed Research International

Special Issue: Representation Learning in Radiology

Research Article | Open Access

Shibin Wu, Pin He, Shaode Yu, Shoujun Zhou, Jun Xia, Yaoqin Xie, "To Align Multimodal Lumbar Spine Images via Bending Energy Constrained Normalized Mutual Information", BioMed Research International, vol. 2020, Article ID 5615371, 11 pages, 2020. https://doi.org/10.1155/2020/5615371

To Align Multimodal Lumbar Spine Images via Bending Energy Constrained Normalized Mutual Information

Academic Editor: Tobias De Zordo
Received: 09 Apr 2020
Accepted: 15 Jun 2020
Published: 11 Jul 2020

Abstract

Aligning multimodal images is important for information fusion, clinical diagnosis, treatment planning, and delivery, yet few methods have been dedicated to matching computerized tomography (CT) and magnetic resonance (MR) images of the lumbar spine. This study proposes a coarse-to-fine registration framework to address this issue. Firstly, a pair of CT-MR images is rigidly aligned for global positioning. Then, a bending energy term is penalized into the normalized mutual information for the local deformation of soft tissues. In the end, the framework is validated on 40 pairs of CT-MR images from our in-house collection and 15 image pairs from the SpineWeb database. Experimental results show that a high overlapping ratio (in-house collection, vertebrae , blood vessel ; SpineWeb, vertebrae , blood vessel ) and a low target registration error (in-house collection, ; SpineWeb, ) are achieved. The proposed framework concerns both the incompressibility of bone structures and the nonrigid deformation of soft tissues. It enables accurate CT-MR registration of lumbar spine images and facilitates image fusion, spine disease diagnosis, and interventional treatment delivery.

1. Introduction

The spine is the backbone of the body trunk, protecting the spinal cord, the most significant nerve pathway in the body. On the other hand, spine injuries and disorders affect up to 80% of the world population and may cause deformity and disability, which has become a major health and social problem [1–3]. For instance, lumbar degenerative disease accompanied by pathological changes might result in lumbocrural pain, neural dysfunction, instability of facet joints, and spino-pelvic sagittal imbalance, and thus the quality of life decreases dramatically. In addition, due to the aging population, the global burden related to spinal disease treatment is expected to rise significantly in the coming decades.

To align intrapatient multimodal images, such as computerized tomography (CT) and magnetic resonance (MR), benefits clinical diagnosis, treatment planning, and delivery for lumbar spinal diseases [4, 5]. However, few methods were dedicated to matching lumbar spine images. Panigrahy et al. developed a method for CT-MR cervical spine images which needed anatomical landmarks to guide image registration [6]. Palkar and Mishra combined different orthogonal wavelet transforms with various transform sizes for CT-MR spine image fusion, while interactive localization of control points was required [7]. Tomazevic et al. implemented an approach for rigid alignment of volumetric CT or MR to X-ray images [8]. To simplify the registration problem in real-world scenarios, images were acquired from a cadaveric lumbar spine phantom and three-dimensional (3D) images contained only one of the five vertebrae. Otake et al. proposed a registration method for 3D and planar images which was used for spine intervention and vertebral labeling in the presence of anatomical deformation [9]. Harmouche et al. designed an articulated model for MR and X-ray spine images [10]. Hille et al. presented an interactive framework, and rough annotation of the center regions in different modalities was used to guide the registration [11].

Accurate alignment of intrapatient CT-MR images is challenging. Anatomically, the human spine consists of inflexible vertebrae surrounded by soft tissues, such as nerves, vessels, and muscles. Moreover, the vertebrae of the lumbar spine are connected by facet joints in the back, which allow forward and backward extension and twisting movements. Furthermore, spinal deformity imposes difficulties on multimodal image registration. Specifically, during image acquisition, patients can only lie flat for a short time due to pain, and consequently, motion becomes unavoidable. Last but not least, there are intrinsic differences between CT and MR imaging.

Figure 1 shows a pair of intrapatient CT-MR images. In CT images, the lumbar spine region easily highlights itself against the rest of the soft tissues (the top row), while in MR images, soft tissues show various intensities and, in particular, it might be hard to distinguish rigid bones from soft tissues (the bottom row). In the figure, soft tissues in MR images show different contrast from those in CT images (red arrows), undesirable artifacts caused by the bias field are observed in MR images (green arrows), and the image pairs show different imaging fields of view. These factors obviously pose difficulties for image registration.

Image registration is important in medical image analysis [12, 13, 14]. Based on similarity metrics, registration methods could be generally classified into intensity- and feature-based methods. Among the intensity-based methods, mutual information (MI) is well known, and it was primarily presented for MR breast image alignment [15]. Afterwards, the metric is used in multimodal medical image registration [16]. For specific applications, MI has been modified to enhance the performance of image registration. For instance, normalized MI (NMI) was proposed for invariant entropy measure [17], regional MI was implemented to capture volume changes when local tissue contrast varied in serial MR images [18], localized MI was designed for atlas matching and prostate segmentation [19], conditional MI was developed to incorporate joint histogram and intensity distribution for image description [20], self-similarity weighted αMI was presented for handheld ultrasound and MR image alignment [21], and MI was also advanced with spatially encoded information [22].

Feature-based methods aim to quantify detected landmarks with features for image registration. Ou et al. collected multiscale multiorientation Gabor features to weight mutual-saliency points for matching [23]. Zhang et al. used scale-invariant features and corner descriptors for lung image registration [24]. Heinrich et al. designed modality independent neighborhood descriptor (MIND) which extracted the distinctive structure in small image patches for multimodal deformation registration [25]. Via principal component analysis of deformation, a low-dimension statistical model was learned [26]. Toews et al. combined invariant features of volumetric geometry and appearance for image alignment [27]. Determined by the moments of image patches, a self-similarity inspired local descriptor was presented [28]. Jiang et al. designed a discriminative local derivative pattern which encoded images of different modalities into similar representation [29]. Woo et al. combined spatial and geometric context of detected landmarks [30], and Carvalho et al. considered intensity and geometrical features [31] into a similarity metric. Weistrand and Svensson constrained image registration with anatomical landmarks for local tissue deformation [32].

Embedding a proper penalty term into a similarity metric is helpful in specific applications. Rueckert et al. used a term to regularize the local deformation to be smooth in breast MR image registration [33]. Rohlfing et al. designed a local volume preservation constraint, assuming the soft tissues incompressible under small deformation [34]. Staring et al. proposed a rigidity penalty and modeled the local transform when thorax images with tumors were aligned [35]. To model fetal brain motion, Chen et al. utilized total-variation regularization, and a penalty was adopted toward piece-wise convergence [12]. Due to local tissue rigidity characteristics, Ruan et al. added a regularization term for aligning inhale-exhale CT lung images [36]. Fiorino et al. designed the Jacobian-volume histogram of deforming organs to evaluate parotid shrinkage [37].

This study proposes a coarse-to-fine framework to address the registration of intrapatient CT-MR images of the lumbar spine. It develops a similarity metric that penalizes a bending energy term into NMI for the local deformation of soft tissues. The most similar work is the comparison of bending energy penalized global and local MI metrics in aligning positron emission tomography and MR images [38], while this study distinguishes itself by the proposed coarse-to-fine registration framework, the bending energy penalized NMI (BEP-NMI), and the application to CT-MR lumbar spine images.

3. Materials and Methods

3.1. Data Collection

Two data sets were analyzed. One is our in-house collection which contains 40 pairs of lumbar spine images from the Department of Radiology, Shenzhen Second People’s Hospital, the First Affiliated Hospital of Shenzhen University. CT images were acquired through SIEMENS SOMATO. The voxel resolution is , and the matrix size is with slices. T2-weighted MR images were acquired using a 1.5 Tesla scanner (SIEMENS Avanto). The physical resolution is , the matrix size is , and the slice number ranges between 60 and 75.

The other data set is accessible online, namely SpineWeb (http://spineweb.digitalimaginggroup.ca). It includes 15 image pairs of lumbar spine. The physical resolution of CT images is , the image size is , and the slice number is 77 per volume. The resolution of T1-weighted MR images is , the image size is , and each volume contains 42 slices.

3.2. The Proposed Framework

The proposed framework consists of two steps, both of which use intensity-based image registration methods. An intensity-based registration method can be treated as an optimization problem in which the similarity metric performs as the cost function. Given a fixed image $I_F$ and a moving image $I_M$ in 3D space, image registration aims to map the moving image to the space of the fixed image under the guidance of a similarity metric $S$. When an additional regularization term $P$ is penalized into $S$, the registration problem can be formulated as

$$\hat{\mu} = \arg\min_{\mu} \mathcal{C}\left(T_{\mu}; I_F, I_M\right), \qquad \mathcal{C} = -S\left(I_F, I_M \circ T_{\mu}\right) + \gamma P\left(T_{\mu}\right),$$

where $T_{\mu}$ is a transform model, $\mathcal{C}$ compromises the metric $S$ and the regularity term $P$ through the weight $\gamma$, $\mu$ is the vector of transform coefficients, and $T_{\mu_0}$ is the model initialized by $\mu_0$.

Figure 2 illustrates the proposed framework. It indicates a rigid registration stage and a hierarchical deformation stage, and NMI and BEP-NMI, respectively, perform as the similarity metric. Moreover, adaptive stochastic gradient descent (ASGD) [39] is applied for hyperparameter optimization. Specifically, an affine transformation with 12 degrees of freedom is employed in the first stage, and a B-spline elastic model is used for free-form deformation in the second stage.

3.2.1. Rigid Registration

An affine transform model is used here. The transform can be formulated by

$$T_{\mu}(x) = A(x - c) + t + c,$$

where $A$ is a $3 \times 3$ matrix that contains the rotation, scale, and shear coefficients, $c$ is the center of rotation, $t$ is a translation vector, and $\mu = (a_{11}, \ldots, a_{33}, t_x, t_y, t_z)$ is a vector of 12 degrees of freedom in volumetric image registration.
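As a minimal illustration, the affine mapping about a rotation center can be sketched in Python; the function name and the use of NumPy are assumptions for this sketch, not the authors' implementation:

```python
import numpy as np

def affine_transform_point(x, A, c, t):
    """Apply T(x) = A(x - c) + t + c: an affine map about a rotation center c.

    A is a 3x3 matrix (rotation/scale/shear), c is the center of rotation,
    and t is the translation vector; together they carry 12 degrees of freedom.
    """
    x, c, t = (np.asarray(v, dtype=float) for v in (x, c, t))
    return np.asarray(A, dtype=float) @ (x - c) + t + c
```

With $A$ set to the identity, the map reduces to a pure translation, which makes a quick sanity check.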

Rigid registration aims at the global positioning of the whole body and thus provides an initial alignment of the lumbar spine. A 3-level recursive pyramid is adopted, which smooths and downsamples the source volumes by a factor of 2 at each level. The metric NMI and the affine transform are employed at each scale.

3.2.2. Hierarchical Deformation

Hierarchical deformation is a coarse-to-fine adjustment procedure [40]. This setup utilizes Gaussian pyramid without downsampling to match images from the global structures toward the fine details.

B-spline transform. The B-splines are used to depict the local shape difference between the lumbar vertebrae. To construct the B-spline based free-form deformation model, let $\Omega$ be the spatial domain of a 3D image. A lattice of $n_x \times n_y \times n_z$ control points is denoted as $\Phi$, spanning the integer grid in $\Omega$, and $\phi_{i,j,k}$ denotes the control point at $(i, j, k)$ on the mesh $\Phi$. Then, the elastic model can be expressed as a 3D tensor product of the uniform B-spline of order 3 as below,

$$T(x, y, z) = \sum_{l=0}^{3} \sum_{m=0}^{3} \sum_{n=0}^{3} B_l(u)\, B_m(v)\, B_n(w)\, \phi_{i+l,\, j+m,\, k+n},$$

where $u$, $v$, and $w$ are the fractional offsets of $(x, y, z)$ relative to the control point grid, and $B_l$ represents the $l$th basis function of the cubic B-spline,

$$B_0(u) = \frac{(1-u)^3}{6}, \quad B_1(u) = \frac{3u^3 - 6u^2 + 4}{6}, \quad B_2(u) = \frac{-3u^3 + 3u^2 + 3u + 1}{6}, \quad B_3(u) = \frac{u^3}{6},$$

where $0 \le u < 1$. The basis functions weigh the contribution of each control point to $T$ based on its distance to the point $(x, y, z)$.
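The four cubic basis functions are simple polynomials, sketched below for one axis (the function name is an assumption); a useful property to verify is that they form a partition of unity:

```python
def cubic_bspline_basis(u):
    """The four cubic B-spline basis functions B_0..B_3 evaluated at u in [0, 1).

    They weigh the four nearest control points along one axis and form a
    partition of unity, i.e. they sum to 1 for every u.
    """
    return [
        (1 - u) ** 3 / 6.0,
        (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
        (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
        u ** 3 / 6.0,
    ]
```

The full 3D weight of a control point is the product of one such basis value per axis, as in the tensor-product formula above.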

Since the B-splines are locally controlled, the computation remains efficient even for a large number of control points. In particular, changing a control point affects only the transform in its local neighborhood.

BEP-NMI. The metric MI is preferred in multimodal image registration. Given $I_F$ and $I_M$ with intensity bins $f$ and $m$, MI is quantified from a joint probability function $p(f, m)$ and the marginal probability distribution functions $p_F(f)$ and $p_M(m)$ of $I_F$ and $I_M$. The metric MI between a pair of images, $I_F$ and $I_M$, can be described as

$$\mathrm{MI}(I_F, I_M) = H(I_F) + H(I_M) - H(I_F, I_M),$$

where $H(I_F)$ and $H(I_M)$ are the marginal entropies and $H(I_F, I_M)$ is the joint entropy of $I_F$ and $I_M$.

The metric NMI is more robust to the change of overlapped tissue regions. It uses a Parzen-window approach to estimate the probability density function. The entropy of a fixed image is defined as $H(I_F) = -\sum_f p_F(f) \log p_F(f)$, where $p_F$ is a probability distribution estimated by using Parzen windows. The entropy of a moving image can be computed in a similar way. Subsequently, the NMI between $I_F$ and $I_M$ can be presented as

$$\mathrm{NMI}(I_F, I_M) = \frac{H(I_F) + H(I_M)}{H(I_F, I_M)}.$$
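As a sketch of how NMI can be estimated in practice, the snippet below uses a plain joint histogram rather than the paper's Parzen-window density estimate (a simplifying assumption), so the values are approximate:

```python
import numpy as np

def normalized_mutual_information(fixed, moving, bins=32):
    """Estimate NMI(F, M) = (H(F) + H(M)) / H(F, M) from a joint histogram.

    A histogram estimator stands in for the Parzen-window approach here.
    NMI ranges from 1 (independent images) to 2 (identical images).
    """
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    p_joint = joint / joint.sum()
    p_f = p_joint.sum(axis=1)          # marginal distribution of the fixed image
    p_m = p_joint.sum(axis=0)          # marginal distribution of the moving image

    def entropy(p):
        p = p[p > 0]                   # 0 * log(0) is taken as 0
        return -np.sum(p * np.log(p))

    return (entropy(p_f) + entropy(p_m)) / entropy(p_joint)
```

Identical inputs give the maximal value of 2, while statistically independent inputs give a value close to 1, which is a convenient sanity check.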

In order to regularize the B-spline deformation and to prevent the rigid structures from being smoothed, a BEP term is added to the NMI. The new cost function, BEP-NMI, is formulated as

$$\mathcal{C}_{\mathrm{BEP\text{-}NMI}} = -\alpha \, \mathrm{NMI}\left(I_F, I_M \circ T_{\mu}\right) + \lambda \, \mathrm{BEP}\left(T_{\mu}\right),$$

where $\alpha$ and $\lambda$ are predefined constants to weigh between global similarity and local regularity. In this study, off-line experiments indicated that the adopted weighting was a good choice.

The penalty terms are commonly based on the first- or second-order spatial derivatives of the transform [35, 36]. In this study, the BEP term is composed of the second-order derivatives [35, 40] in the volumetric space,

$$\mathrm{BEP}(T) = \frac{1}{V} \int_{\Omega} \left[ \left(\frac{\partial^2 T}{\partial x^2}\right)^2 + \left(\frac{\partial^2 T}{\partial y^2}\right)^2 + \left(\frac{\partial^2 T}{\partial z^2}\right)^2 + 2\left(\frac{\partial^2 T}{\partial x \partial y}\right)^2 + 2\left(\frac{\partial^2 T}{\partial x \partial z}\right)^2 + 2\left(\frac{\partial^2 T}{\partial y \partial z}\right)^2 \right] dx\, dy\, dz,$$

where $\Omega$ is the domain of the 3D image and $V$ its volume. Equation (8) can be approximated as a discretely sampled sum over the volume as below,

$$\mathrm{BEP}(T) \approx \frac{1}{N} \sum_{x_i \in \Omega} b(x_i),$$

where $N$ is the number of voxels in $\Omega$, and $b(x_i)$ denotes the sum of the squared second-order derivatives of $T$ inside the integral part of Equation (8) at a voxel location $x_i$. Specially, the derivative approximation with finite differences can be restricted to the local neighborhood of the control point.
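The discrete approximation can be sketched with finite differences over a dense displacement field; this is a didactic sketch under the assumptions of unit grid spacing and a field stored as one 3D array per displacement component:

```python
import numpy as np

def bending_energy(deformation):
    """Approximate the bending energy of a 3D deformation field.

    `deformation` has shape (3, nx, ny, nz): one displacement component per
    axis. The energy is the sum over all voxels of the squared second-order
    spatial derivatives (finite differences, unit spacing), divided by the
    voxel count. Looping over all derivative pairs (i, j) counts each mixed
    term twice, matching the factor 2 in the continuous integral.
    """
    deformation = np.asarray(deformation, dtype=float)
    total = 0.0
    for comp in deformation:              # each displacement component
        for g in np.gradient(comp):       # first derivatives along x, y, z
            for h in np.gradient(g):      # second derivatives along x, y, z
                total += np.sum(h ** 2)
    return total / deformation[0].size
```

An affine (linear) displacement field has zero bending energy, which is exactly why the penalty preserves rigid structures while discouraging local curvature.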

Optimization. Given an initial parameter $\mu_0$, an optimization algorithm updates the parameters by an increment iteratively to reduce the cost function. ASGD is used in this study, since it runs faster and is less likely to get trapped in local minima when compared with other gradient-based optimization algorithms [39]. Notably, the ASGD implemented in the elastix package (http://elastix.isi.uu.nl) is used for adaptive step size prediction, and the initial parameters are set as those in [39, 40].
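The iterative update underlying gradient-based optimizers can be illustrated with plain (non-adaptive) gradient descent; this is a didactic sketch, not the elastix implementation, and both the adaptive step size prediction and the stochastic gradient subsampling of ASGD are omitted:

```python
import numpy as np

def gradient_descent(cost_grad, mu0, step=0.1, iters=200):
    """Iterate mu_{k+1} = mu_k - a * grad C(mu_k) with a fixed step size a.

    ASGD additionally adapts the step size per iteration and estimates the
    gradient from a random voxel subsample; both refinements are omitted here.
    """
    mu = np.asarray(mu0, dtype=float)
    for _ in range(iters):
        mu = mu - step * np.asarray(cost_grad(mu), dtype=float)
    return mu
```

On a convex quadratic cost the iterates contract geometrically toward the minimizer, which is easy to verify numerically.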

3.3. A Comparison Method

MIND is a feature-based method that has been widely used in multimodal deformable registration [25, 41]. It represents the distinctive image structure in a local neighborhood and explores the similarity of small image patches by using Gaussian-weighted patch distances [25].

MIND can be formulated by a patch distance $D_p$, a spatial search region $R$, and a variance estimate $V$ as below,

$$\mathrm{MIND}(I, x, r) = \frac{1}{n} \exp\left(-\frac{D_p(I, x, x+r)}{V(I, x)}\right), \quad r \in R,$$

where $n$ is a normalization constant, $R$ is the search region, and $D_p$ is the Gaussian-weighted sum of squared differences between the two image patches centered at $x$ and $x + r$, which can be computed efficiently by a convolution filter over a dense sampling of the image. As such, an image can be represented by a vector of size $|R|$ at each location $x$. Moreover, $V(I, x)$ can be computed as the mean of the patch distances within a small neighborhood $N$,

$$V(I, x) = \frac{1}{|N|} \sum_{n' \in N} D_p(I, x, x + n').$$

In Equation (10) to Equation (12), $N$ denotes a six-connected neighborhood and $P$ indicates a volume block (patch).

The similarity metric used in MIND comes from the sum of absolute differences. For the fixed image $I_F$ and the moving image $I_M$, the local difference at a voxel $x$ is

$$S(x) = \frac{1}{|R|} \sum_{r \in R} \left| \mathrm{MIND}(I_F, x, r) - \mathrm{MIND}(I_M, x, r) \right|.$$

The default value of $|R|$ is 6, which means the 6-connected neighbors are taken into computation.
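To make the construction concrete, the snippet below sketches a deliberately simplified 2D variant: single-pixel "patches" stand in for the Gaussian-weighted patch distances $D_p$, and the four axis neighbours form the search region. This is an illustrative assumption, not the original 3D MIND implementation:

```python
import numpy as np

def mind_descriptor(img, eps=1e-8):
    """A simplified 2D sketch of the MIND descriptor.

    Squared differences to the four axis neighbours stand in for the patch
    distances D_p; they are normalised by their local mean V(I, x) and mapped
    through exp(-D_p / V), then scaled so the maximal response at each pixel
    is 1 (as in the original formulation).
    """
    img = np.asarray(img, dtype=float)
    shifts = [(1, 0), (-1, 0), (0, 1), (0, -1)]          # search region R
    d = np.stack([(img - np.roll(img, s, axis=(0, 1))) ** 2 for s in shifts])
    v = d.mean(axis=0) + eps                             # variance estimate V(I, x)
    mind = np.exp(-d / v)
    return mind / mind.max(axis=0)                       # per-pixel normalisation
```

The similarity between two images is then the mean absolute difference of their descriptors over $R$, evaluated voxel by voxel.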

3.4. Performance Evaluation
3.4.1. Tissue Overlapping

Tissue overlapping quantifies the overlapping ratio of outlined tissue regions in the fixed image and its aligned image, which can distinguish reasonable from poor registration [42, 43]. This study focuses on the regions of lumbar vertebrae and blood vessels. Assuming the outlined tissues in the fixed and aligned images are, respectively, denoted as $A$ and $B$, the voxel-wise Jaccard ($J$) index and Dice ($D$) coefficient can be, respectively, described as

$$J = \frac{|A \cap B|}{|A \cup B|}, \qquad D = \frac{2\,|A \cap B|}{|A| + |B|},$$

where $|\cdot|$ indicates the number of voxels per volume.
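Both overlap measures reduce to simple set counts on binary masks, as this sketch shows (the function name is an assumption):

```python
import numpy as np

def jaccard_dice(fixed_mask, aligned_mask):
    """Voxel-wise Jaccard index and Dice coefficient of two binary masks."""
    a = np.asarray(fixed_mask, dtype=bool)
    b = np.asarray(aligned_mask, dtype=bool)
    inter = np.logical_and(a, b).sum()
    j = inter / np.logical_or(a, b).sum()     # |A ∩ B| / |A ∪ B|
    d = 2.0 * inter / (a.sum() + b.sum())     # 2 |A ∩ B| / (|A| + |B|)
    return j, d
```

Note that $D \ge J$ always holds, and both reach 1 only for a perfect overlap.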

3.4.2. Target Registration Errors

As for landmark annotation, ImageJ (http://imagej.nih.gov/ij/) was used. A pair of CT-MR images is displayed side by side. Then, landmarks are identified and manually annotated by an imaging radiologist (3+ years of experience) and further confirmed by a senior radiologist (10+ years of experience). Once landmarks are annotated, their locations in 3D space are recorded. In this study, anatomical landmark points are localized on the vertebral body center (VBC), neural edge (NE), disc center (DC), and blood vessel edge (BVE).

Target registration error (TRE) evaluates the distance between anatomical point pairs in the fixed and moving images. Here, assuming $p_i$ and $q_i$, respectively, denote the corresponding landmark point pairs in the fixed and moving images, the mean TRE is defined as

$$\mathrm{TRE} = \frac{1}{n} \sum_{i=1}^{n} \left\| p_i - q_i \right\|,$$

where $n$ is the number of landmark pairs and $\|\cdot\|$ indicates the Euclidean distance in 3D space.
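The mean TRE is a one-liner over the landmark arrays, sketched here for completeness (the function name is an assumption):

```python
import numpy as np

def mean_tre(points_fixed, points_moving):
    """Mean Euclidean distance between corresponding 3D landmark pairs."""
    p = np.asarray(points_fixed, dtype=float)
    q = np.asarray(points_moving, dtype=float)
    return float(np.linalg.norm(p - q, axis=1).mean())
```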

3.5. Software and Platform

The whole framework is implemented with Insight Segmentation and Registration Toolkit (http://www.itk.org) and the elastix package [40]. Experiments are performed on a desktop computer equipped with dual-core Intel i7 CPU (3.70 GHz) and 16 GB RAM memory.

4. Results

4.1. Tissue Overlapping

Figure 3 illustrates the tissue overlapping measure J of CT-MR image registration on the in-house collection (left) and the SpineWeb (right). The left figure shows that the proposed framework outperforms the MIND method on the vertebrae ( versus ) and blood vessel ( versus ) overlapping. In the right figure, the framework achieves higher values (vertebrae, ; blood vessel, ) than the MIND method (vertebrae, ; blood vessel, ), and thus, it leads to better performance.

Figure 4 shows the overlapping ratio D of multimodal image registration on the in-house collection (left) and the SpineWeb (right). The left figure indicates that the coarse-to-fine registration framework obtains better results than the MIND method on the vertebrae ( versus 0.77±0.05) and blood vessel ( versus ) overlapping. In the right figure, the MIND method (vertebrae, ; blood vessel, ) obtains inferior performance to the proposed framework (vertebrae, ; blood vessel, ).

4.2. Target Registration Errors

Figure 5 demonstrates the mean TRE value of anatomical landmark points between the proposed framework and the MIND algorithm on the in-house collection. The error-bar plot indicates that the TRE of the proposed framework is less than 3.00 mm (DC), while that of the MIND algorithm is larger than 4.00 mm (VBC) on average. In addition, statistical analysis indicates that the proposed framework significantly outperforms the MIND algorithm in each of the four sets of landmarks (, two-sample -test).

Table 1 shows the TRE values (, ) with respect to different landmark sets. The coarse-to-fine framework achieves TRE between (BVE) and (DC), while the TRE of the MIND method ranges from (BVE) to (DC), correspondingly larger than that from the proposed framework.


Landmark	The framework (mm)	MIND (mm)

VBC
NE
DC
BVE

The mean TRE on the SpineWeb dataset is shown in Figure 6. It is observed that the TRE value of the proposed framework is less than 3.00 mm (VBC and NE), while the MIND algorithm leads to the TRE values larger than 5.00 mm.

Statistical analysis indicates significant difference of the TRE values between the proposed framework and the MIND algorithm on aligning the pairs of VBC and BVE landmarks (, two-sample -test).

Table 2 summarizes the mean TRE values on different sets of landmark pairs. The proposed framework achieves TRE values between (BVE) and (VBC), and the TRE values of the MIND algorithm range from (BVE) to (VBC).


Landmark	The framework (mm)	MIND (mm)

VBC
NE
DC
BVE

4.3. Perceived Quality of Image Alignment

Visual assessment of registration quality is perceived from the fusion of CT and MR images, observed from three perspective views in Figure 7, where the panels show the CT image, the MR image, the aligned image from the proposed framework, and the aligned image from the MIND algorithm. Red arrows pointing to the soft tissue regions and green arrows pointing to the bone regions are used for comparison. Before registration, both bones and tissues are misaligned, such as the acantha, bones, nerves, and muscles. After image registration, the proposed framework aligns these parts in the MR images to the CT images with fine deformation. Specifically, both rigid bones and soft tissues are well matched, and the anatomical textures show consistent distributions in the aligned image. On the contrary, the MIND algorithm fails to accurately overlap the acantha, bones, nerves, and muscles.

4.4. Computation Time

Based on the software and platform above, it took about 62 seconds to complete the affine registration and 427 seconds to complete the deformable registration. Thus, a total of 8.15 minutes was required to fulfill the coarse-to-fine registration for a pair of CT-MR lumbar spine images.

5. Discussion

Intrapatient multimodal image registration can fuse multisource information that benefits disease diagnosis and treatment delivery. This study develops a coarse-to-fine framework to align intrapatient CT-MR lumbar spine images. It first utilizes the similarity metric NMI for global positioning and then the bending energy penalized NMI for local deformation of soft tissues. The proposed framework achieves a high tissue overlapping ratio and a low target registration error. It not only preserves the incompressibility of the vertebrae but also well matches local soft tissues, providing accurate elastic registration of lumbar spine images for clinical applications.

The proposed framework is a coarse-to-fine approach for multimodal image registration. It aligns anatomical structures and addresses both the potential difference in fields of view and the intrinsic differences between imaging modalities. The metric NMI is used, since it is a robust and accurate measure in multimodal image registration [17, 44]. After global positioning, a new similarity metric that integrates a bending energy term into NMI is used for the local deformation and registration of soft tissues. It is worth noting that the term encourages smooth displacements in registration [33]. Ceranka et al. embedded the term to improve multiatlas segmentation of the skeleton from whole-body MR images [45], and de Vos et al. integrated the term into unsupervised affine and deformable image registration by using a convolutional neural network [46]. Both works [45, 46] found that the term caused significantly less folding in image registration.

The framework takes the incompressibility of the vertebrae into account. Vertebrae are bony structures connected to each other by the ligamentum flavum at the neural arch [47]. The proposed framework enables global and local image structures to be well matched and inflexible bones and soft tissues to be properly deformed. Its superior performance has been verified on the in-house collection and the SpineWeb database. Experimental results demonstrate that the overlapping ratios of annotated vertebrae and blood vessels are larger than 0.85, and the target registration error is less than 2.40 mm on average. It outperforms the MIND algorithm partly due to its proper deformation of local soft tissues and incompressible lumbar vertebrae. The registration quality is further perceived in a CT-MR image pair. It is found that the marked tissues keep their relative locations after image registration with the proposed framework, since it not only well tackles the local soft tissue deformation but also conserves the rigid lumbar vertebrae.

Even though the proposed framework achieves superior performance on aligning CT-MR lumbar spine images, there is still room for further improvement. One way to enhance registration accuracy is to transform multimodal image registration into mono-modal image registration. Wachinger and Navab developed structural representations, such as entropy and Laplacian images, which map the images into a third space where they show close intensity or gradient distributions [48]. Moreover, deep networks have been explored to estimate CT images from MR images directly; in particular, the mapping between CT and MR images was learned without any patch-level pre- or postprocessing [49]. Another straightforward way is to utilize deep networks to learn the deformation field between different imaging modalities [50]. In addition, interactive image registration is desirable in interventional surgery, where a clinician could localize landmarks to guide and update the registration procedure [51].

There are several limitations in this study. One limitation is the lack of a comparison between NMI and BEP-NMI on deformable image registration, since our off-line experimental results show that NMI-based deformable registration is prone to distortion of the lumbar spine and unnatural deformation of soft tissues. Moreover, the demons algorithm and its variants [52, 53, 54] failed in the registration of lumbar spine images. Thus, this study reports the performance of the proposed framework and the MIND method only. In addition, how to properly balance the BEP term and the NMI remains an open problem that no existing method tackles well, while prior knowledge [35, 37] could be employed to further improve registration accuracy.

6. Conclusions

This paper presents a coarse-to-fine framework for the registration of intrapatient CT-MR lumbar spine images. It integrates the bending energy term into normalized mutual information for fine deformation of soft tissues around the incompressible vertebrae. Its high performance benefits multisource information fusion for accurate spine disease diagnosis, treatment planning, interventional surgery, and radiotherapy delivery.

Data Availability

The in-house collection of MR-CT image pairs used to support the findings of this study is restricted by the Medical Ethics Committee of Shenzhen Second People’s Hospital in order to protect patient privacy; requests for access to these data can be made to the author Shibin Wu (sb.wu@siat.ac.cn). The SpineWeb data set of MR-CT images used to support the findings is freely available online (https://spineweb.digitalimaginggroup.ca/spineweb/index.php?n=Main.Datasets).

Conflicts of Interest

The authors declare that there is no conflict of interest. The funding sponsors had no role in the design of this study; in the collection, analysis, or interpretation of data; in the writing of this manuscript; or in the decision to publish the experimental results.

Acknowledgments

The authors would like to thank the editor and anonymous reviewers for their invaluable comments that have helped to improve the paper quality. This work is supported in part by grants from the Shenzhen matching project (GJHS20170314155751703); the National Key Research and Development Program of China (2016YFC0105102); the National Natural Science Foundation of China (61871374); the Leading Talent of Special Support Project in Guangdong (2016TX03R139); the Science Foundation of Guangdong (2017B020229002, 2015B02023301); and the CAS Key Laboratory of Health Informatics.

References

  1. G. Zheng and S. Li, “Medical image computing in diagnosis and intervention of spinal diseases,” Computerized Medical Imaging and Graphics, vol. 45, pp. 99–101, 2015. View at: Publisher Site | Google Scholar
  2. F. Raciborski, R. Gasik, and A. Klak, “Disorders of the spine. A major health and social problem,” Annals of Physics, vol. 4, no. 4, pp. 196–200, 2016. View at: Publisher Site | Google Scholar
  3. G. Zhang, Y. Yang, Y. Hai, J. Li, X. Xie, and S. Feng, “Analysis of Lumbar Sagittal Curvature in Spinal Decompression and Fusion for Lumbar Spinal Stenosis Patients under Roussouly Classification,” BioMed Research International, vol. 2020, 8 pages, 2020. View at: Publisher Site | Google Scholar
  4. A. Toussaint, A. Richter, F. Mantel et al., “Variability in spine radiosurgery treatment planning - results of an international multi-institutional study,” Radiation Oncology, vol. 11, no. 1, p. 57, 2016. View at: Publisher Site | Google Scholar
  5. P. A. Helm, R. Teichman, S. L. Hartmann, and D. Simon, “Spinal navigation and imaging: history, trends, and future,” IEEE Transactions on Medical Imaging, vol. 34, no. 8, pp. 1738–1746, 2015. View at: Publisher Site | Google Scholar
  6. A. Panigrahy, S. Caruthers, J. Krejza et al., “Registration of three-dimensional MR and CT studies of the cervical spine,” American Journal of Neuroradiology, vol. 21, no. 2, pp. 282–289, 2000. View at: Google Scholar
  7. B. Palkar and D. Mishra, “Fusion of multi-modal lumbar spine images using Kekre’s hybrid wavelet transform,” IET Image Processing, vol. 13, no. 12, pp. 2271–2280, 2019. View at: Publisher Site | Google Scholar
  8. D. Tomazevic and F. Pernus, “Robust gradient-based 3-D/2-D registration of C-T and MR to X-ray images,” IEEE Transactions on Medical Imaging, vol. 27, no. 12, pp. 1704–1714, 2008. View at: Google Scholar
  9. Y. Otake, A. S. Wang, J. W. Stayman et al., “Robust 3D-2D image registration: application to spine interventions and vertebral labeling in the presence of anatomical deformation,” Physics in Medicine and Biology, vol. 58, no. 23, pp. 8535–8553, 2013. View at: Publisher Site | Google Scholar
  10. R. Harmouche, F. Cheriet, H. Labelle, and J. Dansereau, “3D registration of MR and X-ray spine images using an articulated model,” Computerized Medical Imaging and Graphics, vol. 36, no. 5, pp. 410–418, 2012.
  11. G. Hille, S. Saalfeld, S. Serowy, and K. Tönnies, “Multi-segmental spine image registration supporting image-guided interventions of spinal metastases,” Computers in Biology and Medicine, vol. 102, pp. 16–20, 2018.
  12. L. Chen, H. Zhang, S. Wu, S. Yu, and Y. Xie, “Estimating fetal brain motion with total-variation-based magnetic resonance image registration,” in World Congress on Intelligent Control and Automation (WCICA), pp. 809–813, Guilin, China, June 2016.
  13. A. Khalil, S. Ng, Y. M. Liew, and K. W. Lai, “An overview on image registration techniques for cardiac diagnosis and treatment,” Cardiology Research and Practice, Hindawi, 2018.
  14. J. Li and Q. Ma, “A fast subpixel registration algorithm based on single-step DFT combined with phase correlation constraint in multimodality brain image,” Computational and Mathematical Methods in Medicine, Hindawi, 2020.
  15. P. Viola and W. Wells III, “Alignment by maximization of mutual information,” International Journal of Computer Vision, vol. 24, no. 2, pp. 137–154, 1997.
  16. W. M. Wells, P. Viola, H. Atsumi, S. Nakajima, and R. Kikinis, “Multi-modal volume registration by maximization of mutual information,” Medical Image Analysis, vol. 1, no. 1, pp. 35–51, 1996.
  17. C. Studholme, D. L. G. Hill, and D. J. Hawkes, “An overlap invariant entropy measure of 3D medical image alignment,” Pattern Recognition, vol. 32, no. 1, pp. 71–86, 1999.
  18. C. Studholme, C. Drapaca, B. Iordanova, and V. Cardenas, “Deformation-based mapping of volume change from serial brain MRI in the presence of local tissue contrast change,” IEEE Transactions on Medical Imaging, vol. 25, no. 5, pp. 626–639, 2006.
  19. S. Klein, U. A. van der Heide, I. M. Lips, M. van Vulpen, M. Staring, and J. P. W. Pluim, “Automatic segmentation of the prostate in 3D MR images by atlas matching using localized mutual information,” Medical Physics, vol. 35, no. 4, pp. 1407–1417, 2008.
  20. D. Loeckx, P. Slagmolen, F. Maes, D. Vandermeulen, and P. Suetens, “Nonrigid image registration using conditional mutual information,” IEEE Transactions on Medical Imaging, vol. 29, no. 1, pp. 19–29, 2010.
  21. H. Rivaz, Z. Karimaghaloo, and D. L. Collins, “Self-similarity weighted mutual information: a new nonrigid image registration metric,” Medical Image Analysis, vol. 18, no. 2, pp. 343–358, 2014.
  22. X. Zhuang, S. Arridge, D. J. Hawkes, and S. Ourselin, “A nonrigid registration framework using spatially encoded mutual information and free-form deformations,” IEEE Transactions on Medical Imaging, vol. 30, no. 10, pp. 1819–1828, 2011.
  23. Y. Ou, A. Sotiras, N. Paragios, and C. Davatzikos, “DRAMMS: deformable registration via attribute matching and mutual-saliency weighting,” Medical Image Analysis, vol. 15, no. 4, pp. 622–639, 2011.
  24. R. Zhang, W. Zhou, Y. Li, S. Yu, and Y. Xie, “Nonrigid registration of lung CT images based on tissue features,” Computational and Mathematical Methods in Medicine, vol. 2013, Article ID 834192, 2013.
  25. M. P. Heinrich, M. Jenkinson, M. Bhushan et al., “MIND: modality independent neighbourhood descriptor for multi-modal deformable registration,” Medical Image Analysis, vol. 16, no. 7, pp. 1423–1435, 2012.
  26. J. A. Onofrey, L. H. Staib, and X. Papademetris, “Learning nonrigid deformations for constrained multi-modal image registration,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, Lecture Notes in Computer Science, pp. 171–178, 2013.
  27. M. Toews, L. Zöllei, and W. M. Wells, “Feature-based alignment of volumetric multimodal images,” in Information Processing in Medical Imaging, Lecture Notes in Computer Science, pp. 25–36, 2013.
  28. F. Zhu, M. Ding, and X. Zhang, “Self-similarity inspired local descriptor for non-rigid multi-modal image registration,” Information Sciences, vol. 372, pp. 16–31, 2016.
  29. D. Jiang, Y. Shi, X. Chen, M. Wang, and Z. Song, “Fast and robust multimodal image registration using a local derivative pattern,” Medical Physics, vol. 44, no. 2, pp. 497–509, 2017.
  30. J. Woo, M. Stone, and J. L. Prince, “Multimodal registration via mutual information incorporating geometric and spatial context,” IEEE Transactions on Image Processing, vol. 24, no. 2, pp. 757–769, 2015.
  31. D. D. B. Carvalho, S. Klein, Z. Akkus et al., “Joint intensity- and point-based registration of free-hand B-mode ultrasound and MRI of the carotid artery,” Medical Physics, vol. 41, no. 5, pp. 052904–052912, 2014.
  32. O. Weistrand and S. Svensson, “The ANACONDA algorithm for deformable image registration in radiotherapy,” Medical Physics, vol. 42, no. 1, pp. 40–53, 2015.
  33. D. Rueckert, L. Sonoda, C. Hayes, D. Hill, M. Leach, and D. Hawkes, “Nonrigid registration using free-form deformations: application to breast MR images,” IEEE Transactions on Medical Imaging, vol. 18, no. 8, pp. 712–721, 1999.
  34. T. Rohlfing, C. Maurer, D. Bluemke, and M. Jacobs, “Volume-preserving nonrigid registration of MR breast images using free-form deformation with an incompressibility constraint,” IEEE Transactions on Medical Imaging, vol. 22, no. 6, pp. 730–741, 2003.
  35. M. Staring, S. Klein, and J. Pluim, “A rigidity penalty term for nonrigid registration,” Medical Physics, vol. 34, no. 11, pp. 4098–4108, 2007.
  36. D. Ruan, J. Fessler, M. Roberson, J. Balter, and M. Kessler, “Nonrigid registration using regularization that accommodates local tissue rigidity,” Computers in Biology and Medicine, vol. 42, no. 1, pp. 123–128, 2012.
  37. C. Fiorino, E. Maggiulli, S. Broggi et al., “Introducing the jacobian-volume-histogram of deforming organs: application to parotid shrinkage evaluation,” Physics in Medicine and Biology, vol. 56, no. 11, pp. 3301–3312, 2011.
  38. S. Leibfarth, D. Monnich, S. Welz et al., “A strategy for multimodal deformable image registration to integrate PET/MR into radiotherapy treatment planning,” Acta Oncologica, vol. 52, no. 7, pp. 1353–1359, 2013.
  39. S. Klein, J. P. W. Pluim, M. Staring, and M. A. Viergever, “Adaptive stochastic gradient descent optimisation for image registration,” International Journal of Computer Vision, vol. 81, no. 3, pp. 227–239, 2009.
  40. S. Klein, M. Staring, K. Murphy, M. A. Viergever, and J. Pluim, “elastix: a toolbox for intensity-based medical image registration,” IEEE Transactions on Medical Imaging, vol. 29, no. 1, pp. 196–205, 2010.
  41. S. Reaungamornrat, T. De Silva, A. Uneri et al., “MIND demons: symmetric diffeomorphic deformable registration of MR and CT for image-guided spine surgery,” IEEE Transactions on Medical Imaging, vol. 35, no. 11, pp. 2413–2424, 2016.
  42. S. Yu, S. Wu, L. Zhuang et al., “Efficient segmentation of a breast in B-mode ultrasound tomography using three-dimensional GrabCut (GC3D),” Sensors, vol. 17, no. 8, p. 1827, 2017.
  43. T. Rohlfing, “Image similarity and tissue overlaps as surrogates for image registration accuracy: widely used but unreliable,” IEEE Transactions on Medical Imaging, vol. 31, no. 2, pp. 153–163, 2012.
  44. T. Veninga, H. Huisman, R. van der Maazen, and H. Huizenga, “Clinical validation of the normalized mutual information method for registration of CT and MR images in radiotherapy of brain tumors,” Journal of Applied Clinical Medical Physics, vol. 5, no. 3, pp. 66–79, 2004.
  45. J. Ceranka, S. Verga, M. Kvasnytsia et al., “Multi-atlas segmentation of the skeleton from whole-body MRI - impact of iterative background masking,” Magnetic Resonance in Medicine, vol. 83, no. 5, pp. 1851–1862, 2019.
  46. B. D. de Vos, F. F. Berendsen, M. A. Viergever, H. Sokooti, M. Staring, and I. Išgum, “A deep learning framework for unsupervised affine and deformable image registration,” Medical Image Analysis, vol. 52, pp. 128–143, 2019.
  47. R. Wang and M. M. Ward, “Arthritis of the spine,” in Spinal Imaging and Image Analysis, pp. 31–66, Springer, Cham, 2015.
  48. C. Wachinger and N. Navab, “Entropy and laplacian images: structural representations for multi-modal registration,” Medical Image Analysis, vol. 16, no. 1, pp. 1–17, 2012.
  49. W. Li, Y. Li, W. Qin et al., “Magnetic resonance image (MRI) synthesis from brain computed tomography (CT) images based on deep learning methods for magnetic resonance (MR)-guided radiotherapy,” Quantitative Imaging in Medicine and Surgery, vol. 10, no. 6, pp. 1223–1236, 2020.
  50. G. Haskins, U. Kruger, and P. Yan, “Deep learning in medical image registration: a survey,” Machine Vision and Applications, vol. 31, no. 1-2, p. 8, 2020.
  51. A. Herline, J. Herring, J. Stefansic, W. Chapman, R. Galloway, and B. Dawant, “Surface registration for use in interactive, image-guided liver surgery,” Computer Aided Surgery, vol. 5, no. 1, pp. 11–17, 2000.
  52. J. Thirion, “Image matching as a diffusion process: an analogy with Maxwell's demons,” Medical Image Analysis, vol. 2, no. 3, pp. 243–260, 1998.
  53. T. Vercauteren, X. Pennec, A. Perchant, and N. Ayache, “Diffeomorphic demons: efficient non-parametric image registration,” NeuroImage, vol. 45, no. 1, pp. S61–S72, 2009.
  54. H. Lombaert, L. Grady, X. Pennec, N. Ayache, and F. Cheriet, “Spectral log-demons: diffeomorphic image registration with very large deformations,” International Journal of Computer Vision, vol. 107, no. 3, pp. 254–271, 2014.

Copyright © 2020 Shibin Wu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

