Journal of Healthcare Engineering


Research Article | Open Access

Volume 2021 |Article ID 5520196 | https://doi.org/10.1155/2021/5520196

Xiaodan Sui, Yuanjie Zheng, Yunlong He, Weikuan Jia, "Symmetric Deformable Registration via Learning a Pseudomean for MR Brain Images", Journal of Healthcare Engineering, vol. 2021, Article ID 5520196, 8 pages, 2021. https://doi.org/10.1155/2021/5520196

Symmetric Deformable Registration via Learning a Pseudomean for MR Brain Images

Academic Editor: Saverio Maietta
Received: 13 Jan 2021
Revised: 23 Mar 2021
Accepted: 08 Apr 2021
Published: 23 Apr 2021

Abstract

Image registration is a fundamental task in medical image analysis, commonly used during image-guided interventions and data fusion. In this paper, we present a deep learning architecture that symmetrically learns and predicts the deformation field between a pair of images in an unsupervised fashion. To achieve this, we design a deep regression network to predict a deformation field that aligns a template-subject image pair. Specifically, instead of estimating a single deformation pathway to align the images, we predict two halfway deformations, which move the original template and subject into a pseudomean space simultaneously. We therefore train a symmetric registration network (S-Net). With this symmetric strategy, registration becomes more accurate and robust, particularly on images with large anatomical variations, and the smoothness of the deformation is also significantly improved. Experimental results demonstrate that the trained model can directly predict symmetric deformations on new image pairs from different databases, consistently producing accurate and robust registration results.

1. Introduction

Computational models have become a practical tool for solving biomedical engineering problems and are applied to the analysis and measurement of data in the biomedical field (e.g., material mechanical behavior measurement [1–4], medical image segmentation [5, 6], and registration [7–9]). Deformable image registration aims to align subject images onto a template space by gradually optimizing spatial transformation fields consisting of voxel-to-voxel correspondences between the template and subject images [10]. Deformable registration is a key procedure in clinical applications such as population analysis, longitudinal data analysis, and image-guided intervention. Many image registration algorithms have been proposed and applied to various imaging analysis tasks [7–9, 11–16]. Conventional registration algorithms achieve the task via classical optimization and can be classified into either intensity-based registration [11–13] or feature-based registration [14–16]. In these methods, the deformation field is obtained by iteratively optimizing an image similarity metric under a smoothness regularization constraint.

In recent years, deep learning has been widely applied in medical image analysis [17, 18], and deep-learning-based registration methods have shown promising performance, especially in efficiency, as the computational time can be reduced from minutes to seconds. Since ground-truth deformations are difficult to obtain in practice, semisupervised [19] and unsupervised learning strategies [20–22] are currently more popular. Specifically, the spatial transformer network (STN) [23] is leveraged in deep-learning-based registration frameworks so that the loss can be defined directly on image similarity, instead of using ground-truth deformations as supervision. Once the model is trained, the transformation field can be estimated for unseen image pairs without iterative optimization; deep-learning-based registration is therefore more flexible for real clinical use. Additionally, to further improve registration accuracy, multiscale strategies [24, 25], diffeomorphic strategies [26], and inverse-consistency properties [27] have also been incorporated into deep-learning-based registration frameworks.

However, for the aforementioned registration algorithms, it is difficult to accurately register images with large anatomical variations, and smoothness is especially difficult to preserve and constrain for large deformations. It is thus essential to develop an algorithm that can effectively register images with large anatomical variations while keeping the transformation field smooth, so that topology is well preserved. In addition, symmetric diffeomorphic registration has achieved better overall performance by estimating symmetric deformation pathways from the two objects (template and subject) to an intermediate point, instead of a single pathway from template to subject [13, 28]. Inspired by these methods, we add a symmetric image registration strategy to the unsupervised model.

In this paper, we further investigate deep-learning-based registration by considering the symmetric property. We propose a symmetric registration network (S-Net) that simultaneously aligns the subject and template images to an intermediate space, i.e., the pseudomean space. Specifically, instead of establishing voxel-to-voxel correspondences along one pathway, i.e., from template space to subject space, we move the template and subject images symmetrically until they meet in the pseudomean space, where the image similarity is maximized. The main contributions of this work can be summarized as follows:

(1) We propose a symmetric registration network that registers images in dual directions simultaneously. In this framework, the pseudomean space is learned automatically through the symmetric constraint, without any supervised guidance.

(2) The symmetric property allows estimating two short deformation pathways instead of one long deformation pathway, which is more effective for registering images with large anatomical variations; the final registration result is more accurate and smoother.

(3) Under the symmetric framework, we can directly obtain the forward (subject to template) and backward (template to subject) transformation fields from the trained S-Net. Therefore, inverse consistency is achieved without introducing any additional model or strategy.

2. Materials and Methods

The S-Net is trained in an unsupervised manner based on the proposed symmetric strategy. As shown in Figure 1, the input of the network is a pair of template image T and subject image S, together with their difference map. Instead of directly estimating the deformation field that registers the subject to the template, we make the training of the registration network symmetric, i.e., the template and subject images both deform until reaching their pseudomean space. Two deformation pathways are estimated under this framework: (1) φ1 is the deformation pathway between the template and the pseudomean space, and (2) φ2 is the deformation pathway between the subject and the pseudomean space.

Mathematically, the optimization of symmetric registration can be formulated as minimizing the image dissimilarity in the pseudomean space:

(φ1, φ2) = arg min D(T ∘ φ1, S ∘ φ2) + λ R(φ1, φ2),

where ∘ is a deformation operation that warps an image by a field, D(T ∘ φ1, S ∘ φ2) is the dissimilarity between the deformed template image T ∘ φ1 and the deformed subject image S ∘ φ2, R(φ1, φ2) is a regularization term constraining the smoothness of the two symmetric deformations φ1 and φ2, and λ is a weight balancing registration accuracy against deformation smoothness. In the training of the deformable registration network S-Net, D and R are used to define the loss function, and the spatial transformer network [23] is used to spatially transform the image based on the estimated transformation field. The details of training the symmetric registration network S-Net are described in Section 2.1.

In the testing stage, given an unseen image pair and their difference map, we can obtain their symmetric deformations φ1 and φ2. As shown in Figure 2, the final symmetric registration results are obtained by composing the two predicted deformation pathways: the forward deformation, formulated as φf = φ2 ∘ φ1⁻¹, registers the subject to the template, and the backward deformation, formulated as φb = φ1 ∘ φ2⁻¹, registers the template to the subject. Here, φ1⁻¹ is the inverse field of φ1, and "∘" denotes the composition operator [28].
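To make the composition of the two predicted pathways concrete, the following is a minimal 1-D sketch of displacement-field composition under linear interpolation. It is an illustration with our own function names, not the paper's implementation; 3-D composition works the same way per axis.

```python
import numpy as np

def compose_displacements(u, v):
    """Compose two 1-D displacement fields: the result w satisfies
    (id + w)(p) = (id + u)((id + v)(p)), i.e. apply v first, then u.
    u is sampled at the non-integer points p + v(p) by linear interpolation."""
    n = len(v)
    p = np.arange(n, dtype=float)
    q = np.clip(p + v, 0.0, n - 1.0)   # where v sends each voxel, kept in-bounds
    lo = np.floor(q).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    frac = q - lo
    u_at_q = (1.0 - frac) * u[lo] + frac * u[hi]  # interpolated u(p + v(p))
    return v + u_at_q

# Composing a smooth field with a first-order approximation of its inverse
# leaves only a small residual, which is why phi2 = -phi1 behaves well for
# short (halfway) pathways:
phi = 0.5 * np.sin(np.linspace(0.0, np.pi, 32))
residual = compose_displacements(-phi, phi)
print(float(np.abs(residual).max()))  # small but nonzero
```

Note that −φ is only an approximate inverse; the residual shrinks as the pathway gets shorter, which is one motivation for splitting one long pathway into two half pathways.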

2.1. Symmetric Network Design

For symmetric registration, the pseudomean is an intermediate space on the image manifold, and the distance between the pseudomean and the template should equal that between the pseudomean and the subject. Therefore, for each location/voxel, the deformation magnitudes of φ1 and φ2 should be equal, while their directions should be opposite. Thus, during training, the output of the network is only the halfway deformation φ = φ1, and we can set φ2 = −φ. There are two advantages to this symmetric setting: (1) large local deformations can be estimated more effectively, since the deformation pathway during registration is shortened, and (2) inverse consistency can be kept without introducing any additional constraint.

The S-Net is designed based on the network architecture of VoxelMorph [20], which is lighter than the original U-Net [6], reducing redundant connections to adapt to the analysis of 3D images. The network output is the halfway deformation φ. Since ground-truth deformations are unavailable, we apply an unsupervised training strategy [20–22]. Specifically, a spatial transformer network (STN) [23] provides a fully differentiable spatial transformation layer that transforms the input image by the output deformation φ of the S-Net. We use trilinear interpolation in the STN, and the warping operation can be formulated as

S ∘ φ(p) = Σ_{q ∈ N(p + φ(p))} S(q) Π_{d ∈ {x, y, z}} (1 − |p_d + φ_d(p) − q_d|),

where p is a voxel coordinate, N(p + φ(p)) is the set of eight neighboring voxels of p + φ(p), and d indicates the three directions in 3D space. With the STN, the loss defined on image similarity can backpropagate to the S-Net, and the registration network can be trained in an unsupervised manner.
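The trilinear sampling scheme above can be sketched in NumPy as follows. This is an illustrative reimplementation of the interpolation rule (sum over the eight neighbors, each weighted by the product of 1 − |distance| along the three axes), not the differentiable Keras layer the paper uses; the function name is ours.

```python
import numpy as np

def warp_trilinear(img, phi):
    """Warp a 3-D volume by a dense displacement field phi (shape (3, D, H, W)),
    sampling img at p + phi(p) with trilinear interpolation -- the same
    sampling scheme an STN layer applies differentiably."""
    D, H, W = img.shape
    grid = np.stack(np.meshgrid(np.arange(D), np.arange(H), np.arange(W),
                                indexing="ij"), axis=0).astype(float)
    q = grid + phi                                  # sampling locations p + phi(p)
    for ax, size in enumerate((D, H, W)):
        q[ax] = np.clip(q[ax], 0.0, size - 1.0)     # stay inside the volume
    lo = np.floor(q).astype(int)
    frac = q - lo
    out = np.zeros(img.shape, dtype=float)
    # Sum over the 8 corner voxels, each weighted by prod_d (1 - |distance_d|)
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                cz = np.minimum(lo[0] + dz, D - 1)
                cy = np.minimum(lo[1] + dy, H - 1)
                cx = np.minimum(lo[2] + dx, W - 1)
                w = (np.where(dz, frac[0], 1 - frac[0])
                     * np.where(dy, frac[1], 1 - frac[1])
                     * np.where(dx, frac[2], 1 - frac[2]))
                out += w * img[cz, cy, cx]
    return out

vol = np.random.default_rng(0).random((4, 5, 6))
zero_field = np.zeros((3, 4, 5, 6))
print(np.allclose(warp_trilinear(vol, zero_field), vol))  # identity warp → True
```

With a zero displacement field the warp reduces to the identity, and with an integer field it reduces to a pure shift, which makes the weighting scheme easy to sanity-check.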

2.2. Loss Definition
2.2.1. Symmetric Similarity Loss

The similarity loss of the registration task evaluates the registration accuracy, and here we define it by the sum of squared differences (SSD). Conventionally, the subject image is warped to the template space by the output deformation field, and the loss is calculated in the template space. For the symmetric registration network, we instead define the similarity loss in the pseudomean space to enforce the symmetric property. Mathematically, it can be formulated as

L_sym = ‖T ∘ φ − S ∘ (−φ)‖².

By minimizing the symmetric similarity loss L_sym, the template and subject images gradually register with each other until they reach their pseudomean space. To further enhance the symmetric constraint and the registration accuracy, we also define similarity losses in the template and subject image spaces:

L_T = ‖T − S ∘ φf‖², L_S = ‖S − T ∘ φb‖²,

where φf indicates the forward deformation pathway, which transforms the subject image to the template space, while φb indicates the backward deformation pathway, which transforms the template image to the subject space. It is worth noting that the output of the S-Net is the halfway deformation φ; the symmetric loss L_sym defined in the pseudomean space preserves the symmetric property, while L_T and L_S defined in the end image spaces make the registration accurate. Therefore, the whole symmetric similarity loss can be summarized as L_sim = L_sym + L_T + L_S.
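The three similarity terms can be sketched in 1-D as below. This is an illustrative sketch with our own names: the full pathways φf and φb are approximated here by warping with ±2φ, a small-displacement shortcut rather than the paper's exact field composition.

```python
import numpy as np

def warp_linear(img, phi):
    """Warp a 1-D signal by displacement phi via linear interpolation."""
    n = len(img)
    q = np.clip(np.arange(n) + phi, 0.0, n - 1.0)
    lo = np.floor(q).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    frac = q - lo
    return (1.0 - frac) * img[lo] + frac * img[hi]

def ssd(a, b):
    """Sum-of-squared-differences dissimilarity."""
    return float(np.sum((a - b) ** 2))

def symmetric_similarity_loss(T, S, phi):
    """L_sim = L_sym + L_T + L_S under the symmetric setting phi2 = -phi:
    both images are warped halfway toward the pseudomean for L_sym, while
    the end-space terms use the (approximate) full pathways +/- 2*phi."""
    l_sym = ssd(warp_linear(T, phi), warp_linear(S, -phi))  # meet in the middle
    l_t = ssd(T, warp_linear(S, -2.0 * phi))                # subject -> template
    l_s = ssd(S, warp_linear(T, 2.0 * phi))                 # template -> subject
    return l_sym + l_t + l_s

T = np.sin(np.linspace(0, np.pi, 64))
S = T.copy()  # identical images: the zero field is optimal
print(symmetric_similarity_loss(T, S, np.zeros(64)))  # → 0.0
```

For identical images the zero field yields zero loss, and any nonzero field is penalized by all three terms, which is the behavior the combined loss L_sim is designed to have.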

2.2.2. Field Regularization Loss

The regularization loss constrains the smoothness of the estimated deformation field φ, which is important for preserving topology. In S-Net, this regularization loss is defined only on φ (the output of the network); the smoothness of φ2 is automatically constrained since φ2 = −φ. In our work, three kinds of regularization loss, i.e., Laplace smoothness, zero constraint, and antifolds constraint, are used to enforce smoothness.

(1) Laplace smoothness L_lap: constraining the smoothness of the field φ, defined as

L_lap = Σ_p ‖∇²φ(p)‖²,

where ∇²φ(p) is the second derivative of the field φ at the voxel p.

(2) Zero constraint L_zero: penalizing the displacement magnitude to avoid unreasonably large deformations:

L_zero = ‖φ‖².

(3) Antifolds constraint L_anti: adding an antifolds constraint [27] to the loss function to further enhance the smoothness constraint, avoiding folds or crossings in the final deformation:

L_anti = Σ_p σ(∇φ(p)) · |∇φ(p)|,

where ∇φ(p) is the gradient of the displacement map at voxel p and σ(·) is an index function that penalizes the gradient of the deformation field where folds occur: σ(∇φ(p)) = 1 if ∇φ(p) + 1 ≤ 0, and σ(∇φ(p)) = 0 otherwise.

The final loss function for training the S-Net is

L = L_sim + α L_lap + β L_zero + γ L_anti,

where α, β, and γ balance the weight of each term; α and γ were set empirically in our experiments. For the zero-constraint term L_zero, we set the weight β to a small value, since a large value may reduce accuracy when estimating large deformations.
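The three regularizers above can be sketched in 1-D as follows. This is an illustrative sketch under our own naming; the default weights are placeholders, not the paper's values, and the anti-folding test (gradient ≤ −1 means the map p + φ(p) loses monotonicity) is the 1-D analogue of the 3-D constraint.

```python
import numpy as np

def laplace_loss(phi):
    """Laplace smoothness: sum of squared second differences of the field."""
    second = phi[:-2] - 2.0 * phi[1:-1] + phi[2:]
    return float(np.sum(second ** 2))

def zero_loss(phi):
    """Zero constraint: penalizes unreasonably large displacement magnitudes."""
    return float(np.sum(phi ** 2))

def antifold_loss(phi):
    """Anti-folding constraint: penalize locations where the 1-D transform
    p + phi(p) folds, i.e. where the displacement gradient is <= -1."""
    grad = np.diff(phi)             # forward-difference gradient
    folded = grad + 1.0 <= 0.0      # index function sigma
    return float(np.sum(np.abs(grad[folded])))

def total_regularization(phi, alpha=1.0, beta=1e-3, gamma=1.0):
    """Weighted sum of the three terms; weights here are illustrative only."""
    return (alpha * laplace_loss(phi)
            + beta * zero_loss(phi)
            + gamma * antifold_loss(phi))

smooth = 0.1 * np.linspace(0.0, 1.0, 32)       # gentle, fold-free field
folding = np.array([0.0, -2.0, 0.0, 0.0])       # contains a gradient of -2
print(antifold_loss(smooth), antifold_loss(folding) > 0)  # → 0.0 True
```

A linear field has zero Laplace loss, and only displacement gradients at or below −1 contribute to the anti-folding term, so a smooth field is left unpenalized by it.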

2.3. Implementation and Training

The S-Net is implemented in Keras and trained on an NVIDIA Tesla V100 GPU with 32 GB of video memory. The network is trained using the Adam optimizer [29]. We use four public databases, i.e., LONI LPBA40 [30], IBSR18 (https://www.nitrc.org/projects/ibsr), CUMC12 [31], and MGH10 [32], in our experiments. All images were preprocessed with a standard pipeline, including skull stripping, resampling, and affine registration to the MNI152 template [33] using FLIRT [34]. After preprocessing, all images share the same dimensions and voxel size.

We used 30 subjects from the LONI LPBA40 dataset as training data, from which template-subject image pairs were derived. The remaining 10 images were used as testing data, yielding further image pairs. The other three datasets were also used as testing data to further evaluate the effectiveness of the proposed method, with image pairs drawn from IBSR18, CUMC12, and MGH10. For more effective training, we trained the S-Net in two stages. First, the network was pretrained on a small dataset, in which one image was chosen as the template and all remaining images as subjects; the network was pretrained for 200 iterations per image pair at a fixed learning rate. Then, every two images were drawn as a template-subject pair to further train the S-Net; 20 epochs were trained in this stage, with the learning rate decayed by a factor of 0.5 every 2 epochs.

3. Results

We compared our results with three state-of-the-art registration methods, namely, Diffeomorphic Demons [12], SyN [13], and VoxelMorph [20]. Demons and SyN are typical deformable registration methods applied successfully to medical image registration tasks, and VoxelMorph is a learning-based framework that defines registration as a learnable parametric function. We measured the registration accuracy based on the volumetric overlap of brain ROIs. The overall registration accuracy was computed for each ROI as the Dice Similarity Coefficient (DSC),

DSC = 2|A ∩ B| / (|A| + |B|),

with A and B being the corresponding anatomical regions in the subject and template images. Additionally, we evaluated the smoothness of the transformation map using the Jacobian determinant |Jφ(p)|. A transformation map is considered smooth when |Jφ(p)| > 0 for all voxels p [35], and the overall number of folds of the estimated displacement map is defined as the number of voxels where |Jφ(p)| ≤ 0.
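The two evaluation metrics can be sketched as below. The Dice coefficient follows the formula above directly; the fold count is shown in its 1-D form (Jacobian 1 + dφ/dp), whereas the paper's 3-D evaluation uses the full 3×3 Jacobian determinant. Function names are ours.

```python
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient between two binary label masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def jacobian_folds_1d(phi):
    """Count folds of the 1-D transform p + phi(p): voxels where the Jacobian
    'determinant' 1 + dphi/dp is non-positive (3-D uses the 3x3 determinant)."""
    jac = 1.0 + np.gradient(phi)
    return int(np.sum(jac <= 0.0))

a = np.array([0, 1, 1, 1, 0, 0])
b = np.array([0, 0, 1, 1, 1, 0])
print(dice(a, b))                       # 2*2/(3+3) ≈ 0.667
print(jacobian_folds_1d(np.zeros(10)))  # identity transform → 0 folds
```

A well-behaved field has a strictly positive Jacobian everywhere, so the fold count directly measures where the transformation violates topology preservation.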

The DSC scores and runtimes are shown in Table 1, compared with those of the state-of-the-art registration methods (Demons, SyN, and VoxelMorph). The results show that the proposed method performs significantly better than VoxelMorph (a learning-based method trained without a symmetric strategy). On some datasets, our approach even outperforms SyN, which is among the state-of-the-art brain image registration algorithms, while taking only about 3.6 seconds to register two brain volumes. The learning-based methods have much shorter runtimes than the conventional methods, with little loss in performance. In Table 2, we present the folds in the displacement maps estimated by the proposed method and the baseline methods. The results show that the displacement maps estimated by the proposed symmetric registration network are smoother than those of the model without the symmetric strategy, in most cases by a large margin.


Table 1: DSC scores (mean ± std) and runtimes of the compared methods.

Dataset   | D. Demons  | SyN (CC)   | VoxelMorph (CC) | VoxelMorph (MSE) | Proposed method
LPBA40    | 68.7 ± 2.4 | 71.3 ± 1.8 | 71.2 ± 2.8      | 71.6 ± 2.4       | 71.8 ± 2.1
IBSR18    | 54.6 ± 2.2 | 57.4 ± 2.4 | 54.2 ± 3.4      | 55.2 ± 2.9       | 56.8 ± 2.5
CUMC12    | 53.1 ± 3.4 | 54.1 ± 2.8 | 51.8 ± 4.1      | 53.1 ± 3.5       | 54.4 ± 3.2
MGH10     | 60.4 ± 2.5 | 62.1 ± 2.4 | 59.6 ± 2.9      | 60.2 ± 2.6       | 62.4 ± 2.4
Time (s)  | 114        | 1330       | 0.31            | 0.31             | 3.6


Table 2: Folds (mean ± std) in the displacement maps estimated by each method.

Dataset   | D. Demons    | SyN (CC) | VoxelMorph (CC) | VoxelMorph (MSE) | Proposed method
LPBA40    | 13.71 ± 2.91 | 0        | 28.52 ± 14.92   | 44.04 ± 13.83    | 3.28 ± 0.78
IBSR18    | 15.59 ± 8.14 | 0        | 44.26 ± 15.31   | 67.57 ± 19.59    | 7.56 ± 1.86
CUMC12    | 21.02 ± 9.38 | 0        | 39.37 ± 11.65   | 48.92 ± 15.28    | 7.29 ± 1.47
MGH10     | 18.92 ± 6.54 | 0        | 42.17 ± 13.26   | 56.72 ± 16.76    | 6.63 ± 1.53

The respective final and intermediate results are shown in Figures 3(b)–3(e) (final warped template image, middle warped subject image, middle warped template image, and final warped subject image, respectively). The S-Net works better than directly registering images along a single pathway: not only the registration accuracy but also the smoothness is largely improved. This indicates that the proposed symmetric training strategy can effectively estimate large local deformations while keeping the estimated field smooth.

It is worth noting that S-Net performs image registration in an unsupervised, end-to-end fashion, using an image similarity metric for optimization, so its training does not require known deformation fields, which are difficult to obtain for medical image registration. Furthermore, we evaluated the number of folds produced by our framework against the traditional registration method and the single-direction deep-learning-based registration method. The deformation maps estimated by the proposed S-Net tend to be smoother, since each symmetric displacement map covers only half the pathway instead of one long pathway, making smoothness easier to enforce. Experimental results showed that our method successfully reduces the folds of the estimated maps while providing more accurate registration results.

4. Discussion


The total loss function of S-Net consists of two types of losses with six terms in total. However, the weights (hyperparameters) of these multiple losses are hard to balance during S-Net training. We therefore conducted experiments to determine the weights of the multiple losses, as shown in Figure 4, and chose values of α and γ that achieve good performance; increasing them further has little effect on the results. The difficulty of balancing multiple losses is a common problem in deep-learning-based registration methods. In future work, we hope to determine these hyperparameters through learning.

5. Conclusion

We presented a new symmetric training strategy for an unsupervised deep-learning-based registration framework, which better estimates large local deformations during registration. In particular, we use a pseudomean space as an intermediate registration target, so that a long deformation pathway is divided into two short deformation pathways. Experimental results have shown promising registration performance in terms of both accuracy and field smoothness.

Data Availability

The databases of LPBA40, IBSR18, CUMC12, and MGH10 can be downloaded from the registration grant challenge at https://continuousregistration.grand-challenge.org.

Conflicts of Interest

The authors declare that they have no conflicts of interest to report regarding the present study.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grants nos. 81871508 and 61773246), the Major Program of Shandong Province Natural Science Foundation (Grant nos. ZR2019ZD04 and ZR2018ZB0419), and the Taishan Scholar Program of Shandong Province of China (Grant no. TSHW201502038).

References

  1. P. Ausiello, S. Ciaramella, F. Garcia-Godoy et al., "The effects of cavity-margin-angles and bolus stiffness on the mechanical behavior of indirect resin composite class II restorations," Dental Materials, vol. 33, no. 1, pp. e39–e47, 2017.
  2. P. Ausiello, S. Ciaramella, A. Fabianelli et al., "Mechanical behavior of bulk direct composite versus block composite and lithium disilicate indirect class II restorations by CAD-FEM modeling," Dental Materials, vol. 33, no. 6, pp. 690–701, 2017.
  3. R. De Santis, A. Gloria, S. Viglione et al., "3D laser scanning in conjunction with surface texturing to evaluate shift and reduction of the tibiofemoral contact area after meniscectomy," Journal of the Mechanical Behavior of Biomedical Materials, vol. 88, pp. 41–47, 2018.
  4. P. Fucile, I. Papallo, G. Improta et al., "Reverse engineering and additive manufacturing towards the design of 3D advanced scaffolds for hard tissue regeneration," in Proceedings of the 2019 II Workshop on Metrology for Industry 4.0 and IoT (MetroInd4.0&IoT), pp. 33–37, Naples, Italy, June 2019.
  5. J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 4, pp. 640–651, 2015.
  6. O. Ronneberger, P. Fischer, and T. Brox, "U-net: convolutional networks for biomedical image segmentation," in Proceedings of Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015), vol. 9351, pp. 234–241, Munich, Germany, October 2015.
  7. A. Sotiras, C. Davatzikos, and N. Paragios, "Deformable medical image registration: a survey," IEEE Transactions on Medical Imaging, vol. 32, no. 7, pp. 1153–1190, 2013.
  8. R. Bajcsy and S. Kovačič, "Multiresolution elastic matching," Computer Vision, Graphics, and Image Processing, vol. 46, no. 1, pp. 1–21, 1989.
  9. J.-P. Thirion, "Image matching as a diffusion process: an analogy with Maxwell's demons," Medical Image Analysis, vol. 2, no. 3, pp. 243–260, 1998.
  10. D. L. Hill, P. G. Batchelor, M. Holden, and D. J. Hawkes, "Medical image registration," Physics in Medicine & Biology, vol. 46, no. 3, pp. R1–R45, 2001.
  11. B. Avants, C. Anderson, M. Grossman, and J. C. Gee, "Spatiotemporal normalization for longitudinal analysis of gray matter atrophy in frontotemporal dementia," Medical Image Computing and Computer-Assisted Intervention (MICCAI 2007), vol. 10, pp. 303–310, 2007.
  12. T. Vercauteren, X. Pennec, A. Perchant, and N. Ayache, "Diffeomorphic demons: efficient non-parametric image registration," NeuroImage, vol. 45, no. 1, pp. 61–72, 2009.
  13. B. B. Avants, C. L. Epstein, M. Grossman, and J. C. Gee, "Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain," Medical Image Analysis, vol. 12, no. 1, pp. 26–41, 2008.
  14. D. Shen and C. Davatzikos, "HAMMER: hierarchical attribute matching mechanism for elastic registration," IEEE Transactions on Medical Imaging, vol. 21, no. 11, pp. 1421–1439, 2002.
  15. M. F. Beg, M. I. Miller, A. Trouvé, and L. Younes, "Computing large deformation metric mappings via geodesic flows of diffeomorphisms," International Journal of Computer Vision, vol. 61, no. 2, pp. 139–157, 2005.
  16. Z. Fanti, F. Torres, E. Hazan-Lasri et al., "Improved surface-based registration of CT and intraoperative 3D ultrasound of bones," Journal of Healthcare Engineering, vol. 2018, Article ID 2365178, 11 pages, 2018.
  17. D. Shen, G. Wu, and H.-I. Suk, "Deep learning in medical image analysis," Annual Review of Biomedical Engineering, vol. 19, no. 1, pp. 221–248, 2017.
  18. J. Wang, H. Zhu, S. H. Wang, and Y. D. Zhang, "A review of deep learning on medical image analysis," Mobile Networks and Applications, vol. 26, pp. 1–30, 2020.
  19. J. Fan, X. Cao, P.-T. Yap, and D. Shen, "BIRNet: brain image registration using dual-supervised fully convolutional networks," Medical Image Analysis, vol. 54, pp. 193–206, 2019.
  20. G. Balakrishnan, A. Zhao, M. R. Sabuncu, A. V. Dalca, and J. Guttag, "An unsupervised learning model for deformable medical image registration," in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9252–9260, Salt Lake City, UT, USA, June 2018.
  21. B. D. De Vos, F. F. Berendsen, M. A. Viergever, M. Staring, and I. Išgum, "End-to-end unsupervised deformable image registration with a convolutional neural network," in Proceedings of Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support (DLMIA 2017, ML-CDS 2017), pp. 204–212, Québec City, Canada, September 2017.
  22. H. Li and Y. Fan, "Non-rigid image registration using self-supervised fully convolutional networks without training data," in Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), pp. 1075–1078, Washington, DC, USA, April 2018.
  23. M. Jaderberg, K. Simonyan, A. Zisserman, and K. Kavukcuoglu, "Spatial transformer networks," NeurIPS Proceedings, vol. 2, pp. 2017–2025, 2015.
  24. A. V. Dalca, G. Balakrishnan, J. Guttag, and M. R. Sabuncu, "Unsupervised learning of probabilistic diffeomorphic registration for images and surfaces," Medical Image Analysis, vol. 57, pp. 226–236, 2019.
  25. Z. Shen, X. Han, Z. Xu, and M. Niethammer, "Networks for joint affine and non-parametric image registration," in Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4224–4233, Long Beach, CA, USA, June 2019.
  26. T. C. Mok and A. Chung, "Fast symmetric diffeomorphic image registration with convolutional neural networks," in Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4644–4653, IEEE, Seattle, WA, USA, June 2020.
  27. J. Zhang, "Inverse-consistent deep networks for unsupervised deformable image registration," 2018, http://arxiv.org/abs/1809.03443.
  28. G. Wu, M. Kim, Q. Wang, and D. Shen, "S-HAMMER: hierarchical attribute-guided, symmetric diffeomorphic registration for MR brain images," Human Brain Mapping, vol. 35, no. 3, pp. 1044–1060, 2014.
  29. D. P. Kingma and J. Ba, "Adam: a method for stochastic optimization," 2014, http://arxiv.org/abs/1412.6980.
  30. D. W. Shattuck, M. Mirza, V. Adisetiyo et al., "Construction of a 3D probabilistic atlas of human cortical structures," NeuroImage, vol. 39, no. 3, pp. 1064–1080, 2008.
  31. V. S. Caviness Jr., J. Meyer, N. Makris, and D. N. Kennedy, "MRI-based topographic parcellation of human neocortex: an anatomically specified method with estimate of reliability," Journal of Cognitive Neuroscience, vol. 8, no. 6, pp. 566–587, 1996.
  32. A. Klein, J. Andersson, B. A. Ardekani et al., "Evaluation of 14 nonlinear deformation algorithms applied to human brain MRI registration," NeuroImage, vol. 46, no. 3, pp. 786–802, 2009.
  33. A. C. Evans, D. L. Collins, and B. MacDonald, "An MRI-based stereotactic brain atlas from 300 young normal subjects," Magnetic Resonance Scanning and Epilepsy, vol. 264, p. 408, 1992.
  34. M. Jenkinson and S. Smith, "A global optimisation method for robust affine registration of brain images," Medical Image Analysis, vol. 5, no. 2, pp. 143–156, 2001.
  35. J. Kybic, P. Thevenaz, A. Nirkko, and M. Unser, "Unwarping of unidirectionally distorted EPI images," IEEE Transactions on Medical Imaging, vol. 19, no. 2, pp. 80–93, 2000.

Copyright © 2021 Xiaodan Sui et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
