BioMed Research International
Volume 2017, Article ID 5380742, 11 pages
Research Article

A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images

1Software College, Northeastern University, Shenyang 110819, China
2Department of Nuclear Medicine, General Hospital of Shenyang Military Area Command, Shenyang 110840, China

Correspondence should be addressed to Huiyan Jiang; hyjiang@mail.neu.edu.cn

Received 21 August 2016; Revised 2 December 2016; Accepted 1 February 2017; Published 21 February 2017

Academic Editor: Gang Liu

Copyright © 2017 Zhiying Song et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


The PET and CT fusion image, combining anatomical and functional information, is of significant clinical value, and an effective registration of PET and CT images is the basis of image fusion. This paper presents a multithread registration method based on contour point clouds for 3D whole-body PET and CT images. First, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are proposed to preprocess the CT and PET images, respectively. Next, a new automated trunk-slice extraction method is presented for extracting feature point clouds. Finally, a multithread Iterative Closest Point (ICP) algorithm is adopted to drive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246~286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the effectiveness of our method, which achieves a lower negative normalized correlation (NC = −0.933) on feature images and a smaller Euclidean distance error (ED = 2.826) on landmark points than both the source data (NC = −0.496, ED = 25.847) and the compared method (NC = −0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one.
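The registration core described above alternates correspondence search with transform fitting between the two contour point clouds. As a rough illustration of the idea, the sketch below implements a minimal single-threaded ICP with a rigid (Kabsch) transform step rather than the paper's multithread affine variant; the brute-force nearest-neighbor search, the function names, and all parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def nearest_neighbors(src, dst):
    # Brute-force correspondence: for each source point, the index of
    # the closest target point (illustrative; real systems use a k-d tree).
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def best_rigid_transform(A, B):
    # Least-squares rigid transform (Kabsch/SVD) mapping point set A onto B.
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # correct an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp(src, dst, iters=100, tol=1e-6):
    # Iterative Closest Point: alternate correspondence search and
    # transform estimation until the mean residual stops improving.
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        idx = nearest_neighbors(cur, dst)
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
        # Compose with the accumulated transform: x -> R(R_total x + t_total) + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = np.linalg.norm(cur - dst[idx], axis=1).mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total
```

Replacing the rigid step with a general affine least-squares fit, and splitting the correspondence search across threads, would bring this sketch closer to the multithread affine ICP the paper describes.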