
Computational and Mathematical Methods in Medicine

Volume 2013 (2013), Article ID 760903, 8 pages

http://dx.doi.org/10.1155/2013/760903

## Improving the Convergence Rate in Affine Registration of PET and SPECT Brain Images Using Histogram Equalization

Department of Signal Theory, Networking and Communication, University of Granada, ETSIIT, 18071 Granada, Spain

Received 15 February 2013; Accepted 12 April 2013

Academic Editor: Anke Meyer-Baese

Copyright © 2013 D. Salas-Gonzalez et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

A procedure is presented to improve the convergence rate of affine registration methods for medical brain images when the images differ greatly from the template. The methodology is based on histogram matching of the source images with respect to the reference brain template before proceeding with the affine registration. The preprocessed source brain images are spatially normalized to a template using a general affine model with 12 parameters. The sum of squared differences between the source images and the template is used as the objective function, and a Gauss-Newton optimization algorithm is used to find its minimum. As we show in this work using SPECT and PET brain images, using histogram equalization as a preprocessing step improves the convergence rate of the affine registration algorithm.

#### 1. Introduction

Emission computed tomography (ECT) has been widely employed in biomedical research and clinical medicine during the last three decades. ECT differs fundamentally from many other medical imaging modalities in that it produces a mapping of physiological function as opposed to imaging anatomical structure. Tomographic radiopharmaceutical imaging, or ECT, provides in vivo three-dimensional maps of a pharmaceutical labeled with a gamma ray emitting radionuclide. The distribution of radionuclide concentrations is estimated from a set of projection images acquired at many different angles around the patient. In this work, two different image modalities will be used: positron emission tomography (PET) and single photon emission computed tomography (SPECT).

Positron emission tomography (PET) is a noninvasive nuclear medicine imaging technique which produces a three-dimensional image of functional processes in the body. The system detects pairs of gamma rays emitted indirectly by a positron-emitting radionuclide (tracer), which is introduced into the body on a biologically active molecule. When the tracer is ^{18}F-Fluorodeoxyglucose (^{18}F-FDG), its concentration gives information about tissue metabolic activity, measuring the brain's rate of glucose metabolism. Images of tracer concentration in 3-dimensional space within the brain are then reconstructed by computer analysis.

On the other hand, SPECT is a widely used technique to study the functional properties of the brain. SPECT brain imaging techniques employ radioisotopes which decay emitting predominantly a single gamma photon. When the nucleus of a radioisotope disintegrates, a gamma photon is emitted in a random direction which is uniformly distributed over the sphere surrounding the nucleus. If the photon is unimpeded by a collision with electrons or other particles within the body, its trajectory will be a straight line or "ray." In order for a photon detector external to the patient to discriminate the direction a ray came from, physical collimation is required. Typically, lead collimator plates are placed in front of the detector's crystal in such a manner that photons coming from all but a single direction are blocked by the plates. This guarantees that only photons coming from the desired direction will strike the photon detector.

After image acquisition, once filtering and reconstruction are done, some additional preprocessing steps are needed before using functional brain images in computer aided diagnosis systems. The differences between brains of different subjects require the normalization of the images with respect to a reference template [1]. Image normalization makes it possible to perform voxel-to-voxel comparisons between the same regions of different images [2, 3]. The general affine model with 12 parameters is usually chosen as a first normalization procedure before proceeding with a more complex nonrigid spatial transformation model [4–7]. Affine registration of tomography brain images is a very important task in biomedical image analysis [8–10].

The methods and approaches presented in this work were motivated in the context of fusion of PET/MRI images. While developing software for fusion of brain images, the authors worked under several hypotheses to meet the practical requirements of its intended use: (i) using only affine registration, because nonlinear registration methods could produce undesirable warping effects due to the intrasubject, intermodality nature of the registration problem studied in this work; (ii) providing reliability and robustness and increasing the convergence rate of the algorithms as far as possible; (iii) using the same anatomical brain image as a reference template for each image. This latter point was motivated by practical purposes, as the registration method proposed here was developed in the context of a real application of brain image fusion of PET/MRI. Furthermore, the proposed methodology has also been tested on fusion of SPECT/MRI images. Therefore, the goal is to register the functional brain images (PET or SPECT) to the anatomical Magnetic Resonance Imaging (MRI).

In this work, a histogram equalization of the original tomography brain image is performed. We enhance the contrast of images by transforming the intensity values in the image, so that the histogram of the output image approximately matches the histogram of the reference template. Experimental results using positron emission tomography and single photon emission computed tomography brain images show that the preprocessing of these images using histogram equalization improves the convergence rate of the affine registration algorithm.

This paper is organised as follows. Section 2 introduces the SPECT and PET database used in this work; it also states the image registration problem expressed by a matrix multiplication and the Gauss-Newton optimization algorithm used to estimate the affine parameters. This section also presents the preprocessing step of the intensity values using histogram matching. In Section 3, the results are discussed. Lastly, in Section 4, the conclusions are given.

#### 2. Materials and Methods

##### 2.1. SPECT Database

The SPECT database is from an ongoing study of Alzheimer's disease performed by the "Virgen de las Nieves" Hospital in Granada (Spain). In this work, we choose 50 images labelled by experts as Normal Controls, although the results presented here do not change if brain images labelled as late Alzheimer's disease subjects are chosen.

The patients were injected with a gamma emitting ^{99m}Tc-ECD radiopharmaceutical, and the SPECT raw data was acquired by a three-head gamma camera. A total of 180 projections were taken for each patient with a 2-degree angular resolution. The images of the brain cross-sections were reconstructed from the projection data using the filtered backprojection (FBP) algorithm in combination with a Butterworth noise removal filter [11–14].

##### 2.2. PET Database

PET images used in the preparation of this work were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (http://www.loni.ucla.edu/ADNI). The ADNI was launched in 2003 by the National Institute on Aging (NIA), the National Institute of Biomedical Imaging and Bioengineering (NIBIB), the Food and Drug Administration (FDA), private pharmaceutical companies, and nonprofit organizations, as a $60 million, 5-year public-private partnership. The primary goal of ADNI has been to test whether serial magnetic resonance imaging (MRI), positron emission tomography (PET), other biological markers, and clinical and neuropsychological assessment can be combined to measure the progression of mild cognitive impairment (MCI) and early Alzheimer's disease (AD). Determination of sensitive and specific markers of very early AD progression is intended to aid researchers and clinicians to develop new treatments and monitor their effectiveness, as well as lessen the time and cost of clinical trials.

The Principal Investigator of this initiative is Michael W. Weiner, MD, VA Medical Center and University of California, San Francisco. ADNI is the result of efforts of many coinvestigators from a broad range of academic institutions and private corporations, and subjects have been recruited from over 50 sites across the US and Canada. The initial goal of ADNI was to recruit 800 adults, ages 55 to 90, to participate in the research: approximately 200 cognitively normal older individuals to be followed for 3 years, 400 people with MCI to be followed for 3 years, and 200 people with early AD to be followed for 2 years.

FDG-PET scans were acquired according to a standardized protocol. A 30-minute dynamic emission scan, consisting of six 5-minute frames, was acquired starting 30 minutes after the intravenous injection of 5.0 ± 0.5 mCi of ^{18}F-FDG. Subjects were instructed to fast for at least 4 h prior to the scan and to lie quietly in a dimly lit room with their eyes open and minimal sensory stimulation. Data were corrected for radiation attenuation and scatter using transmission scans from Ge-68 rotating rod sources and reconstructed using measured attenuation correction and the image reconstruction algorithms specified for each scanner (http://www.loni.ucla.edu/ADNI/Data/ADNI_Data.shtml). Following the scan, each image was reviewed for possible artifacts at the University of Michigan, and all raw and processed study data were archived.

##### 2.3. Brain Affine Registration

An affine transformation maps a position $\mathbf{x}$ in one image to a position $\mathbf{y}$ in another via a matrix $\mathbf{M}$, whose 12 components $m_{ij}$ are the unknown parameters. For each voxel $\mathbf{x} = (x_1, x_2, x_3, 1)^T$ in an image, the affine transformation into the coordinates $\mathbf{y} = (y_1, y_2, y_3, 1)^T$ is expressed by a matrix multiplication $\mathbf{y} = \mathbf{M}\mathbf{x}$, where

$$\mathbf{M} = \begin{pmatrix} m_{11} & m_{12} & m_{13} & m_{14} \\ m_{21} & m_{22} & m_{23} & m_{24} \\ m_{31} & m_{32} & m_{33} & m_{34} \\ 0 & 0 & 0 & 1 \end{pmatrix}. \tag{1}$$

The goal of the affine image registration is to find the 12 components of the matrix $\mathbf{M}$ describing a transformation that best matches both images (the source and the template) together.

The affine transformation is parametrized by 12 parameters. The diagonal elements $m_{11}$, $m_{22}$, and $m_{33}$ model zooms of the original image. These parameters allow us to scale the medical image. For example, if $m_{11} > 1$, the transformed image is larger than the original one along the $x$-axis. A zoom in the image is desirable when the image and template do not have the same size. Translations along the $x$-, $y$-, or $z$-axis are parametrized by the components $m_{14}$, $m_{24}$, and $m_{34}$, respectively, and provide a way to center the image and the template. Shears are modeled by the nondiagonal elements of the matrix ($m_{12}$, $m_{13}$, $m_{21}$, $m_{23}$, $m_{31}$, and $m_{32}$). Figure 1 shows the effects produced by different elementary 2-D affine transformations on a transaxial 2-D MRI slice. The left image is a random 2-D transaxial slice. Figures 1(a), 1(b), and 1(c) show the effects produced by changes in the diagonal elements of the 2-D affine matrix: balanced diagonal elements result in global rescaling (Figure 1(a)); unbalanced diagonal elements produce anisotropic rescaling along the $x$-axis (Figure 1(b)) or the $y$-axis (Figure 1(c)). Some specific relations between off-diagonal elements in the matrix produce rotations (Figure 1(d)), and, lastly, Figure 1(e) depicts the effect of an unbalanced change in the off-diagonal elements, resulting in shears along one of the two axes.
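The roles of the 12 parameters can be sketched in a few lines of NumPy. This is an illustrative construction (the helper name and argument layout are our own, not the authors' code): the diagonal carries the zooms, the last column the translations, and the remaining off-diagonal entries the shears.

```python
import numpy as np

# Illustrative sketch: build a 12-parameter affine matrix in
# homogeneous coordinates and apply it to a voxel position.
def affine_matrix(zooms=(1.0, 1.0, 1.0),
                  translation=(0.0, 0.0, 0.0),
                  shears=((0.0, 0.0), (0.0, 0.0), (0.0, 0.0))):
    """Diagonal entries model zooms, the last column translations,
    and the off-diagonal entries shears (or, combined, rotations)."""
    zx, zy, zz = zooms
    tx, ty, tz = translation
    (sxy, sxz), (syx, syz), (szx, szy) = shears
    return np.array([[zx,  sxy, sxz, tx],
                     [syx, zy,  syz, ty],
                     [szx, szy, zz,  tz],
                     [0.0, 0.0, 0.0, 1.0]])

# A zoom of 1.2 along the x-axis applied to voxel (10, 20, 30):
M = affine_matrix(zooms=(1.2, 1.0, 1.0))
x = np.array([10.0, 20.0, 30.0, 1.0])   # homogeneous coordinates
y = M @ x                                # transformed position
```

With this matrix, `y[:3]` is `[12, 20, 30]`: only the first coordinate is rescaled, as described above.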

Before the application of an affine transformation to the original functional image, it is convenient to smooth the source image to improve accuracy [15]. This step decreases the number of potential local minima in the optimization task. On the other hand, the intensity values in the original image are referred to the center of the voxels. After the application of an affine transformation to an image, the centers of the voxels of the image rarely are placed in the centers of the voxels in the transformed image. Therefore, interpolation is needed in order to estimate the intensity value in the center of the voxels for the transformed image [16, 17].
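Both steps, smoothing and resampling with interpolation, are available in standard libraries. The following sketch assumes SciPy's `ndimage` module (this is not the authors' implementation; the sigma and zoom values are arbitrary illustrations):

```python
import numpy as np
from scipy import ndimage

# Sketch: smooth the source volume, then apply an affine transform with
# spline interpolation to estimate intensities at the new voxel centers.
volume = np.random.default_rng(0).random((16, 16, 16))
smoothed = ndimage.gaussian_filter(volume, sigma=1.5)  # fewer local minima

A = np.eye(3)                        # linear part of the affine matrix
A[0, 0] = 1.2                        # zoom along the first axis
offset = np.array([0.0, -1.0, 0.0])  # translation along the second axis
transformed = ndimage.affine_transform(smoothed, A, offset=offset, order=3)
```

Note that `ndimage.affine_transform` uses a pull (backward) mapping: each output coordinate `o` is sampled from input coordinate `A @ o + offset`, which is exactly why the interpolation (here cubic spline, `order=3`) happens in the source image.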

##### 2.4. Gauss-Newton Optimization Algorithm

The parameters can be estimated by minimizing a given cost function. The cost function (CF) is chosen as the mean squared difference between the images:

$$\mathrm{CF} = \frac{1}{N} \sum_{i=1}^{N} \left( f(\mathbf{M}\mathbf{x}_i) - g(\mathbf{x}_i) \right)^2, \tag{2}$$

where $f$ denotes the source image, $g$ the template, and $N$ the number of voxels. In this work, the Gauss-Newton algorithm is used to estimate the matrix components of $\mathbf{M}$ by finding the minimum of the cost function CF.

The Gauss-Newton algorithm (GN) can be viewed as a modification of Newton's method with line search [18]. It is an iterative procedure for finding the minimum of a sum of squares. Starting with an initial guess $\boldsymbol{\beta}^{(0)}$, at each iteration the value of $\boldsymbol{\beta}$ is updated using the following rule:

$$\boldsymbol{\beta}^{(s+1)} = \boldsymbol{\beta}^{(s)} + \boldsymbol{\delta},$$

with $\boldsymbol{\delta}$ satisfying the equation

$$\left( \mathbf{J}^T \mathbf{J} \right) \boldsymbol{\delta} = -\mathbf{J}^T \mathbf{r},$$

where $\mathbf{r}$ is the vector of residual functions and $\mathbf{J}$ is the Jacobian matrix of $\mathbf{r}$ with respect to $\boldsymbol{\beta}$ evaluated at $\boldsymbol{\beta}^{(s)}$. This optimization method is like Newton's method with line search, but in this case the Hessian is approximated using

$$\mathbf{H} \approx 2\, \mathbf{J}^T \mathbf{J}.$$

This choice avoids the computation of the individual Hessians of the residuals, which can sometimes be difficult to compute.
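The Gauss-Newton iteration described above can be sketched in a few lines. This is an illustrative toy (a noiseless exponential fit, not the registration problem itself), with hypothetical function names:

```python
import numpy as np

# Minimal Gauss-Newton sketch: minimize the sum of squared residuals
# r(beta) by iterating beta <- beta + delta, where (J^T J) delta = -J^T r.
def gauss_newton(residuals, jacobian, beta0, n_iter=15):
    beta = np.asarray(beta0, dtype=float)
    for _ in range(n_iter):
        r = residuals(beta)                           # residual vector
        J = jacobian(beta)                            # Jacobian of r
        delta = np.linalg.solve(J.T @ J, -J.T @ r)    # normal equations
        beta = beta + delta
    return beta

# Toy example: fit y = a*exp(b*t) to noiseless data with a=2, b=0.5.
t = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(0.5 * t)
res = lambda b: b[0] * np.exp(b[1] * t) - y
jac = lambda b: np.column_stack([np.exp(b[1] * t),
                                 b[0] * t * np.exp(b[1] * t)])
beta = gauss_newton(res, jac, [1.5, 0.4])   # converges toward [2, 0.5]
```

In the registration setting, `beta` would hold the 12 affine parameters and the residuals would be the voxelwise differences between the transformed source image and the template.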

##### 2.5. Histogram Equalization

Histogram matching is a procedure in which a series of histogram-equalization steps is used to obtain an image whose histogram is close to a prespecified histogram.

Suppose that the desired or specified normalized histogram is $p_z(z)$, with the desired image represented as $z$, having normalized gray levels in $[0, 1]$. Now, the given image, with PDF $p_r(r)$, may be histogram-equalized by the transformation

$$s = T(r) = \int_0^r p_r(w)\, dw.$$

We may also derive a histogram-equalizing transform for the desired (but as yet unavailable) image as

$$v = G(z) = \int_0^z p_z(w)\, dw.$$

Observe that, in order to derive a histogram-equalizing transform, we need only the PDF of the image; the image itself is not needed.

When a desired histogram is supplied, histogram equalization mathematically consists of choosing the grayscale transformation $T$ to minimize

$$\left| c_1(T(k)) - c_0(k) \right|,$$

where $c_1$ is the cumulative histogram of the reference image (the desired image represented as $z$) and $c_0$ is the cumulative histogram of the source image, for all intensities $k$. This minimization is subject to the constraints that $T$ must be monotonic and that $c_1(T(k))$ cannot overshoot $c_0(k)$ by more than half the distance between the histogram counts at a given intensity value $k$. The transformation $T$ is then used to map the gray levels in the image (or the colormap) to their new values.
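A simple CDF-based implementation of this mapping can be sketched as follows (an illustrative version, not the exact constrained minimization above; the function name is ours):

```python
import numpy as np

# Illustrative histogram-matching sketch: map the intensities of
# `source` so its cumulative histogram approximates that of `reference`.
def histogram_match(source, reference):
    src = np.ravel(source)
    ref = np.ravel(reference)
    # Empirical cumulative distribution of each image
    s_values, s_counts = np.unique(src, return_counts=True)
    r_values, r_counts = np.unique(ref, return_counts=True)
    s_cdf = np.cumsum(s_counts).astype(float) / src.size
    r_cdf = np.cumsum(r_counts).astype(float) / ref.size
    # Monotonic mapping: for each source CDF value, take the reference
    # intensity at (approximately) the same CDF value.
    mapped = np.interp(s_cdf, r_cdf, r_values)
    return np.interp(src, s_values, mapped).reshape(source.shape)

# Example: stretch a [0,1) image onto a reference spanning [5, 7):
rng = np.random.default_rng(0)
source = rng.random((16, 16))
reference = 5.0 + 2.0 * rng.random((16, 16))
matched = histogram_match(source, reference)
```

After matching, the output intensities lie within the reference range, and the two cumulative histograms approximately coincide, which is exactly the preprocessing applied to the source brain images here.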

Figure 2 depicts four different brain PET images and their histograms. Figure 2(a) is a reference PET image. Figure 2(b) shows a random transaxial 2-D PET image as obtained after reconstruction. Figure 2(c) shows a version of the same source PET image with its dynamic range expanded to the same interval as the reference image ($[0, 1]$). Figure 2(d) shows the source image after the application of a histogram matching procedure using the histogram of the reference image in Figure 2(a) as the prespecified histogram. Note that, in this case, both images (Figures 2(a) and 2(d)) and their histograms are very similar.

##### 2.6. Summary

The procedure we follow to preprocess the functional brain images before proceeding with an affine registration to a template is summarized in this subsection.

(i) Firstly, we apply a mask to the source images and consider only those voxels with intensity values greater than a given threshold. This step discards the voxels of the image outside the brain.

(ii) Secondly, we calculate the histogram of the template image.

(iii) Then, we perform histogram matching, adjusting the intensity values in the source images to the intensity of the reference template.

(iv) Lastly, we build a cost function (using the mean squared difference between the source image and the reference template) and estimate the 12 parameters of the affine matrix using a Gauss-Newton optimization procedure.
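The steps above can be sketched end to end. This is a hypothetical reconstruction, not the authors' code: the threshold, the helper names, and the quantile-based matching are our own simplifications.

```python
import numpy as np

def match_cdf(src, ref):
    """Map each source intensity to the reference intensity at the
    same quantile (a simple form of histogram matching)."""
    ranks = np.argsort(np.argsort(src))         # rank of each voxel
    quantiles = ranks / max(src.size - 1, 1)    # empirical CDF values
    return np.quantile(ref, quantiles)

def preprocess(source, template, threshold=0.0):
    # (i) mask: keep only voxels above the threshold (discard background)
    mask = source > threshold
    out = np.zeros_like(source, dtype=float)
    # (ii)+(iii) adjust masked intensities to the template histogram
    out[mask] = match_cdf(source[mask], template[template > threshold])
    return out

def cost(image, template):
    # (iv) mean squared difference, the objective minimized by Gauss-Newton
    return np.mean((image - template) ** 2)
```

For instance, if the source is the template rescaled by a constant factor, `preprocess` recovers the template's intensity distribution and the cost drops essentially to zero before any affine parameter is estimated.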

#### 3. Results and Discussion

The proposed methodology has been tested for 50 different SPECT and PET images. The images have been spatially normalized to a common template using a general affine model with 12 parameters.

In order to test the performance of the proposed method, we estimate the 12 components of the affine matrix after adjustment of the intensity of the images using histogram matching. We also estimate the general affine transformation matrix for the original images with intensity values expanded from 0 to 1, which is the same range as the template image. Therefore, this dataset is linearly normalized in intensity to the maximum. We compare the results for these two datasets by means of a plot of the mean normalized cost function versus the number of iterations. The mean normalized cost function for the transformed images is plotted in red using dots in Figures 3 and 5. Let us note that this result cannot be compared directly with the performance of the affine registration optimization method using the original images, as the cost function (2) is intensity dependent. In order to perform a fair comparison, the affine matrix calculated in each iteration using the enhanced images is applied to the original brain image. Then, the cost function is calculated and plotted with blue squares (labelled as "histogram equalization"). The latter can be directly compared to the values of the normalized cost function in each iteration when the original images are used (labelled as "original" and plotted with black circles in Figures 3 and 5).

Figure 3 shows the mean normalised cost function value versus the number of iterations when the proposed methodology was tested using the PET database. The error bars are calculated using the standard deviation. In this case, preprocessing the images using histogram matching improves the convergence rate of the algorithm. Specifically, images preprocessed using histogram equalization reach in the sixth iteration the same mean normalized cost function value as the original images reach in the ninth iteration, which represents a significant improvement in the convergence rate.

Figure 4(a) shows four transaxial cuts of the mean PET image after affine normalization used in this work and its corresponding histogram of intensity values. Figure 4(b) shows the reference template which is used in this work, and Figure 4(c) depicts four transaxial cuts of the mean PET image after affine normalization for the 50 PET images which were preprocessed using histogram matching. In that case, both the transaxial images and the histogram show that the intensity values of the images are similar to the intensity values of the image template in Figure 4(b).

Figure 5 shows the normalized mean cost function versus the number of iterations for the 50 original SPECT brain images (denoted by circles) and the same images preprocessed using a histogram matching procedure with the reference template (squares). The error bars are calculated using the standard deviation. Analogously to the previous case, in which PET images were considered, this plot has been calculated by applying the affine matrix estimated in each iteration from the preprocessed images to the original SPECT brain images. Furthermore, as in Figure 3, the red dotted line is the mean normalised cost function for the preprocessed images using histogram matching. The figure shows that preprocessing the images improves the convergence rate. Specifically, the algorithm reaches in only four iterations a normalised cost function value very similar to the value obtained using the original SPECT images in the eighth iteration. When the number of iterations is greater than 8, the enhanced images yield a mean normalized cost function value slightly greater than that of the original images. Nevertheless, the proposed method is also useful for practical purposes in that case. For instance, histogram adjustment of the original images can be used to estimate the affine transformation matrix up to the fourth iteration, and this estimate can then be used as the initial guess in the Gauss-Newton optimization algorithm to calculate the 12 affine parameters using the original images.

Figure 6(a) depicts four transaxial brain images showing the mean after affine registration of 50 SPECT images considered in this work. The same is shown for the reference template, and let us note that Figures 6(b) and 4(b) are the same, as the same reference brain template image was used for the affine registration of the SPECT and PET images. Figure 6(c) depicts four transaxial brain cuts with transformed intensity values using histogram matching. Analogously to the case in which PET images were considered, Figure 6(c) shows that after histogram transformation, both the images and the histogram of intensity values are quite similar to the template in Figure 6(b).

In this work, histogram matching is used as a tool to increase the convergence rate of affine registration methods widely used in the literature. Histogram equalization is a common procedure to enhance contrast in image processing. We have shown that this approach can be used in the context of affine registration of brain images, specifically when the intensity values of the reference template differ from those of the source image, as is the case in intermodality fusion of brain images. Specifically, in this work, the convergence rate of affine registration using the Gauss-Newton optimization algorithm has been compared using 50 PET and 50 SPECT images. When the images were preprocessed using histogram matching (with the histogram of the template image as the prespecified desired histogram), the convergence rate of the algorithm improved.

#### 4. Conclusion

In this work, a procedure to improve the convergence rate in the context of affine registration of tomography brain images has been presented. The proposed methodology is based on histogram matching: the intensity values in the original image are preadjusted using the information of the histogram of the reference template. This intensity normalization has been tested on different imaging modalities: single photon emission computed tomography and positron emission tomography. This approach has been shown to improve the convergence rate of the affine registration of tomography brain images, especially when there is a great difference between the intensity values of the reference template and the images.

#### Conflict of Interests

The authors declare that they do not have any conflict of interests.

#### Acknowledgments

This work was partly supported by the projects P07-TIC-02566, P09-TIC-4530, STM-NEUROCAD P11-TIC-7103, and TEC2012-34306. Data collection and sharing for this project was funded by the Alzheimer’s Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following: Abbott, AstraZeneca AB, Bayer Schering Pharma AG, Bristol-Myers Squibb, Eisai Global Clinical Development, Elan Corporation, Genentech, GE Healthcare, GlaxoSmithKline, Innogenetics, Johnson and Johnson, Eli Lilly and Co., Medpace, Inc., Merck and Co., Inc., Novartis AG, Pfizer Inc., F. Hoffman-La Roche, Schering-Plough, Synarc, Inc., and Wyeth, as well as nonprofit partners of the Alzheimer’s Association and Alzheimer’s Drug Discovery Foundation, with participation from the US Food and Drug Administration. Private sector contributions to ADNI are facilitated by the Foundation for the National Institutes of Health (http://www.fnih.org/). The grantee organization is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer’s Disease Cooperative Study at the University of California, San Diego. ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of California, Los Angeles. This research was also supported by NIH Grants P30 AG010129 and K01 AG030514 and the Dana Foundation. Data used in the preparation of this paper were obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database (http://www.loni.ucla.edu/ADNI). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this work. 
A complete listing of ADNI investigators is available online at http://www.loni.ucla.edu/ADNI/Collaboration/ADNI_Manuscript_Citations.pdf.

#### References

1. L. S. Yin, L. Tang, G. Hamarneh et al., "Complexity and accuracy of image registration methods in SPECT-guided radiation therapy," *Physics in Medicine and Biology*, vol. 55, no. 1, pp. 237–246, 2010.
2. D. Salas-Gonzalez, J. M. Górriz, J. Ramírez, A. Lassl, and C. G. Puntonet, "Improved Gauss-Newton optimisation methods in affine registration of SPECT brain images," *Electronics Letters*, vol. 44, no. 22, pp. 1291–1292, 2008.
3. D. B. Stout, E. Komisopoulou, A. F. Chatziioannou, and P. L. Chow, "A method of image registration for small animal, multi-modality imaging," *Physics in Medicine and Biology*, vol. 51, no. 2, pp. 379–390, 2006.
4. L. Zagorchev and A. Goshtasby, "A comparative study of transformation functions for nonrigid image registration," *IEEE Transactions on Image Processing*, vol. 15, no. 3, pp. 529–538, 2006.
5. P. A. Freeborough, R. P. Woods, and N. C. Fox, "Accurate registration of serial 3D MR brain images and its application to visualizing change in neurodegenerative disorders," *Journal of Computer Assisted Tomography*, vol. 20, no. 6, pp. 1012–1022, 1996.
6. M. Holden, D. L. G. Hill, E. R. E. Denton et al., "Voxel similarity measures for 3D serial MR image registration," *IEEE Transactions on Medical Imaging*, vol. 19, pp. 94–102, 2000.
7. S. Chang, B. Davis, E. Chaney, N. Strehl, M. Foskey, and L. Goyal, "Large deformation three-dimensional image registration in image-guided radiation therapy," *Physics in Medicine and Biology*, vol. 50, no. 24, pp. 5869–5892, 2005.
8. B. Zitová and J. Flusser, "Image registration methods: a survey," *Image and Vision Computing*, vol. 21, no. 11, pp. 977–1000, 2003.
9. D. L. G. Hill, P. G. Batchelor, M. Holden, and D. J. Hawkes, "Medical image registration," *Physics in Medicine and Biology*, vol. 46, no. 3, pp. R1–R45, 2001.
10. P. M. Thompson and A. W. Toga, "The role of image registration in brain mapping," *Image and Vision Computing*, vol. 19, no. 1-2, pp. 3–24, 2001.
11. S. Vandenberghe, Y. D'Asseler, R. Van de Walle et al., "Iterative reconstruction algorithms in nuclear medicine," *Computerized Medical Imaging and Graphics*, vol. 25, pp. 105–111, 2001.
12. P. P. Bruyant, "Analytic and iterative reconstruction algorithms in SPECT," *Journal of Nuclear Medicine*, vol. 43, no. 10, pp. 1343–1358, 2002.
13. J. Zhou and L. M. Luo, "Sequential weighted least squares algorithm for PET image reconstruction," *Digital Signal Processing*, vol. 16, no. 6, pp. 735–745, 2006.
14. J. Ramírez, J. M. Górriz, M. Gomez-Río et al., "Effective emission tomography image reconstruction algorithms for SPECT data," in *Proceedings of the International Conference on Computational Science (ICCS '08)*, vol. 5101 of *Lecture Notes in Computer Science*, pp. 741–748, Springer, Berlin, Germany, 2008.
15. R. P. Woods, S. T. Grafton, C. J. Holmes, S. R. Cherry, and J. C. Mazziotta, "Automated image registration: I. General methods and intrasubject, intramodality validation," *Journal of Computer Assisted Tomography*, vol. 22, no. 1, pp. 139–152, 1998.
16. P. Thevenaz, T. Blu, and M. Unser, "Image interpolation and resampling," in *Handbook of Medical Imaging, Processing and Analysis*, I. N. Bankman, Ed., pp. 393–420, Academic Press, San Diego, Calif, USA, 2000.
17. P. Thevenaz, T. Blu, and M. Unser, "Interpolation revisited," *IEEE Transactions on Medical Imaging*, vol. 19, no. 7, pp. 739–758, 2000.
18. J. Nocedal and S. J. Wright, *Numerical Optimization*, Springer, New York, NY, USA, 1999.