Research Article  Open Access
Prior Image Guided Undersampled Dual Energy Reconstruction with Piecewise Polynomial Function Constraint
Abstract
Dual energy CT provides more information about the test object by reconstructing the attenuation factors under different energies. The images under different energies share identical structures but different attenuation factors. By referring to the fully sampled low-energy image, we show that it is possible to greatly reduce the sampling rate of the high-energy image in order to lower dose. To compensate for the attenuation factor difference between the two modalities, we use piecewise polynomial fitting to fit the low-energy image to the high-energy image. During the reconstruction, the result is constrained by its distance to the fitted image, so the structural information can be preserved. An ASD-POCS-based optimization schedule is proposed to solve the problem, and numerical simulations are carried out to verify the algorithm.
1. Introduction
Computed tomography has become an important nondestructive detection method in medicine, industry, and security. Typically, CT scans the object at a single energy to reconstruct the attenuation factors and thereby evaluate the density distributions inside the test object. However, some materials have attenuation factors that are close and hard to distinguish, which complicates diagnosis. Since attenuation factors differ under different X-ray energies, DECT [1] has been introduced to enhance the material distinguishing ability of CT. Furthermore, atomic numbers, electron densities, or specific material equivalent densities can also be reconstructed from DECT for better visualization.
In DECT, the test object is scanned under different energies while keeping the object fixed. Thus, two different images, the low-energy image and the high-energy image, can be reconstructed independently from the two sets of projections, the low-energy projections and the high-energy projections. Although there are various techniques for DECT reconstruction, for example, pre-reconstruction [2], post-reconstruction [3], and iterative reconstruction [4], we concentrate on reconstructing the two attenuation images directly in this paper to demonstrate the mathematics of our method.
Radiation dose has drawn increasing concern recently with the growing public awareness of the possible risks brought about by CT scans. One of the most efficient ways to reduce dose is to reduce the number of samples. According to the theory of compressed sensing (CS) [5, 6], when the number of samples is reduced beneath the conventionally required sampling rate, one can still accurately recover the signal by incorporating prior knowledge into the reconstruction. In DECT, the two images share identical structures because they are acquired from the same object at almost the same time. One straightforward strategy for dose reduction is therefore to undersample the high-energy projections while keeping the low-energy projections fully sampled. During the reconstruction of the high-energy image, the structural information extracted from the well-reconstructed low-energy image can be utilized to improve the reconstruction quality.
Although the two images share the same structure, their attenuation factors differ under the two energies, which leads to the grey scale difference in the reconstructed images. Similar situations can be found in multi-modality imaging, where the grey scale values of the images differ widely from each other. The Bowsher prior has been used in MR/SPECT imaging to improve the reconstruction quality of SPECT [7, 8]. For DECT, Liu et al. used image segmentation on the fully sampled low-energy image to reduce the number of variables during the reconstruction of the high-energy image [9]. Recently, some methods invoking CS have been proposed for DECT. Szczykutowicz and Chen applied PICCS to a slow kVp switching acquisition scheme and achieved good results [10, 11]. Xing and Zheng applied ART-TV to the ratio image of the high-energy image to the low-energy image for a sparser representation [12].
It has been shown that invoking reference images in CS-based reconstruction can improve the reconstruction quality, but grey scale compensation remains a challenge in DECT. For example, the PICCS algorithm requires the reference image and the target image to be as close as possible, but the attenuation factor may differ widely under different energies. The ratio image method, on the other hand, requires that the changes in the grey scale values be proportional, so that there are fewer edges in the ratio image and its gradient image is sparser. However, the change of the attenuation factor under different energies is unpredictable, and the conditions required by the above methods may sometimes be violated.
Here, we propose a novel CS-based method for undersampled DECT reconstruction. The well-reconstructed low-energy image is used to sparsify the undersampled high-energy image. To compensate for the grey scale difference, both images are divided into patches, and polynomial fitting is used on each pair of patches to fit the reference image to the target image. An l1-norm constraint is applied to the distance from the target image to the fitted image, and ASD-POCS [13] is adopted for the optimization. Since piecewise polynomial fitting can almost precisely approximate the target image in most cases, promising results can be achieved even when the sampling rate is greatly reduced.
The method is largely motivated by one of our previous works [14], feature constrained compressed sensing (FCCS). In the FCCS method, a linear space extracted from a training image set is used as the prior knowledge rather than a single image. During the reconstruction, the distance between the target image and the prior space is used as the constraint on the result. In this paper, the prior space is generated from a single reference image by taking its powers. The constraint on the distance between the target image and the polynomial space is realized by piecewise polynomial fitting.
The paper is organized as follows. In Section 2, the mathematical principles of the algorithm are presented. Numerical simulation results are shown in Section 3. Section 4 presents conclusions and discussion.
2. Methodology
2.1. An Overview of the Method
The reconstruction formula of the piecewise polynomial function constrained (PPFC) method is as follows:

min_{x_H} ||(I − F) x_H||_1   s.t.   ||A x_H − p_H||_2 ≤ ε,   x_H ≥ 0,   (1)

where F is the piecewise polynomial fitting (PPF) matrix, which approximates the high-energy image x_H by the low-energy image x_L. The images are represented by the vectors x_L and x_H, A is the system matrix, and p_H denotes the high-energy projections. After a high-quality x_L is reconstructed from the fully sampled low-energy projections, it is used as the prior knowledge to iteratively solve for x_H from the undersampled p_H according to (1) by the ASD-POCS algorithm.
2.2. Image Approximation by Polynomial Fitting
Before introducing PPF, we first present the least squares polynomial fitting method and some of its properties. In DECT, one way to sparsify the high-energy image by the low-energy image is to find a transform of the low-energy image that makes its difference from the high-energy image small. We will show that piecewise polynomial fitting is a good way to approximate the high-energy image by the low-energy image, which makes the difference between them sparse.
Image approximation by polynomial fitting is to solve the following problem:

min_{a_0,...,a_N} || y − Σ_{n=0}^{N} a_n x^(n) ||_2^2,   (2)

where x and y denote the reference and target image patches, and

x^(n) = (x_1^n, x_2^n, ..., x_M^n)   (3)

is defined as the element-by-element power of x; x^(0) is an all-ones vector. N is the order of the polynomial, and a_0 to a_N are the corresponding coefficients, which are determined by both x and y.
Equation (2) can be written in matrix form:

min_a || y − V a ||_2^2,   (4)

where V is an (N+1)-order Vandermonde matrix with V_{ij} = x_i^{j−1}; x_i means the ith element of x. To further reduce the scale of (4), we adopted the normal equations for the least squares solution:

H = V^T V,   H_{jk} = Σ_i x_i^{j+k−2},   (5)

H a = V^T y,   (6)

where H is an (N+1)-order Hankel matrix. Although (6) is not the most numerically stable solution for least squares problems, it is sufficient for low-order polynomial fitting. Efficient algorithms such as the Gohberg-Kailath-Koltracht (GKK) algorithm [15] can be used to solve (6). The approximation result can be expressed as

ŷ = V a = V (V^T V)^{−1} V^T y.   (7)
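As a concrete illustration, the least squares fit of (2)-(6) can be sketched in a few lines of NumPy (a minimal sketch; the function and argument names are ours, and the dense solve stands in for the structured GKK solver):

```python
import numpy as np

def polyfit_patch(x, y, order=4):
    """Fit y ~ a0 + a1*x + ... + aN*x**N (element-wise powers) by
    solving the normal equations H a = V^T y, where V is the
    Vandermonde matrix of x and H = V^T V is a Hankel matrix whose
    entries are the power sums sum_i x_i**(j+k)."""
    V = np.vander(x, N=order + 1, increasing=True)  # columns x**0 .. x**order
    H = V.T @ V                                     # (order+1)-order Hankel matrix
    a = np.linalg.solve(H, V.T @ y)                 # polynomial coefficients
    return a, V @ a                                 # coefficients and fitted patch
```

For a low-energy patch `x` and a high-energy patch `y`, `V @ a` is the fitted patch of (7); note that `H` only requires the 2N+1 power sums of `x`, which is what makes the normal-equation route cheap for low orders.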
The approximation has some good properties. Firstly, low-order polynomials are smooth, mapping similar values in the reference patch to similar values in the fitted patch. Thus, the fitting result will not be seriously influenced by noise. Secondly, if there are not too many distinct values present in the images, low-order polynomials can always guarantee a good approximation. For example, if there are only 5 pairs of values present in the two patches, a polynomial of order 4 is able to map them precisely. If the values distribute around a few points, the approximation error will also remain small. Furthermore, low-order polynomial fitting is numerically stable, whereas high-order least squares polynomial fitting requires high computational precision and is sensitive to rounding error. To address this, we divide the image into small patches so that there are not too many distinct grey values in each patch and low-order polynomial fitting is applicable. Last but not least, the least squares polynomial fitting problem can be efficiently solved by the GKK algorithm, whose time and space complexities are both low. Since the polynomial order is small, the method is both time and memory saving.
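The five-value claim above is easy to verify numerically: with exactly 5 distinct grey values, the degree-4 Vandermonde system is square and the mapping is exact interpolation (the grey values below are made up for illustration):

```python
import numpy as np

# Five distinct low-energy grey values mapped to arbitrary high-energy
# values; a polynomial of order 4 has five coefficients, so the square
# Vandermonde system interpolates the value pairs exactly.
low = np.array([0.2, 0.5, 0.9, 1.3, 1.8])
high = np.array([0.15, 0.30, 0.55, 0.40, 0.25])  # hypothetical attenuation values

V = np.vander(low, N=5, increasing=True)  # 5 x 5, nonsingular for distinct values
coeff = np.linalg.solve(V, high)
assert np.allclose(V @ coeff, high)       # the order-4 polynomial maps low -> high exactly
```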
In our experiments, the order is set to 4, which is the minimum value required for fitting in our case. A small order makes computation with the single-precision floating point data type feasible. A higher order is applicable but means a heavier computational load; it may be preferred when the test object is more complex, for example, when the object contains more kinds of materials in a small region.
2.3. Piecewise Polynomial Fitting for the Entire Image
As we have stated, low-order polynomial fitting requires that the grey values of the image distribute around only a few points. Thus, the image region used for fitting should be small for accuracy. This can be achieved by extracting patches from the original image and applying polynomial fitting on the subimages. On the other hand, the patches should be big enough to retain the structural information within them. There should also be adequate overlapping areas between adjacent patches, or the differences between patches will lead to obvious block-like artifacts in the approximated image. In our experiments, the patch size is selected as 16 by 16, and the offset of adjacent patches is 4 by 4, which leads to an overlapping size of 12 by 12.
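The patch grid described above can be sketched as follows (a minimal sketch; the function and variable names are ours):

```python
import numpy as np

def extract_patches(img, size=16, step=4):
    """Slide a size-by-size window with the given offset between
    adjacent patches; neighbours overlap by (size - step) pixels in
    each direction."""
    H, W = img.shape
    patches, positions = [], []
    for i in range(0, H - size + 1, step):
        for j in range(0, W - size + 1, step):
            patches.append(img[i:i + size, j:j + size])
            positions.append((i, j))
    return patches, positions
```

With `size=16` and `step=4` the overlap is 12 pixels in each direction, matching the setting used in the experiments.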
Figure 1 shows the PPF results on the noisy phantom with different patch sizes and offsets. The noisy low-energy image is used to fit the noisy high-energy image. Obvious block-like artifacts can be observed when the overlap between adjacent patches is too small. This is because the noise distributions in the individual patches are not identical, and without proper smoothing between adjacent patches, block-like artifacts emerge. Furthermore, the patch size should be larger than the offset; otherwise, part of the image will be missing in the result (see the result at the bottom left corner of Figure 1).
When approximating the high-energy image by the low-energy image with PPF, for each pair of patches, the high-energy image patch is approximated by the low-energy image patch according to (7). Then the entire image is generated by aggregating the patches. Define the patch selection matrix R_k, which satisfies

R_k x = the kth patch extracted from the image x.   (9)

Then putting the patch back to its original position can be achieved by the transposed matrix R_k^T. Denoting the fitting matrix of the kth patch pair in (7) by F_k, the aggregation can be expressed as

F x = ( Σ_k R_k^T G F_k R_k x ) ⊘ ( Σ_k R_k^T G 1 ).   (10)

In (10), the division ⊘ is element by element, G is a diagonal Gaussian kernel matrix used for smoothness during the aggregation, and 1 is the all-ones vector. Equivalently, F x = Σ_k R_k^T W_k F_k R_k x, where W_k is the normalized weighting for each patch. It is a diagonal matrix,

W_k = G diag( R_k Σ_j R_j^T G 1 )^{−1},   (11)

which can be calculated by dividing the Gaussian kernel by the aggregated weightings.

The transpose of F is

F^T = Σ_k R_k^T F_k^T W_k^T R_k.   (12)

Since F_k is a symmetric matrix, F_k^T = F_k holds. Furthermore, W_k is diagonal, so it also holds that W_k^T = W_k. The transposed matrix can thus be written as

F^T = Σ_k R_k^T F_k W_k R_k.   (13)

The transposed operator will be of future use, so we take a look at its properties here. The PPF matrix is realized by first fitting the patches and then performing weighted aggregation, whereas the transposed PPF matrix is realized by first weighting the patches, then fitting, and then performing unweighted aggregation. The algorithms for PPF and its transpose are shown in Algorithms 1 and 2.
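Putting the pieces together, one pass of the PPF operator (patch-wise fitting followed by Gaussian-weighted aggregation, as in (10)) might look like this (a sketch under our own naming; `sigma` and the zero-guard constant are assumptions, not values from the paper):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    ax = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-ax ** 2 / (2 * sigma ** 2))
    return np.outer(g, g)

def ppf(target, reference, size=16, step=4, order=4, sigma=8.0):
    """Per patch: fit the target patch by element-wise powers of the
    reference patch, then aggregate the fitted patches with Gaussian
    weights and normalise by the summed weights."""
    H, W = target.shape
    G = gaussian_kernel(size, sigma)
    acc = np.zeros((H, W))
    wgt = np.zeros((H, W))
    for i in range(0, H - size + 1, step):
        for j in range(0, W - size + 1, step):
            x = reference[i:i + size, j:j + size].ravel()
            y = target[i:i + size, j:j + size].ravel()
            V = np.vander(x, N=order + 1, increasing=True)
            coef, *_ = np.linalg.lstsq(V, y, rcond=None)  # fit y by powers of x
            fit = (V @ coef).reshape(size, size)
            acc[i:i + size, j:j + size] += G * fit        # weighted aggregation
            wgt[i:i + size, j:j + size] += G
    return acc / np.maximum(wgt, 1e-12)                   # element-wise division
```

The per-patch fitting matrix V (V^T V)^{-1} V^T is a symmetric projection, which is exactly what lets the transposed operator in (13) reduce to weighting, fitting, and unweighted aggregation.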
A small constant is added in case the values in a patch are almost all zeros and the polynomial fitting algorithm fails; it is set to a small positive value in our experiments.
2.4. Optimization by ASD-POCS
The formula for optimization is shown in (1). When adapting the ASD-POCS algorithm to the problem, the only change to make is the first derivative of the objective function. The objective function is

f(x_H) = || (I − F) x_H ||_1,

where x_H is the high-energy image and F is the PPF matrix. Using the chain rule, one can easily derive the gradient of the objective function:

∇f(x_H) = (I − F)^T s_ε( (I − F) x_H ),

where s_ε is the element-by-element sign function smoothed near zero:

s_ε(t) = t / sqrt(t^2 + ε^2),

where ε is a small regularization factor near zero. In our experiments, this value is set at 0.01.
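The smoothed sign and the resulting gradient can be sketched as follows (a sketch: we use the common surrogate t/sqrt(t^2 + eps^2), and a dense matrix `F` stands in for the patch-wise PPF operator):

```python
import numpy as np

def smooth_sign(t, eps=0.01):
    # Smooth surrogate for the element-wise sign; eps keeps the
    # derivative finite near zero.
    return t / np.sqrt(t * t + eps * eps)

def ppfc_gradient(x, F, eps=0.01):
    """Gradient of ||(I - F) x||_1 with the smoothed sign:
    (I - F)^T s_eps((I - F) x)."""
    s = smooth_sign(x - F @ x, eps)
    return s - F.T @ s
```

For entries far from zero the smoothed sign is close to plus or minus one, so the gradient reduces to the usual subgradient of the l1 norm.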
The initial value of the iterations is crucial for the convergence speed. For PPFC, 10 iterations of SART are used to estimate the high-energy image, and PPF is then used to approximate the current image by the low-energy image. The approximation result is used as the initial value of the iterations.
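The SART pre-iterations can be sketched as below (a minimal dense-matrix sketch; a real implementation works on the projector directly, and `lam` is an assumed relaxation factor):

```python
import numpy as np

def sart(A, p, n_iter=10, lam=1.0):
    """Basic SART: x <- x + lam * Dc^-1 A^T Dr^-1 (p - A x), where
    Dr and Dc hold the row and column sums of the (nonnegative)
    system matrix A."""
    row = A.sum(axis=1)
    col = A.sum(axis=0)
    row[row == 0] = 1.0   # guard rays that miss the image
    col[col == 0] = 1.0   # guard pixels hit by no ray
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += lam * (A.T @ ((p - A @ x) / row)) / col
    return x
```

The result of 10 such iterations would then be passed through the PPF operator to form the initial image.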
The sparsity of the PPFC transform is shown in Figure 2. It can be seen that the PPFC transform of the well-reconstructed image is sparse, while there are far more nonzero entries in the image with artifacts.
3. Simulations
3.1. Simulation Method
Multi-energy projections are used for the experiments. The spectrum of the X-ray source is generated by the Monte Carlo method. Three different spectra are used: a 90 kVp spectrum, a 120 kVp spectrum, and a beam-hardened 160 kVp spectrum. There are two different phantoms for testing, a cylinder phantom and a realistic dental phantom. The cylinder phantom is forward projected with the 90 kVp and 160 kVp spectra; the energies used for the dental phantom are 90 kVp and 120 kVp. The low-energy image is reconstructed by FBP, and the high-energy image is reconstructed by the proposed PPFC method with only 15 projections uniformly sampled over 360 degrees. PICCS is also implemented under the ASD-POCS framework as a reference method, and its initial value is set to the energy-normalized low-energy image. The simulation phantoms are shown in Figure 3.
3.2. Cylinder Phantom Experiments
The cylinder phantom is first forward projected with the 90 kVp and 160 kVp spectra in fan beam geometry to obtain the low- and high-energy projections. Then the low-energy image is reconstructed from 900 projections by FBP, and the region outside the FBP ROI is set to zero. For the high-energy image, only 15 projections uniformly sampled across 360 degrees are used for reconstruction. Both PPFC and PICCS are employed to reconstruct the phantom. Furthermore, the algorithms are also tested against noise: noise is added to the projections by fixing the number of photons emitted by the X-ray source per detector bin per projection. The scan geometry is shown in Table 1, and the simulation results are shown in Figure 4 and Table 2.
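The noise model described above, photon-counting noise on the line integrals, can be sketched as follows (a sketch; `n0` below is an assumed photon count, not the value used in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_poisson_noise(proj, n0=1e5):
    """Poisson noise on projections: the detected intensity n0*exp(-p)
    is Poisson distributed; taking the negative log of the normalized
    counts recovers the noisy line integral."""
    counts = rng.poisson(n0 * np.exp(-proj))
    counts = np.maximum(counts, 1)   # guard against log(0) in dark bins
    return -np.log(counts / n0)
```

Longer ray paths attenuate more photons, so they receive relatively more noise, which is the behaviour the experiments test against.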
The RMSE is calculated against the noiseless FBP result. The iteration numbers do not include the 10 pre-iterations and are sufficient for convergence. The code is implemented in C++ on a machine with an Intel Core i5 processor and 3 GB of memory.
In the results of PICCS, the cylinder made of 1% NaI solution is blurred. The reason is that in the low-energy reference image, the attenuation factor of the 1% NaI solution is larger than that of the background PMMA cylinder, but in the high-energy target image, it becomes smaller than PMMA's attenuation factor. The way PICCS uses the prior image, by subtraction, actually reduces the sparsity on the cylinder of 1% NaI solution. As a consequence, PICCS is not able to recover the cylinder of 1% NaI solution well, while the other cylinders are well preserved.
As for the results of PPFC, all the cylinders, including the 1% NaI solution cylinder, are well recovered. Furthermore, the results from the noisy projections have not degraded much compared to the noiseless results. Thus, the experiments indicate that PPFC compensates for the grey scale difference and is stable against noise.
3.3. Realistic Phantom Experiments
The dental phantom is forward projected with the 90 kVp and 120 kVp spectra in fan beam geometry. The low-energy image is reconstructed by FBP from 720 projections. The high-energy projections are downsampled to 15 views and reconstructed by PPFC and PICCS. Noise is also added to the projections with a fixed initial photon number. The scan geometry is shown in Table 3, and the corresponding results are shown in Figure 5 and Table 4.
The RMSE is calculated against the noiseless FBP results.
The experiments on the realistic phantom show that PPFC is able to reconstruct objects with complicated structures as well as simple ones. It also shows an advantage over PICCS in terms of RMSE.
4. Conclusion and Discussions
In this paper, we propose a CS-based method for undersampled DECT reconstruction with a piecewise polynomial function constraint. The low-energy image is reconstructed from fully sampled projections, and the high-energy image is reconstructed from severely undersampled projections with the well-reconstructed low-energy image as the reference. The proposed piecewise polynomial fitting method compensates well for the grey scale difference between the high- and low-energy images. Under most conditions, the target image can be well approximated from the reference image by the PPF, which ensures the sparsity of the PPFC transform. The simulation results show that our method is both accurate and stable.
A drawback of the algorithm is that the piecewise polynomial fitting is still not efficient enough. However, the fitting of each patch is independent, so the algorithm can be accelerated by parallel computation. Furthermore, the algorithm has the potential to reconstruct the decomposition coefficient images in DECT, whose values are far from those in the low-energy image. Applying the method to dual-effect or dual-material decomposition reconstruction is a subject of future work.
Acknowledgments
This work was partly supported by grants from NNSFC 10905030 and the Beijing Natural Science Foundation (research on key techniques of medical cone-beam CT reconstruction from little data based on compressed sensing theory).
References
[1] P. Engler and W. D. Friedman, “Review of dual-energy computed tomography techniques,” Materials Evaluation, vol. 48, no. 5, pp. 623–629, 1990.
[2] R. E. Alvarez and A. Macovski, “Energy-selective reconstructions in X-ray computerised tomography,” Physics in Medicine and Biology, vol. 21, no. 5, pp. 733–744, 1976.
[3] J. C. M. Steenbeek, C. van Kuijk, J. L. Grashuis, and R. B. van Panthaleon van Eck, “Selection of fat-equivalent materials in post-processing dual-energy quantitative CT,” Medical Physics, vol. 19, no. 4, pp. 1051–1056, 1992.
[4] J. A. Fessler, I. Elbakri, P. Sukovic, and N. H. Clinthorne, “Maximum-likelihood dual-energy tomographic image reconstruction,” in Medical Imaging 2002: Image Processing, vol. 4684 of Proceedings of SPIE, pp. 38–49, San Diego, Calif, USA, May 2002.
[5] E. J. Candès and J. K. Romberg, “Signal recovery from random projections,” in Computational Imaging III, vol. 5674 of Proceedings of SPIE, pp. 76–86, San Jose, Calif, USA, March 2005.
[6] E. J. Candès, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Communications on Pure and Applied Mathematics, vol. 59, no. 8, pp. 1207–1223, 2006.
[7] A. Bousse, S. Pedemonte, D. Kazantsev, S. Ourselin, S. Arridge, and B. F. Hutton, “Weighted MRI-based Bowsher priors for SPECT brain image reconstruction,” in Proceedings of the IEEE Nuclear Science Symposium Conference Record (NSS/MIC '10), pp. 3519–3522, Knoxville, Tenn, USA, November 2010.
[8] D. Kazantsev, A. Bousse, S. Pedemonte, S. R. Arridge, B. F. Hutton, and S. Ourselin, “Edge preserving Bowsher prior with nonlocal weighting for 3D SPECT reconstruction,” in Proceedings of the 8th IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI '11), pp. 1158–1161, Chicago, Ill, USA, April 2011.
[9] Y. Liu, Z. Chen, L. Zhang, Y. Xing, J. Cheng, and Z. Wang, “Dual energy CT reconstruction method with reduced data,” in Proceedings of the 10th International Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine, pp. 280–283, Beijing, China, September 2009.
[10] G.-H. Chen, J. Tang, and S. Leng, “Prior image constrained compressed sensing (PICCS): a method to accurately reconstruct dynamic CT images from highly undersampled projection data sets,” Medical Physics, vol. 35, no. 2, pp. 660–663, 2008.
[11] T. P. Szczykutowicz and G.-H. Chen, “Dual energy CT using slow kVp switching acquisition and prior image constrained compressed sensing,” Physics in Medicine and Biology, vol. 55, no. 21, pp. 6411–6429, 2010.
[12] Y. Xing and P. Zheng, “A restoration method for incomplete data in DECT,” in Proceedings of the IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC '11), pp. 4306–4310, Valencia, Spain, October 2011.
[13] E. Y. Sidky and X. Pan, “Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization,” Physics in Medicine and Biology, vol. 53, no. 17, pp. 4777–4807, 2008.
[14] D. Wu, L. Li, and L. Zhang, “Feature constrained compressed sensing CT image reconstruction from incomplete data via robust principal component analysis of the database,” Physics in Medicine and Biology, vol. 58, pp. 4047–4070, 2013.
[15] I. Gohberg, T. Kailath, and I. Koltracht, “Efficient solution of linear systems of equations with recursive structure,” Linear Algebra and Its Applications, vol. 80, pp. 81–113, 1986.
Copyright
Copyright © 2013 Dufan Wu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.