Translational Molecular Imaging Computing: Advances in Theories and Applications
Ming Li, Cheng Zhang, Chengtao Peng, Yihui Guan, Pin Xu, Mingshan Sun, Jian Zheng, "Smoothed ℓ0 Norm Regularization for Sparse-View X-Ray CT Reconstruction", BioMed Research International, vol. 2016, Article ID 2180457, 12 pages, 2016. https://doi.org/10.1155/2016/2180457
Smoothed ℓ0 Norm Regularization for Sparse-View X-Ray CT Reconstruction
Abstract
Low-dose computed tomography (CT) reconstruction is a challenging problem in medical imaging. To complement the standard filtered backprojection (FBP) reconstruction, sparse regularization reconstruction is gaining more and more research attention, as it promises to reduce radiation dose, suppress artifacts, and improve noise properties. In this work, we present an iterative reconstruction approach using an improved smoothed ℓ0 (SL0) norm regularization, which approximates the ℓ0 norm by a family of continuous functions to fully exploit the sparseness of the image gradient. Due to the excellent sparse representation of the reconstructed signal, the desired tissue details are preserved in the resulting images. To evaluate the performance of the proposed SL0 regularization method, we reconstruct simulated datasets acquired from the Shepp-Logan phantom and a clinical head slice image. Additional experimental verification is performed with two real datasets from scanned animal experiments. Compared to the reference FBP reconstruction and the total variation (TV) regularization reconstruction, the results clearly reveal that the presented method has characteristic strengths; in particular, it improves reconstruction quality by reducing noise while preserving anatomical features.
1. Introduction
X-ray computed tomography has been widely used clinically for disease diagnosis, surgical guidance, perfusion imaging, and so forth. However, the substantial X-ray radiation delivered during CT exams is likely to induce cancer and other diseases in patients [1, 2]. Therefore, the issue of low-dose CT reconstruction has been raised and has attracted more and more research attention. As far as we know, two low-dose strategies are widely studied for dose reduction: (1) lowering the X-ray tube current, measured in milliamperes (mA) or milliampere-seconds (mAs), or the X-ray tube voltage, measured in kilovolts (kV); and (2) lowering the number of sampling views during the CT inspection. The strategy of regulating mA or kV usually produces highly noisy projection data; thus, when the exposure dose is reduced, images reconstructed using methods such as FBP suffer from increased artifacts and noise [3], and diagnostic mistakes may appear. The latter approach may also induce image artifacts due to the limited sampling angles. As a result, the diagnostic value of the reconstructed images may be greatly degraded if inappropriate reconstruction approaches are applied.
To solve these problems, statistical reconstruction algorithms [4–9] attempt to produce high-quality images by better modeling the projection data and the imaging geometry, and they have shown superior performance compared to FBP-type reconstructions. Another path has recently been opened by compressed sensing (CS), with an existing range of applications in medical imaging, for example, magnetic resonance imaging (MRI), bioluminescence tomography, optical coherence tomography, and low-dose CT reconstruction [10–24]. CS theory reveals the potential capability of restoring sparse signals even when the Nyquist sampling theorem cannot be satisfied. Although the restricted isometry property (RIP) condition is not often satisfied in practice, CS-based reconstruction can yield more satisfying results than the traditional FBP algorithms in CT reconstruction [25]. Among several choices of sparse transforms, the gradient operator is motivated by the assumption that a preferable solution should be of bounded variation. This is known as total variation (TV) regularization, which favors predominantly piecewise-constant solutions and has been widely used in the CT reconstruction community. However, TV-regularized images may suffer from loss of detail features and contrast, resulting in staircase artifacts. It is well known that ℓ0 norm regularization can provide a sparser representation than TV regularization (an ℓ1 norm) [26, 27]. However, the application of the ℓ0 norm in image reconstruction is a nondeterministic polynomial-time (NP) hard problem; in addition, the ℓ0 norm is a nonconvex, discontinuous function.
The ℓ0 norm of a signal is defined as the total number of its nonzero elements and has a stronger effect in promoting sparse solutions, but this minimization problem is in general NP-hard to solve. A natural question is therefore whether preferable results can be achieved by using regularization forms between the ℓ0 norm and the ℓ1 norm. In this work, we present a smoothed ℓ0 (SL0) norm regularization model for sparse-view X-ray CT reconstruction. This SL0 regularization permits a dynamic regularization modulation and can achieve a good balance between ℓ0- and ℓ1-based regularizations. The paper is organized as follows. In Section 2, the SL0 norm model is first described, and then the detailed optimization algorithm and the parameter settings are given. Section 3 presents experiments conducted on projection data from the Shepp-Logan phantom, the head slice image, and the scanned mouse. The reconstructed results demonstrate that the proposed SL0 regularization produces better images with legible anatomical features and preferable noise characteristics compared to TV regularization. Finally, discussions and conclusions are given at the end of this paper.
2. Methods
2.1. Problem Formulation
The idea of the SL0 norm originates from the effort of minimizing a concave function that approximates the ℓ0 norm [26]. To address the discontinuity of the ℓ0 norm, we approximate this discontinuous function by a feasible continuous one and minimize it by means of a minimization algorithm for continuous functions (e.g., the steepest descent method). The continuous function used to approximate the ℓ0 norm has a modulation parameter σ, which determines the degree of approximation. The family of cost functions is defined as

$$f_{\sigma}(x)=\exp\left(-\frac{x^{2}}{2\sigma^{2}}\right), \tag{1}$$

noting that

$$\lim_{\sigma\to 0} f_{\sigma}(x)=\begin{cases}1, & x=0\\ 0, & x\neq 0,\end{cases} \tag{2}$$

or it can be approximately expressed as

$$f_{\sigma}(x)\approx\begin{cases}1, & |x|\ll\sigma\\ 0, & |x|\gg\sigma.\end{cases} \tag{3}$$

Then the SL0 norm of a signal $s=(s_{1},\dots,s_{n})$ is defined as

$$\|s\|_{\mathrm{SL0}}=n-F_{\sigma}(s),\qquad F_{\sigma}(s)=\sum_{i=1}^{n} f_{\sigma}(s_{i}). \tag{4}$$

In (4), $n$ is the length of the reconstructed signal. From (2) and (3), we can observe that when $\sigma\to 0$, the SL0 norm tends to be equivalent to the ℓ0 norm. Therefore, we can find the minimal ℓ0 norm solution by minimizing $\|s\|_{\mathrm{SL0}}$ (subject to the data constraint) with a very small value of σ. As can be seen, the value of σ determines the smoothness of $F_{\sigma}$: the larger σ is, the smoother $F_{\sigma}$ is, resulting in a worse approximation to the ℓ0 norm; and the smaller σ is, the closer $F_{\sigma}$ comes to the ℓ0 norm.
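These definitions can be checked numerically. The following Python/NumPy sketch (function names are illustrative, not from the paper) shows how the SL0 norm approaches the exact ℓ0 count as σ shrinks:

```python
import numpy as np

def f_sigma(x, sigma):
    # Gaussian approximation of the ell-0 indicator: ~1 near x = 0, ~0 for |x| >> sigma
    return np.exp(-x**2 / (2.0 * sigma**2))

def sl0_norm(s, sigma):
    # Smoothed ell-0 norm: n - F_sigma(s), approaching the exact ell-0 norm as sigma -> 0
    return s.size - np.sum(f_sigma(s, sigma))

s = np.array([0.0, 0.0, 3.0, -1.5, 0.0])   # exact ell-0 norm is 2
print(sl0_norm(s, 1.0))    # large sigma: smooth but loose approximation
print(sl0_norm(s, 0.01))   # small sigma: -> 2.0, matching the exact ell-0 count
```

With σ = 1 the two nonzeros are only partially counted, while σ = 0.01 recovers the exact count of 2, which mirrors the limit in (2).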
Now, we recall the total variation (TV) norm of a 2-dimensional image $u$, defined as the ℓ1 norm of the magnitudes of the discrete gradient:

$$\|u\|_{\mathrm{TV}}=\sum_{s,t}\sqrt{\left(u_{s,t}-u_{s-1,t}\right)^{2}+\left(u_{s,t}-u_{s,t-1}\right)^{2}}, \tag{5}$$

where $u$ denotes the attenuation coefficients to be reconstructed. If we use the proposed SL0 norm to enhance the sparsity of the image gradient, superior reconstruction behavior may be achieved. Therefore, to reconstruct the discrete X-ray linear attenuation coefficients, we consider the following constrained optimization problem:

$$\min_{u}\ \|\nabla u\|_{\mathrm{SL0}}\quad\text{s.t.}\quad\|Au-g\|_{2}\le\varepsilon,\quad u\ge 0, \tag{6}$$

where $A$ is the system matrix used to model the CT imaging system; $g$ is the log-transformed projection measurement; $\varepsilon$ is the tolerance used to enforce the data-fidelity constraint, and it accounts for X-ray scatter, electronic noise, scanned materials, and a simplified data model. Sidky and Pan [11] have indicated that the best image root-squared-error is achieved when the chosen $\varepsilon$ is close to the actual error in the projection data. In practice, the real noise level of a system is usually unknown; therefore, the optimal value of $\varepsilon$ is selected as the one that yields a reconstructed image with fewer artifacts and clearer anatomical structures.
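As a small illustration of the TV definition, the following Python/NumPy sketch (forward differences with a replicated boundary; names are hypothetical, not from the paper) computes the discrete gradient magnitudes and the TV norm of a piecewise-constant image, whose gradient is sparse:

```python
import numpy as np

def gradient_magnitude(u):
    # Forward-difference discrete gradient magnitudes of a 2-D image
    dx = np.diff(u, axis=0, prepend=u[:1, :])   # vertical differences
    dy = np.diff(u, axis=1, prepend=u[:, :1])   # horizontal differences
    return np.sqrt(dx**2 + dy**2)

def tv_norm(u):
    # Total variation: ell-1 norm of the gradient magnitudes
    return np.sum(gradient_magnitude(u))

u = np.zeros((4, 4))
u[1:3, 1:3] = 1.0          # piecewise-constant square: only edge pixels contribute
print(tv_norm(u))
```

Only the pixels on the edges of the square contribute to the sum, which is exactly the sparsity of the image gradient that both TV and the proposed SL0 regularization exploit.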
2.2. Optimization Algorithm
In order to find the optimal solution of the proposed minimization problem, we assess the optimality of solutions by analyzing the Karush-Kuhn-Tucker (KKT) conditions of (6) [28], which are the necessary conditions for optimality in nonlinear programming and can be derived through Lagrangian theory:

$$L(u,\lambda,\nu)=\|\nabla u\|_{\mathrm{SL0}}+\lambda\left(\|Au-g\|_{2}^{2}-\varepsilon^{2}\right)-\nu^{T}u,$$

and the partial derivative of the above Lagrangian function can be expressed as

$$\nabla_{u}L=\nabla_{u}\|\nabla u\|_{\mathrm{SL0}}+2\lambda A^{T}\left(Au-g\right)-\nu=0,$$

where the complementary slackness is

$$\lambda\left(\|Au-g\|_{2}^{2}-\varepsilon^{2}\right)=0,\qquad \nu_{i}u_{i}=0,$$

and the nonnegativity is

$$\lambda\ge 0,\qquad \nu\ge 0.$$

In conclusion, the optimal solution must first satisfy the projection-data-fidelity constraint, with the corresponding multiplier satisfying $\lambda\ge 0$; meanwhile, for the nonzero components of $u$ that we intend to acquire, the corresponding multipliers must satisfy $\nu_{i}=0$. To obtain solutions meeting the above conditions, we need to solve the constrained problem (6).
Sidky and Pan [11] present an optimization approach composed of an iterative projection operator called projection-onto-convex-sets (POCS) and an adaptive steepest-descent procedure, which is suitable for dealing with large constrained optimization problems. A similar strategy is applied here. We choose POCS as the iterative operator, an efficient iterative algorithm that can find images satisfying the given convex constraints. POCS combines the ART technique with image nonnegativity enforcement, and the proposed SL0 regularization is minimized via an iterative gradient descent of the cost function. The images are updated sequentially by alternating POCS and gradient descent until the Karush-Kuhn-Tucker (KKT) conditions are satisfied. In practice, to reduce computation time, we relax the KKT conditions or stop after a predefined number of iterations. For the current version of the proposed reconstruction algorithm, there is no rigorous theoretical proof of the convergence of the optimization procedure; however, the reconstructed results in the following experiments show that they are actually close to the optimal solution.
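The alternation described above can be sketched as follows, here on a tiny dense toy system in Python/NumPy, with the SL0 shrinkage applied directly to the coefficients rather than to the image gradient for brevity; the matrix, sizes, and annealing schedule are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def art_sweep(u, A, g, lam=1.0):
    # One full ART (Kaczmarz) sweep toward the projection data,
    # followed by the nonnegativity projection of the POCS step
    for i in range(A.shape[0]):
        a = A[i]
        u = u + lam * (g[i] - a @ u) / (a @ a) * a
    return np.maximum(u, 0.0)

def sl0_step(u, sigma, mu=0.2):
    # SL0 steepest-descent step u <- u - mu*sigma^2*grad(n - F_sigma),
    # which simplifies to shrinking entries that are small relative to sigma
    return u - mu * u * np.exp(-u**2 / (2.0 * sigma**2))

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 10))            # toy "system matrix"
x_true = np.zeros(10); x_true[[2, 7]] = [1.0, 0.5]
g = A @ x_true                              # noiseless "projections"

u, sigma = np.zeros(10), 1.0
for _ in range(200):
    u = art_sweep(u, A, g)                  # data fidelity + nonnegativity (POCS)
    u = sl0_step(u, sigma)                  # sparsity-promoting descent
    sigma *= 0.98                           # gradually reduce sigma toward 0
```

After the loop, `u` is nonnegative, consistent with the data, and concentrated on few entries, which is the qualitative behavior the alternation is designed to produce.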
2.3. Parameters Selection
The implementation of the proposed SL0 regularization algorithm involves the choice of a series of parameters, shown in Figure 1. The regularization parameter σ plays a crucial role in improving reconstruction quality. If we take a small value of σ, the function $F_{\sigma}$ is highly nonsmooth and contains many local minima, so finding its minimum is not easy. As σ increases, $F_{\sigma}$ becomes smoother with fewer local minima, and hence it is easier to minimize. In general, if we use a larger value of σ during the whole iterative process, smoother reconstruction results can be achieved but the tissue details are worse. On the other hand, if we use a smaller value of σ during the whole iterative process, the optimization may get trapped in a local minimum, which leads to artifacts and noisy reconstructions. Hence, our idea is to solve a sequence of optimization problems. At the first step, we solve (6) using a larger value $\sigma_{0}$. Subsequently, we reduce σ by multiplying it by a small factor ρ and solve (6) again with $\sigma_{k+1}=\rho\sigma_{k}$, initialized with the reconstruction acquired in the previous iteration. Because σ decreases gradually, for each value of σ the minimization algorithm starts from an initial solution close to the previous optimal value (both σ and $F_{\sigma}$ have varied only slightly, so the minimizer of the new $F_{\sigma}$ is close to the previous one). Hence the optimization algorithm is able to escape local minima and reach the true minimum for the small σ values, which yields an approximate ℓ0 norm solution. In our tests, we select $\sigma_{0}=0.7$ and $\rho=0.9$ for all cases studied in this work; the factor ρ should satisfy $0<\rho<1$.
The parameters that control ART and the steepest gradient descent of the objective function include the ART relaxation factor, which starts at 1.0 and slowly decreases toward 0 as the iterations progress, and the steepest gradient descent relaxation factor, which starts at 0.2 and slowly decreases toward 0 as the iterations progress. The decreasing factors applied to these two relaxation parameters are the keys to controlling the respective step lengths for ART and the SL0 steepest descent; in the following experiments, their values were fixed empirically. The stopping criterion is reached when the data-fidelity constraint is satisfied, or the iterative process is stopped after a predefined maximum number of iterations. In this paper, the maximum number of POCS iterations is set to 30 and the maximum number of SL0 steepest descent iterations is set to 20.
The above values were determined from experimental results, and we do not guarantee that they are optimal. However, the test results below demonstrate that these parameters are satisfactory.
3. Experiments and Results
3.1. Data Acquisition
In order to characterize the superiority of the proposed SL0 regularization, we first study the performance of the proposed constrained optimization using the Shepp-Logan phantom and a human head slice image. The Shepp-Logan phantom consists of several ellipses standing for various anatomical tissues (see Figure 2(a)). The phantom was forward-projected by MATLAB's radon routine with 720 projections over a full 360° rotation, yielding an angular spacing of 0.5°. The second sample dataset was a human head slice obtained from a clinical diagnostic CT device in our cooperative hospital (see Figure 2(b)). The projection data were generated according to a fan-beam CT geometry with the following parameters: the source-to-axis distance was 42.5 cm and the source-to-detector distance was 82.1 cm. The projection data of each view comprised 874 bins, each of size 0.5 mm × 0.5 mm, and a total of 720 views were simulated over a full rotation. The images to be reconstructed were composed of 512 × 512 pixels of 0.4 mm × 0.4 mm. Furthermore, in order to evaluate the performance under noisy projection data, we simulated the noisy measurements according to the following model [29, 30]:

$$y_{i}=\mathrm{Poisson}\left\{I_{0}\exp\left(-[A\mu]_{i}\right)\right\}+\mathrm{Normal}\left(0,\sigma_{e}^{2}\right),$$

where $y_{i}$ was the measured X-ray intensity in bin $i$ and $I_{0}$ was the incident intensity; $\mu$ was the energy-dependent attenuation map, and $\sigma_{e}^{2}$ was the background electronic noise variance. In the simulation, fixed values of $I_{0}$ and $\sigma_{e}^{2}$ were selected. A monochromatic spectrum was assumed and the photon energy was set to 80 keV. The noisy projection data were then obtained via a logarithmic transform.
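A hedged Python/NumPy sketch of this noise model follows; the incident intensity and electronic-noise level below are illustrative assumptions, since the paper's values are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
I0 = 1.0e5       # assumed incident photon count (illustrative, not the paper's value)
sigma_e = 3.0    # assumed electronic-noise standard deviation (also illustrative)

line_integrals = np.array([0.5, 1.0, 2.0, 4.0])    # A*mu along four rays
mean_counts = I0 * np.exp(-line_integrals)          # Beer-Lambert attenuation
measured = rng.poisson(mean_counts) + rng.normal(0.0, sigma_e, size=4)
# Logarithmic transform back to (noisy) line integrals; clip guards against
# nonpositive counts caused by the additive electronic noise
projections = -np.log(np.clip(measured, 1.0, None) / I0)
print(np.round(projections, 3))
```

With a high incident intensity the log-transformed measurements stay close to the true line integrals; as the counts drop (larger attenuation), the relative Poisson noise grows, which is exactly the low-dose regime the paper targets.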
In the second study, we evaluate the performance using two actual datasets from scanned-mouse experiments in our lab. The X-ray tube voltage and tube current were set to 50 kV and 1 mA, respectively. The projection data were acquired in fan-beam mode. The distance between the detector and the center of rotation was 436.6 mm, while the source-to-axis distance was set to 221.9 mm. A total of 360 projections were acquired over a full rotation. The number of radial bins per view was 880, and the size of each bin was 0.15 × 0.15 mm². The reconstructed image size was 512 × 512 with an isotropic pixel size of 95.7 μm.
3.2. Results
We first start our evaluation with the Shepp-Logan phantom dataset, where the ground-truth image is available. The reconstructed images are shown in Figure 3, where (a), (b), and (c) correspond to FBP, TV regularization, and SL0 regularization, respectively. FBP is applied to the entire projection data, whereas only 120 views (equally spaced over the full rotation) are selected for TV regularization and SL0 regularization. As can be seen in Figures 3(a)-3(c), no significant difference between the reconstructions can be observed. To highlight the differences among the reconstructed results, the differences between the reconstructed images and the original image (OI) of the Shepp-Logan phantom are calculated. We can see in Figures 3(d)-3(f) that the proposed SL0 regularization algorithm leads to the best image quality, with effectively preserved margin details.
For the head slice dataset, the reconstructed images are shown in Figure 4 for all three reconstruction methods. All 720 views are used for FBP reconstruction, and only 180 of them are used for TV and SL0 regularization reconstruction. Figures 4(a)-4(c) illustrate the results reconstructed by FBP, TV, and SL0 using noiseless projections. Compared to the head slice sample, FBP reconstruction produces obvious image artifacts, whereas TV and SL0 reconstructions reflect the sample image well even with apparently undersampled measurements. Figures 4(d)-4(f) show the results reconstructed by FBP, TV, and SL0 using simulated noisy projections. When compared to the head slice sample, FBP and TV reconstructions introduce significant artifacts and the images appear very noisy. In this case, SL0 is superior to FBP and TV, with vastly suppressed artifacts and better-preserved image structures. Furthermore, we also compute the difference between each reconstructed image and the original image (OI) of the human head slice; the results are illustrated in Figure 5. It can be observed from Figure 5 that SL0 produces smaller differences between the reconstructed images and the reference image than FBP and TV do, which agrees with the observations from Figure 4.
To further quantify the performance of the proposed SL0 method against the FBP and TV methods, two criteria are used to evaluate the reconstructed images. One is the normalized mean absolute deviation (NMAD), defined as

$$\mathrm{NMAD}=\frac{\sum_{i}\left|\mu_{i}-\mu_{i}^{*}\right|}{\sum_{i}\left|\mu_{i}^{*}\right|},$$

and the other is the signal-to-noise ratio (SNR), defined as

$$\mathrm{SNR}=10\log_{10}\frac{\sum_{i}\left(\mu_{i}^{*}\right)^{2}}{\sum_{i}\left(\mu_{i}-\mu_{i}^{*}\right)^{2}},$$

where $\mu$ and $\mu^{*}$ denote the reconstructed image and the reference image, respectively.
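Under these definitions, the two criteria can be sketched in Python/NumPy as follows (function names are illustrative, not from the paper):

```python
import numpy as np

def nmad(mu, mu_ref):
    # Normalized mean absolute deviation with respect to the reference image
    return np.sum(np.abs(mu - mu_ref)) / np.sum(np.abs(mu_ref))

def snr_db(mu, mu_ref):
    # Signal-to-noise ratio (dB) between a reconstruction and the reference
    return 10.0 * np.log10(np.sum(mu_ref**2) / np.sum((mu - mu_ref)**2))

ref = np.array([1.0, 2.0, 3.0, 4.0])
rec = ref + 0.1                      # reconstruction with a uniform offset
print(nmad(rec, ref), snr_db(rec, ref))
```

A lower NMAD and a higher SNR both indicate a reconstruction closer to the reference, which is the direction reported for SL0 in Table 1.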
The values of the two criteria are presented in Table 1. Among the three algorithms, FBP produces the worst results, with the highest NMADs and lowest SNRs. In the Shepp-Logan phantom experiments, both TV and SL0 achieve superior performance with very small NMADs, indicating that the reconstructions are comparatively close to the ground truth. In the head slice image experiments, the quality of all reconstructions is degraded by the simulated Poisson noise. However, in comparison to FBP and TV, SL0 produces the best results in all situations, consistent with the observations in Figures 3, 4, and 5.

Finally, in Figure 6, we present the reconstructed results for the scanned mouse data. The whole projection dataset is used for FBP reconstruction, and only half of the views are used for TV and SL0 regularization reconstruction. The reconstructed images are shown in Figure 6 for all three reconstruction algorithms. A small region of interest is highlighted with a magnification factor of 2, and the zoomed image of this region is shown in the corresponding upper-right corner. As can be seen, severe noise can be observed in the FBP results, and the images appear blurry near edge details. Compared to FBP, better-preserved soft-tissue edges and an obviously reduced noise level can be observed in the TV results. We can see in Figure 6 that the proposed SL0 method leads to significantly improved image quality, with effective noise suppression and tissue-structure preservation in comparison to FBP and TV.
4. Discussion
In this paper, we propose a smoothed ℓ0 norm optimization algorithm that exploits the gradient sparseness for low-dose CT imaging. The results demonstrate that the proposed method can effectively reduce noise and produce significantly improved images. Compared to the TV regularization method, it is advantageous in terms of improved tissue-edge properties, as well as lower levels of artifacts and image noise. Approximating the ℓ0 norm by a family of continuous functions allows us to fully exploit the sparsity assumption imposed on the image gradient (IG) and yields a feasible method for sparse-view CT reconstruction.
The sequentially updated σ values originate from the effort to find a measure that approximates the ℓ0 norm better than the traditional TV regularization method (an ℓ1 norm). By altering the parameter σ, we obtain better control of the IG sparsity, which produces superior anatomical features compared with TV minimization. The regularization parameter σ plays a vital role in improving reconstruction quality. In order to find a better selection, we performed a series of reconstruction experiments with different σ values. As can be seen in Figures 7(a)–7(e), when we take the smallest fixed σ, the cost function behaves most closely to the ℓ0 norm, but the reconstructed image is the worst, with severe artifacts and noise. As σ increases, the reconstructed images improve gradually, with an obviously reduced noise level. In Figures 7(a)–7(e), we can also observe that reconstructions using a single fixed σ value during the whole iterative process cannot adequately suppress artifacts and preserve tissue structures (see the regions indicated by the red circles). To obtain a preferable reconstruction, solving a sequence of minimization problems with an orderly decreased σ is a suitable choice if both artifact and noise suppression and margin-detail preservation are pursued. In the test, we select the initial value of σ as 0.7 and the decreasing factor as 0.9. In Figures 7(a)–7(f), we can clearly observe that the sequential optimization over a decreasing σ leads to the optimal image quality, with effectively suppressed artifacts and significantly improved edge properties. Additionally, we also show line profiles along the marked yellow lines for the fixed-σ scenarios (including σ = 0.5 and σ = 1.0) and the proposed scenario in Figures 8(a) and 8(b). It can be observed from Figure 8 that the proposed selection produces images with fewer artifacts and less noise, which agrees with the observations in Figure 7.
A limitation of the proposed SL0 approach lies in the sparsity assumption on the IG, which is a common issue for all sparsity-driven iterative methods in CT reconstruction. For most numerical or physical phantoms, the reconstructed images are piecewise smooth and the sparsity assumption on the IG is valid. However, this assumption will affect SL0 for human or animal slice reconstruction when the images have only a low level of sparseness in the IG. Fortunately, the parameter σ allows us to conveniently regulate how aggressively sparsity is encouraged relative to TV. Another potential problem is computation time: when a 512 × 512 image is to be reconstructed, the SL0 algorithm takes around 65 s to finish one loop on a 2.67 GHz PC with 4 GB RAM under MATLAB R2011a. There are several ways to improve computational efficiency. One way is to use the conjugate gradient (CG) method to solve the reconstruction problems [28]. The CG algorithm is an improved steepest-descent algorithm, with the descent direction determined by the current descent direction as well as the previous search direction. In addition, the proposed algorithm can be accelerated via GPU-based techniques to fulfill clinical requirements [31].
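As a sketch of the CG idea mentioned above (each new search direction mixes the fresh steepest-descent direction with the previous search direction), here is a minimal conjugate gradient solver for a small symmetric positive-definite system in Python/NumPy; this is illustrative only, not the paper's reconstruction solver:

```python
import numpy as np

def cg(A, b, iters=None):
    # Conjugate gradient for a symmetric positive-definite A: the new search
    # direction d combines the residual (steepest-descent direction) with the
    # previous direction through the factor beta.
    x = np.zeros_like(b)
    r = b - A @ x
    d = r.copy()
    for _ in range(b.size if iters is None else iters):
        if np.dot(r, r) < 1e-24:          # converged; avoid dividing by ~zero
            break
        Ad = A @ d
        alpha = np.dot(r, r) / np.dot(d, Ad)
        x = x + alpha * d
        r_new = r - alpha * Ad
        beta = np.dot(r_new, r_new) / np.dot(r, r)
        d = r_new + beta * d              # mix in the previous search direction
        r = r_new
    return x

M = np.array([[4.0, 1.0], [1.0, 3.0]])
rhs = np.array([1.0, 2.0])
print(cg(M, rhs))    # exact for a 2x2 system within at most 2 iterations
```

In exact arithmetic CG converges in at most as many iterations as unknowns, which is why it is attractive as a faster alternative to plain steepest descent for large reconstruction problems.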
5. Conclusion
In this work, we studied sparse regularization for X-ray low-dose CT imaging using a smoothed ℓ0 norm (SL0) model. We investigated SL0 and compared its results with TV regularization and FBP on a numerical phantom and a clinical head slice, as well as on two real datasets from scanned animal experiments. The results show that the proposed SL0 regularization yields improved reconstructions with better edge preservation and noise suppression than the other two methods. Nevertheless, practical application of the proposed approach still needs further validation on more actual clinical data. In the future, we will focus on addressing the limitations of our research described above. Furthermore, we will try to extend the SL0 regularization to handle other incomplete-data reconstruction problems [32].
Competing Interests
The authors declare that there are no competing interests regarding the publication of this paper.
Acknowledgments
This work was supported in part by the National Natural Science Foundation of China (no. 61201117), the National Program on Key Research and Development Project (no. 2016YFC0104500, no. 2016YFC0104505, no. 2016YFC0103500, and no. 2016YFC0103502), the Natural Science Foundation of Jiangsu Province (no. BK20151232), the Science and Technology Program of Suzhou (no. ZXY2013001), and the Youth Innovation Promotion Association CAS (no. 2014281). The authors are also grateful for the head CT images provided by the PET Center, Huashan Hospital, Fudan University, China.
References
1. A. J. Einstein, M. J. Henzlova, and S. Rajagopalan, "Estimating risk of cancer associated with radiation exposure from 64-slice computed tomography coronary angiography," The Journal of the American Medical Association, vol. 298, no. 3, pp. 317–323, 2007.
2. D. J. Brenner and E. J. Hall, "Computed tomography—an increasing source of radiation exposure," The New England Journal of Medicine, vol. 357, no. 22, pp. 2277–2284, 2007.
3. J. Hsieh, "Adaptive streak artifact reduction in computed tomography resulting from excessive x-ray photon noise," Medical Physics, vol. 25, no. 11, pp. 2139–2147, 1998.
4. I. A. Elbakri and J. A. Fessler, "Statistical image reconstruction for polyenergetic X-ray computed tomography," IEEE Transactions on Medical Imaging, vol. 21, no. 2, pp. 89–99, 2002.
5. C. Zhang, T. Zhang, J. Zheng et al., "A model of regularization parameter determination in low-dose X-ray CT reconstruction based on dictionary learning," Computational and Mathematical Methods in Medicine, vol. 2015, Article ID 831790, 12 pages, 2015.
6. Q. Xu, H. Y. Yu, X. Q. Mou, L. Zhang, J. Hsieh, and G. Wang, "Low-dose X-ray CT reconstruction via dictionary learning," IEEE Transactions on Medical Imaging, vol. 31, no. 9, pp. 1682–1697, 2012.
7. C. O. Schirra, E. Roessl, T. Koehler et al., "Statistical reconstruction of material decomposed data in spectral CT," IEEE Transactions on Medical Imaging, vol. 32, no. 7, pp. 1249–1257, 2013.
8. G.-H. Chen and Y. Li, "Synchronized multiartifact reduction with tomographic reconstruction (SMART-RECON): a statistical model based iterative image reconstruction method to eliminate limited-view artifacts and to mitigate the temporal-average artifacts in time-resolved CT," Medical Physics, vol. 42, no. 8, pp. 4698–4707, 2015.
9. J. H. Cho and J. A. Fessler, "Regularization designs for uniform spatial resolution and noise properties in statistical image reconstruction for 3D X-ray CT," IEEE Transactions on Medical Imaging, vol. 34, no. 2, pp. 678–689, 2015.
10. E. J. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
11. E. Y. Sidky and X. Pan, "Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization," Physics in Medicine and Biology, vol. 53, no. 17, pp. 4777–4807, 2008.
12. Y. Liu, Z. Liang, J. Ma et al., "Total variation-Stokes strategy for sparse-view X-ray CT image reconstruction," IEEE Transactions on Medical Imaging, vol. 33, no. 3, pp. 749–763, 2014.
13. M. Li, J. Zheng, S. Zhou, G. Yuan, and Z. Wu, "A constrained optimization reconstruction model for X-ray computed tomography metal artifact suppression," Journal of Medical Imaging and Health Informatics, vol. 5, no. 7, pp. 1543–1547, 2015.
14. J. Zhang, Y. Chen, Y. Hu et al., "Gamma regularization based reconstruction for low dose CT," Physics in Medicine and Biology, vol. 60, no. 17, pp. 6901–6921, 2015.
15. M. Lustig, D. Donoho, and J. M. Pauly, "Sparse MRI: the application of compressed sensing for rapid MR imaging," Magnetic Resonance in Medicine, vol. 58, no. 6, pp. 1182–1195, 2007.
16. L. Fang, S. Li, R. P. McNabb et al., "Fast acquisition and reconstruction of optical coherence tomography images via sparse representation," IEEE Transactions on Medical Imaging, vol. 32, no. 11, pp. 2034–2049, 2013.
17. D. Zhu and C. Li, "Nonconvex regularizations in fluorescence molecular tomography for sparsity enhancement," Physics in Medicine and Biology, vol. 59, no. 12, pp. 2901–2912, 2014.
18. M. Wieczorek, J. Frikel, J. Vogel et al., "X-ray computed tomography using curvelet sparse regularization," Medical Physics, vol. 42, no. 4, pp. 1555–1565, 2015.
19. S. Niu, Y. Gao, Z. Bian et al., "Sparse-view x-ray CT reconstruction via total generalized variation regularization," Physics in Medicine and Biology, vol. 59, no. 12, pp. 2997–3017, 2014.
20. H. Zhang, L. Zhang, Y. Sun, and J. Zhang, "Projection domain denoising method based on dictionary learning for low-dose CT image reconstruction," Journal of X-Ray Science and Technology, vol. 23, no. 5, pp. 567–578, 2015.
21. M. Debatin and J. Hesser, "Accurate low-dose iterative CT reconstruction from few projections by Generalized Anisotropic Total Variation minimization for industrial CT," Journal of X-Ray Science and Technology, vol. 23, no. 6, pp. 701–726, 2015.
22. M. Ertas, I. Yildirim, M. Kamasak, and A. Akan, "Iterative image reconstruction using nonlocal means with total variation from insufficient projection data," Journal of X-Ray Science and Technology, vol. 24, no. 1, pp. 1–8, 2016.
23. C. Zhang, T. Zhang, M. Li, C. Peng, Z. Liu, and J. Zheng, "Low-dose CT reconstruction via L1 dictionary learning regularization using iteratively reweighted least-squares," BioMedical Engineering OnLine, vol. 15, no. 1, pp. 1–21, 2016.
24. S. Hong, Z. Quan, L. Yi et al., "Low-dose CT statistical iterative reconstruction via modified MRF regularization," Computer Methods and Programs in Biomedicine, vol. 123, pp. 129–141, 2016.
25. E. Y. Sidky, Y. Duchin, X. Pan, and C. Ullberg, "A constrained, total-variation minimization algorithm for low-intensity x-ray CT," Medical Physics, vol. 38, supplement 1, pp. S117–S125, 2011.
26. H. Mohimani, M. Babaie-Zadeh, and C. Jutten, "A fast approach for overcomplete sparse decomposition based on smoothed ℓ0 norm," IEEE Transactions on Signal Processing, vol. 57, no. 1, pp. 289–301, 2009.
27. M. M. Hyder and K. Mahata, "An improved smoothed ℓ0 approximation algorithm for sparse representation," IEEE Transactions on Signal Processing, vol. 58, no. 4, pp. 2194–2205, 2010.
28. S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
29. X. Zhang, J. Wang, and L. Xing, "Metal artifact reduction in x-ray computed tomography (CT) by constrained optimization," Medical Physics, vol. 38, no. 2, pp. 701–711, 2011.
30. M. Li, J. Zheng, T. Zhang, Y. Guan, P. Xu, and M. Sun, "A prior-based metal artifact reduction algorithm for X-ray CT," Journal of X-Ray Science and Technology, vol. 23, no. 2, pp. 229–241, 2015.
31. G. Pratx and L. Xing, "GPU computing in medical physics: a review," Medical Physics, vol. 38, no. 5, pp. 2685–2697, 2011.
32. X. Zhang and L. Xing, "Sequentially reweighted TV minimization for CT metal artifact reduction," Medical Physics, vol. 40, no. 7, Article ID 071907, 2013.
Copyright
Copyright © 2016 Ming Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.