Research Article  Open Access
Xiezhang Li, Guocan Feng, Jiehua Zhu, "An ℓ1 Norm and ℓ0 Norm Regularization Algorithm for CT Image Reconstruction from Limited Projection", International Journal of Biomedical Imaging, vol. 2020, Article ID 8873865, 6 pages, 2020. https://doi.org/10.1155/2020/8873865
An ℓ1 Norm and ℓ0 Norm Regularization Algorithm for CT Image Reconstruction from Limited Projection
Abstract
The ℓ0 norm regularization has attracted attention for image reconstruction in computed tomography. The ℓ0 norm of the gradients of an image provides a measure of the sparsity of the gradients of the image. In this paper, we present a new combined ℓ1 norm and ℓ0 norm regularization model for image reconstruction from limited projection data in computed tomography. We also propose an algorithm in the algebraic framework to solve the optimization effectively, using the nonmonotone alternating direction algorithm with a hard thresholding method. Numerical experiments indicate that the new algorithm brings a marked improvement by involving the ℓ0 norm regularization.
1. Introduction
Computed tomography (CT) has been widely applied in medical imaging and industry for decades. In some applications, either a high-quality image is not required, or complete projection data for exact reconstruction are not available because of the scanning conditions or radiation dose concerns. CT reconstruction from limited projection data is therefore of particular importance, and statistical and iterative reconstruction algorithms outperform the analytic approaches in these applications.
The theory of compressed sensing [1, 2] has provided a novel framework for CT image reconstruction from limited projection data, exploiting the prior knowledge that CT images have sparse gradients. The total variation (TV) based regularization [3] for CT image reconstruction is connected to compressed sensing under the sparse transform of the gradient operator [4, 5]. The TV of an image is the ℓ1 norm of the gradients of the image, so TV minimization is also known as ℓ1 minimization.
The ℓ1-norm-based regularization can greatly reduce the streak artifacts arising from few-view CT [6]. However, the results are oversmoothed and lose some detailed features, including contrast, which causes blurred edges [7]. In addition, the ℓ1 norm regularization does not provide a sufficiently sparse representation. To overcome these drawbacks, many improved TV-based reconstruction algorithms have been developed. One of them is the reweighted ℓ1 minimization [8]. Algorithms for regularization involving multiple norms have also been developed. For example, the alternating direction method (ADM) [9, 10] and the nonmonotone alternating direction algorithm (NADA) [11, 12] were proposed for solving an ℓ1 norm minimization with an ℓ2 norm constraint.
The ℓ0 norm of an image vector, defined as the number of its nonzero elements, measures the sparsity of a vector appropriately. Thus, ℓ0-norm-based regularization for CT represents the sparsity of the gradient image better than ℓ1-norm-based regularization and preserves edges while suppressing the streak artifacts [13]. However, the ℓ0 norm is a nonconvex function, and the ℓ0 norm regularization problem is NP-hard [14]. Hence, many variants of ℓ0 regularization algorithms have been developed. An edge-preserving image reconstruction method for limited-angle CT was investigated based on an ℓ0-regularized gradient prior [15]. An imaging reconstruction model for few-view CT based on ℓ0 sparse regularization was proposed [16]. A new ℓ0 regularization with wavelet tight framelets was addressed to suppress the slope artifacts in limited-angle X-ray CT reconstruction [17]. Regularization involving multiple norms has also been developed to address the individual drawbacks of the ℓ0 norm and the ℓ1 norm. An image reconstruction model based on ℓ0 norm and ℓ2 norm regularization for limited-angle CT was proposed [18]; numerical experiments indicate that the algorithm has an advantage in suppressing slope artifacts. A combined smoothed ℓ0 norm and ℓ1 norm regularization algorithm using the NADA method for CT image reconstruction was proposed and demonstrated to perform better than the ℓ1 norm regularization [19].
In this paper, we develop a new combined ℓ1 norm and ℓ0 norm regularization model for CT image reconstruction from limited projection data. The main contributions of this paper are as follows. (1) We generalize the ℓ1 norm regularization to a combined ℓ1 norm and ℓ0 norm regularization for CT image reconstruction from limited projection data. The proposed model can achieve superior performance compared with the existing related methods for multiple reasons. Firstly, due to the ℓ1 norm regularization term, the proposed model can reduce streak artifacts in limited-data CT. Secondly, due to the ℓ0 norm regularization term, the oversmoothing caused by the ℓ1 norm regularization term is alleviated; the proposed model is less smoothing, thus better edge-preserving, and provides a better sparsity representation. Thirdly, since the ℓ0 norm is adopted directly, without using the smoothed ℓ0 norm approximation, the proposed model has an advantage in suppressing slope artifacts. (2) We propose an algorithm in the algebraic framework to solve the optimization effectively, using the nonmonotone alternating direction algorithm with hard thresholding (NADA-HT) method.
The rest of the paper is organized as follows. In Section 2, we present a combined ℓ1 norm and ℓ0 norm based regularization model and propose an algorithm for CT image reconstruction from limited projection data. Numerical experiments comparing the regularization models with and without the ℓ0 norm, together with a discussion, are given in Section 3.
2. Materials and Methods
The projection data in CT can be modelled as a linear system

b = Ax + n, (1)

where A is an M × N projection matrix, x ∈ R^N represents a 2D image to be reconstructed, n is an additive noise with ‖n‖₂ ≤ σ for some known σ > 0, and b is the noisy projection data. For limited-data reconstruction, the underdetermined system (M < N) has infinitely many solutions. An optimal solution representing the original image as well as possible can be sought by minimizing the energy function with a TV regularization term [8]:

min_x (1/2)‖Ax − b‖₂² + λ TV(x), (2)

where the total variation is the ℓ1 norm of the magnitude of the discrete gradients,

TV(x) = ‖∇x‖₁ = Σ_i √((D_h x)_i² + (D_v x)_i²), (3)

with D_h and D_v denoting the horizontal and vertical forward-difference operators.
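As a concrete illustration (the paper's experiments use MATLAB; this NumPy sketch and its function names are ours), the discrete TV of a 2D image can be computed with forward differences:

```python
import numpy as np

def total_variation(x):
    """Isotropic TV: the l1 norm of the per-pixel gradient magnitudes.

    Uses forward differences; the last row/column is padded by
    replication, an illustrative boundary choice (the text does not
    fix one here).
    """
    x = np.asarray(x, dtype=float)
    dh = np.diff(x, axis=1, append=x[:, -1:])  # horizontal forward difference
    dv = np.diff(x, axis=0, append=x[-1:, :])  # vertical forward difference
    return float(np.sqrt(dh**2 + dv**2).sum())
```

A constant image has TV equal to 0, while a single unit step across r rows contributes r, which is why piecewise-constant phantoms are favoured by this regularizer.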
Then, a 2D image with sparse gradients and noisy measurements in CT can be reconstructed by solving the following ℓ1 norm minimization with an ℓ2 norm constraint:

min_x TV(x) subject to ‖Ax − b‖₂ ≤ σ. (4)
However, the image reconstructed by solving (4) has slope artifacts near the edges of the object for limited-angle projection data [20].
We consider the following minimization model:

min_x ‖∇x‖₁ + α‖∇x‖₀ + (μ/2)‖x‖₂² subject to ‖Ax − b‖₂ ≤ σ, (5)

where ‖∇x‖₀ stands for the number of nonzero gradients ∇_i x, i = 1, …, N. The term (μ/2)‖x‖₂², μ > 0, forces the energy of the reconstructed image to be small and reduces the ill-posedness of CT reconstruction [18].
For convenience, we denote by ∇_i x the forward difference of x at pixel i in both horizontal and vertical directions, i.e.,

∇_i x = ((D_h x)_i, (D_v x)_i).

Then, ∇_i x ∈ R² for i = 1, …, N, and ‖∇x‖₁ = Σ_i ‖∇_i x‖₂.
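In NumPy terms (an illustration with our own names, mirroring the per-pixel gradient pairs just defined), the gradient pairs and the ℓ0 count of pixels with nonzero gradient can be formed as:

```python
import numpy as np

def gradient_pairs(x):
    """Stack the horizontal and vertical forward differences per pixel.

    Returns an array of shape (H, W, 2); the boundary is handled by
    replication (an illustrative choice).
    """
    x = np.asarray(x, dtype=float)
    dh = np.diff(x, axis=1, append=x[:, -1:])
    dv = np.diff(x, axis=0, append=x[-1:, :])
    return np.stack([dh, dv], axis=-1)

def gradient_l0(x):
    """Number of pixels whose gradient pair is nonzero: the sparsity
    measure used by the l0 regularization term."""
    g = gradient_pairs(x)
    return int(np.count_nonzero(np.abs(g).sum(axis=-1)))
```

For a piecewise-constant image, `gradient_l0` is small even when the image itself has many nonzero pixels, which is exactly the sparsity the model exploits.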
Minimization (5) is equivalent to

min_x Σ_i ( ‖∇_i x‖₂ + α|∇_i x|₀ ) + (μ/2)‖x‖₂² subject to ‖Ax − b‖₂ ≤ σ, (6)

where the notation |t|₀ is set to be

|t|₀ = 1 for nonzero t, and |t|₀ = 0 otherwise. (7)

The above minimization can be converted, with a parameter β > 0 and auxiliary vectors d_i, to

min_{x, d} Σ_i ( ‖∇_i x‖₂ + α|d_i|₀ + (β/2)‖d_i − ∇_i x‖₂² ) + (μ/2)‖x‖₂² subject to ‖Ax − b‖₂ ≤ σ. (8)
Adopting Lagrangian multiplier vectors ν_i, i = 1, …, N, we convert Minimization (8) into the following problem using the augmented Lagrangian method:

min_{x, d} Σ_i ( ‖∇_i x‖₂ + α|d_i|₀ + (β/2)‖d_i − ∇_i x − ν_i‖₂² ) + (μ/2)‖x‖₂² subject to ‖Ax − b‖₂ ≤ σ. (9)
For simplicity, we write d, ∇x, and ν as vectors whose ith components are d_i, ∇_i x, and ν_i, i = 1, …, N, respectively. Then, Σ_i ‖d_i − ∇_i x − ν_i‖₂² = ‖d − ∇x − ν‖₂². Thus, Minimization (9) becomes

min_{x, d} ‖∇x‖₁ + α‖d‖₀ + (β/2)‖d − ∇x − ν‖₂² + (μ/2)‖x‖₂² subject to ‖Ax − b‖₂ ≤ σ. (10)
Next, we will develop an algorithm for solving Minimization (10). By convention, Minimization (10) can be solved by the ADM iteratively. The kth step of the ADM involves three procedures:

x^(k+1) = argmin_x ‖∇x‖₁ + (β/2)‖d^(k) − ∇x − ν^(k)‖₂² + (μ/2)‖x‖₂² subject to ‖Ax − b‖₂ ≤ σ, (11)

d^(k+1) = argmin_d α‖d‖₀ + (β/2)‖d − ∇x^(k+1) − ν^(k)‖₂², (12)

ν^(k+1) = ν^(k) + ∇x^(k+1) − d^(k+1). (13)
First, Minimization (11) can be solved by the ADM with a gradient descent method involving a nonmonotone line search, for example, the NADA.
Next, we seek a solution of (12). Since (12) decouples across pixels, Minimization (12) is equivalent to finding, for each i,

d_i^(k+1) = argmin_{d ∈ R²} α|d|₀ + (β/2)‖d − v_i‖₂², where v_i = ∇_i x^(k+1) + ν_i^(k). (14)
We introduce a hard thresholding (HT) operator H_t on R² with the threshold t > 0, defined as

H_t(v) = v if ‖v‖₂ > t, and H_t(v) = 0 otherwise.
A solution is provided in the following theorem.
Theorem 1. Let α, β > 0 and t = √(2α/β). If ‖v‖₂ ≠ t, then a minimizer for Minimization (14) is given by d* = H_t(v).
Proof. For simplicity, denote t = √(2α/β). The objective function in Minimization (14) is written as

f(d) = α|d|₀ + (β/2)‖d − v‖₂².

Without loss of generality, we assume that v ≠ 0, since in the case of v = 0 the function f is clearly minimized by d = 0.

For d = 0, f(0) = (β/2)‖v‖₂².

For d ≠ 0, f(d) = α + (β/2)‖d − v‖₂². The minimizer is derived in the following two cases. (i) Case 1: d ≠ 0.

We claim that min_{d ≠ 0} f(d) = α, attained at d = v. In fact, it follows from the form of the quadratic term that a minimizer is a scalar multiple of v. Let d = cv, where c ≠ 0 due to the restriction of the minimization problem. Then, the function g(c) = α + (β/2)(c − 1)²‖v‖₂² reaches its minimum value α at c = 1. The claim is proved. (ii) Case 2: d = 0.

In this case, |d|₀ = 0, and f(0) = (β/2)‖v‖₂². Thus, the minimizer restricted to this case is d = 0.

Combining the results for d ≠ 0 and d = 0, we conclude that if α < (β/2)‖v‖₂², then the minimizer is d* = v; otherwise, d* = 0.

Next, we look for conditions under which the inequality α < (β/2)‖v‖₂² holds. Writing s = ‖v‖₂, we consider the equation (β/2)s² − α = 0 in the unknown s, whose nonnegative zero is s = √(2α/β) = t. Therefore, for s ≥ 0, if s > t, then (β/2)s² > α and consequently d* = v; if s < t, then (β/2)s² < α and consequently d* = 0.

With the threshold t = √(2α/β), if ‖v‖₂ > t, then d* = v; otherwise, d* = 0. That is, using the HT operator, the minimizer of (14) is d* = H_t(v). We complete the proof of the theorem.
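The thresholding rule can be checked numerically; a minimal sketch, with variable and function names of our own choosing and the threshold √(2α/β) taken from the derivation above:

```python
import numpy as np

def hard_threshold(v, alpha, beta):
    """Minimizer of alpha*|d|_0 + (beta/2)*||d - v||_2^2 over d in R^2.

    Keeping d = v costs alpha; setting d = 0 costs (beta/2)*||v||^2,
    so v survives exactly when ||v||_2 > sqrt(2*alpha/beta).
    """
    v = np.asarray(v, dtype=float)
    t = np.sqrt(2.0 * alpha / beta)
    return v.copy() if np.linalg.norm(v) > t else np.zeros_like(v)
```

With alpha = 2 and beta = 1 the threshold is 2, so a gradient pair of magnitude 3 is kept unchanged while one of magnitude √2 is set to zero.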
Remark. In practice, the condition of Theorem 1 is fulfilled for nonzero v. Now we summarize our algorithm as follows.
 
Algorithm 1. 
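The overall scheme, an x-step handled by gradient descent, a hard-thresholding d-step, and a multiplier update, can be sketched on a toy 1D problem. This is a deliberate simplification, not the paper's Algorithm 1: the data constraint is replaced by a least-squares term, the ℓ1/TV term and the projection matrix are omitted, and all names and parameter values are our own.

```python
import numpy as np

def admm_l0_sketch(b, alpha=0.01, beta=1.0, mu=1e-3, outer=60, inner=10):
    """Toy 1D splitting scheme with a hard-thresholding d-step.

    Approximately solves min_x 0.5*||x - b||^2 + alpha*||Dx||_0
    + (mu/2)*||x||^2, a simplified stand-in for the paper's model.
    """
    n = len(b)
    D = np.eye(n - 1, n, 1) - np.eye(n - 1, n)   # forward differences
    x = np.zeros(n)
    d = np.zeros(n - 1)
    nu = np.zeros(n - 1)
    t = np.sqrt(2.0 * alpha / beta)              # hard threshold
    step = 1.0 / (1.0 + mu + 4.0 * beta)         # <= 1/Lipschitz constant
    for _ in range(outer):
        for _ in range(inner):                   # x-step: gradient descent
            grad = (x - b) + mu * x + beta * (D.T @ (D @ x - d + nu))
            x -= step * grad
        g = D @ x + nu                           # d-step: hard thresholding
        d = np.where(np.abs(g) > t, g, 0.0)
        nu = nu + D @ x - d                      # multiplier update
    return x
```

On a piecewise-constant signal the recovered x stays close to the original, since its single jump survives the threshold while spurious small gradients are zeroed out.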
3. Results and Discussion
In this section, the performance of the proposed algorithm is compared with that of the ℓ1 norm regularization algorithm. Both algorithms are implemented in MATLAB using the NADA and tested with the 2D Shepp-Logan phantom and a cardiac image [21] on an Intel Core i7 3.40 GHz PC. In each test, the same underdetermined system is created, where the projection matrix is randomly generated and additive noise is included. The weight parameter of the ℓ0 norm term is chosen as 0 and 1 for the ℓ1 norm regularization and the proposed algorithm, respectively. The other two important parameters in the objective function are tuned separately for the Shepp-Logan phantom and for the cardiac image.
Experiments are conducted to compare the reconstructions by the two algorithms after the same number of iterations. The original and reconstructed images for the two test images are shown in Figure 1. The images demonstrate that the proposed algorithm yields a much better reconstruction. The quality of the reconstructed images is compared using several commonly used criteria: the relative error, the root-mean-square error (RMSE), the normalized root-mean-square deviation (NRMSD), the normalized mean absolute deviation (NMAD), and the structural similarity index (SSIM). The experimental results from 100 tests are summarized in Table 1. They show that the proposed algorithm produces significantly improved evaluation measurements, by 88%–167% for the Shepp-Logan phantom and 67%–114% for the cardiac image, respectively, while taking less CPU time. Overall, the new algorithm provides better accuracy after the same number of iterations.
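For reference, the first two criteria have standard definitions; the exact normalizations used for NRMSD and NMAD vary between papers, so only the two unambiguous measures are sketched here, in our own NumPy notation:

```python
import numpy as np

def relative_error(x, ref):
    """||x - ref||_2 / ||ref||_2 between a reconstruction and the original."""
    x, ref = np.asarray(x, float), np.asarray(ref, float)
    return float(np.linalg.norm(x - ref) / np.linalg.norm(ref))

def rmse(x, ref):
    """Root-mean-square error between a reconstruction and the original."""
    x, ref = np.asarray(x, float), np.asarray(ref, float)
    return float(np.sqrt(np.mean((x - ref) ** 2)))
```

Both measures are zero for a perfect reconstruction, and the relative error is scale-free, which makes it convenient for comparing convergence curves across test images.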
Figure 2 shows the graph of the relative error versus the number of iterations for the two algorithms. The curves indicate that the proposed algorithm requires far fewer iterations, and less time, to achieve the same accuracy, and thus converges much faster. After a certain number of iterations, the relative error of the proposed algorithm drops sharply, while the relative error of the ℓ1 norm regularization decreases slowly and does not improve much over further iterations.
(a) Shepp-Logan phantom
(b) Cardiac image
The proposed regularization model introduces an ℓ0 norm term in addition to an ℓ1 norm term. The ℓ0 norm of a vector is the number of nonzero elements in the vector; thus, the ℓ0 norm is appropriate for representing sparsity. Meanwhile, the ℓ0 norm term makes the model less smoothing. Numerical experiments demonstrate that, in the proposed model, edges are better preserved and a better sparsity representation is provided. Hence, the proposed algorithm improves both the quality and the efficiency of the reconstruction, even with the extra computation incurred by the added ℓ0 norm term.
The effect of the weight parameter of the ℓ0 norm term is also tested. In our experiments, varying this parameter over a wide range hardly affects the efficiency of the algorithm. Selecting good values of the other parameters in Minimization (10) is a challenging problem and will be investigated further in the future.
High-quality CT image reconstruction from limited projection data is challenging. Learning-based algorithms such as structure-aware sparse Bayesian learning [22] could yield improved performance in reconstructing tomographic images from limited data, since structural prior knowledge is exploited. Enhancing the proposed algorithm in a learning-based framework using prior knowledge will be the topic of future research.
Data Availability
The data used to support the findings of the study are included within the article.
Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this paper.
Acknowledgments
The authors would like to thank the reviewers and editors for their valuable comments and suggestions. It is acknowledged that G. Feng was partially supported by the National Natural Science Foundation of China (NSFC) under grant No. 61673018.
References
[1] E. Candes and M. Wakin, "An introduction to compressive sampling," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21–30, 2008.
[2] C. E. Shannon, "Communication in the presence of noise," Proceedings of the IEEE, vol. 86, no. 2, pp. 447–457, 1998.
[3] L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D, vol. 60, no. 1–4, pp. 259–268, 1992.
[4] E. J. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
[5] D. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[6] H. Yu and G. Wang, "Compressed sensing based interior tomography," Physics in Medicine and Biology, vol. 54, no. 9, pp. 2791–2805, 2009.
[7] Z. Tian, X. Jia, K. Yuan, T. Pan, and S. B. Jiang, "Low-dose CT reconstruction via edge-preserving total variation regularization," Physics in Medicine and Biology, vol. 56, no. 18, pp. 5949–5967, 2011.
[8] E. Candes, M. Wakin, and S. Boyd, "Enhancing sparsity by reweighted ℓ1 minimization," Journal of Fourier Analysis and Applications, vol. 14, no. 5–6, pp. 877–905, 2008.
[9] M. Tao and J. Yang, Alternating Direction Algorithms for Total Variation Deconvolution in Image Reconstruction, Optimization Online, 2009.
[10] J. Yang and Y. Zhang, "Alternating direction algorithms for ℓ1-problems in compressive sensing," SIAM Journal on Scientific Computing, vol. 33, no. 1, pp. 250–278, 2011.
[11] C. Li, W. Yin, H. Jiang, and Y. Zhang, "An efficient augmented Lagrangian method with applications to total variation minimization," Computational Optimization and Applications, vol. 56, no. 3, pp. 507–530, 2013.
[12] C. Li, W. Yin, and Y. Zhang, User's Guide for TVAL3: TV Minimization by Augmented Lagrangian and Alternating Direction Algorithms, CAAM Report, 2010.
[13] W. Yu and L. Zeng, "ℓ0 gradient minimization based image reconstruction of limited angle computed tomography," PLOS ONE, vol. 10, no. 7, 2015.
[14] B. Natarajan, "Sparse approximate solutions to linear systems," SIAM Journal on Computing, vol. 24, no. 2, pp. 227–234, 1995.
[15] W. Yu, C. Wang, and M. Huang, "Edge-preserving reconstruction from sparse projections of limited-angle computed tomography using ℓ0-regularized gradient prior," Review of Scientific Instruments, vol. 88, no. 4, article 043703, 2017.
[16] W. Yu, C. Wang, X. Nie, M. Huang, and L. Wu, "Image reconstruction for few-view computed tomography based on ℓ0 sparse regularization," Procedia Computer Science, vol. 107, pp. 808–813, 2017.
[17] L. Zhang, L. Zeng, and Y. Guo, "ℓ0 regularization based on a prior image incorporated non-local means for limited-angle X-ray CT reconstruction," Journal of X-Ray Science and Technology, vol. 26, no. 3, pp. 481–498, 2018.
[18] C. Wang, L. Zeng, W. Yu, and L. Xu, "Existence and convergence analysis of ℓ0 and ℓ2 regularizations for limited-angle CT reconstruction," Inverse Problems and Imaging, vol. 12, no. 3, pp. 545–572, 2018.
[19] J. Zhu and X. Li, "A smoothed ℓ0 norm and ℓ1 norm regularization algorithm for computed tomography," Journal of Applied Mathematics, vol. 2019, Article ID 8398035, 8 pages, 2019.
[20] L. Zeng, J. Guo, and B. Liu, "Limited-angle cone-beam computed tomography image reconstruction by total variation minimization," Journal of Inverse and Ill-Posed Problems, vol. 21, no. 6, pp. 735–754, 2013.
[21] TEAM RADS, http://teamrads.com/.
[22] S. Liu, Y. Huang, H. Wu, C. Tan, and J. Jia, "Efficient multitask structure-aware sparse Bayesian learning for frequency-difference electrical impedance tomography," IEEE Transactions on Industrial Informatics, vol. 16, 2020.
Copyright
Copyright © 2020 Xiezhang Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.