International Journal of Biomedical Imaging


Research Article | Open Access

Volume 2020 | Article ID 8873865

Xiezhang Li, Guocan Feng, Jiehua Zhu, "An Algorithm of ℓ0-Norm and ℓ1-Norm Regularization Algorithm for CT Image Reconstruction from Limited Projection", International Journal of Biomedical Imaging, vol. 2020, Article ID 8873865, 6 pages, 2020.

An Algorithm of ℓ0-Norm and ℓ1-Norm Regularization Algorithm for CT Image Reconstruction from Limited Projection

Academic Editor: Jyh-Cheng Chen
Received 23 Mar 2020
Revised 01 Jun 2020
Accepted 06 Jul 2020
Published 28 Aug 2020


The ℓ0-norm regularization has attracted attention for image reconstruction in computed tomography. The ℓ0-norm of the gradients of an image provides a measure of the sparsity of the gradients of the image. In this paper, we present a new combined ℓ0-norm and ℓ1-norm regularization model for image reconstruction from limited projection data in computed tomography. We also propose an algorithm in the algebraic framework to solve the optimization effectively using the nonmonotone alternating direction algorithm with the hard thresholding method. Numerical experiments indicate that the new algorithm brings a marked improvement by involving the ℓ0-norm regularization.

1. Introduction

Computed tomography (CT) has been widely applied in medical imaging and industry for decades. In some applications, the reconstruction does not require a high-quality image, or complete projection data for exact reconstruction are not available because of the scanning conditions or radiation dose concerns. CT reconstruction from limited projection data is therefore of particular importance, and statistical and iterative reconstruction algorithms outperform the analytic approaches in these applications.

The theory of compressed sensing [1, 2] has provided a novel framework for CT image reconstruction from limited projection data with the prior knowledge of sparse gradients for CT images. The total variation (TV) based regularization [3] for CT image reconstruction is connected to compressed sensing under the sparse transform of the gradient operator [4, 5]. The TV of an image is the ℓ1-norm of the gradients of the image, so TV minimization is also known as ℓ1-minimization.

The ℓ1-norm-based regularization can greatly reduce the streak artifacts arising from few-view CT [6]. However, the results are oversmoothed and lose some detailed features, including contrast, which causes blurred edges [7]. In addition, the ℓ1-norm regularization does not provide a sufficiently sparse representation. To overcome these drawbacks, many improved TV-based reconstruction algorithms have been developed. One of them is the reweighted ℓ1 minimization [8]. Algorithms for regularization involving multiple norms have also been developed. For example, the alternating direction method (ADM) [9, 10] and the nonmonotone alternating direction algorithm (NADA) [11, 12] were proposed for solving an ℓ1-norm minimization with an ℓ2-norm constraint.

The ℓ0-norm of an image vector, defined as the number of its nonzero elements, measures the sparsity of a vector appropriately. So the ℓ0-norm-based regularization for CT represents the sparsity of the gradient image better than the ℓ1-norm-based regularization and preserves the edges while suppressing the streak artifacts [13]. However, the ℓ0-norm is a nonconvex function, and the ℓ0-norm regularization problem is NP-hard [14]. So many variants of ℓ0 regularization algorithms have been developed. An edge-preserving image reconstruction method for limited-angle CT is investigated based on an ℓ0-regularized gradient prior [15]. An imaging reconstruction model for few-view CT based on ℓ0 sparse regularization is proposed [16]. A new ℓ0 regularization with wavelet tight framelets is addressed to suppress the slope artifacts in limited-angle X-ray CT reconstruction [17]. Regularization involving multiple norms has also been developed to address the individual drawbacks of the ℓ0-norm and ℓ1-norm. An image reconstruction model based on ℓ0-norm and ℓ1-norm regularization for limited-angle CT is proposed [18]. Numerical experiments indicate that the algorithm has an advantage in suppressing slope artifacts. A combined smoothed ℓ0-norm and ℓ1-norm regularization algorithm using the NADA method for CT image reconstruction is proposed and demonstrated to have better performance than the ℓ1-norm regularization [19].

In this paper, we will develop a new combined ℓ0-norm and ℓ1-norm regularization model for CT image reconstruction from limited projection data. The main contributions of this paper are as follows:
(1) We generalize the ℓ1-norm regularization to a combined ℓ0-norm and ℓ1-norm regularization for CT image reconstruction from limited projection data. The proposed model can achieve superior performance compared with the existing related methods for multiple reasons. Firstly, due to the ℓ1-norm regularization term, the proposed model can reduce streak artifacts in limited-data CT. Secondly, due to the ℓ0-norm regularization term, the oversmoothing from the ℓ1-norm regularization term is mitigated; the proposed model is less smoothing, thus better edge-preserving, and provides a better sparsity representation. Thirdly, since the ℓ0-norm is adopted directly, without the smoothed ℓ0-norm approximation, the proposed model has an advantage in suppressing slope artifacts.
(2) We propose an algorithm in the algebraic framework to solve the optimization effectively using the nonmonotone alternating direction algorithm with hard thresholding (NADA-HT) method.

The rest of the paper is organized as follows. In Section 2, we present a combined ℓ0-norm and ℓ1-norm-based regularization model and propose an algorithm for CT image reconstruction from limited projection data. Numerical experiments comparing the regularization models with and without the ℓ0-norm term, together with a discussion, are given in Section 3.

2. Materials and Methods

The projection data in CT can be modelled as a linear system

b = Ax + e, (1)

where A is an M × N projection matrix, x represents a 2D image to be reconstructed, e is additive noise with ||e||_2 ≤ ε for some known ε > 0, and b is the noisy projection data. For limited-data reconstruction, the underdetermined system (M < N) has infinitely many solutions. An optimal solution representing the original image as well as possible can be sought by minimizing an energy function with a TV regularization term [8]:

min_x (1/2)||Ax − b||_2^2 + μ TV(x), (2)

where the total variation is the ℓ1-norm of the magnitudes of the discrete gradients,

TV(x) = Σ_{i=1}^{N} ||D_i x||_2. (3)

Then, a 2D image with sparse gradients and noisy measurements in CT can be reconstructed by solving the following ℓ1-norm minimization with an ℓ2-norm constraint,

min_x TV(x) subject to ||Ax − b||_2 ≤ ε. (4)

However, the image reconstructed by solving (4) has slope artifacts near the edge of the object for limited-angle projection data [20].

We consider the following minimization model

min_x α||Dx||_0 + TV(x) + (λ/2)||x||_2^2 subject to ||Ax − b||_2 ≤ ε, (5)

where ||Dx||_0 stands for the number of nonzero gradients D_i x, i = 1, …, N. The term (λ/2)||x||_2^2, λ > 0, makes the energy of the reconstructed image to be minimized and reduces the ill-posedness of CT reconstruction [18].

For convenience, we denote D_i x as the forward difference of x at a pixel i in both horizontal and vertical directions, i.e.,

D_i x = ((Dx)_i^h, (Dx)_i^v)^T, i = 1, …, N. (6)

Then, ||D_i x||_0 = 1 for D_i x ≠ 0 and ||D_i x||_0 = 0 for D_i x = 0.
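As a concrete illustration of these definitions, the forward differences, the TV, and the gradient-sparsity count ||Dx||_0 can be computed in a few lines of NumPy. The function names and the zero boundary handling for the last row and column are assumptions of this sketch; the paper does not spell them out.

```python
import numpy as np

def grad(x):
    """Forward differences of a 2D image in the horizontal and vertical
    directions (zero-padded at the last column/row)."""
    dh = np.zeros_like(x)
    dv = np.zeros_like(x)
    dh[:, :-1] = x[:, 1:] - x[:, :-1]   # horizontal forward difference
    dv[:-1, :] = x[1:, :] - x[:-1, :]   # vertical forward difference
    return dh, dv

def tv(x):
    """Isotropic TV: the l1-norm of the gradient magnitudes ||D_i x||_2."""
    dh, dv = grad(x)
    return float(np.sum(np.sqrt(dh ** 2 + dv ** 2)))

def grad_l0(x):
    """||Dx||_0: the number of pixels i with a nonzero gradient D_i x."""
    dh, dv = grad(x)
    return int(np.count_nonzero((dh != 0) | (dv != 0)))
```

A constant image has zero TV and zero gradient count, while a single vertical step in a 2-row image contributes one unit of TV per row and one nonzero gradient per row.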

Minimization (5) is equivalent to

min_x α Σ_{i=1}^{N} ||D_i x||_0 + Σ_{i=1}^{N} ||D_i x||_2 + (λ/2)||x||_2^2 subject to ||Ax − b||_2 ≤ ε, (7)

where the notation ||D_i x||_0 is set to be 1 for nonzero D_i x, otherwise 0. The above minimization can be converted, with a parameter β > 0 and vectors w_i's, to

min_{x,w} Σ_{i=1}^{N} [α||w_i||_0 + ||w_i||_2 + (β/2)||D_i x − w_i||_2^2] + (λ/2)||x||_2^2 subject to ||Ax − b||_2 ≤ ε. (8)

Adopting Lagrangian multiplier vectors ν_i, i = 1, …, N, we convert Minimization (8) into the following problem using the augmented Lagrangian method:

min_{x,w} Σ_{i=1}^{N} [α||w_i||_0 + ||w_i||_2 − ν_i^T(D_i x − w_i) + (β/2)||D_i x − w_i||_2^2] + (λ/2)||x||_2^2 subject to ||Ax − b||_2 ≤ ε. (9)

For simplicity, we write w, Dx, and ν as vectors whose i-th components are w_i, D_i x, and ν_i, i = 1, …, N, respectively. Then, ||w||_0 = Σ_{i=1}^{N} ||w_i||_0. Thus, Minimization (9) becomes

min_{x,w} α||w||_0 + Σ_{i=1}^{N} ||w_i||_2 − ν^T(Dx − w) + (β/2)||Dx − w||_2^2 + (λ/2)||x||_2^2 subject to ||Ax − b||_2 ≤ ε. (10)

Next, we will develop an algorithm for solving Minimization (10). By convention, Minimization (10) can be solved by the ADM iteratively. The k-th step of the ADM involves three procedures,

x^{k+1} = argmin_x −(ν^k)^T(Dx − w^k) + (β/2)||Dx − w^k||_2^2 + (λ/2)||x||_2^2 subject to ||Ax − b||_2 ≤ ε, (11)

w^{k+1} = argmin_w α||w||_0 + Σ_{i=1}^{N} ||w_i||_2 − (ν^k)^T(Dx^{k+1} − w) + (β/2)||Dx^{k+1} − w||_2^2, (12)

ν^{k+1} = ν^k − β(Dx^{k+1} − w^{k+1}). (13)

First, Minimization (11) can be solved by the ADM with the gradient descent method involving a nonmonotone line search, for example, the NADA.

Next, we seek a solution of (12). Minimization (12) is equivalent to finding, for each i = 1, …, N,

w_i^{k+1} = argmin_{w_i} α||w_i||_0 + ||w_i||_2 + (β/2)||w_i − z_i||_2^2, (14)

where z_i = D_i x^{k+1} − (1/β)ν_i^k.
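The reduction of (12) to the per-pixel problems (14) is a completing-the-square step. Written out (under the reconstruction of the multiplier term used in this section), it reads:

```latex
-\nu_i^{\top}(D_i x - w_i) + \frac{\beta}{2}\,\|D_i x - w_i\|_2^2
  \;=\; \frac{\beta}{2}\,\Bigl\| w_i - \Bigl(D_i x - \tfrac{1}{\beta}\nu_i\Bigr) \Bigr\|_2^2
        \;-\; \frac{1}{2\beta}\,\|\nu_i\|_2^2 .
```

Since the last term does not depend on w_i, minimizing over w_i leaves exactly the objective in (14) with z_i = D_i x − (1/β)ν_i.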

We introduce a hard thresholding (HT) operator H_t on z with the threshold t > 0, defined as

H_t(z) = z if ||z||_2 > t, and H_t(z) = 0 otherwise. (15)

A solution is provided in the following theorem.

Theorem 1. Let z_i = D_i x^{k+1} − (1/β)ν_i^k and t = (1 + √(2αβ))/β. If 2αβ > 1, then a minimizer for Minimization (14) is given by

w_i = (1 − 1/(β||z_i||_2)) H_t(z_i). (16)

Proof. For simplicity, denote z = z_i and w = w_i. The objective function in Minimization (14) is rewritten as

f(w) = α||w||_0 + ||w||_2 + (β/2)||w − z||_2^2.

Without loss of generality, we assume that z ≠ 0, since the minimizer is w = 0 in the case of z = 0.

For w = 0, f(0) = (β/2)||z||_2^2.

For w ≠ 0, f(w) = α + ||w||_2 + (β/2)||w − z||_2^2. The minimizer is derived in the following two cases.

(i) Case 1: ||z||_2 > 1/β.

We claim that f attains its minimum over w ≠ 0 at w = (1 − 1/(β||z||_2))z. In fact, it follows from the Cauchy–Schwarz inequality that a minimizer must be a nonnegative scalar multiple of z. Let w = sz/||z||_2, where s > 0 due to the minimization problem. Then, the function g(s) = α + s + (β/2)(s − ||z||_2)^2 reaches its minimum value g(s*) = α + ||z||_2 − 1/(2β) at s* = ||z||_2 − 1/β > 0. The claim is proved.

(ii) Case 2: ||z||_2 ≤ 1/β.

In this case, g'(s) = 1 + β(s − ||z||_2) > 0 for all s > 0, so f(w) = g(s) > α + (β/2)||z||_2^2 > f(0) for all w ≠ 0. Thus, the minimizer is w = 0.

Combining the results for w = 0 and w ≠ 0, we conclude that if (β/2)||z||_2^2 ≤ α + ||z||_2 − 1/(2β), then w = 0; otherwise, w = (1 − 1/(β||z||_2))z.

Next, we look for conditions under which the inequality (β/2)||z||_2^2 ≤ α + ||z||_2 − 1/(2β) holds. Substituting r = ||z||_2 into the inequality, we consider the equation (β/2)r^2 − r + 1/(2β) − α = 0 in an unknown r. The zeros of the equation are given by r = (1 ± √(2αβ))/β. So, for 2αβ > 1, we have (1 − √(2αβ))/β < 0 < (1 + √(2αβ))/β = t, where (β/2)r^2 − r + 1/(2β) − α ≤ 0 for 0 ≤ r ≤ t, and (β/2)r^2 − r + 1/(2β) − α > 0 for r > t. Therefore, for 2αβ > 1, if ||z||_2 ≤ t, we have f(0) ≤ f(w) for all w ≠ 0, and consequently, w = 0; otherwise, f(0) > min_{w≠0} f(w), and consequently, w = (1 − 1/(β||z||_2))z ≠ 0.

With the threshold t = (1 + √(2αβ))/β, if ||z||_2 ≤ t, then w = 0; otherwise, w = (1 − 1/(β||z||_2))z. Using the HT operator, we have w_i = (1 − 1/(β||z_i||_2)) H_t(z_i). We complete the proof of the theorem.
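The closed-form minimizer of Theorem 1 can be sanity-checked numerically. The following sketch (a hypothetical NumPy transcription with illustrative parameter values, not the authors' code) compares the hard-thresholding solution against a brute-force line search over scalar multiples of z, which suffices because the proof shows any minimizer is a nonnegative multiple of z:

```python
import numpy as np

def w_step(z, alpha, beta):
    """Closed-form minimizer of alpha*||w||_0 + ||w||_2 + (beta/2)*||w - z||_2^2
    via hard thresholding (Theorem 1); assumes 2*alpha*beta > 1."""
    t = (1.0 + np.sqrt(2.0 * alpha * beta)) / beta  # HT threshold
    nz = np.linalg.norm(z)
    if nz <= t:
        return np.zeros_like(z)
    return (1.0 - 1.0 / (beta * nz)) * z            # shrunk multiple of z

def objective(w, z, alpha, beta):
    nw = np.linalg.norm(w)
    return alpha * (nw > 0) + nw + 0.5 * beta * np.linalg.norm(w - z) ** 2

# brute-force check: scan nonnegative multiples s*z and compare objectives
rng = np.random.default_rng(0)
alpha, beta = 0.5, 4.0                              # 2*alpha*beta = 4 > 1
for _ in range(100):
    z = rng.normal(size=2)
    w_star = w_step(z, alpha, beta)
    grid_best = min(objective(s * z, z, alpha, beta)
                    for s in np.linspace(0.0, 2.0, 2001))
    assert objective(w_star, z, alpha, beta) <= grid_best + 1e-9
```

With alpha = 0.5 and beta = 4, the threshold is t = 0.75, so small gradients are set exactly to zero while large ones are kept and shrunk by 1/β along their direction.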

Remark. In practice, the condition 2αβ > 1 in Theorem 1 is fulfilled for nonzero α. Now we summarize our algorithm as follows.

Algorithm 1 (ℓ0-norm and ℓ1-norm regularization algorithm using the NADA-HT method).
1. input A, b, α, β, λ, ε
2. initialize x^0, w^0, ν^0; set k = 0
3. while the stopping criterion is not satisfied
3.1   update x^{k+1} by the NADA
3.2   for each integer i, 1 ≤ i ≤ N, update w_i^{k+1} by the HT method:
      set z_i = D_i x^{k+1} − (1/β)ν_i^k, w_i^{k+1} = (1 − 1/(β||z_i||_2)) H_t(z_i)
3.3   update ν^{k+1} = ν^k − β(Dx^{k+1} − w^{k+1})
3.4   if the error is below the tolerance, then output x^{k+1} and stop
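Algorithm 1 relies on the NADA for the constrained x-subproblem. As an illustrative end-to-end sketch, the following NumPy code replaces that step with a simple quadratic-penalty least-squares solve on a 1D piecewise-constant toy signal; all names and parameter values are assumptions of this sketch, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy 1D piecewise-constant signal: its gradient is sparse (two jumps)
n = 60
x_true = np.concatenate([np.full(20, 1.0), np.full(25, 3.0), np.full(15, 2.0)])

A = rng.normal(size=(40, n)) / np.sqrt(40)    # underdetermined system, M < N
b = A @ x_true                                # noiseless projection data

D = np.diff(np.eye(n), axis=0)                # forward-difference operator

alpha, beta, lam, rho = 0.5, 50.0, 1e-3, 1e3  # 2*alpha*beta > 1 as in Theorem 1
t = (1.0 + np.sqrt(2.0 * alpha * beta)) / beta  # HT threshold

x, w, nu = np.zeros(n), np.zeros(n - 1), np.zeros(n - 1)
H = rho * A.T @ A + beta * D.T @ D + lam * np.eye(n)  # x-step normal matrix

for _ in range(300):
    # x-step: penalized least squares stands in for the NADA-solved subproblem
    x = np.linalg.solve(H, rho * A.T @ b + D.T @ (beta * w + nu))
    # w-step: componentwise hard thresholding with shrinkage (Theorem 1, 1D case)
    z = D @ x - nu / beta
    shrink = 1.0 - 1.0 / (beta * np.maximum(np.abs(z), 1e-12))
    w = np.where(np.abs(z) > t, shrink * z, 0.0)
    # multiplier update, as in step 3.3
    nu = nu - beta * (D @ x - w)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

With 40 random measurements of a 60-sample signal whose gradient has only two nonzeros, the loop drives the auxiliary variable w to a sparse gradient and recovers the piecewise-constant signal closely.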

3. Results and Discussion

In this section, the performance of the proposed algorithm is compared with that of the ℓ1-norm regularization algorithm. Both algorithms are implemented in MATLAB using the NADA and tested with the 2D Shepp-Logan phantom and a cardiac image [21] on an Intel Core i7 3.40 GHz PC. In each test, the same system b = Ax + e is created, where A (M × N, M < N) is a random matrix and e is additive noise. The weight parameter α is chosen as 0 and 1 for the ℓ1-norm regularization and the proposed algorithm, respectively. The other two important parameters in the objective function, β and λ, are chosen separately for the Shepp-Logan phantom and for the cardiac image.

Experiments are conducted to compare the reconstructions by the two algorithms after the same number of iterations. The original and reconstructed images for the two test images are shown in Figure 1. The images demonstrate that the proposed algorithm yields much better reconstructions. The quality of the reconstructed images is compared using several commonly used criteria: the relative error, the root-mean-square error (RMSE), the normalized root mean square deviation (NRMSD), the normalized mean absolute deviation (NMAD), and the structural similarity index (SSIM). The experimental results from 100 tests are summarized in Table 1. They show that the proposed algorithm produces significantly improved evaluation measures, by 88%–167% for the Shepp-Logan phantom and 67%–114% for the cardiac image, respectively, while taking less CPU time. Overall, the new algorithm provides better accuracy after the same number of iterations.
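The first four criteria above have standard textbook forms. Since the exact normalizations used in the paper are not shown, the following NumPy helpers are plausible variants for illustration, not the authors' code (SSIM, which involves windowed local statistics, is omitted here):

```python
import numpy as np

def relative_error(x, ref):
    # ||x - ref||_2 / ||ref||_2
    return np.linalg.norm(x - ref) / np.linalg.norm(ref)

def rmse(x, ref):
    # root-mean-square error
    return np.sqrt(np.mean((x - ref) ** 2))

def nrmsd(x, ref):
    # RMSE normalized by the dynamic range of the reference image
    return rmse(x, ref) / (ref.max() - ref.min())

def nmad(x, ref):
    # mean absolute deviation normalized by the mean absolute reference value
    return np.mean(np.abs(x - ref)) / np.mean(np.abs(ref))
```

All four are zero for a perfect reconstruction and grow with the deviation from the reference image.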


Figure 1: Original and reconstructed images. Shepp-Logan phantom, after 200 iterations; cardiac image, after 500 iterations.
Figure 2 shows the graph of relative error vs. the number of iterations for the two algorithms. The curves indicate that the proposed algorithm requires far fewer iterations and less time to achieve the same accuracy and converges much faster. After a certain number of iterations, the relative error of the proposed algorithm drops sharply, while the relative error of the ℓ1-norm regularization decreases slowly and does not improve much over the iterations.

The proposed regularization model introduces an ℓ0-norm term in addition to an ℓ1-norm term. The ℓ0-norm of a vector is the number of nonzero elements in the vector; thus, the ℓ0-norm is appropriate for representing sparsity. Meanwhile, the ℓ0-norm term makes the model less smoothing. Numerical experiments demonstrate that, in the proposed model, the edges are better preserved and a better sparsity representation is provided. So, the proposed algorithm improves the quality and the efficiency of the reconstruction even with the extra computation from the added ℓ0-norm term.

The effect of the weight parameter α of the ℓ0-norm term is tested. In our experiments, varying α over the tested range hardly affects the efficiency of the algorithm. Selecting good values of the other parameters β, λ, and ε in Minimization (10) is a challenging problem and will be further investigated in the future.

High-quality CT image reconstruction from limited projection data is challenging. Learning-based algorithms such as structure-aware sparse Bayesian learning [22] could yield improved performance in reconstructing tomographic images from limited data, since structural prior knowledge is exploited. Enhancing the proposed algorithm in a learning-based framework using prior knowledge will be the topic of future research.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.


Acknowledgments

The authors would like to thank the reviewers and editors for their valuable comments and suggestions. It is acknowledged that G. Feng was partially supported by the National Natural Science Foundation of China (NSFC) under grant No. 61673018.


References

1. E. J. Candès and M. B. Wakin, "An introduction to compressive sampling," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21–30, 2008.
2. C. E. Shannon, "Communication in the presence of noise," Proceedings of the IEEE, vol. 86, no. 2, pp. 447–457, 1998.
3. L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D, vol. 60, no. 1-4, pp. 259–268, 1992.
4. E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
5. D. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
6. H. Yu and G. Wang, "Compressed sensing based interior tomography," Physics in Medicine and Biology, vol. 54, no. 9, pp. 2791–2805, 2009.
7. Z. Tian, X. Jia, K. Yuan, T. Pan, and S. B. Jiang, "Low-dose CT reconstruction via edge-preserving total variation regularization," Physics in Medicine and Biology, vol. 56, no. 18, pp. 5949–5967, 2011.
8. E. J. Candès, M. B. Wakin, and S. P. Boyd, "Enhancing sparsity by reweighted ℓ1 minimization," Journal of Fourier Analysis and Applications, vol. 14, no. 5-6, pp. 877–905, 2008.
9. M. Tao and J. Yang, Alternating Direction Algorithms for Total Variation Deconvolution in Image Reconstruction, Optimization Online, 2009.
10. J. Yang and Y. Zhang, "Alternating direction algorithms for ℓ1-problems in compressive sensing," SIAM Journal on Scientific Computing, vol. 33, no. 1, pp. 250–278, 2011.
11. C. Li, W. Yin, H. Jiang, and Y. Zhang, "An efficient augmented Lagrangian method with applications to total variation minimization," Computational Optimization and Applications, vol. 56, no. 3, pp. 507–530, 2013.
12. C. Li, W. Yin, and Y. Zhang, User's Guide for TVAL3: TV Minimization by Augmented Lagrangian and Alternating Direction Algorithms, CAAM Report, 2010.
13. W. Yu and L. Zeng, "ℓ0 gradient minimization based image reconstruction of limited-angle computed tomography," PLOS ONE, vol. 10, no. 7, 2015.
14. B. Natarajan, "Sparse approximate solutions to linear systems," SIAM Journal on Computing, vol. 24, no. 2, pp. 227–234, 1995.
15. W. Yu, C. Wang, and M. Huang, "Edge-preserving reconstruction from sparse projections of limited-angle computed tomography using ℓ0-regularized gradient prior," Review of Scientific Instruments, vol. 88, no. 4, article 043703, 2017.
16. W. Yu, C. Wang, X. Nie, M. Huang, and L. Wu, "Image reconstruction for few-view computed tomography based on ℓ0 sparse regularization," Procedia Computer Science, vol. 107, pp. 808–813, 2017.
17. L. Zhang, L. Zeng, and Y. Guo, "ℓ0 regularization based on a prior image incorporated non-local means for limited-angle X-ray CT reconstruction," Journal of X-Ray Science and Technology, vol. 26, no. 3, pp. 481–498, 2018.
18. C. Wang, L. Zeng, W. Yu, and L. Xu, "Existence and convergence analysis of ℓ0 and ℓ1 regularizations for limited-angle CT reconstruction," Inverse Problems and Imaging, vol. 12, no. 3, pp. 545–572, 2018.
19. J. Zhu and X. Li, "A smoothed ℓ0-norm and ℓ1-norm regularization algorithm for computed tomography," Journal of Applied Mathematics, vol. 2019, Article ID 8398035, 8 pages, 2019.
20. L. Zeng, J. Guo, and B. Liu, "Limited-angle cone-beam computed tomography image reconstruction by total variation minimization," Journal of Inverse and Ill-Posed Problems, vol. 21, no. 6, pp. 735–754, 2013.
21. TEAM RADS.
22. S. Liu, Y. Huang, H. Wu, C. Tan, and J. Jia, "Efficient multi-task structure-aware sparse Bayesian learning for frequency-difference electrical impedance tomography," IEEE Transactions on Industrial Informatics, vol. 16, p. 1, 2020.

Copyright © 2020 Xiezhang Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
