Research Article  Open Access
Jiehua Zhu, Xiezhang Li, "A Smoothed ℓ0 Norm and ℓ1 Norm Regularization Algorithm for Computed Tomography", Journal of Applied Mathematics, vol. 2019, Article ID 8398035, 8 pages, 2019. https://doi.org/10.1155/2019/8398035
A Smoothed ℓ0 Norm and ℓ1 Norm Regularization Algorithm for Computed Tomography
Abstract
The nonmonotone alternating direction algorithm (NADA) was recently proposed for effectively solving a class of equality-constrained nonsmooth optimization problems and was applied to total variation minimization in image reconstruction, but the reconstructed images suffer from artifacts. Although the ℓ0 norm regularization can effectively retain edges, the resulting problem is NP-hard. The smoothed ℓ0 norm approximates the ℓ0 norm as a limit of smooth functions and provides a smooth measure of sparsity in applications. The smoothed ℓ0 norm regularization has been an attractive research topic in sparse image and signal recovery. In this paper, we present a combined smoothed ℓ0 norm and ℓ1 norm regularization algorithm using the NADA for image reconstruction in computed tomography. We resolve the computation challenge resulting from the smoothed ℓ0 norm minimization. The numerical experiments demonstrate that the proposed algorithm improves the quality of the reconstructed images at the same cost of CPU time, and reduces the computation time significantly while maintaining the same image quality, compared with the ℓ1 norm regularization in the absence of the smoothed ℓ0 norm.
1. Introduction
Statistical and iterative reconstruction algorithms in computed tomography (CT) are widely applied since they yield more accurate results than analytic approaches for low-dose and limited-view reconstruction. These algorithms involve solving a linear system
Ax + e = b,  (1)
where A is an m × n projection matrix, x represents a 2D image to be reconstructed, e is an additive noise with ‖e‖2 ≤ ε for some known ε > 0, and b is the noisy projection data. For limited-view reconstruction, the underdetermined system (m < n) has infinitely many solutions. An optimal solution representing the original image as well as possible is sought by iteration methods.
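The measurement model in system (1) can be sketched as follows. The matrix, image, and noise level below are hypothetical stand-ins for illustration, not the projection geometry or data used in the paper:

```python
import numpy as np

# Illustrative sketch of the measurement model Ax + e = b in system (1).
# A, x_true, and the noise scale are assumptions, not the paper's setup.
rng = np.random.default_rng(0)
m, n = 60, 100                      # m < n: the limited-view (underdetermined) case
A = rng.standard_normal((m, n))    # stand-in for an m x n projection matrix
x_true = np.zeros(n)
x_true[10:20] = 1.0                # piecewise-constant "image" with sparse gradient
e = 0.01 * rng.standard_normal(m)  # small additive noise
b = A @ x_true + e                 # noisy projection data

# The system is underdetermined: infinitely many x satisfy A x ≈ b,
# so regularization is needed to single out a solution close to x_true.
assert np.linalg.matrix_rank(A) == m < n
```

Because the null space of A is nontrivial, the regularization terms discussed below are what select a meaningful solution among the infinitely many candidates.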
The theory of compressed sensing [1–3] has recently shown that signals and images that have sparse representations in some orthonormal bases can be reconstructed at high quality from much less data than the Nyquist sampling theory [4] requires. In many cases in tomography, images can be approximately modeled as essentially piecewise constant, so the gradients are sparse. With the gradient operator as a sparse transform, compressed sensing provides a novel framework for CT image reconstruction [1, 3, 5]. The ℓ0 norm of a vector, the number of its nonzero elements, is an appropriate measurement of the sparsity of a vector, so ℓ0 norm regularization might be an approach for the reconstruction of an image with sparse gradient. However, the ℓ0 norm is a nonconvex function, and the ℓ0 norm regularization problem is NP-hard [6]. A pseudoinverse transform of the discrete gradient and the iterative hard thresholding algorithm are used to address the issues of the ℓ0 norm [7]. The smoothed ℓ0 norm (SL0 norm) provides a smooth measure of sparsity and is applied in compressed sensing MRI imaging [8]. The SL0 norm is used to find the jointly sparse representation via the low-resolution image [9]. An SL0 norm regularization model is proposed for sparse-view X-ray CT reconstruction [10]. Two new smoothed functions approximating the ℓ0 norm are proposed in the mechanism of reweighted regularization [11, 12]. An edge-preserving image reconstruction method based on an ℓ0-regularized gradient prior for limited-angle CT is investigated [13]. The local and global minimizers of the ℓ0 gradient regularized model with box constraints for image restoration are discussed [14]. An ℓ0 regularization with wavelet tight framelets to suppress the slope artifacts in limited-angle X-ray CT reconstruction is addressed [15]. An image model based on ℓ0 and ℓ2 regularizations for limited-angle CT is proposed, and the existence of a solution and a local convergence analysis under certain conditions are proved [16].
The ℓ1 norm regularization of the gradient, known as total variation (TV) minimization [17], has been widely used in CT reconstruction. Under certain conditions, the ℓ1 norm minimization has a unique sparsest solution which is a good approximation to the original image [18]. However, the ℓ1 norm regularization provides a less sparse representation than the ℓ0 norm regularization and may lose some detailed features and contrast. In addition, the discrete gradient transformation does not satisfy the restricted isometry property required by the compressed sensing theory.
Researchers in this area have been seeking other efficient and stable algorithms inspired by the compressed sensing theory, for example, solving an ℓ1 norm minimization with an ℓ2 norm constraint using the alternating direction method (ADM) [19, 20] and the nonmonotone alternating direction algorithm (NADA) [21, 22]. Regularization including multiple norms has been developed to address the drawbacks of the ℓ0 norm and the ℓ1 norm. A combined ℓ1 norm and smoothed ℓ0 norm regularization minimization with an ℓ2 norm constraint, using the SART algorithm and the gradient descent method, is proposed for sparse-view CT image reconstruction in [23]. It is observed in that paper that the convergence is slow and the computation is time consuming because of the alternating minimization of the ℓ1 norm and the smoothed ℓ0 norm.
The purposes of this paper are threefold:
(i) Presenting a combined SL0 norm and ℓ1 norm regularization algorithm for image reconstruction in CT: The proposed algorithm is unique in the following aspects: a new parameter is introduced to balance the ℓ1 norm and SL0 norm terms, and the two norm regularizations are combined into one Lagrangian objective function to be minimized
(ii) Adopting the newly developed nonmonotone alternating direction algorithm, the NADA, to efficiently solve the minimization
(iii) Resolving the computation challenge caused by the smoothed ℓ0 norm minimization
The numerical experiments demonstrate that the proposed algorithm improves the quality of the recovered images at the same cost of CPU time and reduces the computation time significantly while maintaining the same image quality, compared with the ℓ1 norm regularization in the absence of the SL0 norm.
The rest of the paper is organized as follows. Section 2 includes the background and notations. In Section 3, a combined SL0 norm and ℓ1 norm regularization algorithm with the NADA is developed. Numerical experiments with the Shepp-Logan phantom and a cardiac image are presented in Section 4. Finally, Section 5 concludes the paper with discussions and conclusions.
2. Background and Notations
The TV minimization of an image u for solving system (1) is mathematically represented as [17]
min_u ‖u‖_TV subject to Au = b,  (2)
where the total variation ‖u‖_TV is the ℓ1 norm of the magnitude of the discrete gradients:
‖u‖_TV = Σ_i ‖D_i u‖_2.  (3)
Then a 2D image with sparse gradients and noisy measurements in CT can be reconstructed by solving the following ℓ1 norm minimization with an ℓ2 norm constraint:
min_u ‖u‖_TV subject to ‖Au - b‖_2 ≤ ε,  (4)
which can be implemented [24] by
min_u ‖u‖_TV + (μ/2)‖Au - b‖_2^2.  (5)
Here μ > 0 is a regularization parameter which controls the tradeoff between sparsity of gradients and accuracy of approximation.
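The discrete total variation in (3) can be sketched numerically. The sketch below uses forward differences with replication at the boundary, which is one common discretization choice and an assumption here:

```python
import numpy as np

def total_variation(u):
    """Isotropic total variation of a 2D image: the sum over pixels of the
    Euclidean norm of the forward-difference gradient D_i u, as in (3).
    Forward differences with replicated boundary (one common choice)."""
    dx = np.diff(u, axis=1, append=u[:, -1:])  # horizontal forward difference
    dy = np.diff(u, axis=0, append=u[-1:, :])  # vertical forward difference
    return np.sqrt(dx**2 + dy**2).sum()

# A piecewise-constant image has small TV: only edge pixels contribute.
u = np.zeros((8, 8))
u[2:6, 2:6] = 1.0
print(total_variation(u))
```

A constant image has zero TV, while each unit jump along an edge contributes to the sum, which is why TV acts as a sparsity measure on gradients.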
For convenience, we denote D_i u as the forward difference of u at pixel i in both horizontal and vertical directions; i.e., D_i u = ((Du)_i^h, (Du)_i^v).  (6)
Then ‖u‖_TV = Σ_i ‖D_i u‖_2, and minimization (5) is rewritten, with splitting variables w_i, as
min_{w,u} Σ_i ‖w_i‖_2 + (μ/2)‖Au - b‖_2^2 subject to D_i u = w_i for all i.  (7)
With the Lagrangian function [19, 20]
L_A(w, u) = Σ_i (‖w_i‖_2 - ν_i^T (D_i u - w_i) + (β/2)‖D_i u - w_i‖_2^2) - λ^T (Au - b) + (μ/2)‖Au - b‖_2^2,  (8)
minimization (7) is converted to
min_{w,u} L_A(w, u),  (9)
which can be solved by the ADM [19–22]. The ADM is a variant of the classic augmented Lagrangian method for optimization. It has been applied to solve different types of minimization problems for sparse solution recovery. The kth step of the ADM for solving (9) involves the procedures
w_{k+1} = argmin_w L_A(w, u_k),  (10)
u_{k+1} = argmin_u L_A(w_{k+1}, u),  (11)
ν_{k+1} = ν_k - β(Du_{k+1} - w_{k+1}),  λ_{k+1} = λ_k - μ(Au_{k+1} - b).  (12)
Minimization (10) and minimization (11) can be successfully solved using the framework of the NADA with a nonmonotone line search scheme recently developed in [21, 22]. The idea is outlined as follows.
(i) Choose a descent direction d_k.
(ii) Select a step size α_k, uniformly bounded above, such that Φ(u_k + α_k d_k) ≤ C_k + δ α_k ∇Φ(u_k)^T d_k for some δ ∈ (0, 1), where C_k is a reference value.
(iii) Set u_{k+1} = u_k + α_k d_k.
(iv) Compute the reference value C_{k+1}, a convex combination of C_k and Φ(u_{k+1}), such that Φ(u_{k+1}) ≤ C_{k+1}.
Combined with the solution of (11), the sequence of objective values is bounded above by a monotonically nonincreasing reference sequence, though the objective values themselves are not necessarily decreasing. The advantage of the NADA lies in the fact that the objective is not required to be differentiable, while reducing the computation complexity and improving the efficiency. The convergence analysis of the NADA for solving a general model including minimization (11) can be found in [21].
The SL0 norm approximates the ℓ0 norm by a smooth function [8]. The SL0 norm of a vector x ∈ R^n, denoted by ‖x‖_{SL0}, is defined as
‖x‖_{SL0} = n - Σ_{i=1}^n exp(-x_i^2 / (2σ^2))  (13)
for a parameter σ > 0. It is easy to see that lim_{σ→0} ‖x‖_{SL0} = ‖x‖_0.
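The definition in (13), which follows the smoothed ℓ0 measure of [8], can be sketched directly; the test vector below is illustrative:

```python
import numpy as np

def sl0_norm(x, sigma):
    """Smoothed l0 norm as in (13): n - sum_i exp(-x_i^2 / (2 sigma^2)).
    As sigma -> 0 it tends to the number of nonzero entries of x."""
    x = np.asarray(x, dtype=float)
    return x.size - np.exp(-x**2 / (2.0 * sigma**2)).sum()

x = np.array([0.0, 0.0, 1.5, -2.0, 0.0])
print(sl0_norm(x, 1.0))    # a smooth sparsity measure, below the exact count
print(sl0_norm(x, 1e-3))   # approaches the l0 "norm" of x, which is 2
```

Large σ gives a heavily smoothed (easy to optimize) surrogate, while small σ gives a tight but nearly nonsmooth approximation; this tradeoff is why σ is decreased gradually during the iterations.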
A new approach for solving a more general minimization problem than the combined regularization minimization in [23] is proposed in the next section.
3. SL0 Norm and ℓ1 Norm Regularization Algorithm
In this section we present an SL0 norm and ℓ1 norm regularization algorithm for CT reconstruction using the NADA. Consider the following constrained minimization problem for image reconstruction in CT:
min_u ‖Du‖_{SL0} + α‖u‖_TV subject to ‖Au - b‖_2 ≤ ε,  (14)
where the parameter α > 0 is used to balance the two terms of the minimization. While the ℓ2 norm constraint accounts for the approximation accuracy, the SL0 norm and the ℓ1 norm are introduced in consideration of the sparsity of gradients. It is remarked that minimization (14) is an extension of minimization (4) (without the SL0 term) and of the minimization in [23].
3.1. New Approach
The term ‖Du‖_{SL0} in (14) can be approximated by a smooth function
f_σ(Du) = n - Σ_i exp(-‖D_i u‖_2^2 / (2σ^2))  (15)
for a small positive parameter σ. So a solution of minimization (14) can be approximately obtained by solving
min_u f_σ(Du) + α‖u‖_TV subject to ‖Au - b‖_2 ≤ ε.  (16)
To deal with the constraint in (16), we rewrite (16) with a positive parameter μ as
min_u f_σ(Du) + α‖u‖_TV + (μ/2)‖Au - b‖_2^2,  (17)
or, introducing splitting variables w_i and v_i for the gradients,
min_{w,v,u} f_σ(v) + α Σ_i ‖w_i‖_2 + (μ/2)‖Au - b‖_2^2 subject to D_i u = w_i, D_i u = v_i.  (18)
Using a positive parameter β and Lagrange vectors ν and λ, we define a Lagrangian function L(w, v, u), analogous to L_A in (8) with the extra term f_σ(v) and the extra constraints D_i u = v_i.  (19)
Finally, minimization (18) is converted to the following minimization:
min_{w,v,u} L(w, v, u).  (20)
It is obvious that L is an extension of L_A in (8). Consequently the NADA is adopted to solve minimization (20). The kth iteration for solving (20) includes the steps of updating w_{k+1} in (21), v_{k+1} in (22), and u_{k+1} together with the multipliers in (23).
It is noted that the objective function L in (19) has an extra term f_σ(v), depending only on the vector v, compared with L_A in (8). So the methods for the calculation of w_{k+1} in (10) and the updates in (12) can be applied to the calculation of w_{k+1} in (21) and the corresponding updates in (23). However, finding v_{k+1} in (22) is a challenging problem.
3.2. Calculation of v_{k+1}
In this subsection we address how to calculate v_{k+1} in (22). Actually, for each i, we need to solve for v_i from the first-order optimality condition, stated as equation (24). It follows from (24) that v_i is determined by a scalar quantity; if that quantity is calculated, then v_i is obtained from (26). Thus, it suffices to determine the scalar quantity from equation (27). If the known constant on the right-hand side of (27) is zero, we set v_i = 0. If it is nonzero, it follows from (27) that the scalar quantity is a positive real solution of equation (28). It is obvious that (28) has a positive zero lying between explicit bounds, which approaches the unconstrained value as σ → 0.
Equation (28) can be rewritten as (29). By substituting a new variable, the above equation becomes (30). Thus, we have shown the following lemma.
Lemma 1. Let equation (28) and the function g be given in (28) and (30), respectively. Then s is a zero of g on (0, ∞) if and only if the corresponding value of the original variable under the substitution is a positive solution of (28) on (0, ∞).
Before we develop a procedure to calculate a positive root of g, we list some properties of g below.
The derivative g' is known explicitly. After factoring out a positive quantity, the sign of g' on (0, ∞) is the same as that of the function q given in (32) on (0, ∞). By calculus, it is easy to see that q is first increasing and then decreasing on (0, ∞), so q achieves a local maximum. Therefore, if this maximum value is positive, then the equation q = 0 has two positive roots s1 < s2. It follows that g' > 0 on (s1, s2) and g' < 0 on (0, s1) and (s2, ∞). On the other hand, if the maximum value is negative, then g' < 0 on (0, ∞). We have the following monotonicity of g.
Property 2. If the local maximum value of q is positive, then g is increasing on (s1, s2) and decreasing on (0, s1) and (s2, ∞), where s1 and s2 are the two positive zeros of q. If the local maximum value of q is negative, then g is always decreasing on (0, ∞).
The second derivative g'' indicates the following features of g.
Property 3. The sign of g'' changes exactly once on (0, ∞), so the graph of g has an inflection point. The function g achieves a local minimum value at s1 and a local maximum value at s2. In addition, the limits of g at 0 and at ∞ are known.
Finally, we can determine the locations of zeros of g and compute them by convergent Newton iterations.
Theorem 4. Let g and q be, respectively, given in (30) and (32), and let s2 be the larger positive root of q on (0, ∞) when it exists. Then a positive root of g on (0, ∞) can be calculated by Newton's method. The convergence of Newton's method is guaranteed if an initial guess s0 is selected as follows.
Case 1. The inflection point of the graph of g lies in the upper half plane. Choose s0 near the positive zero of g with g(s0) < 0.
Case 2. The inflection point of the graph of g lies in the lower half plane. Choose s0 to be a lower bound of the positive solutions of (28).
Proof. We identify an interval containing a positive root of g on which g' and g'' do not change signs for each case. Then the convergence of Newton's method with an initial guess above is guaranteed.
Case 1. There is a positive solution of g on an interval on which both g' and g'' are negative. So one should choose s0 near the zero with g(s0) < 0 for the convergence of Newton's method.
Case 2. There is a positive solution of g on an interval on which g' < 0 and g'' > 0. One could choose s0 to be a lower bound of the positive solutions of (28) for the convergence of the Newton iteration.
Thus, each component v_i can be calculated from Lemma 1 and Theorem 4, and then v_{k+1} in (26) can be obtained.
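The convergence mechanism of Theorem 4, Newton's method started on an interval where the first and second derivatives keep constant signs, can be illustrated on a generic scalar equation. The function below is a hypothetical stand-in with the same qualitative shape as described above, not the g of (30):

```python
import numpy as np

def newton_root(f, fprime, s0, tol=1e-12, max_iter=100):
    """Plain Newton iteration for a scalar root. If s0 lies on an interval
    containing the root on which f' and f'' do not change sign (as in
    Theorem 4), the iterates converge monotonically to the root."""
    s = s0
    for _ in range(max_iter):
        step = f(s) / fprime(s)
        s -= step
        if abs(step) < tol:
            break
    return s

# Hypothetical example: f(s) = s * exp(-s) - 0.1. Near its larger root,
# f' < 0 and f'' > 0, mirroring Case 2 of Theorem 4, so Newton's method
# started from a lower bound of the root converges.
f = lambda s: s * np.exp(-s) - 0.1
fp = lambda s: (1.0 - s) * np.exp(-s)
root = newton_root(f, fp, s0=3.0)
print(root, f(root))  # residual ~ 0
```

The point of the sign conditions is exactly this monotone behavior: the Newton iterates cannot overshoot out of the bracketing interval, so no safeguarding is needed once the initial guess of Theorem 4 is used.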
3.3. Algorithm
Based on the above discussion, the calculation process of the proposed SL0 and ℓ1 regularization algorithm is summarized in Algorithm 1.
input  
initialize  
while  
1. update u by the NADA  
1.1. descent direction  
1.2. step size such that  
1.3. update  
1.4. compute such that  
2. update v  
for i = 1 to n  
2.1. set  
2.2. if then  
else  
select an initial guess from Theorem 4  
find a zero of g in (30) by Newton's iterations  
end if  
end for  
3. update  
4. if the stopping criterion is met then exit  
5. update σ  
end while  
output the reconstructed image u
4. Numerical Experiments and Results
In this section, the proposed combined SL0 norm and ℓ1 norm regularization is compared with the ℓ1 regularization without the SL0 norm for its performance. Both regularization algorithms are implemented using the NADA. The MATLAB code is developed based on the software package TVAL3 [22], and the numerical experiments are conducted on an Intel Core i7 3.40 GHz PC. The 2D Shepp-Logan phantom and a cardiac image [25] are tested. In each test, a random matrix A (m < n) is used to create the same system for the two regularization algorithms, where the noise is generated by randn and scaled by 0.02 times the mean magnitude of the data. The parameters μ, β, and α are chosen separately for the Shepp-Logan phantom and for the cardiac image. The values of the parameter σ in the SL0 norm are decreased at a fixed ratio from a starting value.
Experiments are conducted to compare the reconstructions by the two algorithms after the same number of iterations. The original and reconstructed Shepp-Logan phantom images and the original and reconstructed cardiac images after the same numbers of iterations are shown in Figure 1. The reconstructed images show that the proposed SL0 and ℓ1 regularization produces better images while taking about the same CPU time.
The quality of images is evaluated using the relative error of the reconstructed image in the Frobenius norm. The root-mean-square error (RMSE), the normalized root-mean-square deviation (NRMSD), the normalized mean absolute deviation (NMAD), and the structural similarity index (SSIM) are also measured to reflect different aspects of the quality of the recovered images. RMSE evaluates the reconstruction quality on a pixel-by-pixel basis. NRMSD emphasizes large errors in a few pixels of the recovered image. NMAD focuses on small errors in the recovered image. SSIM compares the quality of the images using the original phantom image as a reference; a greater value of SSIM indicates better image quality. The experimental results from 100 tests are summarized in Table 1. The data in Table 1 indicate that the proposed SL0 and ℓ1 regularization provides better accuracy after the same number of iterations.
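The error metrics above can be sketched as follows. Each function uses one common definition of the metric; the exact normalizations used in the paper are not restated here, so these formulas are assumptions (SSIM, which needs windowed statistics, is omitted):

```python
import numpy as np

def rmse(x, ref):
    """Root-mean-square error: reconstruction quality pixel by pixel."""
    return np.sqrt(np.mean((x - ref) ** 2))

def nrmsd(x, ref):
    """Normalized root-mean-square deviation: emphasizes large errors in a
    few pixels (here normalized by the reference norm, one common choice)."""
    return np.linalg.norm(x - ref) / np.linalg.norm(ref)

def nmad(x, ref):
    """Normalized mean absolute deviation: sensitive to many small errors."""
    return np.abs(x - ref).sum() / np.abs(ref).sum()

# Tiny illustrative reference and "reconstruction".
ref = np.array([[0.0, 1.0], [1.0, 0.0]])
rec = np.array([[0.1, 0.9], [1.0, 0.0]])
print(rmse(rec, ref), nrmsd(rec, ref), nmad(rec, ref))
```

Because the squared norm in NRMSD weights large deviations quadratically while NMAD weights all deviations linearly, the two metrics separate "few large errors" from "many small errors", which is why both are reported alongside RMSE.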
Experiments are also conducted to compare iteration numbers and CPU time for the two algorithms when the same relative error is achieved and thus the image quality is the same. The tolerance of the relative error to terminate the iteration is selected as 0.05. The average iteration numbers and CPU times from 100 tests of the SL0 and ℓ1 regularization and the ℓ1-only regularization are listed in Table 2. The comparison indicates that the CPU time is significantly reduced in the proposed algorithm while producing the same image quality. The graph of relative errors versus the number of iterations for the two algorithms, shown in Figure 2, indicates that the SL0 and ℓ1 regularization requires fewer iterations to achieve the same accuracy.
Both types of numerical experiments demonstrate that the proposed SL0 and ℓ1 regularization is superior to the regular ℓ1 norm regularization minimization.
5. Discussions and Conclusions
The application of the SL0 norm in CT reconstruction has recently become one of the active research topics. A combined SL0 norm and ℓ1 norm regularization algorithm is proposed in this paper. The ℓ0 norm of a vector is the limit of the smooth function in (13) as σ → 0. In the iterations of the proposed algorithm, the parameter σ becomes smaller and closer to zero; the smooth function in the objective function approaches the ℓ0 norm, and the sparsity of the recovered image is fulfilled quickly. So the proposed algorithm improves the quality of the reconstructed images and the efficiency in terms of the iteration number and CPU time, even with the extra computation from the added SL0 norm term. The numerical results demonstrate that the proposed algorithm improves on the ℓ1 norm regularization with the NADA by reducing the computation time significantly. The effects of the weight parameter α of the ℓ1 norm and the SL0 norm parameter σ are also tested. From our experiments, values of α from 0.5 to 2 and initial values of σ from 0.05 to 0.9 hardly affect the efficiency of the algorithm. A decreasing ratio for σ between 0.8 and 0.95 is a good choice, but a ratio smaller than 0.7 is not recommended.
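The recommended geometric annealing of σ can be written out concretely; the starting value and number of iterations below are illustrative, while the ratio follows the range recommended above:

```python
# Geometric annealing of the smoothing parameter sigma. A decreasing ratio
# between 0.8 and 0.95 is recommended above; ratios below 0.7 shrink sigma
# too aggressively. The starting value 0.5 and length 10 are illustrative.
sigma, ratio = 0.5, 0.9
schedule = []
for _ in range(10):
    schedule.append(sigma)
    sigma *= ratio
print(schedule[0], schedule[-1])
```

Each iteration of the reconstruction would use the current σ in the smooth surrogate of the ℓ0 norm, so the surrogate tightens gradually instead of jumping straight to a nearly nonsmooth objective.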
There are some aspects to be further investigated. It is a challenging problem to select good values of the other parameters, such as μ and β in the Lagrangian function L. The impact of these parameters on the performance of the algorithm will be further investigated.
Data Availability
The MATLAB numerical data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
[1] E. J. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
[2] E. Candes and M. Wakin, "An introduction to compressive sampling," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21–30, 2008.
[3] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[4] C. E. Shannon, "Communication in the presence of noise," Proceedings of the IEEE, vol. 86, no. 2, pp. 447–457, 1998.
[5] E. J. Candes and T. Tao, "Decoding by linear programming," IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4203–4215, 2005.
[6] B. K. Natarajan, "Sparse approximate solutions to linear systems," SIAM Journal on Computing, vol. 24, no. 2, pp. 227–234, 1995.
[7] Y. Sun and J. Tao, "Image reconstruction from few views by ℓ0-norm optimization," Chinese Physics B, vol. 23, no. 7, Article ID 078703, 2014.
[8] H. Mohimani, M. Babaie-Zadeh, and C. Jutten, "A fast approach for overcomplete sparse decomposition based on smoothed ℓ0 norm," IEEE Transactions on Signal Processing, vol. 57, no. 1, pp. 289–301, 2009.
[9] M. Rostami and Z. Wang, "Image super-resolution based on sparsity prior via smoothed ℓ0 norm," in Proceedings of the Symposium on Advanced Intelligent Systems, Waterloo, Canada, December 2011.
[10] M. Li, C. Zhang, C. Peng et al., "Smoothed ℓ0 norm regularization for sparse-view X-ray CT reconstruction," BioMed Research International, vol. 2016, Article ID 2180457, 12 pages, 2016.
[11] L. Wang, X. Yin, H. Yue, and J. Xing, "A regularized weighted smoothed ℓ0-norm minimization method for underdetermined blind source separation," Sensors, vol. 18, no. 12, 2018.
[12] X. Yin, L. Wang, H. Yue, and J. Xiang, "A new nonconvex regularized sparse reconstruction algorithm for compressed sensing magnetic resonance image recovery," Progress in Electromagnetics Research C, vol. 87, pp. 241–253, 2018.
[13] W. Yu, C. Wang, and M. Huang, "Edge-preserving reconstruction from sparse projections of limited-angle computed tomography using ℓ0-regularized gradient prior," Review of Scientific Instruments, vol. 88, no. 4, Article ID 043703, 2017.
[14] X. Feng, C. Wu, and C. Zeng, "On the local and global minimizers of ℓ0 gradient regularized model with box constraints for image restoration," Inverse Problems, vol. 34, no. 9, Article ID 095007, 2018.
[15] L. Zhang, L. Zeng, and Y. Guo, "ℓ0 regularization based on a prior image incorporated nonlocal means for limited-angle X-ray CT reconstruction," Journal of X-Ray Science and Technology, vol. 26, no. 3, pp. 481–498, 2018.
[16] C. Wang, L. Zeng, W. Yu, and L. Xu, "Existence and convergence analysis of ℓ0 and ℓ2 regularizations for limited-angle CT reconstruction," Inverse Problems and Imaging, vol. 12, no. 3, pp. 545–572, 2018.
[17] E. J. Candes, M. B. Wakin, and S. P. Boyd, "Enhancing sparsity by reweighted ℓ1 minimization," Journal of Fourier Analysis and Applications, vol. 14, no. 5-6, pp. 877–905, 2008.
[18] H. Zhang, W. Yin, and L. Cheng, "Necessary and sufficient conditions of solution uniqueness in ℓ1-norm minimization," Journal of Optimization Theory and Applications, vol. 164, no. 1, pp. 109–122, 2015.
[19] M. Tao and F. Yang, "Alternating direction algorithms for total variation deconvolution in image reconstruction," Optimization Online TR0918, Department of Mathematics, Nanjing University, 2009.
[20] J. Yang and Y. Zhang, "Alternating direction algorithms for ℓ1-problems in compressive sensing," SIAM Journal on Scientific Computing, vol. 33, no. 1, pp. 250–278, 2011.
[21] C. Li, W. Yin, H. Jiang, and Y. Zhang, "An efficient augmented Lagrangian method with applications to total variation minimization," Computational Optimization and Applications, vol. 56, no. 3, pp. 507–530, 2013.
[22] C. Li, W. Yin, and Y. Zhang, "User's guide for TVAL3: TV minimization by augmented Lagrangian and alternating direction algorithms," CAAM Report, 2010.
[23] H. Qi, Z. Chen, J. Guo, and L. Zhou, "Sparse-view computed tomography image reconstruction via a combination of L1 and SL0 regularization," Bio-Medical Materials and Engineering, vol. 26, pp. S1389–S1398, 2015.
[24] S. S. Chen, D. L. Donoho, and M. A. Saunders, "Atomic decomposition by basis pursuit," SIAM Journal on Scientific Computing, vol. 20, no. 1, pp. 33–61, 1998.
[25] TEAM RADS, http://teamrads.com/.
Copyright
Copyright © 2019 Jiehua Zhu and Xiezhang Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.