Abstract

The nonmonotone alternating direction algorithm (NADA) was recently proposed for effectively solving a class of equality-constrained nonsmooth optimization problems and was applied to total variation minimization in image reconstruction, but the reconstructed images suffer from artifacts. Although edges can be effectively retained by $\ell_0$-norm regularization, the resulting problem is NP-hard. The smoothed $\ell_0$-norm approximates the $\ell_0$-norm as a limit of smooth functions and provides a smooth measure of sparsity in applications. The smoothed $\ell_0$-norm regularization has been an attractive research topic in sparse image and signal recovery. In this paper, we present a combined smoothed $\ell_0$-norm and $\ell_1$-norm regularization algorithm using the NADA for image reconstruction in computed tomography. We resolve the computational challenge resulting from the smoothed $\ell_0$-norm minimization. The numerical experiments demonstrate that the proposed algorithm improves the quality of the reconstructed images at the same cost of CPU time, and reduces the computation time significantly while maintaining the same image quality, compared with the $\ell_1$-norm regularization in the absence of the smoothed $\ell_0$-norm.

1. Introduction

Statistical and iterative reconstruction algorithms in computed tomography (CT) are widely applied since they yield more accurate results than analytic approaches for low-dose and limited-view reconstruction. These algorithms involve solving a linear system

  $b = Ax + e$, (1)

where $A$ is an $M \times N$ projection matrix, $x$ represents a 2D image to be reconstructed, $e$ is an additive noise with $\|e\|_2 \le \epsilon$ for some known $\epsilon > 0$, and $b$ is the noisy projection data. For limited-view reconstruction, the underdetermined system ($M < N$) has infinitely many solutions. An optimal solution representing the original image as well as possible is sought by iterative methods.
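To make the setting concrete, the following Python sketch builds a toy instance of model (1); the Gaussian random matrix is only a stand-in for a real projection matrix, and all sizes and values are illustrative rather than taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    M, N = 40, 64                                  # underdetermined: M < N
    A = rng.standard_normal((M, N)) / np.sqrt(M)   # stand-in for a projection matrix
    x_true = np.zeros(N)
    x_true[rng.choice(N, size=5, replace=False)] = 1.0   # a signal with few nonzeros
    e = 0.02 * rng.standard_normal(M)              # additive noise e
    b = A @ x_true + e                             # noisy projection data b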

The theory of compressed sensing [1–3] has recently shown that signals and images that have sparse representations in some orthonormal bases can be reconstructed at high quality from much less data than the Nyquist sampling theory [4] requires. In many cases in tomography, images can be approximately modeled as essentially piecewise constant, so their gradients are sparse. With the gradient operator as a sparsifying transform, compressed sensing provides a novel framework for CT image reconstruction [1, 3, 5]. The $\ell_0$-norm of a vector, the number of its nonzero elements, is an appropriate measure of the sparsity of a vector. So $\ell_0$-norm regularization might be an approach for the reconstruction of an image with a sparse gradient. However, the $\ell_0$-norm is a nonconvex function, and the $\ell_0$-norm regularization problem is NP-hard [6]. A pseudoinverse transform of the discrete gradient and the iterative hard thresholding algorithm are used to address these issues of the $\ell_0$-norm [7]. The smoothed $\ell_0$-norm (SL0-norm) provides a smooth measure of sparsity and is applied in compressed sensing MRI imaging [8]. The SL0-norm is used to find the jointly sparse representation via the low-resolution image [9]. An SL0-norm regularization model is proposed for sparse-view X-ray CT reconstruction [10]. Two new smoothed functions approximating the $\ell_0$-norm are proposed in the mechanism of reweighted regularization [11, 12]. An edge-preserving image reconstruction method based on an $\ell_0$-regularized gradient prior for limited-angle CT is investigated [13]. The local and global minimizers of an $\ell_0$ gradient regularized model with box constraints for image restoration are discussed [14]. A new regularization with wavelet tight framelets to suppress the slope artifacts in limited-angle X-ray CT reconstruction is addressed [15]. An image model based on $\ell_0$ and $\ell_1$ regularizations for limited-angle CT is proposed, and the existence of a solution and a local convergence analysis under certain conditions are proved [16].

The $\ell_1$-norm regularization, known as total variation (TV) minimization [17], has been widely used in CT reconstruction. Under certain conditions, the $\ell_1$-norm minimization has a unique sparsest solution which is a good approximation to the original image [18]. However, the $\ell_1$-norm regularization provides a less sparse representation than the $\ell_0$-norm regularization and may lose some detailed features and contrast. In addition, the discrete gradient transformation does not satisfy the restricted isometry property required by the compressed sensing theory.

Researchers in this area have been seeking other efficient and stable algorithms inspired by the compressed sensing theory, for example, solving an $\ell_1$-norm minimization with an $\ell_2$-norm constraint using the alternating direction method (ADM) [19, 20] and the nonmonotone alternating direction algorithm (NADA) [21, 22]. Regularization including multiple norms has been developed to address the drawbacks of the $\ell_0$-norm and the $\ell_1$-norm. A combined $\ell_0$-norm and $\ell_1$-norm regularization minimization with an $\ell_2$-norm constraint, using the SART algorithm and the gradient descent method, is proposed for sparse-view CT image reconstruction in [23]. It is observed in that paper that the convergence is slow and the computation is time consuming because of the alternating minimization of the $\ell_0$-norm and the $\ell_1$-norm.

The purposes of this paper are threefold:

(i) Presenting a combined SL0-norm and $\ell_1$-norm regularization algorithm for image reconstruction in CT: The proposed algorithm is unique in the following aspects: a new parameter $\alpha$ is introduced to balance the SL0-norm and $\ell_1$-norm terms, and the two norms are combined into one Lagrangian objective function to be minimized

(ii) Adopting the newly developed nonmonotone alternating direction method, the NADA, to efficiently solve the minimization

(iii) Resolving the computational challenge caused by the SL0-norm minimization

The numerical experiments demonstrate that the proposed algorithm improves the quality of the recovered images at the same cost of CPU time and reduces the computation time significantly while maintaining the same image quality, compared with the $\ell_1$-norm regularization in the absence of the SL0-norm.

The rest of the paper is organized as follows. Section 2 includes the background and notations. In Section 3, a combined SL0-norm and $\ell_1$-norm regularization algorithm with the NADA is developed. Numerical experiments with the Shepp-Logan phantom and a cardiac image are presented in Section 4. Finally, Section 5 concludes the paper with discussions and conclusions.

2. Background and Notations

The TV minimization of an image $x$ for solving system (1) is mathematically represented as [17]

  $\min_x \|x\|_{TV}$ subject to $Ax = b$, (2)

where the total variation $\|x\|_{TV}$ is the $\ell_1$-norm of the magnitude of the discrete gradients:

  $\|x\|_{TV} = \sum_i \sqrt{(\nabla_h x)_i^2 + (\nabla_v x)_i^2}$, (3)

with $\nabla_h$ and $\nabla_v$ denoting the horizontal and vertical forward difference operators. Then a 2D image with sparse gradients and noisy measurements in CT can be reconstructed by solving the following $\ell_1$-norm minimization with an $\ell_2$-norm constraint:

  $\min_x \|x\|_{TV}$ subject to $\|Ax - b\|_2 \le \epsilon$, (4)

which can be implemented [24] by

  $\min_x \|x\|_{TV} + \frac{\mu}{2} \|Ax - b\|_2^2$. (5)

Here $\mu > 0$ is a regularization parameter which controls the trade-off between the sparsity of the gradients and the accuracy of the approximation.
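As a minimal illustration of definition (3), the following Python sketch evaluates the isotropic total variation of a 2D image using forward differences with a replicated last row and column (an illustration, not the authors' MATLAB implementation).

    import numpy as np

    def total_variation(x):
        # isotropic TV (3): sum over pixels of the gradient magnitude
        dh = np.diff(x, axis=1, append=x[:, -1:])   # horizontal forward differences
        dv = np.diff(x, axis=0, append=x[-1:, :])   # vertical forward differences
        return np.sum(np.sqrt(dh**2 + dv**2))

    img = np.zeros((8, 8))
    img[2:6, 2:6] = 1.0                             # a square: sparse gradients
    print(total_variation(img))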

For convenience, we denote by $D_i x$ the forward difference of $x$ at pixel $i$ in both horizontal and vertical directions; i.e.,

  $D_i x = \big((\nabla_h x)_i, (\nabla_v x)_i\big)^T$. (6)

Then $\|x\|_{TV} = \sum_i \|D_i x\|_2$, and minimization (5) is rewritten as

  $\min_{y, x} \sum_i \|y_i\|_2 + \frac{\mu}{2} \|Ax - b\|_2^2$ subject to $y_i = D_i x$ for all $i$. (7)

With the Lagrangian function [19, 20]

  $L_A(y, x) = \sum_i \left[\|y_i\|_2 - \nu_i^T (D_i x - y_i) + \frac{\beta}{2} \|D_i x - y_i\|_2^2\right] + \frac{\mu}{2} \|Ax - b\|_2^2$, (8)

minimization (7) is converted to

  $\min_{y, x} L_A(y, x)$, (9)

which can be solved by the ADM [19–22]. The ADM is a variant of the classic augmented Lagrangian method for optimization. It has been applied to solve different types of $\ell_1$-minimization problems for sparse solution recovery. The $k$th step of the ADM for solving (9) involves the procedures

  $y^{k+1} = \arg\min_y L_A(y, x^k)$, (10)
  $x^{k+1} = \arg\min_x L_A(y^{k+1}, x)$, (11)

followed by an update of the Lagrange vectors $\nu_i$.

Minimization (10) and minimization (11) can be successfully solved using the framework of the NADA with a nonmonotone line search scheme recently developed in [21, 22]. The idea, for minimizing an objective function $f$, is outlined as follows.

(i) Choose a direction $d^k$.

(ii) Select a step size $\tau_k$, uniformly bounded above, such that $f(x^k + \tau_k d^k) \le C_k - \delta \tau_k^2 \|d^k\|_2^2$, for a constant $\delta > 0$.

(iii) Set $x^{k+1} = x^k + \tau_k d^k$.

(iv) Compute $C_{k+1}$ such that

  $C_{k+1} = \dfrac{\eta Q_k C_k + f(x^{k+1})}{Q_{k+1}}$, where $Q_{k+1} = \eta Q_k + 1$, $\eta \in [0, 1)$, $C_0 = f(x^0)$, and $Q_0 = 1$. (12)

Combined with the solution of (11), the sequence $\{f(x^k)\}$ is bounded above by the monotonically nonincreasing sequence $\{C_k\}$, though $\{f(x^k)\}$ itself is not decreasing. The advantage of the NADA lies in the fact that $f$ is not required to be differentiable, while the computational complexity is reduced and the efficiency is improved. The convergence analysis of the NADA for solving a general model including minimization (11) can be found in [21].
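To make steps (ii)–(iv) concrete, the following Python sketch implements a nonmonotone backtracking search with the reference sequence $C_k$; the specific decrease test and the Zhang–Hager-type update of $C_k$ and $Q_k$ follow the reconstructed forms above and are assumptions about the exact rule in [21, 22], not the authors' code.

    import numpy as np

    def nonmonotone_step(f, x, d, C, Q, eta=0.85, delta=1e-4,
                         tau0=1.0, shrink=0.5, max_backtracks=30):
        # Backtrack until f(x + tau*d) <= C - delta*tau^2*||d||^2 (step (ii)),
        # set the new iterate (step (iii)), then update the reference value C
        # as a convex combination of the old C and the new value (step (iv)).
        tau = tau0
        nd2 = float(np.dot(d, d))
        x_new = x + tau * d
        for _ in range(max_backtracks):
            x_new = x + tau * d
            if f(x_new) <= C - delta * tau**2 * nd2:
                break
            tau *= shrink
        Q_new = eta * Q + 1.0
        C_new = (eta * Q * C + f(x_new)) / Q_new
        return x_new, C_new, Q_new

A simple usage: starting from $x^0$ with $C_0 = f(x^0)$ and $Q_0 = 1$, call the function once per iteration with a chosen descent direction $d$.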

The SL0-norm approximates the $\ell_0$-norm by a smooth function [8]. The SL0-norm of a vector $x \in \mathbb{R}^n$ with a smoothing parameter $\sigma > 0$, denoted by $\|x\|_{SL0}$, is defined as

  $\|x\|_{SL0} = \sum_{i=1}^{n} \left(1 - e^{-x_i^2/(2\sigma^2)}\right)$. (13)

It is easy to see that $\lim_{\sigma \to 0^+} \|x\|_{SL0} = \|x\|_0$.
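A small numerical sketch of how the smooth measure in (13) approaches the $\ell_0$-norm as $\sigma$ decreases; the Gaussian-kernel form is the one reconstructed in (13) from the SL0 literature [8].

    import numpy as np

    def sl0_norm(x, sigma):
        # smoothed L0 measure of (13): each nonzero entry contributes
        # approximately 1 once |x_i| is large relative to sigma
        return np.sum(1.0 - np.exp(-x**2 / (2.0 * sigma**2)))

    x = np.array([0.0, 0.5, 0.0, -2.0, 0.0])       # ||x||_0 = 2
    for sigma in (1.0, 0.1, 0.01):
        print(sigma, sl0_norm(x, sigma))           # tends to 2 as sigma -> 0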

A new approach, solving a more general minimization than the combined regularization minimization in [23], is proposed in the next section.

3. SL0-Norm and $\ell_1$-Norm Regularization Algorithm

In this section, we present the SL0-norm and $\ell_1$-norm regularization algorithm for CT reconstruction using the NADA. Consider the following constrained minimization problem for image reconstruction in CT:

  $\min_x\ \alpha \|Dx\|_0 + \|x\|_{TV}$ subject to $\|Ax - b\|_2 \le \epsilon$, (14)

where the parameter $\alpha > 0$ is used to balance the two terms of the minimization. While the $\ell_1$-norm is used for the approximation accuracy, the $\ell_0$-norm is introduced in consideration of the sparsity of the gradients. It is remarked that minimization (14) is an extension of minimization (4) ($\alpha = 0$) and the minimization ($\alpha = 1$) in [23].

3.1. New Approach

The term $\|Dx\|_0$ in (14) can be approximated by a smooth function

  $f_\sigma(Dx) = \sum_i \left(1 - e^{-\|D_i x\|_2^2/(2\sigma^2)}\right)$ (15)

for a small positive parameter $\sigma$. So a solution of minimization (14) can be approximately obtained by solving

  $\min_x\ \alpha f_\sigma(Dx) + \|x\|_{TV}$ subject to $\|Ax - b\|_2 \le \epsilon$. (16)

To deal with the constraint in (16), we rewrite (16) with a positive parameter $\mu$ as

  $\min_x\ \alpha f_\sigma(Dx) + \|x\|_{TV} + \frac{\mu}{2} \|Ax - b\|_2^2$, (17)

or

  $\min_{y, z, x}\ \alpha \sum_i \left(1 - e^{-\|z_i\|_2^2/(2\sigma^2)}\right) + \sum_i \|y_i\|_2 + \frac{\mu}{2} \|Ax - b\|_2^2$ subject to $y_i = D_i x$, $z_i = D_i x$ for all $i$. (18)

Using a positive parameter $\beta$ and Lagrange vectors $\nu_i$, $\omega_i$, we define a Lagrangian function

  $\tilde{L}_A(y, z, x) = \sum_i \left[\|y_i\|_2 - \nu_i^T (D_i x - y_i) + \frac{\beta}{2} \|D_i x - y_i\|_2^2\right] + \sum_i \left[\alpha \left(1 - e^{-\|z_i\|_2^2/(2\sigma^2)}\right) - \omega_i^T (D_i x - z_i) + \frac{\beta}{2} \|D_i x - z_i\|_2^2\right] + \frac{\mu}{2} \|Ax - b\|_2^2$. (19)

Finally, minimization (18) is converted to the following minimization:

  $\min_{y, z, x}\ \tilde{L}_A(y, z, x)$. (20)

It is obvious that $\tilde{L}_A$ is an extension of $L_A$ in (8). Consequently, the NADA is adopted to solve minimization (20). The $k$th iteration for solving (20) includes the following steps:

  $y^{k+1} = \arg\min_y \tilde{L}_A(y, z^k, x^k)$, (21)
  $z^{k+1} = \arg\min_z \tilde{L}_A(y^{k+1}, z, x^k)$, (22)
  $x^{k+1} = \arg\min_x \tilde{L}_A(y^{k+1}, z^{k+1}, x)$, (23)

followed by updates of the Lagrange vectors $\nu_i$ and $\omega_i$.
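Under the forms assumed in (15) and (17), the unconstrained surrogate can be evaluated directly; the following Python sketch does so for a 2D image, with replicated-boundary forward differences (an illustration, not the authors' implementation).

    import numpy as np

    def combined_objective(x, A, b, alpha, mu, sigma):
        # TV term + alpha * smoothed-L0 term on the gradient magnitudes
        # + quadratic data-fidelity term, i.e., the surrogate (17)
        dh = np.diff(x, axis=1, append=x[:, -1:])   # horizontal forward differences
        dv = np.diff(x, axis=0, append=x[-1:, :])   # vertical forward differences
        mag2 = dh**2 + dv**2                        # squared gradient magnitudes
        tv = np.sum(np.sqrt(mag2))
        sl0 = np.sum(1.0 - np.exp(-mag2 / (2.0 * sigma**2)))
        fit = 0.5 * mu * np.sum((A @ x.ravel() - b)**2)
        return tv + alpha * sl0 + fit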

It is noted that the objective function $\tilde{L}_A$ in (19) has an extra term, depending only on the vector $z$, compared with $L_A$ in (8). So the methods for the calculation of $y^{k+1}$ in (10) and of $x^{k+1}$ in (12) can be applied to the calculation of $y^{k+1}$ in (21) and of $x^{k+1}$ in (23). For example, in (23), $x^{k+1}$ is obtained by a descent step with the nonmonotone line search of the NADA. However, finding $z^{k+1}$ in (22) is a challenging problem.
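For the $y$-subproblem (21), the closed-form solution referred to above is the standard isotropic shrinkage of ADM/TVAL3-type solvers; the following Python sketch states the rule it implements in the comment (consistent with (8), but an assumption about the authors' exact formula).

    import numpy as np

    def shrink2d(a, thresh):
        # Row-wise isotropic shrinkage: for each row a_i, returns the minimizer of
        #   ||y||_2 + (1/(2*thresh)) * ||y - a_i||_2^2,
        # namely y_i = max(||a_i|| - thresh, 0) * a_i / ||a_i||.
        norms = np.linalg.norm(a, axis=1, keepdims=True)
        scale = np.maximum(norms - thresh, 0.0) / np.maximum(norms, 1e-16)
        return scale * a

Here the rows of a would hold the vectors $D_i x - \nu_i/\beta$, and thresh would be $1/\beta$.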

3.2. Calculation of $z^{k+1}$

In this subsection, we address how to calculate $z^{k+1}$ in (22) for $k \ge 0$. Actually, the minimization in (22) decouples over the pixels; for each $i$, we need to solve for $z_i = z_i^{k+1}$ from the following equation:

  $\dfrac{\alpha}{\sigma^2} e^{-\|z_i\|_2^2/(2\sigma^2)} z_i + \beta (z_i - a_i) = 0$, where $a_i = D_i x^k - \dfrac{\omega_i^k}{\beta}$. (24)

It follows from (24) that

  $z_i = \dfrac{\beta a_i}{\beta + \frac{\alpha}{\sigma^2} e^{-\|z_i\|_2^2/(2\sigma^2)}}$, (25)

so $z_i$ is a nonnegative multiple of $a_i$. If $r = \|z_i\|_2$ is calculated, then

  $z_i = r \dfrac{a_i}{\|a_i\|_2}$. (26)

Thus, it suffices to determine $r$ from the following equation:

  $\dfrac{\alpha}{\beta \sigma^2} r e^{-r^2/(2\sigma^2)} + r = \|a_i\|_2$. (27)

Denote $c = \|a_i\|_2$. If $c = 0$, we set $z_i^{k+1} = 0$. If $c > 0$, it follows from (27) that $r$ is a positive real solution of the equation

  $p(r) = \dfrac{\alpha}{\beta \sigma^2} r e^{-r^2/(2\sigma^2)} + r - c = 0$. (28)

It is obvious that there is a positive zero of $p$ between $0$ and $c$, and $p(r) \to +\infty$ as $r \to +\infty$.

Equation (28) can be rewritten as

  $r = \dfrac{c}{1 + \kappa e^{-r^2/(2\sigma^2)}}$, where $\kappa = \dfrac{\alpha}{\beta \sigma^2}$. (29)

By substituting $t = e^{-r^2/(2\sigma^2)}$, i.e., $r^2 = -2\sigma^2 \ln t$, the above equation becomes

  $q(t) = -2\sigma^2 \ln t - \dfrac{c^2}{(1 + \kappa t)^2} = 0$. (30)

Thus, we have shown the following lemma.

Lemma 1. Let $p$ and $q$ be given in (28) and (30), respectively. Then $r$ is a zero of $p$ on $(0, c)$ if and only if $t = e^{-r^2/(2\sigma^2)}$ is a zero of $q$ on $(0, 1)$.
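Under the reconstructed forms of $p$ in (28) and $q$ in (30), Lemma 1 can be checked numerically; in this Python sketch the parameters satisfy $c^2 \le \frac{27}{4}\sigma^2$ so that the root is unique (cf. Property 2), and all values are illustrative.

    import numpy as np
    from scipy.optimize import brentq

    alpha, beta, sigma, c = 1.0, 1.0, 1.0, 1.0
    kappa = alpha / (beta * sigma**2)

    p = lambda r: kappa * r * np.exp(-r**2 / (2 * sigma**2)) + r - c
    q = lambda t: -2 * sigma**2 * np.log(t) - c**2 / (1 + kappa * t)**2

    r_star = brentq(p, 1e-12, c)              # zero of p on (0, c)
    t_star = brentq(q, 1e-12, 1 - 1e-12)      # zero of q on (0, 1)
    print(np.exp(-r_star**2 / (2 * sigma**2)), t_star)   # the two values agree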

Before we develop a procedure to calculate a positive root of $q$, we list some properties of $q$ below.

It is known that

  $q'(t) = -\dfrac{2\sigma^2}{t} + \dfrac{2 c^2 \kappa}{(1 + \kappa t)^3}$. (31)

With $t > 0$, the sign of $q'$ on $(0, \infty)$ is the same as that of the function

  $g(t) = \dfrac{t}{(1 + \kappa t)^3} - \dfrac{\sigma^2}{c^2 \kappa}$ (32)

on $(0, \infty)$. By calculus, it is easy to see that $g$ is increasing on $(0, \frac{1}{2\kappa})$ and decreasing on $(\frac{1}{2\kappa}, \infty)$ since $g'(t) = (1 - 2\kappa t)/(1 + \kappa t)^4$. Thus, $g$ achieves a local maximum at $t = \frac{1}{2\kappa}$ with $g(\frac{1}{2\kappa}) = \frac{4}{27\kappa} - \frac{\sigma^2}{c^2 \kappa}$. Therefore, if $g(\frac{1}{2\kappa}) > 0$ (or $c^2 > \frac{27}{4}\sigma^2$), then the equation $g(t) = 0$ has two positive roots $t_1 < t_2$ with $t_1 < \frac{1}{2\kappa} < t_2$. It follows that $q' > 0$ on $(t_1, t_2)$ and $q' < 0$ on $(0, t_1) \cup (t_2, \infty)$. On the other hand, if $c^2 \le \frac{27}{4}\sigma^2$, then $q' \le 0$ on $(0, \infty)$. We have the following monotonicity of $q$.

Property 2. If $c^2 > \frac{27}{4}\sigma^2$, then $q$ is increasing on $(t_1, t_2)$ and decreasing on $(0, t_1)$ and $(t_2, \infty)$, where $t_1$ and $t_2$ are the two positive zeros of $q'$. If $c^2 \le \frac{27}{4}\sigma^2$, then $q$ is always decreasing on $(0, \infty)$.

The second derivative $q''$ indicates the following features of $q$.

Property 3. $q'' > 0$ on $(0, s_1) \cup (s_2, \infty)$ and $q'' < 0$ on $(s_1, s_2)$, where $s_1 \le s_2$ are the zeros of $q''(t) = \frac{2\sigma^2}{t^2} - \frac{6 c^2 \kappa^2}{(1 + \kappa t)^4}$. The graph of $q$ has an inflection point at $(s_1, q(s_1))$ and at $(s_2, q(s_2))$. The function $q$ achieves a local minimum value $q(t_1)$ at $t_1$ and a local maximum value $q(t_2)$ at $t_2$. In addition, we have $\lim_{t \to 0^+} q(t) = +\infty$ and $q(1) = -c^2/(1 + \kappa)^2 < 0$.

Finally, we can determine the locations of the zeros of $q$ by convergent Newton iterations.

Theorem 4. Let $q$ and $g$ be, respectively, given in (30) and (32), where $c > 0$. Let $t_2$ be the larger positive root of $q'$ on $(0, \infty)$ for $c^2 > \frac{27}{4}\sigma^2$. Then a positive root $t^*$ of $q$ on $(0, 1)$ can be calculated by Newton's method. The convergence of Newton's method is guaranteed if an initial guess $t_0$ is selected as follows.
Case 1. (i) $t^*$ lies on an interval on which $q' < 0$ and $q'' < 0$, or (ii) the inflection point of the graph of $q$ lies in the upper half plane.
Choose $t_0$ near $t^*$ with $q(t_0) < 0$.
Case 2. (i) $t^*$ lies on an interval on which $q' < 0$ and $q'' > 0$, or (ii) the inflection point of the graph of $q$ lies in the lower half plane.
Choose $t_0 = e^{-c^2/(2\sigma^2)}$.

Proof. For each subcase, we identify an interval containing a positive root of $q$ on which $q'$ and $q''$ do not change signs. Then the convergence of Newton's method with an initial guess selected as above is guaranteed.
Case 1. For subcase (i), there is a positive solution of $q = 0$ on an interval on which both $q'$ and $q''$ are negative. For subcase (ii), the inflection point of the graph of $q$ is in the upper half plane, so a positive zero of $q$ lies on an interval on which both $q'$ and $q''$ are negative. So one should choose $t_0$ near $t^*$ with $q(t_0) < 0$ for the convergence of Newton's method.
Case 2. For subcase (i), there is a positive solution of $q = 0$ on an interval on which $q' < 0$ and $q'' > 0$. For subcase (ii), the inflection point is in the lower half plane, so a positive zero of $q$ lies on an interval on which $q' < 0$ and $q'' > 0$. One could choose $t_0 = e^{-c^2/(2\sigma^2)}$ for the convergence of the Newton iteration, since $e^{-c^2/(2\sigma^2)}$ is a lower bound of the values of $t$ corresponding to the positive solutions of (28).

Thus, $t^*$, and hence $r = \sqrt{-2\sigma^2 \ln t^*}$, can be calculated from Lemma 1 and Theorem 4. Then $z_i^{k+1}$ in (26) can be obtained.
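Putting Lemma 1 and Theorem 4 together, the per-pixel magnitude can be computed as in the following Python sketch; the forms of $q$ and $q'$ and the Case 2 initial guess follow the reconstruction above, and the interval clamping is a practical safeguard rather than part of the stated theory.

    import numpy as np

    def pixel_magnitude(c, alpha, beta, sigma, tol=1e-12, max_iter=100):
        # Solve q(t) = -2*sigma^2*ln(t) - c^2/(1 + kappa*t)^2 = 0 on (0, 1)
        # by Newton's method, then recover r = sqrt(-2*sigma^2*ln(t)).
        if c == 0.0:
            return 0.0
        kappa = alpha / (beta * sigma**2)
        q = lambda t: -2*sigma**2*np.log(t) - c**2 / (1 + kappa*t)**2
        dq = lambda t: -2*sigma**2/t + 2*c**2*kappa / (1 + kappa*t)**3
        t = max(np.exp(-c**2 / (2*sigma**2)), 1e-300)  # Case 2 initial guess
        for _ in range(max_iter):
            slope = dq(t)
            if slope == 0.0:
                break
            t_new = min(max(t - q(t) / slope, 1e-300), 1.0 - 1e-16)  # stay in (0, 1)
            if abs(t_new - t) < tol:
                t = t_new
                break
            t = t_new
        return np.sqrt(-2*sigma**2*np.log(t))

The full vector $z_i^{k+1}$ then follows from (26) by scaling $a_i/\|a_i\|_2$ with the returned magnitude.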

3.3. Algorithm

Based on the above discussion, the calculation process of the proposed combined SL0-norm and $\ell_1$-norm regularization algorithm is summarized in Algorithm 1.

input $A$, $b$, and parameters $\alpha$, $\beta$, $\mu$, $\delta$, $\eta$, $\sigma^0$, tol
initialize $x^0$, $y^0$, $z^0$, the Lagrange vectors $\nu^0$, $\omega^0$, $C_0 = f(x^0)$, $Q_0 = 1$, $k = 0$
while not converged
1. update $x^{k+1}$ by the NADA
  1.1. descent direction $d^k$
  1.2. step size $\tau_k$ such that $f(x^k + \tau_k d^k) \le C_k - \delta \tau_k^2 \|d^k\|_2^2$
  1.3. update $x^{k+1} = x^k + \tau_k d^k$
  1.4. compute $C_{k+1}$ such that (12) holds
2. update $z^{k+1}$
  for $i = 1$ to $n$
  2.1. set $a_i = D_i x^{k+1} - \omega_i^k/\beta$ and $c = \|a_i\|_2$
  2.2. if $c = 0$ then
       $z_i^{k+1} = 0$
    else
     select an initial guess $t_0$ from Theorem 4
     find a zero $t^*$ of $q$ in (30) by Newton's iterations
       $z_i^{k+1} = \sqrt{-2\sigma^2 \ln t^*}\, a_i/c$
    end if
  end for
3. update $y^{k+1}$
4. if $\|x^{k+1} - x^k\|_2 / \|x^k\|_2 <$ tol then exit
5. update the Lagrange vectors $\nu^{k+1}$, $\omega^{k+1}$, decrease $\sigma$, and set $k = k + 1$
end while
output $x^{k+1}$ and the relative error

4. Numerical Experiments and Results

In this section, the performance of the proposed combined SL0-norm and $\ell_1$-norm regularization is compared with that of the regularization without the SL0-norm. Both regularization algorithms are implemented using the NADA. The MATLAB code is developed based on the software package TVAL3 [22], and the numerical experiments are conducted on an Intel Core i7 3.40 GHz PC. The 2D Shepp-Logan phantom and a cardiac image [25] are tested. In each test, a random $M \times N$ matrix $A$ ($M < N$) is used to create the same system for the two regularization algorithms, where the noise is generated as $e = 0.02\,\mathrm{mean}(|b|)\,\mathrm{randn}(M, 1)$. The parameters $\alpha$, $\beta$, and $\mu$ are fixed separately for the Shepp-Logan phantom and for the cardiac image. The values of the parameter $\sigma$ in the SL0-norm are decreased at a fixed ratio from a starting value during the iterations.
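For concreteness, the following Python snippet sketches the noise generation and the decreasing schedule of the SL0 parameter $\sigma$ described above; the starting value 0.5 and ratio 0.9 are illustrative placeholders within the ranges discussed in Section 5, not the exact experimental values.

    import numpy as np

    rng = np.random.default_rng(1)
    b_clean = rng.standard_normal(200)      # stand-in for noise-free projections A @ x
    e = 0.02 * np.mean(np.abs(b_clean)) * rng.standard_normal(b_clean.size)
    b = b_clean + e                         # noisy data at a 2% noise level

    sigma0, ratio = 0.5, 0.9                # illustrative starting value and ratio
    sigmas = sigma0 * ratio ** np.arange(30)   # decreasing SL0 parameter schedule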

Experiments are conducted to compare the reconstructions by the two algorithms after the same number of iterations. The original and reconstructed Shepp-Logan phantom images and the original and reconstructed cardiac images after the same numbers of iterations are shown in Figure 1. The reconstructed images show that the proposed combined SL0-norm and $\ell_1$-norm regularization produces better images while taking about the same CPU time.

The quality of the images is evaluated using the relative error of the reconstructed image in the Frobenius norm. The root-mean-square error (RMSE), the normalized root-mean-square deviation (NRMSD), the normalized mean absolute deviation (NMAD), and the structural similarity index (SSIM) are also measured to reflect different aspects of the quality of the recovered images. RMSE evaluates the reconstruction quality on a pixel-by-pixel basis. NRMSD emphasizes large errors in a few pixels of the recovered image. NMAD focuses on small errors in the recovered image. SSIM compares the quality of the images using the original phantom image as a reference; a greater SSIM value indicates better image quality. The experimental results from 100 tests are summarized in Table 1. The data in Table 1 indicate that the proposed combined regularization provides better accuracy after the same number of iterations.
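The following Python sketch computes the pixel-wise error measures named above; the exact normalizations of NRMSD and NMAD vary in the literature, so the ones here are common choices rather than the authors' definitions, and SSIM is available separately as skimage.metrics.structural_similarity.

    import numpy as np

    def recon_metrics(x_rec, x_ref):
        # relative error (Frobenius), RMSE, NRMSD, and NMAD of a reconstruction
        diff = x_rec - x_ref
        rel_err = np.linalg.norm(diff) / np.linalg.norm(x_ref)
        rmse = np.sqrt(np.mean(diff**2))
        nrmsd = np.linalg.norm(diff) / np.linalg.norm(x_ref - x_ref.mean())
        nmad = np.sum(np.abs(diff)) / np.sum(np.abs(x_ref))
        return rel_err, rmse, nrmsd, nmad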

Experiments are also conducted to compare the iteration numbers and CPU time of the two algorithms when the same relative error is achieved and thus the image quality is the same. The tolerance of the relative error to terminate the iteration is selected as 0.05. The average iteration numbers and CPU times from 100 tests of the combined SL0-norm and $\ell_1$-norm regularization and the $\ell_1$-norm-only regularization are listed in Table 2. The comparison indicates that the CPU time is significantly reduced by the proposed algorithm while producing the same image quality. The graph of relative errors vs. the number of iterations for the two algorithms, shown in Figure 2, indicates that the combined regularization requires fewer iterations to achieve the same accuracy.

Both types of numerical experiments demonstrate that the proposed combined SL0-norm and $\ell_1$-norm regularization is superior to the regular $\ell_1$-norm regularization minimization.

5. Discussions and Conclusions

The application of the SL0-norm in CT reconstruction has recently become an active research topic. A combined SL0-norm and $\ell_1$-norm regularization algorithm is proposed in this paper. The $\ell_0$-norm of a vector is the limit of the smooth function in (13) as $\sigma \to 0^+$. In the iterations of the proposed algorithm, the parameter $\sigma$ gets smaller and closer to zero. The smooth function in the objective function approaches the $\ell_0$-norm, and the sparsity of the recovered image is fulfilled quickly. So the proposed algorithm improves the quality of the reconstructed images and the efficiency in terms of the iteration number and CPU time, even with the extra computation from the added SL0-norm term. The numerical results demonstrate that the proposed algorithm improves on the $\ell_1$-norm regularization with the NADA by reducing the computation time significantly. The effects of the weight parameter $\alpha$ of the SL0-norm term and the SL0-norm parameter $\sigma$ are also tested. From our experiments, values of $\alpha$ from 0.5 to 2 and starting values of $\sigma$ from 0.05 to 0.9 hardly affect the efficiency of the algorithm. A decreasing ratio for $\sigma$ between 0.8 and 0.95 is a good choice, but a ratio smaller than 0.7 is not recommended.

There are some aspects to be further investigated. It is a challenging problem to select good values of the other parameters, such as $\beta$ and $\mu$, in the Lagrangian function $\tilde{L}_A$. The impact of these parameters on the performance of the algorithm will be further investigated.

Data Availability

The MATLAB numerical data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.