Abstract

The alternating direction method is widely applied in total variation image restoration. However, the search directions of the method are not accurate enough. In this paper, a method based on subspace optimization is proposed to improve its optimization performance. This method corrects the search directions of the primal alternating direction method by using the energy function and a linear combination of the previous search directions. In addition, the convergence of the primal alternating direction method is proven under some weaker conditions; the convergence of the corrected method then follows easily, since it shares the convergence behavior of the primal alternating direction method. Finally, numerical examples are given to show the performance of the proposed method.

1. Introduction

Digital image restoration has a wide application in various areas including navigation, aerospace, and biomedicine (see [1–3] and the references therein). In general, the relationship between the original image $f$ and the observed image $g$ is
$$g = Hf + \eta, \qquad (1)$$
where $\eta$ is the additive noise and the spatially invariant matrix $H \in \mathbb{R}^{mn \times mn}$ represents the degradation system caused by problems such as motion blur, radiation distortion, and wavelet distortion in seismic imaging, with $m$ and $n$ being the number of rows and columns, respectively, when images are expressed as a matrix. In particular, the matrix $H$ is a block Toeplitz-plus-Hankel matrix (with Toeplitz-plus-Hankel blocks) or a block Toeplitz matrix (with Toeplitz blocks) when the Neumann boundary condition or the zero boundary condition is used, respectively [4].
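As a minimal illustration of model (1), the following Python/NumPy sketch blurs an image with a small point spread function under reflective (Neumann-type) boundary handling and adds noise. The PSF size, the noise level, and the function names are illustrative assumptions, not the paper's code.

```python
import numpy as np
from scipy.ndimage import convolve

def degrade(f, psf, noise_std=0.01, rng=None):
    """Simulate g = H f + noise for a spatially invariant blur H.

    mode='reflect' mimics the Neumann (symmetric) boundary condition
    discussed in the text.
    """
    rng = np.random.default_rng() if rng is None else rng
    blurred = convolve(f, psf, mode='reflect')          # H f
    noise = noise_std * rng.standard_normal(f.shape)    # additive noise
    return blurred + noise

# toy example: 5 x 5 averaging PSF applied to a random "image"
f = np.random.rand(64, 64)
psf = np.ones((5, 5)) / 25.0
g = degrade(f, psf, noise_std=0.02)
```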

The objective of image restoration is to estimate the original image $f$ according to some a priori knowledge about the degradation system $H$, the additive noise $\eta$, and the observed image $g$. However, simply inverting model (1) to obtain an ideal estimate tends to be very ill-conditioned. Thus one effective approach is to combine some a priori information about the original image and define a regularized solution; that is, the estimate $\hat{f}$ is a minimizer of the following cost (energy) function:
$$\Phi(f) = \Phi_{\mathrm{fid}}(Hf, g) + \alpha\, \Phi_{\mathrm{reg}}(f), \qquad (2)$$
where $\Phi_{\mathrm{fid}}(Hf, g)$ measures the difference between $Hf$ and $g$. The regularization term $\Phi_{\mathrm{reg}}(f)$ embodies the a priori information, and the regularization parameter $\alpha > 0$ controls the tradeoff between the two terms.

In general, $\Phi_{\mathrm{fid}}(Hf, g) = \|Hf - g\|_2^2$ (the square of the $\ell_2$-norm of the residual vector $Hf - g$) when the additive noise follows a Gaussian distribution, and $\Phi_{\mathrm{fid}}(Hf, g) = \|Hf - g\|_1$ (the $\ell_1$-norm of the residual vector) when the additive noise follows non-Gaussian distributions such as uniform and speckle noise.

The regularization term is $\Phi_{\mathrm{reg}}(f) = \sum_{i} \varphi(D_i f)$, where $\varphi$ is the potential function and $D_i$ is the difference operator, which can be seen as a matrix used to create the difference vector between the $i$th pixel and its neighboring pixels. Here $D_i f = ((D^h f)_i, (D^v f)_i)^{T}$, with $D^h$ and $D^v$ the one-sided difference matrices in the horizontal and vertical directions, respectively.
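To make the difference operator concrete, the sketch below computes first-order horizontal and vertical differences of an image; stacking them per pixel gives the difference vector referred to above. The one-sided scheme and the zero difference at the last row/column are assumed conventions.

```python
import numpy as np

def forward_differences(f):
    """One-sided horizontal and vertical differences of image f.

    The pair (dh, dv) at pixel (i, j) plays the role of the difference
    vector D_i f; the last row/column difference is set to zero (a
    common convention, assumed here).
    """
    dh = np.zeros_like(f)
    dv = np.zeros_like(f)
    dh[:, :-1] = f[:, 1:] - f[:, :-1]   # horizontal neighbor difference
    dv[:-1, :] = f[1:, :] - f[:-1, :]   # vertical neighbor difference
    return dh, dv

def tv_isotropic(f):
    """Isotropic discrete TV: sum over pixels of ||D_i f||_2."""
    dh, dv = forward_differences(f)
    return np.sum(np.sqrt(dh**2 + dv**2))
```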

In image restoration, the potential function $\varphi$ plays a key role, so it has been intensively studied in recent decades [5–20]. Two classes of regularization terms are well known. One is the Tikhonov class [5–7], for example, $\varphi(D_i f) = \|D_i f\|_2^2$. In these cases, the minimum point of (2) can be easily computed thanks to differentiability. However, these methods tend to make restored images overly smooth and cannot preserve sharp boundaries.

The other class is based on total variation (TV) regularization [8]:
$$\Phi_{\mathrm{reg}}(f) = \mathrm{TV}(f) = \sum_{i} \|D_i f\|_2.$$
In comparison to the first class, an obvious benefit is that sharp boundaries can be recovered well. Thus many effective methods have been proposed to solve the TV deblurring problem (see [8–23]). However, they suffer from some numerical difficulties, since the TV regularization is nondifferentiable. In order to overcome this drawback, in [16] the TV regularization is modified as $\sum_{i} \sqrt{\|D_i f\|_2^2 + \varepsilon}$, where $\varepsilon > 0$. Obviously, this regularization is differentiable and many classical optimization methods can be applied [24, 25]. Unfortunately, experimental results show that $\varepsilon$ must be small enough to keep the restoration quality, while the efficiency of the algorithms is reduced as $\varepsilon$ becomes smaller [16, 20].

By using another modified TV regularization, a primal-dual active set method is proposed in [17], but that regularization is not rotationally invariant [8].

Recently, the alternating direction method [26] has been used to solve the $\ell_1$-TV and $\ell_2$-TV image restoration models, and better performance has been obtained [18–23, 27]. It splits the original problem into several simpler subproblems and solves them alternately. The downside is that the search directions are not accurate enough, which can reduce the performance of the algorithms (see the analysis in Section 3).

Based on the considerations above, we take the algorithms in [18, 20] as examples and present a method to improve the performance of the alternating direction method used in the $\ell_1$-TV and $\ell_2$-TV models. By means of subspace optimization [28], it corrects the search direction of the primal alternating direction method by using the energy function and a linear combination of the previous search directions. In addition, the convergence of the primal alternating direction method is proven under some weaker conditions. Thus the convergence of the corrected method can easily be obtained from the equivalence between them. Numerical examples are given to show the performance of the proposed method.

The outline of this paper is as follows. Some preliminaries are stated in Section 2. An $\ell_1$-TV image restoration algorithm based on subspace optimization is proposed in Section 3. The convergence analysis and numerical examples are given in Sections 4 and 5, respectively. Finally, some concluding remarks are given in Section 6.

2. Preliminaries

In this section, we first present some basic definitions and properties; then the alternating direction method for solving the $\ell_1$-TV model in [20] is introduced. The alternating direction method in [18] (the $\ell_2$-TV model) is omitted here since it is simpler, but the comparison between that method and its corrected version is still reported in Section 5.2. In the following, let $D^h$ and $D^v$ be the one-sided difference matrices in the horizontal and vertical directions, respectively, let $\mathcal{N}(\cdot)$ denote the null space of a matrix, let the superscript $(k)$ denote the $k$th iteration, and let the subscript $i$ denote the $i$th element of a vector.

Definition 1 (see [20]). An operator $T$ is called nonexpansive if for any $x$, $y$ we have $\|Tx - Ty\| \le \|x - y\|$. In particular, $T$ is called $\theta$-averaged nonexpansive if there exist some nonexpansive operator $N$ and $\theta \in (0, 1)$ such that $T = (1 - \theta) I + \theta N$, where $I$ is the identity operator.

Lemma 2 (see [20]). If the penalty parameter is positive, then the minimizer of the corresponding shrinkage subproblem is given by an explicit shrinkage (soft-thresholding) formula. We know from [20] that this shrinkage operator is nonexpansive.
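In papers using this kind of splitting, such a lemma is typically the shrinkage (soft-thresholding) lemma. A standard statement, written in our own notation and given here only as an assumption about its content, is: for $\beta > 0$ and any $x \in \mathbb{R}^{d}$,
$$\arg\min_{y \in \mathbb{R}^{d}}\ \|y\|_2 + \frac{\beta}{2}\|y - x\|_2^2 \;=\; \max\Big\{\|x\|_2 - \frac{1}{\beta},\, 0\Big\}\,\frac{x}{\|x\|_2},$$
with the convention $0\cdot(0/0) = 0$; for $d = 1$ this reduces to the scalar soft-thresholding $\operatorname{sign}(x)\max\{|x| - 1/\beta,\, 0\}$.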

Lemma 3 (see [29]). Let $\varphi$ be convex and lower semicontinuous, and define the operator $T$ by $Tx = \arg\min_{y} \{\varphi(y) + \tfrac{1}{2}\|y - x\|_2^2\}$ for each $x$; then $T$ is $\tfrac{1}{2}$-averaged nonexpansive.

Definition 4 (see [25]). A function $h \colon \mathbb{R}^{n} \to (-\infty, +\infty]$ (i) is said to be proper over a set $X$ if $h(x) < +\infty$ for at least one $x \in X$ and $h(x) > -\infty$ for all $x \in X$, and (ii) is said to be coercive over a set $X$ if for every sequence $\{x^{(k)}\} \subset X$ such that $\|x^{(k)}\| \to \infty$ we have $h(x^{(k)}) \to \infty$. When $X = \mathbb{R}^{n}$, we say that $h$ is coercive on $\mathbb{R}^{n}$.

Lemma 5 (see [30]). Let $h \colon \mathbb{R}^{n} \to (-\infty, +\infty]$ be a closed, proper, and coercive function. Then the set of minima of $h$ over $\mathbb{R}^{n}$ is nonempty and compact.
In [20], the $\ell_1$-TV image restoration model is given in (12). To avoid the numerical difficulties caused by the nondifferentiability, two auxiliary variables are used to cope with the two nonsmooth terms, and (12) is transformed into (13), where the penalty parameters are used to ensure the closeness of each auxiliary variable to the corresponding original term. When applied to (13), the alternating direction method yields the three-step scheme (14). The first step in (14) is equivalent to solving a nonsingular linear system. It can be solved directly, but to keep the computational cost low, many classical optimization and numerical methods, such as the CG method and preconditioned iterative methods, can be used [24, 25, 31]. As in [4, 18], we suppose in this paper that Neumann boundary conditions are used and that the blurring function is symmetric; then the system matrix is a (block) Toeplitz-plus-Hankel matrix and can be diagonalized by a discrete cosine transform matrix. Thus the nonsingular linear system can be solved at a lower cost, since the inverse of the blurring matrix can be applied by using fast cosine transforms; please see [4, 18, 27] for more details.
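Since the displayed formulas (12)–(14) are not reproduced above, the following is only a sketch, in our own notation, of a splitting and a three-step alternating scheme that match the description (a linear system, a componentwise shrinkage, and a TV denoising step); it is an assumption, not necessarily identical to the exact scheme of [20]:
$$\min_{f}\ \|Hf-g\|_1 + \alpha\sum_i\|D_i f\|_2 \qquad (\ell_1\text{-TV model}),$$
$$\min_{f,\,u,\,z}\ \|z\|_1 + \alpha\sum_i\|D_i u\|_2 + \frac{\beta_1}{2}\|z-(Hf-g)\|_2^2 + \frac{\beta_2}{2}\|u-f\|_2^2 \qquad (\text{split form}),$$
with the alternating sweep
$$f^{(k+1)} = \big(\beta_1 H^{T}H + \beta_2 I\big)^{-1}\big(\beta_1 H^{T}(z^{(k)}+g) + \beta_2 u^{(k)}\big) \quad \text{(linear system)},$$
$$z^{(k+1)} = \operatorname{shrink}\big(Hf^{(k+1)}-g,\ 1/\beta_1\big) \quad \text{(componentwise shrinkage)},$$
$$u^{(k+1)} = \arg\min_{u}\ \alpha\sum_i\|D_i u\|_2 + \frac{\beta_2}{2}\|u-f^{(k+1)}\|_2^2 \quad \text{(TV denoising)}.$$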

The second step in (14) is equivalent to finding the minimizers of functions of the form treated in Lemma 2. We thus know from Lemma 2 that each element of its solution is given in closed form by the shrinkage formula.
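A vectorized sketch of the shrinkage formulas involved is given below, covering both the scalar componentwise case and the isotropic two-dimensional case; which one applies depends on the exact form of (14), which is not reproduced above, and the parameter name beta is our own.

```python
import numpy as np

def soft_threshold(x, beta):
    """Componentwise soft-thresholding: minimizer of
    |z_i| + (beta/2) * (z_i - x_i)^2, i.e.
    sign(x_i) * max(|x_i| - 1/beta, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - 1.0 / beta, 0.0)

def shrink2d(dh, dv, beta):
    """Isotropic (two-dimensional) shrinkage applied pixelwise to the
    pair (dh, dv): max(||x|| - 1/beta, 0) * x / ||x||."""
    norm = np.sqrt(dh**2 + dv**2)
    scale = np.maximum(norm - 1.0 / beta, 0.0) / np.maximum(norm, 1e-12)
    return scale * dh, scale * dv
```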

The third step in (14) is a TV model in which the blurring matrix is the identity matrix, that is, an image denoising problem. Many methods can solve it effectively [13–15], and Chambolle's projection algorithm [13] is used in [20].
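For completeness, a bare-bones sketch of Chambolle's projection algorithm for the TV denoising subproblem min_u lam*TV(u) + 0.5*||u - g||^2 is given below; the step size tau = 0.125, the iteration count, and the discrete gradient/divergence conventions are assumptions.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with zero boundary differences."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    fx = np.zeros_like(px); fy = np.zeros_like(py)
    fx[:, 0] = px[:, 0]
    fx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]
    fx[:, -1] = -px[:, -2]
    fy[0, :] = py[0, :]
    fy[1:-1, :] = py[1:-1, :] - py[:-2, :]
    fy[-1, :] = -py[-2, :]
    return fx + fy

def chambolle_tv_denoise(g, lam, n_iter=50, tau=0.125):
    """Chambolle's dual projection iteration; returns g - lam * div(p)."""
    px = np.zeros_like(g); py = np.zeros_like(g)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - g / lam)
        norm = np.sqrt(gx**2 + gy**2)
        px = (px + tau * gx) / (1.0 + tau * norm)
        py = (py + tau * gy) / (1.0 + tau * norm)
    return g - lam * div(px, py)
```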

3. The Method Based on Subspace Optimization

Based on the principle of the alternating direction method described above, it is easy to see that one sweep of (14) does not exactly minimize (13), so the performance in reaching the minima of (13) may be reduced. Thus a method based on subspace optimization [28] is proposed in this section to improve the optimization performance of (14).

Firstly, we introduce the subspace optimization method. Let the corrected iterate be defined by (20), namely the current iterate plus a linear combination of the previous search directions, where the combination coefficients are determined by (21). Supposing the corrected iterate is selected as in (20), the energy at the corrected iterate does not exceed the energy at the uncorrected one. In fact, we know from (21) that the corrected iterate is a minimum point of the cost function over the search subspace; thus the inequality must hold. This helps improve the performance of the original alternating direction method at every iteration, and the key problem now is how to solve (21). To reduce the computational cost, only a small number of previous search directions is kept in this paper.
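A schematic of this correction idea, as we read it, is sketched below: after an alternating direction sweep, the new iterate is nudged along the latest search direction by minimizing the (possibly nonsmooth) energy over the step coefficient. The energy callback, the derivative-free scalar search, and keeping a single direction are our assumptions, not the exact procedure (20)-(21).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def subspace_correct(f_new, f_old, energy):
    """Correct an ADM iterate by a one-dimensional subspace search.

    The search direction is the latest ADM step d = f_new - f_old; the
    coefficient t minimizing energy(f_new + t * d) is found by a
    derivative-free bounded search, so the corrected point never has
    higher energy than f_new (t = 0 is always admissible).
    """
    d = f_new - f_old
    res = minimize_scalar(lambda t: energy(f_new + t * d),
                          bounds=(-1.0, 1.0), method='bounded')
    t = res.x if res.fun <= energy(f_new) else 0.0
    return f_new + t * d
```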

Next, the correction method of (14) will be proposed according to three cases.

Case 1 (the energy function is differentiable at the current iterate). Obviously, the function is smooth in this case, and its gradient is given by (22), where $e_i$ denotes the unit column vector with its $i$th element being 1 and the others being zero. It is expensive to compute the second-order quantity in (22) exactly, so we replace it by a fixed approximation to improve the efficiency of the algorithm, because the approximation is independent of the variable.
On the other hand, since the corrected iterate is an optimum of the convex cost function, we know from (21) and a Taylor series expansion that (23) holds. Equation (23) implies (24). The degenerate equalities in (25) would require conditions that do not hold in image restoration, since the dimension of the image vector is very large. Therefore the combination coefficient is well defined, and (26) follows from (24).

Case 2 (the set of indices at which the first nonsmooth term is not differentiable is nonempty). Now the energy function is nonsmooth, and the quantities in (26) are no longer well defined; thus the correction coefficient is set to zero when this case occurs.

Case 3 (there exists an index at which the second nonsmooth term is not differentiable). The term is not differentiable in this case; the Huber function is often used to replace it [25]. For simplicity, the correction coefficient is set to zero when this case occurs.
In summary, the proposed algorithm is as follows.
Algorithm

Step 1. Initialize the iterate and the auxiliary variables.

Step 2. Solve the three subproblems in (14): (i) the linear system, (ii) the shrinkage step, (iii) the TV denoising step.

Step 3. Compute the correction coefficient using (26), and form the corrected iterate.

Step 4. If the stopping criterion is satisfied, then stop; otherwise go to Step 2.
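Putting the pieces together, a high-level loop consistent with Steps 1-4 might look as follows; adm_sweep and energy are hypothetical placeholders standing in for the subproblems of (14) and the cost (13), and the fixed correction coefficient, the tolerance, and the iteration limit are assumptions (a proper coefficient would come from (26)).

```python
import numpy as np

def corrected_adm(f0, adm_sweep, energy, tol=1e-4, max_iter=200):
    """Alternating direction iteration with a simple correction step.

    adm_sweep(f) performs one pass of the three subproblems in (14)
    (linear system, shrinkage, TV denoising) and returns the next
    iterate; energy(f) evaluates the cost (13). Both are hypothetical
    placeholders here.
    """
    f = f0.copy()                                    # Step 1
    for _ in range(max_iter):
        f_prev = f
        f = adm_sweep(f)                             # Step 2 (i)-(iii)
        d = f - f_prev                               # latest search direction
        if energy(f + 0.5 * d) < energy(f):          # Step 3: crude correction
            f = f + 0.5 * d                          # (a line search or (26)
                                                     #  would replace 0.5)
        rel = np.linalg.norm(f - f_prev) / max(np.linalg.norm(f_prev), 1e-12)
        if rel <= tol:                               # Step 4
            break
    return f
```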

4. Convergence Analysis

As is known from (20) and (26), when one of the two sequences (the corrected iterates and the uncorrected iterates) is convergent, the other sequence is also convergent. Thus a simple proof of the convergence of algorithm (14) is given below without the condition, needed in [20], that the iteration operators defined in (28) are asymptotically regular. Lemmas 6 and 7 below appeared in [18, 20], but the proofs given here are simpler.

Lemma 6. Suppose that the null spaces of the blurring matrix $H$ and the difference matrix $D$ intersect only at zero; then the sets of fixed points of the two operators defined in (28) are nonempty, respectively.

Proof. It is easy to see that the objective function in (13) satisfies the conditions of Lemma 5 [18, 20]. Then it has at least one minimizer, and the objective value at such a minimizer cannot be decreased by the alternating scheme (14). Therefore the components of such a minimizer are fixed points of the two operators in (28), respectively.

Lemma 7. The two operators defined in (28) are nonexpansive.

Proof. From Lemmas 2 and 3, the operators composing (28) are nonexpansive, and the composition of nonexpansive operators is nonexpansive. This completes the proof.
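The elementary fact behind this proof is that a composition of nonexpansive operators is nonexpansive: for nonexpansive $T_1$ and $T_2$,
$$\|T_1 T_2 x - T_1 T_2 y\| \;\le\; \|T_2 x - T_2 y\| \;\le\; \|x - y\|,$$
so the composite operators in (28), being compositions of the nonexpansive operators provided by Lemmas 2 and 3, are nonexpansive as well.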

Lemma 8. Let the sequences be generated by (28); then the associated auxiliary sequences are bounded along any subsequence of the iterates.

Proof. Consider the objective function in (13) along the iterates. Since each iterate is the minimizer of its subproblem, the energy is nonincreasing along (14). Noticing the coercivity of the objective and that, from Lemma 2, the shrinkage operator is nonexpansive, the first auxiliary sequence admits a uniform bound; namely, it is bounded. Similarly, one can verify that the other sequence is also bounded.
Now we state and prove the convergence result of (14).

Theorem 9. Suppose that the null spaces of $H$ and $D$ intersect only at zero; then the sequence generated by (14) from any initial point converges to a solution of (13).

Proof. From Lemma 6, let any fixed point of the first operator in (28) be given; then, by Lemma 7, the distance from the iterates to this fixed point is nonincreasing, which gives (41). Thus the iterate sequence is bounded, and there exists a convergent subsequence. On the other hand, the associated sequence is bounded by Lemma 8; thus, passing to the limit along this subsequence, the limit point is also a fixed point of the operator. Similarly to the proof of (41), one can also verify that the sequence of distances to this limit point is nonincreasing. Thus the whole sequence converges to this limit point.
Following the argument above, one can prove that the sequence generated by (14) with any initial point converges to a solution of (13).

Remark 10. If $Df = 0$ and $f \neq 0$, then $f_i = c$ for all $i$, where $c$ is a nonzero constant. Since the blurring matrix $H$ in image restoration satisfies $H\mathbf{1} \neq 0$, $Hf = cH\mathbf{1}$ is a nonzero vector. It follows that the assumption in Theorem 9 holds in general.

5. Experimental Results

In this section, some numerical results are provided to show the performance of our method. As in [18, 20], the tested images are Cameraman (256 × 256) and Lena (512 × 512). The algorithm in [20] and ours are compared in Section 5.1 (the $\ell_1$-TV model). The algorithm in [18] and its corrected version are compared in Section 5.2 (the $\ell_2$-TV model). All computational tasks are performed using MATLAB 2016a on a Core(TM)2 CPU at 2.83 GHz with 3.87 GB of RAM. The average value over ten runs is reported.

The tested blurring function is chosen to be a truncated 2D Gaussian function. Three sets of parameters (support size and standard deviation) are chosen; among them, a support of 9 × 9 is used in the examples reported below.
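For reference, a truncated 2D Gaussian PSF of the kind described can be generated as follows; the 9 × 9 support matches one of the settings mentioned below, while the standard deviation value is an assumption.

```python
import numpy as np

def gaussian_psf(support=9, sigma=2.0):
    """Truncated, normalized 2D Gaussian point spread function."""
    half = support // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    psf = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return psf / psf.sum()

psf = gaussian_psf(support=9, sigma=2.0)   # 9 x 9 kernel, entries sum to 1
```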

In all runs, CPU time is used to compare the efficiency, while the signal-to-noise ratio (SNR) and the peak signal-to-noise ratio (PSNR) are used to measure the quality of the restored images. The stopping criterion of the algorithms is that the relative change between successive iterates falls below a prescribed tolerance.
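The quality measures and the stopping test can be computed as in the following sketch; the choice of peak value (1 for images scaled to [0, 1]) and the tolerance value are assumptions.

```python
import numpy as np

def snr(f_true, f_rest):
    """Signal-to-noise ratio (dB) of the restored image f_rest."""
    err = np.linalg.norm(f_true - f_rest)
    sig = np.linalg.norm(f_true - f_true.mean())
    return 20.0 * np.log10(sig / err)

def psnr(f_true, f_rest, peak=1.0):
    """Peak signal-to-noise ratio (dB); peak = 1 for [0, 1] images."""
    mse = np.mean((f_true - f_rest) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

def stop(f_new, f_old, tol=1e-4):
    """Relative-change stopping criterion between successive iterates."""
    return np.linalg.norm(f_new - f_old) <= tol * np.linalg.norm(f_old)
```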

5.1. Comparison Experiment for the $\ell_1$-TV Model

In this subsection, a comparison of the algorithm in [20] with the proposed method is made under different non-Gaussian additive noises. Firstly, for the uniform noise, the parameters are set to 0.008, 0.05, and 0.2, as in [20]. The uniform random numbers lie in the intervals (0, 0.05), (0, 0.1), and (0, 0.2), respectively. In particular, [20] points out that the noise added to the blurred image is independent and identically distributed, so the same set of parameters can be selected for different images when the same kind of noise is considered; this reduces the computational cost of searching for regularization parameters. For the speckle noise, as in [20, 23], a continuation scheme on the penalty parameter (scaled by a factor of 2.5) is used: easier subproblems with smaller parameter values are solved quickly, and the later subproblems can also be solved relatively quickly with warm starts from the previous solutions. The noise variance is selected as 0, 0.01, and 0.05, respectively. To be fair, the same inner solvers are used for the primal alternating direction method in this paper.
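For reproducibility, the two noise models described above can be simulated as in the sketch below; the multiplicative (imnoise-style) convention for speckle noise is an assumption.

```python
import numpy as np

def add_uniform_noise(g, high, rng=None):
    """Additive uniform noise with values drawn from (0, high)."""
    rng = np.random.default_rng() if rng is None else rng
    return g + rng.uniform(0.0, high, size=g.shape)

def add_speckle_noise(g, var, rng=None):
    """Multiplicative (speckle) noise: g + g * n with n ~ N(0, var),
    following the usual imnoise-style convention (an assumption)."""
    rng = np.random.default_rng() if rng is None else rng
    return g + g * rng.normal(0.0, np.sqrt(var), size=g.shape)
```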

Now, we select two cases that show a significant improvement in visual quality to illustrate our method's restoration quality. More image restoration results under different conditions, with uniform noise and speckle noise, are summarized in Tables 1 and 2, respectively. Figure 1(a) shows the original Cameraman image. Figure 1(b) shows the observed Cameraman image, where the noise is uniform noise with random numbers in the interval (0, 0.2) and the blur support is 9 × 9. Figure 1(c) shows the Cameraman image restored by the algorithm in [20]. Figure 1(d) shows the Cameraman image restored by our method. Figure 2(a) shows the original Lena image. Figure 2(b) shows the observed Lena image, where the noise is speckle noise with variance 0.01. Figure 2(c) shows the Lena image restored by the algorithm in [20]. Figure 2(d) shows the Lena image restored by our method.

From Figures 1 and 2, we see that the quality of the images restored by our method is obviously improved compared to those restored by the method of [20] in the two cases. For the uniform noise and the speckle noise, Tables 1 and 2 show that the SNR and PSNR of the images restored by our method are both higher than the results of [20] in most cases. For the same relative-error requirement, our proposed algorithm is clearly faster than the competing algorithm in [20]. Referring to Sections 2 and 3, the proposed algorithm uses the subspace optimization method to correct the search direction, which ensures that the energy does not increase at each iteration. This explains why the proposed method is more efficient, and the analysis is verified by the experimental results above.

5.2. Comparison Experiment for the $\ell_2$-TV Model

In this subsection, the algorithm in [18] and ours are compared. Differently from [19, 20], [18] uses a different fitting of the auxiliary variable, which gives superior performance [18, 23, 32]. The additive noise is selected as Gaussian noise. Here four values of the noise standard deviation are chosen: 0.01, 0.05, 0.1, and 0.2. The same inner solvers as in [18] are used for the primal alternating direction method.

Similarly, we first select two cases that show a significant improvement in visual quality to illustrate our method's restoration quality. Figures 3(a) and 4(a) show the original images. Figures 3(b) and 4(b) show the observed images. Figure 3(c) shows the Cameraman image restored by the algorithm in [18], and Figure 3(d) shows the Cameraman image restored by our method, where the standard deviation of the noise is 0.1 and the blur support is 9 × 9. Figure 4(c) shows the Lena image restored by the algorithm in [18], and Figure 4(d) shows the Lena image restored by our method, where the standard deviation of the noise is 0.2. Table 3 shows more image restoration results with different supports and noise standard deviations.

As in Section 5.1, Figures 3 and 4 show that the quality of the images restored by our method is better than that of the images restored by the algorithm of [18] in the two cases. We can also see from Table 3 that the SNR and PSNR of the images restored by our method are both higher than the results of [18] in most cases. For the same reasons mentioned for the $\ell_1$-TV model, the computational time required by our method is significantly less than that required by the algorithm in [18]. So the performance of the algorithm in [18] is also improved through the correction.

6. Concluding Remarks

Taking the $\ell_1$-TV model in image restoration as an example, a modified method is proposed in this paper to overcome drawbacks of the primal alternating direction method. We have illustrated how our method corrects the search direction and thereby improves the optimization performance. In addition, the convergence of the primal alternating direction method has been proven under some weaker conditions, and thus the convergence of the proposed method is easily obtained from the equivalence between them. The experimental results based on the two models in [18, 20] show that the proposed method can enhance the quality of the restored images in most cases and that the efficiency of the algorithms is significantly improved. In fact, our method can be applied to many other problems that are solved by alternating optimization.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is supported by the Fundamental Research Funds for the Central Universities of Southwest University for Nationalities (no. 2015NZYQN30), the Key Fund Project of Sichuan Provincial Department of Education (no. 17ZA0414), and the National Science Foundation of China (no. 61273311). The authors are grateful to Xiaoxia Guo, the author of [20], for providing the programs.