Abstract

Combining the multivariate spectral gradient method with a projection scheme, this paper presents an adaptive prediction-correction method for solving large-scale nonlinear systems of monotone equations. The proposed method possesses several favorable properties: (1) it is progressive step by step; that is, the distance between the iterates and the solution set decreases monotonically; (2) its global convergence is independent of the merit function and of Lipschitz continuity; (3) it is a derivative-free method with low storage requirements and can therefore be applied to large-scale nonsmooth equations. Preliminary numerical results show that the proposed method is very effective. Some practical applications of the proposed method are demonstrated and tested on sparse signal reconstruction, compressed sensing, and image deconvolution problems.

1. Introduction

Consider the problem of finding solutions of the following nonlinear monotone equations:

$F(x) = 0,$ (1)

where $F: \mathbb{R}^n \to \mathbb{R}^n$ is continuous and monotone, that is, $(F(x) - F(y))^T (x - y) \ge 0$ for all $x, y \in \mathbb{R}^n$.

Nonlinear monotone equations arise in many practical applications, such as ballistic trajectory computation [1] and vibration systems [2], the first-order necessary condition of unconstrained convex optimization problems, and the subproblems in generalized proximal algorithms with Bregman distances [3]. Moreover, some monotone variational inequalities can be converted into systems of nonlinear monotone equations by means of fixed point maps or normal maps [4] if the underlying function satisfies some coercive conditions. Solodov and Svaiter [5] proposed a projection method for solving (1). A nice property of the projection method is that the whole sequence of iterates is globally convergent to a solution of the system without any additional regularity assumptions. Moreover, Zhang and Zhou [6] presented a spectral gradient projection (SG) method for solving systems of monotone equations, which combines a modified spectral gradient method with the projection method. This method is shown to be globally convergent if the underlying mapping is Lipschitz continuous. Xiao et al. [7] proposed a spectral gradient method to minimize a nonsmooth minimization problem, arising from sparse solution recovery in compressed sensing, consisting of a least-squares data-fitting term and an $\ell_1$-norm regularization term. This problem is first formulated as a convex quadratic program (QP) and then reformulated as an equivalent nonlinear monotone equation. Furthermore, Yin et al. [8] developed a nonlinear conjugate gradient method for $\ell_1$-norm regularization problems in compressed sensing. Yu [9, 10] extended the spectral gradient method and conjugate gradient-type methods to solve large-scale nonlinear systems of equations, respectively. Recently, the authors in [11] proposed a multivariate spectral gradient projection method for solving nonlinear monotone equations with convex constraints. Numerical results show that the multivariate spectral gradient (MSG) method performs very well.

Following this line of research, and based on the multivariate spectral gradient (MSG) method, we present an adaptive prediction-correction method for solving the nonlinear monotone equations (1) in the next section. Its global convergence is established, independently of the merit function and of Lipschitz continuity. Section 3 presents some numerical experiments to demonstrate and test its practical performance on compressed sensing and image deconvolution problems. Finally, Section 4 concludes the paper.

2. Adaptive Prediction-Correction Method

Considering the projection method [5] for solving the nonlinear monotone equations (1), suppose that we have obtained a direction $d_k$. By performing some kind of line search procedure along the direction $d_k$, a point $z_k = x_k + \alpha_k d_k$ can be computed such that

$F(z_k)^T (x_k - z_k) > 0.$ (2)

By the monotonicity of $F$, for any $x^*$ such that $F(x^*) = 0$, we have

$F(z_k)^T (x^* - z_k) \le 0.$ (3)

Thus, the hyperplane

$H_k = \{x \in \mathbb{R}^n : F(z_k)^T (x - z_k) = 0\}$ (4)

strictly separates the current iterate $x_k$ from the solutions of the system of monotone equations. Once we get the separating hyperplane, the next iterate $x_{k+1}$ is computed by projecting $x_k$ onto it.
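For illustration, the projection onto the separating hyperplane admits a closed form; the following Python/NumPy sketch (our own notation, not the authors' code) computes the projection of $x_k$ onto $H_k$:

```python
import numpy as np

def project_onto_hyperplane(x_k, z_k, Fz_k):
    """Project x_k onto the hyperplane {x : Fz_k^T (x - z_k) = 0}.

    x_k, z_k, Fz_k are 1-D NumPy arrays; Fz_k = F(z_k) is assumed nonzero.
    """
    step = (Fz_k @ (x_k - z_k)) / (Fz_k @ Fz_k)
    return x_k - step * Fz_k
```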

Recalling the multivariate spectral gradient (MSG) method [12] for the minimization problem $\min f(x)$, its iterative formula is defined by

$x_{k+1} = x_k - \mathrm{diag}\{1/\lambda_k^1, \ldots, 1/\lambda_k^n\}\, g_k,$ (5)

where $g_k$ is the gradient of $f$ at $x_k$, and $\Lambda_k = \mathrm{diag}\{\lambda_k^1, \ldots, \lambda_k^n\}$ is obtained by minimizing $\|\Lambda_k s_{k-1} - y_{k-1}\|^2$ with respect to $\Lambda_k$, where $s_{k-1} = x_k - x_{k-1}$ and $y_{k-1} = g_k - g_{k-1}$. In particular, when $f$ has a positive definite diagonal Hessian matrix, the multivariate spectral gradient method converges quadratically [12].
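Note that $\|\Lambda_k s_{k-1} - y_{k-1}\|^2 = \sum_i (\lambda_k^i s_{k-1}^i - y_{k-1}^i)^2$, so the least-squares fit separates componentwise and each coefficient can be computed independently. A minimal numerical check (the values below are arbitrary):

```python
import numpy as np

# Fitting a diagonal matrix diag(lam) to the secant pair (s, y) by
# minimizing ||diag(lam) @ s - y||^2; the problem separates componentwise.
s = np.array([0.5, -2.0, 1.5])
y = np.array([1.0, -1.0, 3.0])
lam = y / s      # componentwise minimizer of (lam_i * s_i - y_i)^2
print(lam)       # [2.0, 0.5, 2.0]
```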

Let the $i$th components of $s_{k-1}$ and $y_{k-1}$ be denoted by $s_{k-1}^i$ and $y_{k-1}^i$, respectively. Combining the multivariate spectral gradient method with the projection scheme, we can present an adaptive prediction-correction method for solving the monotone equations (1) as follows.

Algorithm 1 (multivariate spectral gradient (MSG) method). Given $x_0 \in \mathbb{R}^n$, $\beta \in (0, 1)$, $\sigma > 0$, and $\varepsilon \in (0, 1)$. Set $k := 0$.

Step 1. If $\|F(x_k)\| = 0$, stop.

Step 2. (a) If $k = 0$, set $\lambda_k^i = 1$ for $i = 1, \ldots, n$.
(b) else if $y_{k-1}^i \neq 0$ for all $i$, then set $\lambda_k^i = s_{k-1}^i / y_{k-1}^i$; otherwise set $\lambda_k^i = s_{k-1}^T y_{k-1} / \|y_{k-1}\|^2$ for $i = 1, \ldots, n$, where $s_{k-1} = x_k - x_{k-1}$, $y_{k-1} = F(x_k) - F(x_{k-1})$.
(c) For each $i$, if $\lambda_k^i \le \varepsilon$ or $\lambda_k^i \ge 1/\varepsilon$, reset $\lambda_k^i = 1$.
Set $d_k = -\Lambda_k F(x_k)$, where $\Lambda_k = \mathrm{diag}\{\lambda_k^1, \ldots, \lambda_k^n\}$.

Step 3 (prediction step). Compute the step length $\alpha_k = \beta^{m_k}$ and set $z_k = x_k + \alpha_k d_k$, where $m_k$ is the smallest nonnegative integer $m$ such that

$-F(x_k + \beta^m d_k)^T d_k \ge \sigma \beta^m \|d_k\|^2.$ (6)

Step 4 (correction step). Compute

$x_{k+1} = x_k - \dfrac{F(z_k)^T (x_k - z_k)}{\|F(z_k)\|^2}\, F(z_k).$ (7)

Step 5. Set $k := k + 1$ and go to Step 1.
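To fix ideas, the following Python sketch implements Algorithm 1 as stated above. It is a minimal illustration rather than the authors' code: the parameter values, the tolerance-based stopping test, and the handling of degenerate denominators are our own assumptions.

```python
import numpy as np

def msg_solve(F, x0, beta=0.5, sigma=1e-4, eps=1e-10, tol=1e-6, max_iter=1000):
    """Sketch of Algorithm 1 for monotone F(x) = 0 (illustrative parameters)."""
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    x_prev = F_prev = None
    for k in range(max_iter):
        if np.linalg.norm(Fx) <= tol:                      # Step 1
            return x
        if k == 0:                                         # Step 2(a)
            lam = np.ones_like(x)
        else:
            s, y = x - x_prev, Fx - F_prev                 # Step 2(b)
            yTy = y @ y
            if yTy > 0 and np.all(y != 0):
                lam = s / y                                # multivariate spectral coefficients
            elif yTy > 0:
                lam = np.full_like(x, (s @ y) / yTy)       # scalar fallback
            else:
                lam = np.ones_like(x)
            # Step 2(c): safeguard every coefficient into [eps, 1/eps]
            lam = np.where((lam <= eps) | (lam >= 1.0 / eps), 1.0, lam)
        d = -lam * Fx                                      # d_k = -Lambda_k F(x_k)
        alpha = 1.0                                        # Step 3 (prediction): backtrack until (6) holds
        z = x + alpha * d
        Fz = F(z)
        while -(Fz @ d) < sigma * alpha * (d @ d):         # Lemma 2 guarantees termination
            alpha *= beta
            z = x + alpha * d
            Fz = F(z)
        if Fz @ Fz == 0:                                   # z_k already solves the system
            return z
        x_prev, F_prev = x, Fx
        x = x - ((Fz @ (x - z)) / (Fz @ Fz)) * Fz          # Step 4 (correction): projection (7)
        Fx = F(x)                                          # Step 5: k <- k + 1
    return x

# Example: F(x) = exp(x) - 1 is continuous and monotone with root x = 0.
x_star = msg_solve(lambda t: np.exp(t) - 1.0, np.full(5, 2.0))
```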

By using the multivariate spectral gradient method, we obtain the prediction sequence $\{z_k\}$, and then we get the correction sequence $\{x_k\}$ via projection. It follows from (17) that $x_{k+1}$ is closer to the solution set than $x_k$; that is, the sequence makes progress iterate by iterate. From Step 2(c), we have

$\varepsilon \|F(x_k)\| \le \|d_k\| \le \varepsilon^{-1} \|F(x_k)\|$ (8)

and

$-F(x_k)^T d_k = F(x_k)^T \Lambda_k F(x_k) \ge \varepsilon \|F(x_k)\|^2.$ (9)

In what follows, we assume that $F(x_k) \neq 0$ for all $k$; otherwise, we have already found a solution of problem (1). The following lemma states that Algorithm 1 is well defined.

Lemma 2. For each $k \ge 0$, there exists a nonnegative integer $m$ satisfying (6).

Proof. Suppose that there exists a $k_0 \ge 0$ such that (6) is not satisfied for any nonnegative integer $m$, that is,

$-F(x_{k_0} + \beta^m d_{k_0})^T d_{k_0} < \sigma \beta^m \|d_{k_0}\|^2 \quad \text{for all } m \ge 0.$ (10)

Letting $m \to \infty$ and using the continuity of $F$ yields

$-F(x_{k_0})^T d_{k_0} \le 0.$ (11)

From Steps 1, 2, and 5, we have $d_{k_0} = -\Lambda_{k_0} F(x_{k_0})$ and $\|F(x_{k_0})\| \neq 0$. Thus,

$-F(x_{k_0})^T d_{k_0} = F(x_{k_0})^T \Lambda_{k_0} F(x_{k_0}) \ge \varepsilon \|F(x_{k_0})\|^2 > 0.$ (12)

The last inequality contradicts (11). Hence the statement is proved.

Lemma 3. Let $\{x_k\}$ and $\{z_k\}$ be any sequences generated by Algorithm 1. Suppose that $F$ is monotone and that the solution set of (1) is not empty; then $\{x_k\}$ and $\{z_k\}$ are both bounded. Furthermore, it holds that

$\lim_{k \to \infty} \alpha_k \|d_k\| = \lim_{k \to \infty} \|x_{k+1} - x_k\| = 0.$ (13)

Proof. From (6), we have

$F(z_k)^T (x_k - z_k) = -\alpha_k F(z_k)^T d_k \ge \sigma \alpha_k^2 \|d_k\|^2 > 0.$ (14)

Let $x^*$ be an arbitrary point such that $F(x^*) = 0$. Taking account of the monotonicity of $F$, we have

$F(z_k)^T (z_k - x^*) \ge F(x^*)^T (z_k - x^*) = 0,$ (15)

and consequently

$F(z_k)^T (x_k - x^*) \ge F(z_k)^T (x_k - z_k).$ (16)

From (7), (14), and (16), it follows that

$\|x_{k+1} - x^*\|^2 \le \|x_k - x^*\|^2 - \dfrac{(F(z_k)^T (x_k - z_k))^2}{\|F(z_k)\|^2} \le \|x_k - x^*\|^2 - \dfrac{\sigma^2 \alpha_k^4 \|d_k\|^4}{\|F(z_k)\|^2}.$ (17)

Hence the sequence $\{\|x_k - x^*\|\}$ is decreasing and convergent; moreover, the sequence $\{x_k\}$ is bounded. Since $F$ is continuous, there exists a constant $C > 0$ such that

$\|F(x_k)\| \le C \quad \text{for all } k \ge 0.$ (18)

By (8) and (18), $\|d_k\| \le \varepsilon^{-1} C$; since $\alpha_k \le 1$, the sequence $\{z_k\} = \{x_k + \alpha_k d_k\}$ is also bounded, and by the continuity of $F$ there exists a constant $C_1 > 0$ such that

$\|F(z_k)\| \le C_1 \quad \text{for all } k \ge 0.$ (19)

It follows from (17) and (19) that

$\sigma^2 \sum_{k=0}^{\infty} \alpha_k^4 \|d_k\|^4 \le C_1^2 \|x_0 - x^*\|^2 < \infty,$ (20)

which implies

$\lim_{k \to \infty} \alpha_k \|d_k\| = 0.$ (21)

From (7), using the Cauchy-Schwarz inequality, we obtain

$\|x_{k+1} - x_k\| = \dfrac{F(z_k)^T (x_k - z_k)}{\|F(z_k)\|} \le \|x_k - z_k\| = \alpha_k \|d_k\|.$ (22)

Thus $\lim_{k \to \infty} \|x_{k+1} - x_k\| = 0$, and (13) holds.
The proof is complete.

Now we can establish the global convergence of Algorithm 1.

Theorem 4. Let $\{x_k\}$ be generated by Algorithm 1. Then $\{x_k\}$ converges to some $\bar{x}$ such that $F(\bar{x}) = 0$.

Proof. Since the solution set of (1) is not empty, it follows from Lemma 3 that

$\lim_{k \to \infty} \alpha_k \|d_k\| = 0.$ (23)

From (8) and (18), it holds that $\{d_k\}$ is bounded.
Now we consider the following two possible cases:
(i) $\liminf_{k \to \infty} \|d_k\| = 0$;
(ii) $\liminf_{k \to \infty} \|d_k\| > 0$.
If (i) holds, from (8), we have $\liminf_{k \to \infty} \|F(x_k)\| = 0$. By the continuity of $F$ and the boundedness of $\{x_k\}$, it is clear that the sequence $\{x_k\}$ has some accumulation point $\bar{x}$ such that $F(\bar{x}) = 0$. From (17), we also have that the sequence $\{\|x_k - \bar{x}\|\}$ converges. Therefore, $\{x_k\}$ converges to $\bar{x}$.
If (ii) holds, from (8), we have $\liminf_{k \to \infty} \|F(x_k)\| > 0$. By (23), it holds that

$\lim_{k \to \infty} \alpha_k = 0.$ (24)

By the line search rule, for all $k$ sufficiently large, the step size $\beta^{-1} \alpha_k$ will not satisfy (6). This means

$-F(x_k + \beta^{-1} \alpha_k d_k)^T d_k < \sigma \beta^{-1} \alpha_k \|d_k\|^2.$ (25)

Since the sequences $\{x_k\}$ and $\{d_k\}$ are bounded, we can choose subsequences along which $x_k \to \bar{x}$ and $d_k \to \bar{d}$; letting $k \to \infty$ in (25), we obtain

$-F(\bar{x})^T \bar{d} \le 0,$ (26)

where $\bar{x}$ and $\bar{d}$ are the limits of the corresponding subsequences. On the other hand, passing to the limit in (9) along the same subsequences, it holds that

$-F(\bar{x})^T \bar{d} \ge \varepsilon \|F(\bar{x})\|^2 > 0,$ (27)

which contradicts (26). Hence, case (ii) is impossible.
The proof is complete.

3. Numerical Experiments

In this section, we report some preliminary numerical experiments that test our algorithms in comparison with the spectral gradient projection (SG) method [6]. Firstly, in Section 3.1 we test these algorithms on solving nonlinear systems of monotone equations. Secondly, in Section 3.2, we apply the HSG-V algorithm to solve the $\ell_1$-norm regularization problem arising from compressed sensing. All of the numerical experiments were performed under Windows XP and MATLAB 7.0 running on a personal computer with an Intel Core 2 Duo CPU at 2.2 GHz and 2 GB of memory.

3.1. Test on Nonlinear Systems of Monotone Equations

We test the performance of our algorithms for solving some monotone equations (see details in the appendix). The termination condition is that $\|F(x_k)\|$ falls below a prescribed tolerance. The parameters of the MSG method are set as specified in Algorithm 1, and in Step 2 the safeguard parameter $\varepsilon$ is chosen adaptively.

Firstly, we test the performance of the MSG method on Problem 1, with the dimension and the initial point given in the appendix. Figure 1 displays the performance of the MSG method for Problem 1, which indicates that the prediction sequence is better than the correction sequence most of the time. Taking this into account, we relax the MSG method so that Step 4 in Algorithm 1 is replaced by the following:
if mod($k$, $\nu$) == 0, compute $x_{k+1}$ by the projection step (7); else, set $x_{k+1} = z_k$; end.

In this case, we refer to this modification as the "MSG-V" method. When $\nu = 1$, the above algorithm reduces to Algorithm 1. The performance of these methods on Problem 1 is shown in Figure 1, from which we can see that the MSG-V method is quite frequently preferable to the SG method while it also outperforms the MSG method. Furthermore, motivated by the wish to accelerate the MSG-V method, we present a hybrid spectral gradient (HSG-V) algorithm. The main idea of the HSG-V algorithm is to run the MSG-V algorithm while the multivariate spectral coefficients are well defined for all components, and otherwise to switch to the spectral gradient projection (SG) method.

We then compare the performance of the MSG method, the MSG-V method, and the HSG-V method with the spectral gradient projection (SG) method in [6] on the test problems with different initial points. The parameters of the SG method are set as in [6], and the same line search parameters are used for the MSG-V and HSG-V methods.

Numerical results are shown in Tables 1, 2, 3, 4, 5, and 6 in the form NI/NF/T/BK, where we report the dimension of the problem ($n$), the initial points (Init), the number of iterations (NI), the number of function evaluations (NF), the CPU time in seconds (T), and the number of backtracking steps (BK). The symbol "F" denotes that the method fails on the test problem or that the number of iterations exceeds 10000.

As we can see from Tables 1-6, the HSG-V algorithm is quite frequently preferable to the SG method and also outperforms the MSG and MSG-V algorithms, since it solves the largest share of the problems with the best time and with the smallest number of function evaluations, respectively. We also find that the SG algorithm seems more sensitive to the initial points.

Figure 2 shows the performance of these algorithms relative to the number of function evaluations and the CPU time, respectively, evaluated using the performance profiles of Dolan and Moré [13]. That is, for each algorithm, we plot the fraction of problems for which the method is within a factor $\tau$ of the smallest number of function evaluations or CPU time. The left side of each profile gives the percentage of the test problems for which a method is the best one according to the number of function evaluations or the CPU time, respectively. As we can see from Figure 2, the HSG-V algorithm has the best performance.

3.2. Test on $\ell_1$-Norm Regularization Problem in Compressed Sensing

There has been considerable interest in solving the $\ell_1$-norm regularized least-squares problem

$\min_x \ \dfrac{1}{2}\|Ax - b\|_2^2 + \tau \|x\|_1,$ (29)

where $A \in \mathbb{R}^{m \times n}$ is a linear operator, $b \in \mathbb{R}^m$ is an observation, and $\tau \ge 0$ is a nonnegative parameter. Problem (29) arises mainly in compressed sensing, an emerging methodology in digital signal processing that has attracted intensive research activity over the past few years. Compressed sensing is based on the fact that if the original signal is sparse or approximately sparse in some orthogonal basis, then an exact restoration can be produced by solving (29).

Recently, Figueiredo et al. [14] proposed the gradient projection method for sparse reconstruction (GPSR). The first key step of the GPSR method is to express (29) as a quadratic program. Any $x \in \mathbb{R}^n$ can be split as $x = u - v$, $u \ge 0$, $v \ge 0$, where $u_i = (x_i)_+$ and $v_i = (-x_i)_+$ for all $i = 1, \ldots, n$, with $(x_i)_+ = \max\{0, x_i\}$. We thus have $\|x\|_1 = e_n^T u + e_n^T v$, where $e_n = (1, 1, \ldots, 1)^T \in \mathbb{R}^n$ is the vector consisting of ones. Hence (29) can be rewritten as the following quadratic program:

$\min_{u, v} \ \dfrac{1}{2}\|b - A(u - v)\|_2^2 + \tau e_n^T u + \tau e_n^T v, \quad \text{s.t. } u \ge 0, \ v \ge 0.$ (30)

Furthermore, from [14], (30) can be written in the following form:

$\min_z \ \dfrac{1}{2} z^T B z + c^T z, \quad \text{s.t. } z \ge 0,$ (31)

where

$z = \begin{pmatrix} u \\ v \end{pmatrix}, \quad c = \tau e_{2n} + \begin{pmatrix} -y \\ y \end{pmatrix}, \quad y = A^T b, \quad B = \begin{pmatrix} A^T A & -A^T A \\ -A^T A & A^T A \end{pmatrix}.$ (32)

It is obvious that $B$ is a positive semidefinite matrix; hence, (30) is a convex QP problem. Figueiredo et al. [14] proposed a gradient projection method with the Barzilai-Borwein (BB) step length for solving this problem.
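To make the reformulation concrete, the following sketch assembles $B$ and $c$ of (31)-(32) explicitly (function and variable names are ours); forming $B$ is only sensible for small $n$, and a matrix-free evaluation is sketched after (34) below.

```python
import numpy as np

def build_qp(A, b, tau):
    """Assemble B and c of (31)-(32) from the l1 problem (29).

    B is 2n x 2n, so explicit assembly is only viable for small n.
    """
    AtA = A.T @ A
    y = A.T @ b
    B = np.block([[AtA, -AtA], [-AtA, AtA]])
    c = tau * np.ones(2 * A.shape[1]) + np.concatenate([-y, y])
    return B, c
```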

Xiao et al. [7] indicated that the QP problem (30) is equivalent to the linear complementarity problem: find $z \in \mathbb{R}^{2n}$ such that

$z \ge 0, \quad Bz + c \ge 0, \quad z^T (Bz + c) = 0.$ (33)

It is obvious that $z$ is a solution of (33) if and only if it is a solution of the following nonlinear system of equations:

$F(z) = \min\{z, Bz + c\} = 0.$ (34)

The function $F$ is vector valued, and the "min" is interpreted as the componentwise minimum. Xiao et al. [7] proved that $F$ is monotone. Hence, (34) can be solved effectively by the HSG-V algorithm.
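In practice $B$ need never be formed: evaluating the mapping in (34) only requires products with $A$ and $A^T$. A matrix-free sketch (our notation):

```python
import numpy as np

def residual(z, A, b, tau):
    """F(z) = min(z, Bz + c) from (34), evaluated without forming B.

    z stacks (u, v); Bz + c reduces to products with A and A.T.
    """
    n = A.shape[1]
    u, v = z[:n], z[n:]
    w = A.T @ (A @ (u - v))                          # A^T A (u - v)
    y = A.T @ b
    Bz_plus_c = np.concatenate([w - y, -w + y]) + tau
    return np.minimum(z, Bz_plus_c)
```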

Firstly, we consider a typical CS scenario in which the goal is to reconstruct a length-$n$ sparse signal from $m$ observations with $m \ll n$. We measure the quality of restoration by the mean squared error (MSE) relative to the original signal $\bar{x}$, that is,

$\mathrm{MSE} = \dfrac{1}{n}\|\tilde{x} - \bar{x}\|^2,$

where $\tilde{x}$ is the restored signal. We test a small-size signal in which the original signal contains a small number of randomly placed nonzero elements. $A$ is the Gaussian matrix generated by the randn(m, n) command in MATLAB. In this test, the measurement is contaminated by noise, that is,

$b = A\bar{x} + \omega,$

where $\omega$ is Gaussian noise distributed as $N(0, \sigma^2 I)$. The parameters are set as in Section 3.1, and $\tau$ is forced to decrease following the continuation strategy of [14]. To obtain better-quality estimated signals, the process is terminated when the relative change of the objective function falls below a small tolerance, that is,

$\dfrac{|f(x_k) - f(x_{k-1})|}{|f(x_{k-1})|} < \mathrm{tol},$

where $f(x_k)$ denotes the objective function value at $x_k$.
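A Python analogue of this experimental setup might look as follows; the sizes, sparsity level, and noise standard deviation below are placeholders, since the paper's exact values are not reproduced in this excerpt:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 256, 1024, 32               # placeholders: m observations, length-n signal, k spikes
x_bar = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_bar[support] = rng.choice([-1.0, 1.0], size=k)    # random +/-1 spikes
A = rng.standard_normal((m, n)) / np.sqrt(m)        # Gaussian measurement matrix
omega = 1e-3 * rng.standard_normal(m)               # Gaussian noise
b = A @ x_bar + omega
mse = lambda x_tilde: np.sum((x_tilde - x_bar) ** 2) / n
```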

Figures 3 and 4 report the results of HSG-V for sparse signal reconstruction from limited measurements. Comparing the first and last plots in Figure 3, we find that the original sparse signal is restored almost exactly from the limited measurements. From the right plot in Figure 4, we observe that all the blue dots are circled by the red circles, which shows that the original signal has been recovered almost exactly. Altogether, this simple experiment shows that the HSG-V algorithm performs well and is an efficient method for denoising sparse signals.

In the next experiment, we compare the performance of our algorithm with the SGCS algorithm for image deconvolution, in which $A$ is a partial DWT matrix whose rows are chosen randomly from the discrete wavelet transform (DWT) matrix. To measure the quality of restoration, we use the SNR (signal-to-noise ratio) defined as $\mathrm{SNR} = 10 \log_{10}(\|\bar{x}\|^2 / \|\tilde{x} - \bar{x}\|^2)$. Figure 5 shows the original test images, and Figure 6 shows the restoration results of the SGCS and HSG-V algorithms, respectively. These results show that the HSG-V algorithm can restore blurred images quite well and obtains better-quality reconstructed images in an efficient manner.
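For completeness, the SNR measure above is a one-liner (a sketch under the definition given, with our own function name):

```python
import numpy as np

def snr_db(x_bar, x_tilde):
    """Signal-to-noise ratio (dB) of restoration x_tilde against original x_bar."""
    return 10.0 * np.log10(np.sum(x_bar ** 2) / np.sum((x_tilde - x_bar) ** 2))
```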

4. Conclusion

In this paper, we develop an adaptive prediction-correction method for solving nonlinear monotone equations. Under some mild assumptions, we establish its global convergence. Based on the prediction-correction method, an efficient hybrid spectral gradient (HSG-V) algorithm is proposed, which is a composite of the MSG-V algorithm and the SG algorithm. Numerical results show that the HSG-V algorithm is preferable to and outperforms the MSG, MSG-V, and SG algorithms. Moreover, the HSG-V algorithm is applied to solve $\ell_1$-norm regularized problems arising from sparse signal reconstruction. Numerical experiments show that the HSG-V algorithm works well and provides an efficient approach for compressed sensing and image deconvolution.

Appendix

The Test Problems

In this appendix, we list the test functions and the associated initial guesses as follows.

Problem 1. , .

Problem 2. , .

Problem 3. is given by and
Note that Problems 1 and 3 are smooth, while Problem 2 is nonsmooth (Table 7).

Acknowledgments

This work was partly supported by the National Natural Science Foundation of China (nos. 11001060, 61262026, 81000613, and 81101046), the JGZX Programme of Jiangxi Province (20112BCB23027), and the Science and Technology Programme of the Jiangxi Education Committee (LDJH12088).