Abstract

In this paper, we propose a modified Polak–Ribière–Polyak (PRP) conjugate gradient method for solving large-scale nonlinear equations. Under weak conditions, we show that the proposed method is globally convergent. Numerical experiments show that the proposed method is efficient and stable.

1. Introduction

Solving nonlinear equations is an important problem which appears in various models of science and engineering such as computer vision, computational geometry, signal processing, computational chemistry, and robotics. More specifically, the subproblems in the generalized proximal algorithms with Bregman distances are monotone nonlinear equations [1], and $\ell_1$-norm regularized optimization problems can be reformulated as monotone nonlinear equations [2]. Due to their wide applications, numerical methods for solving monotone nonlinear equations have received much attention [3–10]. In this paper, we are interested in numerical methods for solving monotone nonlinear equations with convex constraints:
$$F(x) = 0, \quad x \in \Omega, \tag{1}$$
where $F:\mathbb{R}^{n}\to\mathbb{R}^{n}$ is a continuous function and $\Omega \subseteq \mathbb{R}^{n}$ is a nonempty, closed, and convex set. The monotonicity of the mapping $F$ means that
$$(F(x) - F(y))^{T}(x - y) \ge 0, \quad \forall x, y \in \mathbb{R}^{n}. \tag{2}$$

The methods for solving monotone nonlinear equations (1) are closely related to the methods for solving the following optimization problem:
$$\min\; f(x), \quad x \in \mathbb{R}^{n}, \tag{3}$$
where $f:\mathbb{R}^{n}\to\mathbb{R}$ is a continuously differentiable function.

Notice that if $f$ is strictly convex, then $\nabla f$ is strictly monotone, which means $(\nabla f(x)-\nabla f(y))^{T}(x-y) > 0$ for all $x \ne y$. It is well known that a strictly convex function has a unique minimizer $x^{*}$, satisfying $\nabla f(x^{*}) = 0$. To sum up, if there is a convex function $f$ satisfying $\nabla f(x) = F(x)$, then solving the optimization problem (3) is equivalent to solving the monotone nonlinear equations (1). So, a natural idea for solving the monotone nonlinear equations (1) is to use the existing efficient methods for solving the optimization problem (3). There are many methods for solving the optimization problem (3), such as the Newton method, the quasi-Newton method, the trust region method, and the conjugate gradient method. Among these methods, the conjugate gradient method is very effective due to its simplicity and low storage. A conjugate gradient method generates a sequence of iterates
$$x_{k+1} = x_{k} + \alpha_{k} d_{k}, \tag{4}$$
where $\alpha_{k}$ is the step length and the direction $d_{k}$ is defined by
$$d_{k} = \begin{cases} -g_{k}, & k = 0, \\ -g_{k} + \beta_{k} d_{k-1}, & k \ge 1, \end{cases} \tag{5}$$
where $\beta_{k}$ is a parameter and $g_{k} = \nabla f(x_{k})$ is the gradient of the objective function $f$ at $x_{k}$. The choice of $\beta_{k}$ determines different conjugate gradient methods [11–17]. We are interested in the PRP conjugate gradient method, in which the parameter $\beta_{k}$ is defined by
$$\beta_{k}^{\mathrm{PRP}} = \frac{g_{k}^{T} y_{k-1}}{\|g_{k-1}\|^{2}}, \tag{6}$$
where $y_{k-1} = g_{k} - g_{k-1}$. Based on the idea of [18, 19], Zhang et al. [20] proposed a new modified nonlinear PRP method in which $\beta_{k}$ is defined by (7), where two positive constants are involved. There is a mistake in that definition: with it, Lemma 1 in [20] cannot be proved. It should be corrected as in (8).
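To make the discussion above concrete, the following short NumPy sketch carries out a few iterations of the classical unconstrained PRP scheme (4)–(6). The quadratic test function, the fixed step length, and all names in the snippet are illustrative assumptions; they are not part of the method proposed later in this paper.

```python
import numpy as np

def prp_direction(g_new, g_old, d_old):
    """Classical PRP direction update, cf. (5)-(6):
    beta = g_new^T (g_new - g_old) / ||g_old||^2,  d = -g_new + beta * d_old."""
    y = g_new - g_old
    beta = float(g_new @ y) / float(g_old @ g_old)
    return -g_new + beta * d_old

# Illustrative use on f(x) = 0.5 * x^T A x, so grad f(x) = A x; A and x0 are assumptions.
A = np.diag([1.0, 4.0, 9.0])
grad = lambda x: A @ x

x = np.array([1.0, 1.0, 1.0])
g = grad(x)
d = -g                      # d_0 = -g_0, cf. (5)
for k in range(5):
    alpha = 0.1             # a fixed step length, only for illustration
    x_new = x + alpha * d   # iterate update, cf. (4)
    g_new = grad(x_new)
    d = prp_direction(g_new, g, d)
    x, g = x_new, g_new
print(np.linalg.norm(g))    # report the final gradient norm
```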

There are many conjugate gradient methods for solving nonlinear equations (1). Zhang and Zhou [4] proposed a spectral gradient method by combining the modified spectral gradient and projection method, which can be applied to solve nonsmooth equations. Xiao and Zhou [10] extended the CG-DESCENT method to solve large-scale nonlinear monotone equations and applied it to decode a sparse signal in compressive sensing. Dai and Zhu [21] proposed a derivative-free method for solving large-scale nonlinear monotone equations, together with a new line search for the derivative-free method. Other related works can be found in [3, 5–8, 10, 22–30]. In this paper, we combine the projection method [3], the modified nonlinear PRP conjugate gradient method for unconstrained optimization [20], and the iterative method of [10], and propose a modified nonlinear conjugate gradient method for solving large-scale nonlinear monotone equations with convex constraints.

This paper is organized as follows. In Section 2, we propose a modified nonlinear PRP method for solving monotone nonlinear equations with convex constraints. Under reasonable conditions, we prove its global convergence. In Section 3, we make an improvement to the proposed method and give the convergence theorem of the improved method. In Section 4, we carry out numerical experiments to test the proposed methods. The results show that our methods are efficient and promising. Furthermore, we use the proposed methods to solve practical problems in compressed sensing.

2. A Modified Nonlinear PRP Method

In this section, we develop a modified nonlinear PRP method for solving the nonlinear equations with convex constraints. Based on the modified nonlinear PRP method [20], we now introduce our method for solving (1). Inspired by (8), we define the search direction $d_{k}$ as in (9), where $F_{k}$ denotes $F(x_{k})$. The parameter $\beta_{k}$ is computed as in (10), in which two positive constants appear.

The lemma below shows a good property of the direction $d_{k}$. The steps of the method are given in Algorithm 1.

Lemma 1. Let $\{d_{k}\}$ be generated by Algorithm 1. If $F_{k} \ne 0$, then there exists a constant $c > 0$ such that
$$F_{k}^{T} d_{k} \le -c\,\|F_{k}\|^{2}. \tag{11}$$

Proof. For $k = 0$, we have $d_{0} = -F_{0}$, and hence $F_{0}^{T} d_{0} = -\|F_{0}\|^{2}$. For $k \ge 1$, from the definitions of $d_{k}$ and $\beta_{k}$ in (9) and (10), we obtain an estimate of the same form with a constant $c_{0} > 0$. Let $c = \min\{1, c_{0}\}$; then, inequality (11) is satisfied for all $k$.
Next, we establish the global convergence of the proposed method. Unless otherwise specified, we always suppose that the solution set of equation (1) is nonempty and that the following assumption holds.

Assumption 1. (i) The mapping $F$ is Lipschitz continuous, that is, there exists a constant $L > 0$ such that
$$\|F(x) - F(y)\| \le L\,\|x - y\|, \quad \forall x, y \in \mathbb{R}^{n}.$$
(ii) The projection operator $P_{\Omega}$ is nonexpansive, i.e.,
$$\|P_{\Omega}(x) - P_{\Omega}(y)\| \le \|x - y\|, \quad \forall x, y \in \mathbb{R}^{n}.$$
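Assumption 1(ii) holds, in particular, for the Euclidean projection onto any nonempty, closed, and convex set. As a small illustration (the choice of a box-shaped $\Omega$ here is our assumption, not one made by the paper), the projection onto $\Omega = \{x : l \le x \le u\}$ reduces to componentwise clipping and is easily checked to be nonexpansive:

```python
import numpy as np

def project_box(x, lower, upper):
    """Euclidean projection onto the box {x : lower <= x <= upper}.
    This projection is nonexpansive, as required by Assumption 1(ii)."""
    return np.minimum(np.maximum(x, lower), upper)

# Nonexpansiveness check on random points (illustrative only).
rng = np.random.default_rng(0)
l, u = -np.ones(5), np.ones(5)
x, y = rng.normal(size=5) * 3, rng.normal(size=5) * 3
assert np.linalg.norm(project_box(x, l, u) - project_box(y, l, u)) <= np.linalg.norm(x - y) + 1e-12
```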

Lemma 2. Suppose that Assumption 1 holds, $x^{*}$ is a solution of (1), and the sequences $\{x_{k}\}$ and $\{z_{k}\}$ are generated by Algorithm 1. Then, the sequences $\{x_{k}\}$, $\{F(x_{k})\}$, and $\{d_{k}\}$ are bounded.

Initial. Given a small constant $\varepsilon > 0$ and constants $\sigma > 0$, $\rho \in (0, 1)$. Choose an initial point $x_{0} \in \Omega$. Let $k := 0$.
Step 1. Stop if $\|F(x_{k})\| \le \varepsilon$.
Step 2. Compute (9) to get $d_{k}$.
Step 3. Let $\alpha_{k} = \max\{\rho^{i} : i = 0, 1, 2, \ldots\}$ satisfying
$-F(x_{k} + \alpha_{k} d_{k})^{T} d_{k} \ge \sigma\,\alpha_{k}\,\|d_{k}\|^{2}$,
and denote $z_{k} = x_{k} + \alpha_{k} d_{k}$.
Step 4. Compute $x_{k+1} = P_{\Omega}[x_{k} - \lambda_{k} F(z_{k})]$,
where $\lambda_{k} = \dfrac{F(z_{k})^{T}(x_{k} - z_{k})}{\|F(z_{k})\|^{2}}$,
and $P_{\Omega}$ is the projection operator defined by $P_{\Omega}(x) = \arg\min_{y \in \Omega}\|y - x\|$.
Step 5. Let $k := k + 1$ and go to Step 1.
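To give a feel for how Steps 1–5 fit together, here is a minimal NumPy sketch of the derivative-free projection framework that Algorithm 1 follows (hyperplane projection in the sense of [3] and [10]). It is written under assumptions: the classical PRP-type choice of $\beta_{k}$ and the line search $-F(x_{k}+\alpha d_{k})^{T} d_{k} \ge \sigma\alpha\|d_{k}\|^{2}$ stand in for the paper's exact formulas (9) and (10), and all parameter values and the test mapping are illustrative.

```python
import numpy as np

def solve_monotone(F, project, x0, sigma=1e-4, rho=0.5, eps=1e-6, max_iter=500):
    """Projection-type derivative-free CG sketch for F(x) = 0, x in Omega.
    'project' is the Euclidean projection onto Omega (Assumption 1(ii))."""
    x = x0.copy()
    Fx = F(x)
    d = -Fx                                     # d_0 = -F_0
    for k in range(max_iter):
        if np.linalg.norm(Fx) <= eps:           # Step 1: stopping test
            break
        # Step 3: backtracking line search along d
        alpha = 1.0
        while True:
            z = x + alpha * d
            Fz = F(z)
            if -Fz @ d >= sigma * alpha * (d @ d) or alpha < 1e-12:  # safeguard on alpha
                break
            alpha *= rho
        # Step 4: hyperplane projection step, cf. [3]
        lam = (Fz @ (x - z)) / (Fz @ Fz)
        x_new = project(x - lam * Fz)
        # Step 2 (for the next iteration): PRP-type direction update (illustrative beta_k)
        F_new = F(x_new)
        beta = (F_new @ (F_new - Fx)) / (Fx @ Fx)
        d = -F_new + beta * d
        x, Fx = x_new, F_new
    return x

# Example: F(x) = exp(x) - 1 (componentwise), a monotone mapping; Omega = nonnegative orthant.
F = lambda x: np.exp(x) - 1.0
proj = lambda x: np.maximum(x, 0.0)
x_star = solve_monotone(F, proj, x0=np.full(1000, 0.5))
print(np.linalg.norm(F(x_star)))
```

Note that the only problem-dependent ingredients are the mapping $F$ and the projection onto $\Omega$; no derivatives of $F$ are required, which is what makes this framework attractive for large-scale problems.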

Proof. We first show that $\{x_{k}\}$ is bounded. From the monotonicity of the mapping $F$ and $F(x^{*}) = 0$, we have $F(z_{k})^{T}(z_{k} - x^{*}) \ge 0$, and hence $F(z_{k})^{T}(x_{k} - x^{*}) \ge F(z_{k})^{T}(x_{k} - z_{k}) > 0$, where the positivity follows from the line search in Step 3. It is easy to see that $\|x_{k+1} - x^{*}\| = \|P_{\Omega}[x_{k} - \lambda_{k} F(z_{k})] - x^{*}\| \le \|x_{k} - \lambda_{k} F(z_{k}) - x^{*}\| \le \|x_{k} - x^{*}\|$. The last inequality implies $\|x_{k} - x^{*}\| \le \|x_{0} - x^{*}\|$ for all $k$. It is obvious that the sequence $\{x_{k}\}$ is bounded, i.e., there is a constant $M_{1} > 0$ such that $\|x_{k}\| \le M_{1}$ for all $k$. Next, we show that $\{F(x_{k})\}$ is bounded. Since $F$ is Lipschitz continuous, we obtain $\|F(x_{k})\| = \|F(x_{k}) - F(x^{*})\| \le L\,\|x_{k} - x^{*}\| \le L\,\|x_{0} - x^{*}\|$. Let $M_{2} = L\,\|x_{0} - x^{*}\|$; then, $\{F(x_{k})\}$ is bounded. That is, $\|F(x_{k})\| \le M_{2}$ for all $k$. At last, we prove that the sequence $\{d_{k}\}$ is bounded. From (2) and (23) and Algorithm 1, we obtain an upper bound on $|\beta_{k}|\,\|d_{k-1}\|$. So, the following inequality holds: $\|d_{k}\| \le \|F(x_{k})\| + |\beta_{k}|\,\|d_{k-1}\| \le M_{3}$ for some constant $M_{3} > 0$. It implies that the sequence $\{d_{k}\}$ is bounded.

Lemma 3. Suppose that Assumption 1 holds, and the sequences $\{x_{k}\}$ and $\{d_{k}\}$ are generated by Algorithm 1. Then, there exists a constant $M > 0$ such that
$$c\,\|F_{k}\| \le \|d_{k}\| \le M\,\|F_{k}\|, \tag{26}$$
where $c$ is the constant from Lemma 1.

Proof. We first prove the right-hand side of inequality (26). For $k = 0$, from (9), we have $\|d_{0}\| = \|F_{0}\|$. For $k \ge 1$, by the definition of $\beta_{k}$ and (21), we obtain that $|\beta_{k}|\,\|d_{k-1}\|$ is bounded by a constant multiple of $\|F_{k}\|$. By the definition (9) and the last inequality, we obtain $\|d_{k}\| \le \|F_{k}\| + |\beta_{k}|\,\|d_{k-1}\| \le M\,\|F_{k}\|$ for some constant $M > 0$; then, the right-hand side of (26) holds.
Now, we turn to prove the left side of the inequality. It follows from (11) that $c\,\|F_{k}\|^{2} \le -F_{k}^{T} d_{k} \le \|F_{k}\|\,\|d_{k}\|$. Therefore, we have $\|d_{k}\| \ge c\,\|F_{k}\|$.

Lemma 4. Suppose Assumption 1 holds; then, the step length $\alpha_{k}$ generated by the line search in Algorithm 1 satisfies
$$\alpha_{k} \ge \min\left\{1,\; \frac{\rho\,c}{(L + \sigma)\,M^{2}}\right\},$$
where $c$ and $M$ are the constants from Lemmas 1 and 3.

Proof. If Algorithm 1 terminates in a finite number of steps, then there is a $k_{0}$ such that $x_{k_{0}}$ is a solution of equation (1) and $F(x_{k_{0}}) = 0$. From now on, we assume that $F(x_{k}) \ne 0$ for any $k$. It is easy to see from (11) that $d_{k} \ne 0$.
If $\alpha_{k} \ne 1$, by the line search process, we know that $\alpha_{k}/\rho$ does not satisfy the line search condition in Step 3 of Algorithm 1, that is, $-F(x_{k} + (\alpha_{k}/\rho) d_{k})^{T} d_{k} < \sigma\,(\alpha_{k}/\rho)\,\|d_{k}\|^{2}$. It follows from (11) and Assumption 1 that $c\,\|F_{k}\|^{2} \le -F_{k}^{T} d_{k} = (F(x_{k} + (\alpha_{k}/\rho) d_{k}) - F(x_{k}))^{T} d_{k} - F(x_{k} + (\alpha_{k}/\rho) d_{k})^{T} d_{k} \le (L + \sigma)\,(\alpha_{k}/\rho)\,\|d_{k}\|^{2}$. From the last inequality and Lemma 3, we obtain $\alpha_{k} \ge \rho\,c\,\|F_{k}\|^{2}/((L + \sigma)\,\|d_{k}\|^{2}) \ge \rho\,c/((L + \sigma)\,M^{2})$. Hence, it holds that $\alpha_{k} \ge \min\{1,\; \rho\,c/((L + \sigma)\,M^{2})\}$.

Lemma 5. Suppose that Assumption 1 holds, and the sequences $\{x_{k}\}$ and $\{z_{k}\}$ are generated by Algorithm 1. Then, we have
$$\lim_{k \to \infty} \alpha_{k}\,\|d_{k}\| = \lim_{k \to \infty} \|x_{k} - z_{k}\| = 0. \tag{37}$$

Proof. Notice that, by the nonexpansiveness of $P_{\Omega}$, the definition of $\lambda_{k}$ in Step 4, and the monotonicity of $F$, we have $\|x_{k+1} - x^{*}\|^{2} \le \|x_{k} - x^{*}\|^{2} - \dfrac{(F(z_{k})^{T}(x_{k} - z_{k}))^{2}}{\|F(z_{k})\|^{2}}$. Since the function $F$ is continuous and the sequence $\{z_{k}\}$ is bounded, the sequence $\{F(z_{k})\}$ is bounded. That is, for all $k$, there exists a positive constant $M_{4}$ such that $\|F(z_{k})\| \le M_{4}$. Then, combining the last two relations with the line search condition $F(z_{k})^{T}(x_{k} - z_{k}) \ge \sigma\,\alpha_{k}^{2}\,\|d_{k}\|^{2}$, we obtain $\dfrac{\sigma^{2}}{M_{4}^{2}} \sum_{k} \alpha_{k}^{4}\,\|d_{k}\|^{4} \le \sum_{k}\big(\|x_{k} - x^{*}\|^{2} - \|x_{k+1} - x^{*}\|^{2}\big) < \infty$. So, we have $\lim_{k\to\infty} \alpha_{k}\,\|d_{k}\| = 0$ and, consequently, $\lim_{k\to\infty} \|x_{k} - z_{k}\| = 0$.

Theorem 1. Suppose that Assumption 1 holds and the sequence $\{x_{k}\}$ is generated by Algorithm 1. Then, we have
$$\liminf_{k \to \infty} \|F(x_{k})\| = 0. \tag{41}$$

Proof. Suppose that (41) does not hold; then, there exists $\varepsilon_{0} > 0$ such that, for any $k$, $\|F(x_{k})\| \ge \varepsilon_{0}$. From (26) and the last inequality, it is easy to see that $\|d_{k}\| \ge c\,\varepsilon_{0}$ for all $k$ (42). From (42) and Lemma 4, we obtain that $\alpha_{k}\,\|d_{k}\|$ is bounded away from zero. The last conclusion yields a contradiction with (37), so (41) is satisfied.

3. An Improvement

In this section, we make some improvement to the modified nonlinear PRP method proposed in Section 2. In Algorithm 1, we take the step length $\lambda_{k} = F(z_{k})^{T}(x_{k} - z_{k})/\|F(z_{k})\|^{2}$. Is there a better choice for $\lambda_{k}$? Answering this question is the purpose of improving Algorithm 1. Under the condition of ensuring the convergence of the algorithm and preserving the related properties and results, we improve Algorithm 1 in order to obtain better numerical results.

From Algorithm 1, to make the inequality $\|x_{k+1} - x^{*}\| \le \|x_{k} - x^{*}\|$ hold, we only need the step length $\lambda$ to satisfy
$$\|x_{k} - \lambda F(z_{k}) - x^{*}\| \le \|x_{k} - x^{*}\|.$$

By solving the last inequality (using the monotonicity of $F$), we have
$$0 \le \lambda \le \frac{2\,F(z_{k})^{T}(x_{k} - z_{k})}{\|F(z_{k})\|^{2}}.$$

It is easy to see that $\lambda_{k} = F(z_{k})^{T}(x_{k} - z_{k})/\|F(z_{k})\|^{2}$ is the minimum point of the quadratic function of $\lambda$ that bounds $\|x_{k} - \lambda F(z_{k}) - x^{*}\|^{2}$ from above. This is the reason why Algorithm 1 takes this value of $\lambda_{k}$. Under reasonable conditions, we hope to get a larger step length than Algorithm 1, so we choose $\lambda$ from the upper part of the admissible interval above.
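For completeness, here is a worked sketch of where the admissible interval for $\lambda$ comes from, under the standard projection-step analysis; it uses only Assumption 1, the monotonicity (2), and $F(x^{*}) = 0$, and is our reconstruction of the argument rather than a verbatim quotation. Since $x^{*} \in \Omega$ and $P_{\Omega}$ is nonexpansive, $\|x_{k+1} - x^{*}\| \le \|x_{k} - \lambda F(z_{k}) - x^{*}\|$, and
$$\|x_{k} - \lambda F(z_{k}) - x^{*}\|^{2} = \|x_{k} - x^{*}\|^{2} - 2\lambda\,F(z_{k})^{T}(x_{k} - x^{*}) + \lambda^{2}\,\|F(z_{k})\|^{2} \le \|x_{k} - x^{*}\|^{2} - 2\lambda\,F(z_{k})^{T}(x_{k} - z_{k}) + \lambda^{2}\,\|F(z_{k})\|^{2},$$
where the inequality uses $F(z_{k})^{T}(z_{k} - x^{*}) \ge 0$ and $\lambda \ge 0$. The right-hand side is a quadratic in $\lambda$; it stays below $\|x_{k} - x^{*}\|^{2}$ precisely for $0 \le \lambda \le 2\,F(z_{k})^{T}(x_{k} - z_{k})/\|F(z_{k})\|^{2}$ and is minimized at the midpoint $\lambda_{k} = F(z_{k})^{T}(x_{k} - z_{k})/\|F(z_{k})\|^{2}$, the value used in Algorithm 1. Any $\lambda$ in the upper half of the interval is therefore also admissible, and this is the room the improved algorithm exploits.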

Based on the above arguments, we propose an improved algorithm of Algorithm 1. In the improved algorithm, the step length $\lambda_{k}$ is taken to be a larger value inside the admissible interval derived above (see Step 4 of Algorithm 2).

Similar to the proof of Theorem 1, we have the following results.

Theorem 2. Suppose that Assumption 1 holds and the sequence $\{x_{k}\}$ is generated by Algorithm 2; then, we have $\liminf_{k \to \infty} \|F(x_{k})\| = 0$. The iterative process of the improved method is stated as follows.

Initial. Given a small constant $\varepsilon > 0$ and constants $\sigma > 0$, $\rho \in (0, 1)$. Choose an initial point $x_{0} \in \Omega$. Let $k := 0$.
Step 1. Stop if $\|F(x_{k})\| \le \varepsilon$.
Step 2. Compute (9) to get $d_{k}$.
Step 3. Let $\alpha_{k} = \max\{\rho^{i} : i = 0, 1, 2, \ldots\}$ satisfying the same line search condition as in Algorithm 1, and denote $z_{k} = x_{k} + \alpha_{k} d_{k}$.
Step 4. Compute $x_{k+1} = P_{\Omega}[x_{k} - \lambda_{k} F(z_{k})]$,
where $\lambda_{k}$ is the enlarged step length chosen from the admissible interval derived in this section.
Step 5. Let $k := k + 1$ and go to Step 1.

4. Numerical Results

In this section, we carry out some numerical experiments to test the performance of the proposed methods. We implemented our methods in MATLAB R2020b and ran the codes on a personal computer with a 2.3 GHz CPU and 16 GB of RAM.

We first solve Problems 1 and 2.

Problem 1. (see [8]). The mapping $F$ is taken as $F(x) = (f_{1}(x), f_{2}(x), \ldots, f_{n}(x))^{T}$, where the component functions $f_{i}$ are those given in [8].

Problem 2. (see [4]). The mapping $F$ is taken as $F(x) = (f_{1}(x), f_{2}(x), \ldots, f_{n}(x))^{T}$, where the component functions $f_{i}$ are those given in [4].
The stopping criterion of the algorithm is set to $\|F(x_{k})\| \le \varepsilon$ or the number of iterations reaching 500; the latter case means that the method fails on the test problem. We test both problems with several dimensions of the variable $n$, start from different initial points, and list all results in Tables 1 and 2. We compare the performance of the proposed methods with the classical Newton method and the efficient algorithm CGD [10] in terms of the total number of iterations as well as the computational time. The meaning of each column is given below.
'Init': the initial point.
'n': the dimension of the problem.
'Iter': the total number of iterations.
'Time': the CPU time (in seconds) used by the method.
'NM': the Newton method.
'CGD': the conjugate gradient method in [10].
'MNPRP': the modified nonlinear PRP method.
'IMNPRP': the improved modified nonlinear PRP method.
The results in Tables 1 and 2 show that our methods perform very well both in the number of iterations and in CPU time. IMNPRP performs best among these methods. It is worth noting that the number of iterations does not increase significantly as $n$ increases. Hence, the proposed methods are very suitable for solving large-scale problems. Because of the lack of memory, the dimension of the problems solved by the Newton method is no more than 100,000.
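The comparison recorded in Tables 1 and 2 can be scripted along the following lines. The snippet only illustrates how 'Iter' and 'Time' are collected; the monotone test mapping, the initial points, and the stand-in solver are assumptions made for the sake of a runnable example, not the paper's exact Problems 1 and 2 or its MNPRP/IMNPRP implementations.

```python
import time
import numpy as np

def projected_residual_solver(F, project, x0, eps=1e-6, max_iter=500, step=0.5):
    """A simple stand-in solver (projected residual iteration) used only to
    illustrate the experiment harness; MNPRP/IMNPRP would be plugged in here."""
    x = x0.copy()
    for k in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= eps:
            return x, k
        x = project(x - step * Fx)
    return x, max_iter

def run_trial(solver, F, project, x0):
    """One (method, problem, initial point) trial: record the 'Iter' and 'Time' columns."""
    t0 = time.perf_counter()
    x, iters = solver(F, project, x0)
    return iters, time.perf_counter() - t0, iters >= 500   # 500 iterations counts as a failure

# Illustrative monotone mapping and initial points (assumptions, not the paper's exact problems).
F = lambda x: np.exp(x) - 1.0
proj = lambda x: x                                          # Omega = R^n in this illustration
for n in (1000, 10000, 100000):
    for name, init in (("ones", np.ones(n)), ("0.1*ones", 0.1 * np.ones(n))):
        iters, sec, failed = run_trial(projected_residual_solver, F, proj, init)
        print(f"n={n:>7d}  init={name:<9s}  Iter={iters:>4d}  Time={sec:.3f}s  {'FAIL' if failed else ''}")
```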
The following example is a signal reconstruction problem from compressed sensing.

Problem 3. (see [10]). Consider a typical compressive sensing scenario, where we aim to reconstruct a length-$n$ sparse signal $\bar{x}$ from $m$ observations $b \in \mathbb{R}^{m}$. In this test, the measurement $b$ contains noise:
$$b = A\bar{x} + \omega,$$
where $\omega$ is Gaussian noise distributed as $N(0, \sigma^{2} I)$. The matrix $A \in \mathbb{R}^{m \times n}$ is a random Gaussian matrix generated by the command randn(m, n) in MATLAB. The merit function is
$$f(x) = \frac{1}{2}\|b - Ax\|_{2}^{2} + \tau\|x\|_{1},$$
where the value $\tau$ is forced to decrease in the manner described in [31]. The iterative process starts at the measurement image, i.e., $x_{0} = A^{T} b$, and terminates when the relative change between successive iterates falls below a given tolerance, i.e.,
$$\frac{|f(x_{k}) - f(x_{k-1})|}{|f(x_{k-1})|} < \mathrm{tol},$$
where $f(x_{k})$ denotes the merit function value at $x_{k}$. By the discussion in [10], we know that the $\ell_1$-norm problem can be transformed into a monotone nonlinear equation. Hence, it can be solved by Algorithms 1 and 2.
Due to the storage limitations of the PC, we test a small-size signal, where the original signal $\bar{x}$ contains randomly placed nonzero elements. The quality of restoration is measured by the mean squared error (MSE) with respect to the original signal $\bar{x}$, that is,
$$\mathrm{MSE} = \frac{1}{n}\|\tilde{x} - \bar{x}\|^{2},$$
where $\tilde{x}$ is the restored signal. We use the same parameter values in CGD, MNPRP, and IMNPRP.
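A test instance of Problem 3 and the MSE measure can be generated along the following lines; the signal length, the number of measurements, the sparsity level, and the noise level in the snippet are illustrative choices, not necessarily those used in the experiments, and the $\ell_1$ solver itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 4096, 1024, 128          # signal length, measurements, nonzeros (illustrative)

# Original k-sparse signal x_bar and Gaussian measurement matrix A.
x_bar = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_bar[support] = rng.normal(size=k)
A = rng.normal(size=(m, n))

# Noisy observations b = A x_bar + omega (illustrative noise level).
b = A @ x_bar + rng.normal(scale=1e-2, size=m)

def mse(x_rec, x_true):
    """Mean squared error between the restored and the original signal."""
    return np.mean((x_rec - x_true) ** 2)

x0 = A.T @ b                        # start from the measurement image A^T b
print("initial MSE:", mse(x0, x_bar))
# x_rec = <run MNPRP/IMNPRP on the monotone reformulation of the l1 problem>  (not shown)
```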
In order to test the effectiveness of the proposed methods, we compare the proposed methods with the CGD method [10] and the solver SGCS which is specially designed to solve monotone equations for recovering a large sparse signal in compressive sensing. The results are listed in Figures 1 and 2.
It can be seen from Figures 1 and 2 that all methods have recovered the original sparse signal almost exactly. Among these methods, the IMNPRP method performs best.

5. Conclusions

In this paper, a modified conjugate gradient method and an improved variant of it are proposed for solving large-scale nonlinear equations. Under some assumptions, the global convergence of the proposed methods is established. Numerical results show that the proposed methods are very efficient and competitive.

Data Availability

All data generated or analysed during this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research was funded by the Education Department of Hunan Province (Grant no. 20C0559).