Research Article | Open Access

Volume 2019 | Article ID 5261830 | 17 pages | https://doi.org/10.1155/2019/5261830

# A Modified Spectral PRP Conjugate Gradient Projection Method for Solving Large-Scale Monotone Equations and Its Application in Compressed Sensing

Accepted 24 Mar 2019
Published 08 Apr 2019

#### Abstract

In this paper, we develop an algorithm for solving a nonlinear system of monotone equations, which combines a modified spectral PRP (Polak-Ribière-Polyak) conjugate gradient method with a projection method. The search direction in this algorithm is proved to be sufficiently descent for any line search rule. A line search strategy in the literature is modified so that a better step length is obtained more easily, without the difficulty of choosing an appropriate weight in the original strategy. Global convergence of the algorithm is proved under mild assumptions. Numerical tests and a preliminary application to recovering sparse signals indicate that the developed algorithm outperforms similar state-of-the-art algorithms available in the literature, especially for large-scale and singular problems.

#### 1. Introduction

In many fields of science and engineering, solving a nonlinear system of equations is a fundamental problem. For example, in [1, 2], both a Nash economic equilibrium problem and a signal processing problem were formulated as nonlinear systems of equations. Owing to this complexity, numerous algorithms, as well as software packages built upon them, have been developed over the past five decades for solving nonlinear systems of equations; see, for example, [1, 3–16] and the references therein. Nevertheless, in practice, no single algorithm can efficiently solve all the systems of equations arising in science and engineering. It is therefore significant to develop specific algorithms for problems with different analytic and structural features [17, 18].

In this paper, we consider the following nonlinear system of monotone equations:

$$F(x) = 0, \quad (1)$$

where $F:\mathbb{R}^n \to \mathbb{R}^n$ is a continuous and monotone function; that is to say, $F$ satisfies

$$\langle F(x) - F(y), x - y \rangle \ge 0, \quad \forall x, y \in \mathbb{R}^n. \quad (2)$$

It has been shown that the solution set of problem (1) is convex if it is nonempty. In addition, throughout the paper, the space $\mathbb{R}^n$ is equipped with the Euclidean norm $\|\cdot\|$ and the inner product $\langle x, y \rangle = x^{T} y$ for $x, y \in \mathbb{R}^n$.

Aiming at the solution of problem (1), many efficient methods have been developed recently. By an incomplete enumeration, we mention the trust region method, the Newton and quasi-Newton methods [4–6, 19], the Gauss-Newton methods [7, 8], the Levenberg-Marquardt methods, the derivative-free methods and their modified versions [9–16, 23–27], the derivative-free conjugate gradient projection method, the modified PRP (Polak-Ribière-Polyak) conjugate gradient method, the TPRP method, the PRP-type method, the projection method, the FR-type method, and the modified spectral conjugate gradient projection method. In summary, the spectral gradient methods and the conjugate gradient methods are more popular than the Newton and quasi-Newton methods for solving large-scale nonlinear systems of equations. One advantage of the former is that they require neither computing nor storing the Jacobian matrix or its approximation.

Specifically, Li introduced a class of methods for large-scale nonlinear monotone equations, which includes the SG-like method, the MPRP method, and the TPRP method. Chen proposed a PRP method for large-scale nonlinear monotone equations. A descent modified PRP method and FR-type methods were presented in [9, 11], respectively. Liu and Li proposed a projection method for convex constrained monotone nonlinear equations. Two derivative-free conjugate gradient projection methods were presented for such systems, and three extensions of conjugate gradient algorithms were also developed. Based on the projection technique, Wan and Liu proposed a modified spectral conjugate gradient projection method (MSCGP) to solve a nonlinear monotone system of symmetric equations. Then, MSCGP was successfully applied to recovering sparse signals and restoring blurred images.

It is noted that the main idea of [13, 23] is to construct a search direction by a projection technique such that it is sufficiently descent. Owing to its derivative-free and low-storage properties, numerical experiments indicated that the developed algorithm is more efficient at solving large-scale nonlinear monotone systems of equations than similar ones available in the literature.

Yang et al. proposed a modified spectral PRP conjugate gradient method for solving unconstrained optimization problems. It was proved that the search direction at each iteration is a descent direction of the objective function, and global convergence was established under mild conditions. Our research interest in this paper is to study how to extend this method to the solution of problem (1). Specifically, we should address the following issues:

(1) Without derivative information of the function, how should the spectral and conjugate parameters be determined so as to obtain a sufficiently descent search direction at each iteration?

(2) To ensure global convergence of the algorithm, how should an appropriate step length be chosen for a given search direction? In particular, how can the monotonicity of the function be utilized in designing a new iterative scheme?

(3) What is the numerical performance of the new iterative scheme? In particular, is it more efficient than similar algorithms available in the literature?

Note that a new line search rule was proposed for solving nonlinear monotone equations with convex constraints, and it was shown that, by virtue of this line search, the developed algorithm has good numerical performance. However, this line search involves the choice of a weight. Since it may be difficult to choose an appropriate weight in a practical implementation of the algorithm, we attempt to overcome this difficulty in this paper.

In summary, we intend to propose a modified spectral PRP conjugate gradient derivative-free projection method for solving large-scale nonlinear equations. Global convergence of this method will be proved, and numerical tests will be conducted by implementing the developed algorithm to solve benchmark large-scale test problems and to reconstruct sparse signals in compressed sensing.

The rest of this paper is organized as follows. In Section 2, we first state the idea to propose a new spectral PRP conjugate gradient method. Then, a new algorithm is developed. Global convergence is established in Section 3. Section 4 is devoted to numerical experiments. Preliminary application of the algorithm is presented in Section 5. Some conclusions are drawn in Section 6.

#### 2. Development of New Algorithm

In this section, we state in detail how the new algorithm is developed.

##### 2.1. Projection Method

Generally, to solve (1), we need to construct an iterative format as follows:

$$x_{k+1} = x_k + \alpha_k d_k, \quad (3)$$

where $\alpha_k > 0$ is called a step length and $d_k$ is a search direction. Let $z_k = x_k + \alpha_k d_k$. If $z_k$ satisfies

$$\langle F(z_k), x_k - z_k \rangle > 0, \quad (4)$$

then a projection method can be obtained for solving problem (1). Actually, by the monotonicity of $F$, it holds that

$$\langle F(z_k), \bar{x} - z_k \rangle \le 0 \quad (5)$$

for any solution $\bar{x}$ of (1). With such a $z_k$, we define a hyperplane

$$H_k = \{ x \in \mathbb{R}^n : \langle F(z_k), x - z_k \rangle = 0 \}. \quad (6)$$

From (4) and (5), it is clear that $H_k$ strictly separates the iterate point $x_k$ from the solution $\bar{x}$. Thus, the projection of $x_k$ onto $H_k$ is closer to $\bar{x}$ than $x_k$ is. Consequently, the iterative format

$$x_{k+1} = x_k - \frac{\langle F(z_k), x_k - z_k \rangle}{\| F(z_k) \|^2} \, F(z_k) \quad (7)$$

is referred to as the projection method. Both analytic properties and numerical results have shown the efficiency and robustness of projection-based algorithms for monotone systems of equations [10, 13, 14]. In this paper, we intend to propose a new spectral conjugate gradient method also by virtue of the above projection technique.
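To make the update concrete, the projection step (7) can be sketched in a few lines of Python (an illustrative sketch, not the authors' MATLAB implementation; the map and the points used below are hypothetical):

```python
import numpy as np

def project_step(x_k, z_k, F_zk):
    """One projection step: project x_k onto the hyperplane
    H_k = {x : <F(z_k), x - z_k> = 0}, as in format (7)."""
    coeff = np.dot(F_zk, x_k - z_k) / np.dot(F_zk, F_zk)
    return x_k - coeff * F_zk
```

For instance, for the monotone map $F(x) = x$ (whose unique solution is the origin), taking $x_k = (2, 2)$ and the trial point $z_k = (1, 1)$ gives $x_{k+1} = (1, 1)$, which is strictly closer to the solution than $x_k$, as the separation argument above predicts.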

##### 2.2. A Modified Spectral PRP Conjugate Gradient Method

In the projection method (7), it is noted that $z_k$ must satisfy (4). That is to say, $d_k$ should be a search direction satisfying the sufficient descent condition (8). Very recently, Wan et al. proposed a modified spectral conjugate gradient projection method for solving nonlinear monotone symmetric equations, where the direction was chosen in the spectral conjugate gradient form

$$d_k = \begin{cases} -F(x_0), & k = 0, \\ -\theta_k F(x_k) + \beta_k d_{k-1}, & k \ge 1, \end{cases} \quad (9)$$

and the spectral parameter $\theta_k$ and the conjugate parameter $\beta_k$ are computed by (10). It was proved that $d_k$ given by (9) and (10) is sufficiently descent.

Note that a modified spectral PRP conjugate gradient method was proposed for solving unconstrained optimization problems. Following this idea, we can extend that method to the solution of problem (1). Specifically, we compute the parameters $\theta_k$ and $\beta_k$ in (9) by (11). Although (11) gives choices of $\theta_k$ and $\beta_k$ different from (10), we can also prove the following result.

Proposition 1. Let $d_k$ be given by (9) and (11). Then, for any $k \ge 0$, the following equality holds:

$$\langle F(x_k), d_k \rangle = -\| F(x_k) \|^2. \quad (12)$$

Proof. For $k = 0$ and $k = 1$, the equality (12) can be verified directly from (9) and (11). We now prove that if (12), restated as condition (15), holds for $k - 1$ ($k \ge 2$), then (12) also holds for $k$.
Actually, a direct expansion of $\langle F(x_k), d_k \rangle$ using (9) and (11) yields (12), where the fourth equality follows from condition (15). Consequently, by mathematical induction, (12) holds for any $k \ge 0$.

Proposition 1 ensures that the idea of the projection method can be incorporated into the design of iteration schemes to solve (1) when the search direction is determined by (9) and (11).
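As one concrete (hypothetical) instantiation of such a direction, the identity in Proposition 1 can be enforced for any conjugate parameter by letting the spectral parameter absorb the term $\beta_k \langle F_k, d_{k-1} \rangle$. The Python sketch below uses the classical PRP parameter for $\beta_k$; the authors' exact formulas (11) may differ, so this is an illustrative assumption, not their method.

```python
import numpy as np

def spectral_prp_direction(F_k, F_prev=None, d_prev=None):
    """Direction d_k = -theta_k * F_k + beta_k * d_{k-1}, with theta_k
    chosen so that <F_k, d_k> = -||F_k||^2 holds for ANY beta_k (cf. (12))."""
    if d_prev is None:                       # k = 0: d_0 = -F(x_0)
        return -F_k
    # classical PRP conjugate parameter (illustrative choice)
    beta = np.dot(F_k, F_k - F_prev) / np.dot(F_prev, F_prev)
    # spectral parameter cancelling the beta * <F_k, d_{k-1}> term
    theta = 1.0 + beta * np.dot(F_k, d_prev) / np.dot(F_k, F_k)
    return -theta * F_k + beta * d_prev
```

A quick check with random vectors confirms the sufficient descent identity numerically, independently of the value taken by $\beta_k$.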

##### 2.3. Modified Line Search Rule

Since choosing an appropriate step length is as critical to the performance of the iterative scheme (3) as determining the search direction, we now present an inexact line search rule to determine $\alpha_k$ in (3).

Very recently, Ou and Li presented a line search rule as follows: find a nonnegative step length $\alpha_k$ such that inequality (17) holds, where $d_k$ is a fixed search direction, the initial step length and two constants are given, and the remaining parameter is specified by (18). In the original rule, the parameter in (18) involves a weight balancing two quantities, and it may be difficult to choose this weight appropriately in a practical implementation. To overcome this difficulty, we choose the parameter in (18) by the new formula (19). To be sure, for the new method as a combination of (9), (11), (17), and (19), we need to establish its convergence theory and to further test its numerical performance.

Remark 2. To ensure that the line search (17) is well-defined, it is assumed that $d_k$ satisfies condition (20), where the constant involved is positive and given. By Proposition 1, it is clear that $d_k$ chosen by (9) and (11) satisfies (20).

##### 2.4. Development of New Projection-Based Algorithm

With the above preparation, we are in a position to develop an algorithm for solving problem (1) by combining the projection technique with the new methods for determining a search direction and a step length.

We now present the computer procedure of Algorithm 1.
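The overall procedure, descent direction, inexact line search, and projection update, can be sketched as follows. This is an illustrative Python sketch under simplifying assumptions: the backtracking rule below is a simplified stand-in for rule (17)-(19), the PRP-type direction is a hypothetical instantiation of (9) and (11), and all parameter values and the test map are illustrative only.

```python
import numpy as np

def solve_monotone(F, x0, tol=1e-8, sigma=1e-4, rho=0.5, max_iter=500):
    """Sketch of a derivative-free CG projection method for F(x) = 0
    with F continuous and monotone (simplified stand-in for Algorithm 1)."""
    x = np.asarray(x0, dtype=float)
    F_x = F(x)
    d = F_prev = None
    for _ in range(max_iter):
        if np.linalg.norm(F_x) <= tol:
            return x
        # direction with <F_x, d> = -||F_x||^2 (cf. Proposition 1)
        if d is None:
            d = -F_x
        else:
            beta = F_x @ (F_x - F_prev) / (F_prev @ F_prev)
            theta = 1.0 + beta * (F_x @ d) / (F_x @ F_x)
            d = -theta * F_x + beta * d
        # backtracking line search (simplified stand-in for rule (17))
        alpha = 1.0
        while alpha > 1e-16:
            z = x + alpha * d
            F_z = F(z)
            if np.linalg.norm(F_z) <= tol:
                return z                      # trial point already solves (1)
            if -F_z @ d >= sigma * alpha * (d @ d):
                break                         # implies <F(z), x - z> > 0, i.e., (4)
            alpha *= rho
        # projection step (7): project x onto the separating hyperplane
        F_prev = F_x
        x = x - (F_z @ (x - z)) / (F_z @ F_z) * F_z
        F_x = F(x)
    return x
```

For instance, the componentwise monotone map $F(x) = e^{x} - 1$ has the unique solution $x = 0$, and the sketch drives $\|F(x_k)\|$ below the tolerance within a moderate number of iterations.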

Remark 3. Since Algorithm 1 does not involve computing the Jacobian matrix of $F$ or its approximation, both the storage requirement and the computational cost of the algorithm are low. By virtue of this advantage, Algorithm 1 is well suited to large-scale problems. In the next section, we will prove that Algorithm 1 is applicable even if $F$ is nonsmooth. Our numerical tests in Section 4 will further show that Algorithm 1 can find a singular solution of problem (1) (see problem 7 and Table 5).

Remark 4. Compared with the algorithm developed previously for symmetric systems, problem (1) is not assumed here to be a symmetric system of equations.

#### 3. Convergence

In this section, we are going to study global convergence of Algorithm 1.

Apart from the different choices of search direction and step length, Algorithm 1 can be treated as a variant of the projection algorithm in the literature. Hence, following some critical points of the corresponding global convergence analysis, we prove that Algorithm 1 is globally convergent. Very recently, local linear convergence was also proved for some PRP-type projection methods.

We first state the following mild assumptions.

Assumption 5. The function $F$ is monotone on $\mathbb{R}^n$.

Assumption 6. The solution set of problem (1) is nonempty.

Assumption 7. The function $F$ is Lipschitz continuous on $\mathbb{R}^n$; namely, there exists a positive constant $L$ such that, for all $x, y \in \mathbb{R}^n$, $\| F(x) - F(y) \| \le L \| x - y \|$.

Under these assumptions, we can prove that Algorithm 1 has the following nice properties.

Lemma 8. Let $\{x_k\}$ be a sequence generated by Algorithm 1. If Assumptions 5, 6, and 7 hold, then:
(1) Inequality (22) holds for any $\bar{x}$ such that $F(\bar{x}) = 0$.
(2) The sequence $\{x_k\}$ is bounded.
(3) If $\{x_k\}$ is a finite sequence, then the last iterate point is a solution of problem (1); otherwise, (23) and (24) hold.
(4) The sequence $\{F(x_k)\}$ is bounded; hence, there exists a constant $M > 0$ such that $\| F(x_k) \| \le M$ for all $k$.

Proof. (1) Let $\bar{x}$ be any point such that $F(\bar{x}) = 0$. Then, by the monotonicity of $F$, inequality (25) holds. From (7), it is also easy to verify that $x_{k+1}$ is the projection of $x_k$ onto the halfspace (26). Thus, it follows from (25) that $\bar{x}$ belongs to this halfspace. From the basic properties of the projection operator, we know that (27) holds. Consequently, (28) follows, and the desired result (22) is directly obtained from (28).
(2) From (28), it is clear that the sequence $\{\| x_k - \bar{x} \|\}$ is nonnegative and decreasing. Thus, it is convergent, and we conclude that $\{x_k\}$ is bounded.
(3) From (28), each squared distance $\| x_{k+1} - x_k \|^2$ is bounded by the difference of successive squared distances to $\bar{x}$. Since the sequence $\{\| x_k - \bar{x} \|^2\}$ is bounded and decreasing, the series $\sum_k \| x_{k+1} - x_k \|^2$ is convergent. Consequently, (23) and (24) hold, which proves the third result.
(4) For any $k$, by Lipschitz continuity, we have $\| F(x_k) \| = \| F(x_k) - F(\bar{x}) \| \le L \| x_k - \bar{x} \|$. Since $\{\| x_k - \bar{x} \|\}$ is convergent, we conclude that $\{F(x_k)\}$ is bounded. Consequently, there exists a constant $M > 0$ such that $\| F(x_k) \| \le M$ for all $k$.

Lemma 8 indicates that, for the sequence $\{x_k\}$ generated by Algorithm 1, the sequence $\{\| x_k - \bar{x} \|\}$ is decreasing and convergent, and the sequence $\{F(x_k)\}$ is bounded, where $\bar{x}$ is any solution of problem (1).
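The contraction underlying Lemma 8 can be written explicitly; assuming the standard projection argument for iterates of the form (7), the nonexpansiveness of the projection onto the separating halfspace yields the Fejér-type inequality

```latex
\| x_{k+1} - \bar{x} \|^2 \;\le\; \| x_k - \bar{x} \|^2 \;-\; \| x_{k+1} - x_k \|^2,
\qquad \forall\, \bar{x} \in \{ x \in \mathbb{R}^n : F(x) = 0 \}.
```

Summing this inequality over $k$ shows that $\sum_k \| x_{k+1} - x_k \|^2$ is finite, which is exactly the property that items (2) and (3) of Lemma 8 exploit.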

Lemma 9. Suppose that Assumptions 5, 6, and 7 hold, and let $\{x_k\}$ be a sequence generated by Algorithm 1. If there exists a constant $\varepsilon > 0$ such that $\| F(x_k) \| \ge \varepsilon$ for any positive integer $k$, then the sequence of directions $\{d_k\}$ is bounded; i.e., there exists a constant $C > 0$ such that $\| d_k \| \le C$ for any positive integer $k$.

Proof. From (9), (11), (12), and the results of Lemma 8, we obtain an upper bound on $\| d_k \|$ in terms of $\| d_{k-1} \|$. From (24), we know that there exist a positive integer $k_0$ and a positive number $q < 1$ such that, for all $k \ge k_0$, the contraction factor in this bound is at most $q$. Hence, the norms $\| d_k \|$ are uniformly bounded for $k \ge k_0$. Taking $C$ to be the maximum of this uniform bound and of $\| d_0 \|, \ldots, \| d_{k_0} \|$, the inequality $\| d_k \| \le C$ holds for any positive integer $k$.

Lemma 10. Suppose that Assumptions 5, 6, and 7 hold. Let $\{x_k\}$ and $\{d_k\}$ be two sequences generated by Algorithm 1. Then, the line search rule (17) in Step 2 of Algorithm 1 is well-defined.

Proof. Our aim is to show that the line search rule (17) terminates finitely with a positive step length $\alpha_k$. By contradiction, suppose that, for some iteration index $k$, condition (17) does not hold for any trial step length; then (39) holds for all trial steps. From (18) and the termination condition of Algorithm 1, it follows that (40) holds for all $k$. By taking the limit in both sides of (39), we obtain (41). Equation (41) contradicts the fact that $\langle F(x_k), d_k \rangle = -\| F(x_k) \|^2 < 0$ for all $k$. That is to say, the line search rule terminates finitely with a positive step length $\alpha_k$; i.e., the line search step of Algorithm 1 is well-defined.

With the above preparation, we now state the convergence result of Algorithm 1.

Theorem 11. Suppose that Assumptions 5, 6, and 7 hold. Let $\{x_k\}$ be a sequence generated by Algorithm 1. Then,

$$\liminf_{k \to \infty} \| F(x_k) \| = 0. \quad (42)$$

Proof. For the sake of contradiction, we suppose that the conclusion is not true. Then, by the definition of the inferior limit, there exists a constant $\varepsilon > 0$ such that, for any $k$,

$$\| F(x_k) \| \ge \varepsilon. \quad (43)$$

Consequently, from (44) it follows that $\| d_k \| \ge \varepsilon$ for any $k$.
From (7), (17), and (40), we obtain (45). Combining (24) and (45), and noting that $\| d_k \| \ge \varepsilon$ for any $k$, we deduce that $\alpha_k \to 0$. Hence, for all sufficiently large $k$, the previous trial step does not satisfy (17); that is, (48) holds. By Lemmas 8 and 9, we know that the two sequences $\{x_k\}$ and $\{d_k\}$ are bounded. Without loss of generality, we choose a convergent subsequence, and taking the limit in both sides of (48) along it yields (50). On the other hand, from (43), we know that (51) holds, and taking the limit in both sides of (51) along the same subsequence produces a limit that contradicts (50). Thus, the proof of Theorem 11 is completed.

Remark 12. The above proofs are stated only for the case where the line search parameter is generated by (19); when it is determined by (18) instead, the proofs are similar.

Since the proof of Theorem 11 involves neither the differentiability of $F$ nor the nonsingularity of its Jacobian matrix, the following result holds.

Corollary 13. For a nonsmooth or singular function $F$, let $\{x_k\}$ be a sequence generated by Algorithm 1 applied to problem (1). Under Assumptions 5, 6, and 7, the conclusion of Theorem 11 still holds.

It should be pointed out that the global convergence of Algorithm 1 depends on the monotonicity assumption on $F$. For a nonmonotone function $F$, Algorithm 1 may not be applicable.

#### 4. Numerical Experiments

In this section, we study, by numerical experiments, the effectiveness and robustness of Algorithm 1 for solving large-scale systems of equations.

We first collect some benchmark test problems available in the literature.

Problem 14 (P1). The elements of $F$ are given in the corresponding reference.

Problem 15 (P2). The elements of $F$ are given in the corresponding reference.

Problem 16 (P3). The elements of $F$ are given in the corresponding reference.

Problem 17 (P4). The elements of $F$ are given in the corresponding reference. Clearly, this problem is a linear system of equations, which is used to test Algorithm 1 in this special case.

Problem 18 (P5). The elements of $F$ are given in the corresponding reference.

Problem 19 (P6). The elements of $F$ are given in the corresponding reference.

Clearly, apart from Problem 17 (P4), all the others are nonlinear systems of equations. Problem 15 (P2) is nonsmooth. The size of all the test problems is variable; if this size is larger than 1000, the problem can be regarded as large-scale. We solve all the test problems with different sizes by Algorithm 1, especially in comparison with seven existing similar algorithms, such as those developed very recently in [10, 13–15, 28].

All the algorithms are coded in MATLAB R2010b and run on a personal computer with a 2.2 GHz CPU, 8 GB of memory, and the Windows 10 operating system. The relevant parameters of the algorithms are specified as fixed constants, and the termination condition is that $\| F(x_k) \|$ falls below a prescribed tolerance.

The numerical performance of all the algorithms is reported in Tables 1, 2, and 3. Table 1 shows the performance of all nine algorithms with fixed initial points (one choice for problems 1, 5, and 6; another for problems 2 and 4; and a third for problem 3). Table 2 demonstrates the performance of all nine algorithms with initial points randomly generated by MATLAB's command rand(n,1). Table 3 shows the performance of our algorithm with a dimension of 1000000.

Table 1. Numerical results with fixed initial points (each cell: CPU-time/Ni/Nf).

| Problem | Dim | MPPRP-M | MPPRP-W | MSDFPB | PRP | MPRP | TPRP | DFPB1 | DFPB2 | MHS |
|---|---|---|---|---|---|---|---|---|---|---|
| P1 | 10000 | 0.049106/19/60 | 0.083986/28/132 | 0.207396/49/357 | 0.210513/57/373 | 0.225433/49/357 | 0.205193/49/357 | 0.208191/52/360 | 0.180871/41/333 | 0.201207/49/357 |
| P1 | 20000 | 0.097751/20/63 | 0.202214/34/187 | 0.538910/63/522 | 0.601340/72/544 | 0.578528/63/522 | 0.567042/63/522 | 0.564317/67/534 | 0.538403/56/503 | 0.568248/63/522 |
| P1 | 50000 | 0.189582/20/63 | 0.776513/47/317 | 2.349394/93/913 | 2.426823/101/926 | 2.373229/93/913 | 2.32399/93/913 | 2.421556/97/925 | 2.364173/86/894 | 2.601235/93/913 |
| P1 | 100000 | 0.519087/21/66 | 2.762991/61/478 | 9.002493/127/1384 | 9.108308/135/1402 | 9.448788/127/1384 | 8.986777/127/1384 | 9.100524/130/1391 | 8.819755/119/1360 | 9.419715/127/1384 |
| P2 | 10000 | 0.054535/19/60 | 0.088811/28/132 | 0.217590/49/357 | 0.218645/57/373 | 0.226537/49/357 | 0.226429/49/357 | 0.210470/52/360 | 0.204578/41/333 | 0.234442/49/357 |
| P2 | 20000 | 0.089917/20/63 | 0.198822/34/187 | 0.563358/63/522 | 0.604772/72/544 | 0.670294/63/522 | 0.677819/63/522 | 0.679165/67/534 | 0.607357/56/503 | 0.589872/63/522 |
| P2 | 50000 | 0.191132/20/63 | 0.720108/47/317 | 2.248779/93/913 | 2.464597/101/926 | 2.353173/93/913 | 2.342777/93/913 | 2.363392/97/925 | 2.279782/86/894 | 2.384060/93/913 |
| P2 | 100000 | 0.516081/21/66 | 2.734834/61/478 | 9.004299/127/1384 | 8.763470/135/1402 | 8.647855/127/1384 | 8.626593/127/1384 | 8.626593/130/1391 | 8.476836/119/1360 | 8.905454/127/1384 |
| P3 | 5000 | 0.077881/22/69 | 0.955814/127/1360 | 2.155000/242/3141 | 2.346950/251/3167 | 2.200878/242/3141 | 2.204338/242/3141 | 2.246450/245/3148 | 2.209059/234/3117 | 2.236389/242/3141 |
| P3 | 10000 | 0.145481/23/72 | 2.590862/174/2059 | 6.749566/337/4751 | 6.892930/346/4767 | 6.843273/337/4751 | 6.885266/337/4751 | 7.222119/341/4763 | 6.881919/330/4732 | 6.992060/337/4751 |
| P3 | 15000 | 0.190225/23/72 | 4.894470/212/2656 | 12.242514/411/6059 | 13.494894/420/6078 | 13.547676/411/6059 | 12.255626/411/6059 | 12.473808/415/6071 | 12.734751/404/6040 | 12.425026/411/6059 |
| P3 | 20000 | 0.239615/23/72 | 7.505229/242/3126 | 18.305460/472/7144 | 19.195911/481/7169 | 19.482789/472/7144 | 19.011112/472/7144 | 19.165800/475/7151 | 19.428512/464/7120 | 18.928398/472/7144 |
| P4 | 10000 | 0.061549/16/102 | 0.113153/19/131 | 0.289696/68/555 | 0.187397/39/391 | 0.233670/48/462 | 0.358863/89/658 | 0.657537/185/1142 | 0.213256/40/422 | 0.414446/62/803 |
| P4 | 20000 | 0.125760/17/108 | 0.256772/31/203 | 0.703061/79/730 | 0.558050/52/565 | 0.663444/59/635 | 0.822848/100/833 | 1.442003/203/1352 | 0.565771/51/596 | 0.674281/60/724 |
| P4 | 50000 | 0.295365/17/108 | 0.961757/38/273 | 2.972641/102/1109 | 2.655601/73/943 | 2.738925/81/1007 | 3.209823/122/1207 | 4.783178/234/1771 | 2.644189/75/981 | 6.106110/158/2717 |
| P4 | 100000 | 0.768978/17/108 | 3.113887/45/357 | 10.575505/129/1568 | 9.164701/101/1406 | 9.756561/108/1465 | 11.151034/146/1651 | 16.812871/275/2301 | 9.292510/99/1423 | 21.792651/179/3305 |
| P5 | 100 | 0.012253/18/77 | 0.018540/62/555 | 0.049702/129/1522 | 0.033290/144/1558 | 0.033017/139/1560 | 0.034758/128/1518 | 0.026313/150/1601 | 0.029466/131/1538 | 0.040165/133/1539 |
| P5 | 500 | 0.016447/22/93 | 0.231399/544/9066 | 0.772128/1313/25232 | 0.727796/1332/25274 | 0.585951/1372/25394 | 0.577368/1315/25238 | 0.578794/1337/25315 | 0.589438/1317/25247 | 0.595316/1315/25241 |
| P5 | 1000 | 0.017438/23/97 | 1.072332/1505/29740 | 3.218997/3680/81797 | 2.889154/3696/81825 | 2.963704/3735/81946 | 2.948045/3682/81802 | 3.040253/3704/81880 | 3.045603/3683/81803 | 3.220028/3678/81791 |
| P5 | 2000 | 0.021613/25/106 | 5.944706/4206/95732 | 17.227153/10341/260347 | 16.073363/10355/260379 | 16.960973/10394/260490 | 16.826907/10343/260352 | 16.257104/10364/260424 | 16.739385/10345/260356 | 17.503252/10339/260342 |
| P6 | 100 | 0.217224/1182/3717 | 0.193153/1148/3605 | 0.237793/1186/3706 | 0.220389/1207/3766 | 0.224211/1186/3658 | 0.248104/1195/3722 | 0.236804/1201/3750 | 0.235470/1170/3620 | 0.236250/1198/3720 |
| P6 | 500 | 0.944828/1348/5050 | 0.794555/1278/4482 | 0.797832/1185/3968 | 0.911934/1322/4267 | 0.845336/1258/3983 | 0.912343/1326/4335 | 0.903852/1321/4300 | 1.020228/1299/4143 | 0.918611/1307/4147 |
| P6 | 1000 | 2.337405/1495/6667 | 1.680275/1381/4951 | 1.536904/1331/4320 | 1.851282/1389/4553 | 1.647356/1277/4040 | 1.686418/1273/4109 | 1.792886/1360/4409 | 1.812678/1343/4330 | 1.775351/1341/4283 |
| P6 | 2000 | 6.738143/1737/11016 | 3.257538/1440/4936 | 3.065903/1381/4524 | 3.574894/1390/4522 | 3.381889/1316/4247 | 3.565741/1378/4510 | 3.609619/1377/4511 | 3.647662/984/3232 | 3.608315/1362/4397 |
Table 2. Numerical results with random initial points (each cell: CPU-time/Ni/Nf).

| Problem | Dim | MPPRP-M | MPPRP-W | MSDFPB | PRP | MPRP | TPRP | DFPB1 | DFPB2 | MHS |
|---|---|---|---|---|---|---|---|---|---|---|
| P1 | 100000 | 0.489058/20/63 | 2.005818/42/268 | 4.565133/79/717 | 4.757480/87/825 | 4.561151/79/796 | 4.631983/79/796 | 4.763311/82/806 | 4.610199/74/776 | 4.879031/79/796 |
| P2 | 100000 | 0.483632/20/63 | 2.251160/42/268 | 4.375745/79/717 | 4.561437/87/825 | 4.830143/79/796 | 4.560123/79/796 | 4.427558/82/806 | 4.550071/74/776 | 4.428999/79/796 |
| P3 | 100000 | 1.187410/20/63 | 5.215137/46/306 | 10.136405/79/710 | 10.808303/87/816 | 10.124779/79/789 | 10.068535/79/789 | 10.372113/82/799 | 9.458892/71/757 | 10.137711/79/789 |
| P4 | 100000 | 9.152258/253/1306 | 11.371204/260/1423 | 13.374780/293/1837 | 8.352463/170/1352 | 9.223706/184/1474 | 14.279675/302/2184 | 15.329979/336/2386 | 11.244725/231/1770 | 17.738408/216/2902 |
| P5 | 2000 | 0.019793/25/106 | 7.148874/4207/95776 | 16.293374/10362/260980 | 16.689685/10375/271376 | 16.112906/10415/271538 | 16.006655/10364/271349 | 16.272868/10385/271442 | 17.018004/10366/271355 | 19.200453/10360/271335 |
| P6 | 2000 | 9.722610/3415/15767 | 8.031480/3293/12678 | 8.254000/3395/12524 | 9.187302/3298/15178 | 9.667492/3366/15503 | 9.727760/3363/15708 | 9.565437/3431/15990 | 9.316936/3329/15590 | 8.893926/3393/14502 |
Table 3. Numerical results of MPPRP-M with dimension 1000000.

| Problem | Dim | Method | CPU-time | Ni | Nf |
|---|---|---|---|---|---|
| P1 | 1000000 | MPPRP-M | 4.831098 | 22 | 69 |
| P2 | 1000000 | MPPRP-M | 5.150450 | 22 | 69 |
| P3 | 1000000 | MPPRP-M | 13.954404 | 26 | 81 |
| P4 | 1000000 | MPPRP-M | 8.408822 | 19 | 120 |
| P5 | 1000000 | MPPRP-M | 9.018623 | 36 | 154 |

For simplicity of statement, we use the following notation:

Dim: the dimension of test problems.

Ni: the number of iterations.

Nf: the number of function evaluations.

MPPRP-M: the developed algorithm with the line search parameter determined by (19) in this paper.

MPPRP-W: the developed algorithm with the line search parameter generated by (18) in this paper.

MSDFPB: the modified spectral derivative-free projection-based algorithm.

PRP: the PRP conjugate gradient derivative-free projection-based method.

MPRP: the modified PRP conjugate gradient derivative-free projection-based method.

TPRP: the two-term PRP conjugate gradient derivative-free projection-based method.

DFPB1: the steepest descent derivative-free projection-based method, with the search direction given by (62) in the corresponding reference.

DFPB2: the steepest descent derivative-free projection-based method, with the parameter in (62) replaced by an alternative choice.

MHS: the MHS-PRP conjugate gradient derivative-free projection-based method.

From the results in Tables 1, 2, and 3, it follows that our algorithm (MPPRP) outperforms the other seven algorithms, regardless of how the initial points are chosen (see the italicized results). In particular, it appears more efficient for solving large-scale test problems. Actually, Table 3 shows that MPPRP-M can solve the first five problems with a dimension of 1000000 in less time than the other algorithms.

In order to further measure the efficiency differences among all nine algorithms, we calculate the average number of iterations, the average CPU time consumed, the average number of function evaluations, and their standard deviations. In Table 4, A-Ni and Std-Ni stand for the average number of iterations and its standard deviation, respectively; A-CT and Std-CT represent the average CPU time and its standard deviation; and A-Nf and Std-Nf denote the average number of function evaluations and its standard deviation. Clearly, Std-Ni, Std-Nf, and Std-CT indicate the robustness of the algorithms.

Table 4. Averages and standard deviations of CPU time, iterations, and function evaluations.

| Method | (A-CT, Std-CT) | (A-Ni, Std-Ni) | (A-Nf, Std-Nf) |
|---|---|---|---|
| MPPRP-M | (1.1657, 2.5775) | (330.7, 763.88) | (1513.5, 3652.3) |
| MPPRP-W | (2.5725, 2.9258) | (689.4, 1199.35) | (9205.9, 24233.7) |
| MSDFPB | (5.4010, 5.6632) | (1244.56, 2638.61) | (23143.6, 66309.9) |
| PRP | (5.3178, 5.5599) | (1247.73, 2641.15) | (23583.0, 67602.4) |
| MPRP | (5.3717, 5.6475) | (1243.36, 2654.63) | (23574.3, 67649.2) |
| TPRP | (5.5309, 5.7401) | (1249.63, 2636.94) | (23633.4, 67579.6) |
| DFPB1 | (5.8718, 6.0331) | (1275.9, 2636.85) | (23751.2, 67568.5) |
| DFPB2 | (5.3586, 5.6541) | (1223.93, 2642.11) | (23543.4, 67612.0) |
| MHS | (6.2549, 6.7712) | (1248.8, 2637.78) | (23719.2, 67539.8) |
Table 5. Numerical results for the singular problem P7 (each cell: CPU-time/Ni/Nf).

| Problem | Dim | MPPRP-M | MPPRP-W | MSDFPB | PRP | MPRP | TPRP | DFPB1 | DFPB2 | MHS |
|---|---|---|---|---|---|---|---|---|---|---|
| P7 | 500 | 0.907980/6684/20055 | 0.823281/6684/20055 | 1.063594/6684/20055 | 0.801892/6695/20088 | 0.856992/6684/20055 | 0.892188/6684/20055 | 0.906388/6690/20073 | 0.894474/6684/20055 | 1.045179/6684/20055 |
| P7 | 1000 | 1.742640/8424/25275 | 1.787419/8424/25275 | 2.140070/8424/25275 | 1.773320/8435/25308 | 1.878275/8424/25275 | 1.856212/8424/25275 | 1.855210/8430/25293 | 1.869769/8424/25275 | 2.085814/8424/25275 |
| P7 | 2000 | 3.712167/10616/31851 | 4.182644/10616/31851 | 4.622078/10617/31855 | 4.465109/10628/31888 | 4.716474/10617/31855 | 5.772379/10617/31855 | 5.357838/10622/31870 | 4.663553/10616/31852 | 5.737888/10617/31855 |
| P7 | 5000 | 20.734981/14412/43239 | 21.112604/14413/43243 | 23.963617/14414/43252 | 22.231907/14425/43283 | 23.381419/14414/43252 | 23.871918/14414/43252 | 23.222197/14420/43269 | 23.300951/14414/43252 | 25.432421/14414/43252 |

The results in Table 4 demonstrate that both MPPRP-M and MPPRP-W outperform the other seven algorithms.

At the end of this section, we use our algorithm to solve a singular system of equations. The next test problem is a modified version of problem 1 (P1).

Problem 20 (P7). The elements of $F$ are a modified version of those in P1, and the initial point is fixed.

Note that problem 7 (P7) is singular, since zero is its solution and the Jacobian matrix of $F$ at zero is singular. We implement Algorithm 1 to solve P7 with different dimensions to test whether it can find the singular solution.

The results in Table 5 indicate that Algorithm 1 is also efficient for solving the singular system of equations.

#### 5. Preliminary Application in Compressed Sensing

In this section, we apply our algorithm to an engineering problem originating from compressed sensing of sparse signals.

Let $A \in \mathbb{R}^{m \times n}$ ($m \ll n$) be a linear operator, let $\bar{x} \in \mathbb{R}^n$ be a sparse or nearly sparse original signal, and let $b \in \mathbb{R}^m$ be an observed value which satisfies the linear equations

$$b = A\bar{x}.$$

It is desirable to reconstruct the original signal $\bar{x}$ from these linear equations. Unfortunately, in practice, this linear system is often underdetermined or ill-conditioned and has infinitely many solutions. From the fundamental principles of compressed sensing, it is reasonable to seek the sparsest one among all the solutions, i.e., the one containing the fewest nonzero components.
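In the compressed sensing literature, seeking the sparsest solution is commonly relaxed to the following convex $\ell_1$-regularized least-squares model (a standard formulation, stated here for context; $\tau > 0$ is a regularization parameter):

```latex
\min_{x \in \mathbb{R}^n} \;\; \frac{1}{2} \| A x - b \|_2^2 \;+\; \tau \| x \|_1 .
```

The $\ell_1$ term promotes sparsity while keeping the problem convex, which is what makes a reformulation as a monotone system of equations possible.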

It was shown in the literature that the compressed sensing of sparse signals can be formulated as the following nonlinear system of equations:

$$F(z) = \min\{ z, Hz + c \} = 0, \quad (66)$$

where the minimum is taken componentwise, $H$ and $c$ are assembled from $A$, $b$, and the regularization parameter, and the construction involves an $n$-dimensional vector with all elements equal to one. Clearly, (66) is nonsmooth.

We implement Algorithm 1 to solve the compressed sensing problem of sparse signals with different compression ratios. In this experiment, we consider a typical compressive sensing scenario, where the goal is to reconstruct an $n$-length sparse signal from $m$ observations. We test three algorithms (two popular algorithms, CGD and SGCS, and MPPRP-M) under three CS ratios (CS-R): 0.125, 0.25, and 0.5, corresponding to different numbers of measurements $m$. The original sparse signal contains 32 (or 64) randomly placed nonzero elements. The measurement is disturbed by noise, i.e., $b = A\bar{x} + \omega$, where $\omega$ is Gaussian noise. $A$ is the Gaussian matrix generated by the command randn(m, n) in MATLAB. The quality of restoration is measured by the mean squared error (MSE) with respect to the original signal $\bar{x}$; that is,

$$\mathrm{MSE} = \frac{1}{n} \| \tilde{x} - \bar{x} \|^2,$$

where $\tilde{x}$ is the restored signal. The iterative process starts at the measurement image and terminates when the relative change between successive iterates, $\| x_k - x_{k-1} \| / \| x_{k-1} \|$, falls below a given tolerance. The regularization parameter is chosen as suggested in the literature, and the other parameters in this experiment are the same as those in the numerical experiments of Section 4. Numerical efficiency is shown in Table 6.
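The two quantities used to run and evaluate this experiment can be sketched as follows (an illustrative Python sketch; the tolerance value is a hypothetical placeholder, not the one used in the paper):

```python
import numpy as np

def mse(x_restored, x_original):
    """Mean squared error between the restored and the original signal."""
    return np.mean((x_restored - x_original) ** 2)

def should_stop(x_new, x_old, tol=1e-5):
    """Stop when the relative change between successive iterates is small."""
    denom = max(np.linalg.norm(x_old), np.finfo(float).eps)
    return np.linalg.norm(x_new - x_old) / denom < tol
```

Note that `np.mean` of the squared residual equals $\|\tilde{x} - \bar{x}\|^2 / n$, matching the MSE definition above.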

Table 6. Recovery results under different CS ratios (each cell: Iter / Time / MSE); rows marked "k = 64" correspond to the signal with 64 nonzero elements (values given in parentheses in the original table).

| CS-R | CGD | SGCS | MPPRP-M |
|---|---|---|---|
| 0.125 | 1047 / 4.97s / 7.96e-006 | 752 / 3.44s / 4.40e-006 | 607 / 2.58s / 4.18e-006 |
| 0.125 (k = 64) | 985 / 4.86s / 5.11e-003 | 1018 / 4.19s / 4.29e-003 | 886 / 3.63s / 4.16e-003 |
| 0.25 | 263 / 2.05s / 2.87e-006 | 241 / 1.92s / 2.75e-006 | 178 / 1.38s / 2.64e-006 |
| 0.25 (k = 64) | 425 / 3.34s / 7.12e-006 | 359 / 2.84s / 5.70e-006 | 274 / 2.20s / 5.54e-006 |
| 0.5 | 97 / 1.33s / 2.46e-006 | 135 / 1.75s / 1.59e-006 | 85 / 1.16s / 1.54e-006 |
| 0.5 (k = 64) | 95 / 1.33s / 1.14e-005 | 146 / 2.03s / 5.52e-006 | 93 / 1.30s / 5.37e-006 |

Clearly, from the results in Table 6, it follows that, for any CS ratio, MPPRP-M recovers the sparse signals more efficiently without any reduction in recovery quality (see the italicized results in Table 6). When the sparsity level 32 is replaced by 64 in the 2048-length original signal, the corresponding results are also presented in Table 6, shown in parentheses.

#### 6. Conclusions

In this paper, we have presented a modified spectral PRP conjugate gradient derivative-free projection method for solving large-scale nonlinear monotone equations. The search direction is proved to be sufficiently descent for any line search rule, and the step length is chosen by a modified line search that overcomes the difficulty of choosing an appropriate weight in the original one.

Under mild assumptions, global convergence has been established for the developed algorithm. Since our algorithm involves neither the Jacobian matrix nor its approximation, both its storage requirement and its computational cost are low; that is, the algorithm is well suited to solving large-scale problems. In addition, it has been shown that our algorithm is also applicable to nonsmooth or singular systems of equations.
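
For the reader's reference, the derivative-free hyperplane projection framework underlying this class of algorithms (due to Solodov and Svaiter [29]) can be sketched in Python as follows. The residual direction $-F(x_k)$ and the simple backtracking rule used here are illustrative placeholders only; MPPRP-M replaces them with the spectral PRP conjugate gradient direction and the modified line search developed in this paper.

```python
import numpy as np

def projection_method(F, x0, tol=1e-6, max_iter=1000):
    """Generic hyperplane projection framework of Solodov and Svaiter [29].

    F must be a continuous monotone mapping from R^n to R^n.  The search
    direction below is the plain residual direction -F(x); it stands in
    for the spectral PRP direction of MPPRP-M, so this is only a sketch
    of the framework, not the paper's Algorithm 1.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        d = -Fx                              # placeholder search direction
        # Backtracking line search: find t with -F(x + t d)^T d >= s t ||d||^2.
        t, s, beta = 1.0, 1e-4, 0.5
        while t > 1e-12 and -F(x + t * d) @ d < s * t * (d @ d):
            t *= beta
        z = x + t * d                        # trial point on the ray x + t d
        Fz = F(z)
        if np.linalg.norm(Fz) < tol:
            return z
        # Project x onto the hyperplane {u : F(z)^T (u - z) = 0}, which
        # separates x from the solution set because F is monotone.
        x = x - (Fz @ (x - z)) / (Fz @ Fz) * Fz
    return x
```

For instance, applied to the monotone mapping $F(x) = x - c$, the iteration converges to the solution $c$. Note that each iteration needs only function values of $F$, which is why no Jacobian information is stored.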

Numerical tests have demonstrated that our algorithm outperforms similar algorithms available in the literature by requiring fewer function evaluations, fewer iterations, or less CPU time to find a solution with the same tolerance. The efficiency of our algorithm has also been demonstrated by reconstructing sparse signals in compressed sensing.

In future research, owing to its satisfactory numerical efficiency, the proposed method can be extended to solve more large-scale nonlinear systems of equations arising in many fields of science and engineering.

#### Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

#### Conflicts of Interest

The authors declare that they have no conflicts of interest regarding the submission and publication of this paper.

#### Authors’ Contributions

Zhong Wan conceived and designed the research plan and wrote the paper. Jie Guo performed the mathematical modelling and numerical analysis and wrote the paper.

#### Acknowledgments

This research is supported by the National Natural Science Foundation of China (Grant no. 71671190) and the Opening Foundation of the State Key Laboratory of Developmental Biology of Freshwater Fish, Hunan Normal University.

#### References

1. S. Huang and Z. Wan, “A new nonmonotone spectral residual method for nonsmooth nonlinear equations,” Journal of Computational and Applied Mathematics, vol. 313, pp. 82–101, 2017.
2. Z. Wan, J. Guo, J. Liu, and W. Liu, “A modified spectral conjugate gradient projection method for signal recovery,” Signal, Image and Video Processing, vol. 12, no. 8, pp. 1455–1462, 2018.
3. J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, SIAM, New York, NY, USA, 1970.
4. J. E. Dennis and R. B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Classics in Applied Mathematics, vol. 16, 1983.
5. W. Zhou and D. Li, “Limited memory BFGS method for nonlinear monotone equations,” Journal of Computational Mathematics, vol. 25, no. 1, pp. 89–96, 2007.
6. J. M. Martínez, “A family of quasi-Newton methods for nonlinear equations with direct secant updates of matrix factorizations,” SIAM Journal on Numerical Analysis, vol. 27, no. 4, pp. 1034–1049, 1990.
7. G. Fasano, F. Lampariello, and M. Sciandrone, “A truncated nonmonotone Gauss-Newton method for large-scale nonlinear least-squares problems,” Computational Optimization and Applications, vol. 34, no. 3, pp. 343–358, 2006.
8. D. Li and M. Fukushima, “A globally and superlinearly convergent Gauss-Newton-based BFGS method for symmetric nonlinear equations,” SIAM Journal on Numerical Analysis, vol. 37, no. 1, pp. 152–172, 1999.
9. Z. Papp and S. Rapajić, “FR type methods for systems of large-scale nonlinear monotone equations,” Applied Mathematics and Computation, vol. 269, pp. 816–823, 2015.
10. Q. Li and D.-H. Li, “A class of derivative-free methods for large-scale nonlinear monotone equations,” IMA Journal of Numerical Analysis, vol. 31, no. 4, pp. 1625–1635, 2011.
11. L. Zhang, W. Zhou, and D. H. Li, “A descent modified Polak-Ribière-Polyak conjugate gradient method and its global convergence,” IMA Journal of Numerical Analysis, vol. 26, no. 4, pp. 629–640, 2006.
12. W. La Cruz, J. M. Martínez, and M. Raydan, “Spectral residual method without gradient information for solving large-scale nonlinear systems of equations,” Mathematics of Computation, vol. 75, no. 255, pp. 1429–1448, 2006.
13. Z. Wan, W. Liu, and C. Wang, “A modified spectral conjugate gradient projection method for solving nonlinear monotone symmetric equations,” Pacific Journal of Optimization, vol. 12, no. 3, pp. 603–622, 2016.
14. M. Ahookhosh, K. Amini, and S. Bahrami, “Two derivative-free projection approaches for systems of large-scale nonlinear monotone equations,” Numerical Algorithms, vol. 64, no. 1, pp. 21–42, 2013.
15. Q.-R. Yan, X.-Z. Peng, and D.-H. Li, “A globally convergent derivative-free method for solving large-scale nonlinear monotone equations,” Journal of Computational and Applied Mathematics, vol. 234, no. 3, pp. 649–657, 2010.
16. K. Amini, A. Kamandi, and S. Bahrami, “A double-projection-based algorithm for large-scale nonlinear systems of monotone equations,” Numerical Algorithms, vol. 68, no. 2, pp. 213–228, 2015.
17. Y. Li, Z. Wan, and J. Liu, “Bi-level programming approach to optimal strategy for vendor-managed inventory problems under random demand,” The ANZIAM Journal, vol. 59, no. 2, pp. 247–270, 2017.
18. T. Li and Z. Wan, “New adaptive Barzilai-Borwein step size and its application in solving large-scale optimization problems,” The ANZIAM Journal, vol. 61, no. 1, pp. 76–98, 2019.
19. X. J. Tong and L. Qi, “On the convergence of a trust-region method for solving constrained nonlinear equations with degenerate solutions,” Journal of Optimization Theory and Applications, vol. 123, no. 1, pp. 187–211, 2004.
20. K. Levenberg, “A method for the solution of certain non-linear problems in least squares,” Quarterly of Applied Mathematics, vol. 2, pp. 164–168, 1944.
21. D. Marquardt, “An algorithm for least-squares estimation of nonlinear parameters,” SIAM Journal on Applied Mathematics, vol. 11, no. 2, pp. 431–441, 1963.
22. C. Kanzow, N. Yamashita, and M. Fukushima, “Levenberg-Marquardt methods with strong local convergence properties for solving nonlinear equations with convex constraints,” Journal of Computational and Applied Mathematics, vol. 173, no. 2, pp. 321–343, 2005.
23. J. K. Liu and S. J. Li, “A projection method for convex constrained monotone nonlinear equations with applications,” Computers & Mathematics with Applications, vol. 70, no. 10, pp. 2442–2453, 2015.
24. G. Yuan and M. Zhang, “A three-terms Polak-Ribière-Polyak conjugate gradient algorithm for large-scale nonlinear equations,” Journal of Computational and Applied Mathematics, vol. 286, pp. 186–195, 2015.
25. G. Yuan, Z. Meng, and Y. Li, “A modified Hestenes and Stiefel conjugate gradient algorithm for large-scale nonsmooth minimizations and nonlinear equations,” Journal of Optimization Theory and Applications, vol. 168, no. 1, pp. 129–152, 2016.
26. G. Yuan and W. Hu, “A conjugate gradient algorithm for large-scale unconstrained optimization problems and nonlinear equations,” Journal of Inequalities and Applications, vol. 2018, article 113, 2018.
27. L. Zhang and W. Zhou, “Spectral gradient projection method for solving nonlinear monotone equations,” Journal of Computational and Applied Mathematics, vol. 196, no. 2, pp. 478–484, 2006.
28. W. Cheng, “A PRP type method for systems of monotone equations,” Mathematical and Computer Modelling, vol. 50, no. 1-2, pp. 15–20, 2009.
29. M. V. Solodov and B. F. Svaiter, “A globally convergent inexact Newton method for systems of monotone equations,” in Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods, pp. 355–369, Springer, Boston, MA, USA, 1998.
30. Z. Wan, Z. Yang, and Y. Wang, “New spectral PRP conjugate gradient method for unconstrained optimization,” Applied Mathematics Letters, vol. 24, no. 1, pp. 16–22, 2011.
31. Y. Ou and J. Li, “A new derivative-free SCG-type projection method for nonlinear monotone equations with convex constraints,” Journal of Applied Mathematics and Computing, vol. 56, no. 1-2, pp. 195–216, 2018.
32. W. Zhou and D. Li, “On the Q-linear convergence rate of a class of methods for monotone nonlinear equations,” Pacific Journal of Optimization, vol. 14, pp. 723–737, 2018.
33. B. T. Polyak, Introduction to Optimization, Optimization Software, New York, NY, USA, 1987.
34. Y. Xiao and H. Zhu, “A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing,” Journal of Mathematical Analysis and Applications, vol. 405, no. 1, pp. 310–319, 2013.
35. Y. Xiao, Q. Wang, and Q. Hu, “Non-smooth equations based method for ℓ1 problems with applications to compressed sensing,” Nonlinear Analysis: Theory, Methods & Applications, vol. 74, no. 11, pp. 3570–3577, 2011.
36. S. Kim, K. Koh, M. Lustig et al., “A method for large-scale ℓ1-regularized least squares problems with applications in signal processing and statistics,” Technical Report, Department of Electrical Engineering, Stanford University, Stanford, CA, USA, 2007.
