Mathematical Problems in Engineering

Research Article | Open Access

Volume 2019 | Article ID 5261830 | 17 pages | https://doi.org/10.1155/2019/5261830

A Modified Spectral PRP Conjugate Gradient Projection Method for Solving Large-Scale Monotone Equations and Its Application in Compressed Sensing

Academic Editor: Higinio Ramos
Received: 24 Nov 2018
Accepted: 24 Mar 2019
Published: 08 Apr 2019

Abstract

In this paper, we develop an algorithm for solving nonlinear systems of monotone equations that combines a modified spectral PRP (Polak-Ribière-Polyak) conjugate gradient method with a projection method. The search direction in this algorithm is proved to be sufficiently descent for any line search rule. A line search strategy from the literature is modified so that a good step length is obtained more easily, without the difficulty of choosing an appropriate weight in the original strategy. Global convergence of the algorithm is proved under mild assumptions. Numerical tests and a preliminary application to recovering sparse signals indicate that the developed algorithm outperforms similar state-of-the-art algorithms available in the literature, especially for solving large-scale and singular problems.

1. Introduction

In many fields of science and engineering, the solution of a nonlinear system of equations is a fundamental problem. For example, in [1, 2], both a Nash economic equilibrium problem and a signal processing problem were formulated as nonlinear systems of equations. Owing to this breadth, over the past five decades, numerous algorithms, and software packages built on those algorithms, have been developed for solving nonlinear systems of equations; see, for example, [1, 3–16] and the references therein. Nevertheless, in practice, no single algorithm can efficiently solve all the systems of equations arising in science and engineering. It is therefore worthwhile to develop specific algorithms for problems with different analytic and structural features [17, 18].

In this paper, we consider the following nonlinear system of monotone equations:
$$F(x) = 0, \quad x \in \mathbb{R}^n, \quad (1)$$
where $F: \mathbb{R}^n \to \mathbb{R}^n$ is a continuous and monotone function; that is to say, $F$ satisfies
$$(F(x) - F(y))^T (x - y) \ge 0, \quad \forall x, y \in \mathbb{R}^n. \quad (2)$$
It has been shown that the solution set of problem (1) is convex if it is nonempty [3]. In addition, throughout the paper, the space $\mathbb{R}^n$ is equipped with the Euclidean norm $\|\cdot\|$ and the inner product $x^T y$, for $x, y \in \mathbb{R}^n$.
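To make condition (2) concrete, consider the mapping $F(x) = 2x + \sin(x)$ (an illustrative example of ours, not one of the paper's test problems): each component $t \mapsto 2t + \sin t$ has derivative $2 + \cos t \ge 1$, so $F$ is monotone. A minimal numerical check:

```python
import numpy as np

def F(x):
    # Each component t -> 2t + sin(t) is strictly increasing
    # (derivative 2 + cos t >= 1), so F is monotone in the sense of (2).
    return 2.0 * x + np.sin(x)

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.normal(size=5), rng.normal(size=5)
    # Monotonicity check: (F(x) - F(y))^T (x - y) >= 0.
    assert (F(x) - F(y)) @ (x - y) >= 0.0
print("monotonicity (2) holds on all sampled pairs")
```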

Aiming at the solution of problem (1), many efficient methods have been developed recently. Without attempting an exhaustive enumeration, we mention the trust region method [19], the Newton and quasi-Newton methods [4–6, 19], the Gauss-Newton methods [7, 8], the Levenberg-Marquardt methods [20–22], the derivative-free methods and their modified versions [9–16, 23–27], the derivative-free conjugate gradient projection method [14], the modified PRP (Polak-Ribière-Polyak) conjugate gradient method [11], the TPRP method [10], the PRP-type method [28], the projection method [23], the FR-type method [9], and the modified spectral conjugate gradient projection method [13]. In summary, the spectral gradient methods and the conjugate gradient methods are more popular for solving large-scale nonlinear systems of equations than the Newton and quasi-Newton methods. One of their advantages lies in the fact that they require neither computing nor storing the Jacobian matrix or its approximation.

Specifically, Li and Li introduced a class of methods for large-scale nonlinear monotone equations in [10], which includes the SG-like method, the MPRP method, and the TPRP method. Cheng [28] proposed a PRP-type method for large-scale nonlinear monotone equations. A descent modified PRP method and FR-type methods were presented in [11] and [9], respectively. Liu and Li proposed a projection method for convex constrained monotone nonlinear equations in [23]. Two derivative-free conjugate gradient projection methods were presented for such systems in [14]. Three extensions of conjugate gradient algorithms were developed in [24–26], respectively. Based on the projection technique in [29], Wan et al. proposed a modified spectral conjugate gradient projection method (MSCGP) to solve nonlinear monotone systems of symmetric equations in [13]. Then, in [2], MSCGP was successfully applied to recovering sparse signals and restoring blurred images.

It is noted that the main idea of [13, 23] is to construct a search direction by a projection technique such that it is sufficiently descent. By virtue of its derivative-free and low-storage properties, the algorithm developed in [13] was shown in numerical experiments to solve large-scale nonlinear monotone systems of equations more efficiently than similar methods available in the literature.

In [30], Wan et al. proposed a modified spectral PRP conjugate gradient method for solving unconstrained optimization problems. It was proved that the search direction at each iteration is a descent direction of the objective function, and global convergence was established under mild conditions. Our research interest in this paper is to study how to extend this method to the solution of problem (1). Specifically, we address the following issues:

(1) Without derivative information of the function $F$, how do we determine the spectral and conjugate parameters so as to obtain a sufficiently descent search direction at each iteration?

(2) To ensure global convergence of the algorithm, how do we choose an appropriate step length for a given search direction? In particular, how can the monotonicity of $F$ be exploited in designing a new iterative scheme?

(3) How does the new iterative scheme perform numerically? In particular, is it more efficient than the similar algorithms available in the literature?

Note that a new line search rule was proposed in [31] for solving nonlinear monotone equations with convex constraints, and it was shown that, by virtue of this line search, the developed algorithm has good numerical performance. However, the presented line search involves the choice of a weight. Since it may be difficult to choose an appropriate weight in the practical implementation of the algorithm, we attempt to overcome this difficulty in this paper.

In summary, we intend to propose a modified spectral PRP conjugate gradient derivative-free projection method for solving large-scale nonlinear equations. Global convergence of this method will be proved, and numerical tests will be conducted by implementing the developed algorithm to solve benchmark large-scale test problems and to reconstruct sparse signals in compressive sensing.

The rest of this paper is organized as follows. In Section 2, we first explain the idea behind the new spectral PRP conjugate gradient method; then, a new algorithm is developed. Global convergence is established in Section 3. Section 4 is devoted to numerical experiments. A preliminary application of the algorithm is presented in Section 5. Some conclusions are drawn in Section 6.

2. Development of New Algorithm

In this section, we will state how to develop a new algorithm in detail.

2.1. Projection Method

Generally, to solve (1), we need to construct an iterative scheme of the form
$$x_{k+1} = x_k + \alpha_k d_k, \quad (3)$$
where $\alpha_k > 0$ is called a step length and $d_k$ is a search direction. Let $z_k = x_k + \alpha_k d_k$. If $(\alpha_k, d_k)$ satisfies
$$F(z_k)^T (x_k - z_k) > 0, \quad (4)$$
then a projection method can be obtained for solving problem (1). Actually, by monotonicity of $F$, it holds that
$$F(z_k)^T (x^* - z_k) \le 0 \quad (5)$$
for any solution $x^*$ of (1). With such a $z_k$, we define a hyperplane:
$$H_k = \{x \in \mathbb{R}^n \mid F(z_k)^T (x - z_k) = 0\}. \quad (6)$$
From (4) and (5), it is clear that $H_k$ strictly separates the current iterate $x_k$ from the solution $x^*$. Thus, the projection of $x_k$ onto $H_k$ is closer to $x^*$ than $x_k$ is. Consequently, the iterative scheme
$$x_{k+1} = x_k - \frac{F(z_k)^T (x_k - z_k)}{\|F(z_k)\|^2} F(z_k) \quad (7)$$
is referred to as the projection method proposed in [29]. Both analytic properties and numerical results have shown the efficiency and robustness of projection-based algorithms for monotone systems of equations [10, 13, 14]. In this paper, we intend to propose a new spectral conjugate gradient method, also by virtue of the above projection technique.
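As an illustration, the following NumPy sketch implements the projection update (7); the function and variable names are ours, and `F_zk` stands for the value $F(z_k)$:

```python
import numpy as np

def project_step(x_k, z_k, F_zk):
    """Projection update (7): project x_k onto the hyperplane
    H_k = {x : F(z_k)^T (x - z_k) = 0} along the normal F(z_k)."""
    coeff = (F_zk @ (x_k - z_k)) / (F_zk @ F_zk)
    return x_k - coeff * F_zk
```

By condition (4), the coefficient is positive, so the update moves $x_k$ strictly toward the hyperplane separating it from the solution set.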

2.2. A Modified Spectral PRP Conjugate Gradient Method

In the projection method (7), it is noted that $d_k$ must satisfy (4). That is to say, $d_k$ should be a search direction satisfying
$$F_k^T d_k < 0, \quad \text{where } F_k = F(x_k). \quad (8)$$
Very recently, Wan et al. [13] proposed a modified spectral conjugate gradient projection method for solving nonlinear monotone symmetric equations, where $d_k$ was chosen by
$$d_k = \begin{cases} -F_k, & k = 0, \\ -\theta_k F_k + \beta_k d_{k-1}, & k \ge 1, \end{cases} \quad (9)$$
and the spectral parameter $\theta_k$ and the conjugate parameter $\beta_k$ are computed by the formulas (10) given in [13], respectively, where $y_{k-1} = F_k - F_{k-1}$ and $s_{k-1} = x_k - x_{k-1}$. It was proved in [13] that $d_k$ given by (9) and (10) is sufficiently descent and satisfies the conditions needed for global convergence.

Note that a modified spectral PRP conjugate gradient method was proposed for solving unconstrained optimization problems in [30]. Similar to the idea in [13], we can extend the method in [30] to the solution of problem (1). Specifically, we compute $\theta_k$ and $\beta_k$ in (9) by the new formulas (11), obtained from those in [30] by replacing gradient values with the residuals $F_k$. Although (11) gives choices of $\theta_k$ and $\beta_k$ different from (10), we can also prove the following result.

Proposition 1. Let $d_k$ be given by (9) and (11). Then, for any $k \ge 0$, the following equality holds:
$$F_k^T d_k = -\|F_k\|^2. \quad (12)$$

Proof. For $k = 0$, we have $F_0^T d_0 = -\|F_0\|^2$ directly from $d_0 = -F_0$. For $k = 1$, (12) follows by substituting (11) into (9) and simplifying. We now prove that if
$$F_i^T d_i = -\|F_i\|^2 \quad (15)$$
holds for all $i \le k - 1$ ($k \ge 2$), then (12) also holds for $i = k$.
Actually, substituting (11) into (9) and taking the inner product with $F_k$ yields $F_k^T d_k = -\|F_k\|^2$, where the fourth equality in the chain of computations follows from condition (15). Consequently, by mathematical induction, (12) holds for any $k \ge 0$.

Proposition 1 ensures that the projection technique can be incorporated into the design of iterative schemes for solving (1) whenever the search direction is determined by (9) and (11).
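Since formula (11) itself is given in the paper, the snippet below verifies property (12) numerically for one classical spectral PRP choice of $(\theta_k, \beta_k)$ that enforces $F_k^T d_k = -\|F_k\|^2$ by construction; this choice is our illustrative stand-in, not necessarily the paper's (11):

```python
import numpy as np

rng = np.random.default_rng(1)
F_prev, F_k = rng.normal(size=6), rng.normal(size=6)
d_prev = -F_prev  # previous search direction

# Illustrative spectral PRP parameters (a stand-in for formula (11)):
# this theta_k enforces F_k^T d_k = -||F_k||^2 identically.
beta = F_k @ (F_k - F_prev) / (F_prev @ F_prev)
theta = 1.0 + beta * (F_k @ d_prev) / (F_k @ F_k)
d_k = -theta * F_k + beta * d_prev

assert np.isclose(F_k @ d_k, -(F_k @ F_k))  # equality (12)
```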

2.3. Modified Line Search Rule

Since choosing an appropriate step length is as critical to the performance of the iterative scheme (3) as determining the search direction, we now present an inexact line search rule to determine $\alpha_k$ in (3).

Very recently, Ou and Li [31] presented a line search rule as follows: find a nonnegative step length $\alpha_k$ such that inequality (17) holds, where $d_k$ is a fixed search direction, $\gamma > 0$ is a given initial step length, $\rho \in (0, 1)$ and $\sigma > 0$ are two given constants, and the weighting factor in (17) is specified by (18). In [31], it was required that the weight $\omega$ in (18) satisfy $\omega \in [0, 1]$; clearly, $\omega$ and $1 - \omega$ are the weights for the values $1$ and $\|F_k\|$, respectively. In the practical implementation, it may be difficult to choose an appropriate $\omega$. To overcome this difficulty, we choose the weighting factor in (18) by the parameter-free rule (19). For the new method obtained as a combination of (9), (11), (17), and (19), we of course need to establish its convergence theory and to further test its numerical performance.
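Since the exact weighting factor of (17)–(19) is specified in the paper, the sketch below implements only a generic backtracking rule of the same flavor, accepting $\alpha$ once $-F(x + \alpha d)^T d \ge \sigma \alpha \|d\|^2$; treat it as an assumed stand-in rather than the paper's exact rule:

```python
import numpy as np

def backtracking_step(F, x, d, gamma=1.0, rho=0.5, sigma=1e-4, max_backtracks=60):
    """Generic Armijo-type backtracking in the spirit of (17):
    shrink alpha until -F(x + alpha d)^T d >= sigma * alpha * ||d||^2."""
    alpha, dTd = gamma, d @ d
    for _ in range(max_backtracks):
        if -(F(x + alpha * d) @ d) >= sigma * alpha * dTd:
            break
        alpha *= rho
    return alpha
```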

Remark 2. In [31], to ensure that the line search (17) is well-defined, it is assumed that $d_k$ satisfies
$$F_k^T d_k \le -c \|F_k\|^2, \quad (20)$$
where $c > 0$ is a given constant. By Proposition 1, it is clear that $d_k$ chosen by (9) and (11) satisfies (20) with $c = 1$.

2.4. Development of New Projection-Based Algorithm

With the above preparation, we are in a position to develop an algorithm to solve problem (1) by combining the projection technique and the new methods to determine a search direction and a step length.

We now present the computational procedure of Algorithm 1.

Input:
An initial point $x_0 \in \mathbb{R}^n$, positive constants $\varepsilon$, $\gamma$, $\rho \in (0, 1)$, and $\sigma$, and a maximum iteration number $k_{\max}$.
Begin:
$k := 0$;
Compute $F(x_0)$;
While ($\|F(x_k)\| > \varepsilon$ and $k \le k_{\max}$)
Step 1. (Search direction)
Compute $d_k$ by (9) and (11).
Step 2. (Step length)
$\alpha := \gamma$;
Compute $F(x_k + \alpha d_k)$.
While (the line search condition (17) is not satisfied)
$\alpha := \rho \alpha$;
Compute $F(x_k + \alpha d_k)$.
End While
$\alpha_k := \alpha$;
Step 3. (Projection and update)
$z_k := x_k + \alpha_k d_k$;
If $\|F(z_k)\| \le \varepsilon$
$x_{k+1} := z_k$;
Break.
End If
Compute $x_{k+1}$ by (7);
Compute $F(x_{k+1})$;
$k := k + 1$.
End While
End
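For orientation, here is a self-contained NumPy skeleton of Algorithm 1. Two caveats: the line search is the generic stand-in sketched in Section 2.3, and the direction update uses the illustrative spectral PRP parameters from Section 2.2 that guarantee property (12); substitute the paper's formulas (11) and (19) to reproduce MPPRP exactly:

```python
import numpy as np

def mpprp_sketch(F, x0, eps=1e-6, gamma=1.0, rho=0.5, sigma=1e-4, k_max=5000):
    """Skeleton of Algorithm 1 (a sketch under stated assumptions,
    not the authors' code)."""
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    d = -Fx  # d_0 = -F_0
    for _ in range(k_max):
        if np.linalg.norm(Fx) <= eps:
            break
        # Step 2: backtracking line search (stand-in for (17)-(19)).
        alpha, dTd, backtracks = gamma, d @ d, 0
        while -(F(x + alpha * d) @ d) < sigma * alpha * dTd and backtracks < 60:
            alpha *= rho
            backtracks += 1
        z = x + alpha * d
        Fz = F(z)
        if np.linalg.norm(Fz) <= eps:
            return z
        # Step 3: projection update (7).
        x_new = x - ((Fz @ (x - z)) / (Fz @ Fz)) * Fz
        F_new = F(x_new)
        if np.linalg.norm(F_new) <= eps:
            return x_new
        # Step 1 for the next iterate: spectral PRP direction with (12).
        beta = F_new @ (F_new - Fx) / (Fx @ Fx)
        theta = 1.0 + beta * (F_new @ d) / (F_new @ F_new)
        d = -theta * F_new + beta * d
        x, Fx = x_new, F_new
    return x

# Example: solve the monotone system 2x + sin(x) = 0 (solution x = 0).
x_star = mpprp_sketch(lambda v: 2.0 * v + np.sin(v), np.ones(1000))
print(np.linalg.norm(2.0 * x_star + np.sin(x_star)))  # close to 0
```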

Remark 3. Since Algorithm 1 does not involve computing the Jacobian matrix of $F$ or its approximation, both the information storage and the computational cost of the algorithm are low. By virtue of this advantage, Algorithm 1 is well suited to the solution of large-scale problems. In the next section, we will prove that Algorithm 1 is applicable even if $F$ is nonsmooth. Our numerical tests in Section 4 will further show that Algorithm 1 can find a singular solution of problem (1) (see Problem 20 and Table 5).

Remark 4. Compared with the algorithm developed in [13], problem (1) is not assumed to be a symmetric system of equations.

3. Convergence

In this section, we are going to study global convergence of Algorithm 1.

Apart from its different choices of search direction and step length, Algorithm 1 can be treated as a variant of the projection algorithm in [29]. So, following the key steps used to establish global convergence in [29], we prove that Algorithm 1 is globally convergent. Very recently, locally linear convergence was proved in [32] for some PRP-type projection methods.

We first state the following mild assumptions.

Assumption 5. The function $F$ is monotone on $\mathbb{R}^n$.

Assumption 6. The solution set of problem (1) is nonempty.

Assumption 7. The function $F$ is Lipschitz continuous on $\mathbb{R}^n$; namely, there exists a positive constant $L$ such that, for all $x, y \in \mathbb{R}^n$,
$$\|F(x) - F(y)\| \le L \|x - y\|. \quad (21)$$

Under these assumptions, we can prove that Algorithm 1 has the following nice properties.

Lemma 8. Let $\{x_k\}$ be a sequence generated by Algorithm 1. If Assumptions 5, 6, and 7 hold, then:
(1) For any $x^*$ such that $F(x^*) = 0$,
$$\|x_{k+1} - x^*\|^2 \le \|x_k - x^*\|^2 - \|x_{k+1} - x_k\|^2. \quad (22)$$
(2) The sequence $\{x_k\}$ is bounded.
(3) If $\{x_k\}$ is a finite sequence, then the last iterate point is a solution of problem (1); otherwise,
$$\sum_{k=0}^{\infty} \|x_{k+1} - x_k\|^2 < \infty \quad (23)$$
and
$$\lim_{k \to \infty} \|x_{k+1} - x_k\| = 0. \quad (24)$$
(4) The sequence $\{F(x_k)\}$ is bounded. Hence, there exists a constant $M > 0$ such that $\|F(x_k)\| \le M$.

Proof. (1) Let $x^*$ be any point such that $F(x^*) = 0$. Then, by monotonicity of $F$, we have
$$F(z_k)^T (x^* - z_k) \le F(x^*)^T (x^* - z_k) = 0. \quad (25)$$
From (7), it is also easy to verify that $x_{k+1}$ is the projection of $x_k$ onto the halfspace:
$$\{x \in \mathbb{R}^n \mid F(z_k)^T (x - z_k) \le 0\}. \quad (26)$$
Thus, it follows from (25) that $x^*$ belongs to this halfspace. From the basic properties of the projection operator [33], we know that
$$\|P_{\Omega}(y) - w\|^2 \le \|y - w\|^2 - \|P_{\Omega}(y) - y\|^2 \quad \text{for any } w \in \Omega, \quad (27)$$
where $P_{\Omega}$ denotes the projection onto a nonempty closed convex set $\Omega$. Consequently,
$$\|x_{k+1} - x^*\|^2 \le \|x_k - x^*\|^2 - \|x_{k+1} - x_k\|^2. \quad (28)$$
The desired result (22) is directly obtained from (28).
(2) From (28), it is clear that the sequence $\{\|x_k - x^*\|\}$ is nonnegative and decreasing. Thus, $\{\|x_k - x^*\|\}$ is a convergent sequence, and it is concluded that $\{x_k\}$ is bounded.
(3) From (28), we know
$$\|x_{k+1} - x_k\|^2 \le \|x_k - x^*\|^2 - \|x_{k+1} - x^*\|^2. \quad (29)$$
Thus,
$$\sum_{k=0}^{N} \|x_{k+1} - x_k\|^2 \le \|x_0 - x^*\|^2 - \|x_{N+1} - x^*\|^2. \quad (30)$$
Since the sequence $\{\|x_k - x^*\|^2\}$ is bounded, the series $\sum_k \|x_{k+1} - x_k\|^2$ is convergent. Consequently,
$$\lim_{k \to \infty} \|x_{k+1} - x_k\| = 0. \quad (31)$$
The third result has been proved.
(4) For any $k$, by Lipschitz continuity, we have
$$\|F(x_k)\| = \|F(x_k) - F(x^*)\| \le L \|x_k - x^*\|. \quad (32)$$
Since $\{\|x_k - x^*\|\}$ is convergent, we conclude that $\{F(x_k)\}$ is bounded. Consequently, there exists a constant $M > 0$ such that $\|F(x_k)\| \le M$ for all $k$.

Lemma 8 indicates that, for the sequence $\{x_k\}$ generated by Algorithm 1, the sequence $\{\|x_k - x^*\|\}$ is decreasing and convergent and the sequence $\{F(x_k)\}$ is bounded, where $x^*$ is any solution of problem (1).

Lemma 9. Suppose that Assumptions 5, 6, and 7 hold. Let $\{x_k\}$ be a sequence generated by Algorithm 1. If there exists a constant $\delta > 0$ such that, for any positive integer $k$,
$$\|F(x_k)\| \ge \delta, \quad (33)$$
then the sequence of directions $\{d_k\}$ is bounded; i.e., there exists a constant $M_1 > 0$ such that, for any positive integer $k$,
$$\|d_k\| \le M_1. \quad (34)$$

Proof. From (9), (11), (12), and the results of Lemma 8, it follows that $\|d_k\|$ satisfies a recursion of the form $\|d_k\| \le c_k + t_k \|d_{k-1}\|$, where the coefficients $c_k$ and $t_k$ are controlled by $\|F_k\|$, $\|F_{k-1}\|$, and $\|x_k - x_{k-1}\|$. From (24) and (33), we know that there exist a positive integer $k_0$ and a positive number $t \in (0, 1)$ such that, for all $k \ge k_0$, $t_k \le t$. Hence, for all $k \ge k_0$,
$$\|d_k\| \le c + t \|d_{k-1}\|, \quad \text{where } c = \sup_j c_j < \infty.$$
Take
$$M_1 = \max \left\{ \|d_0\|, \|d_1\|, \ldots, \|d_{k_0}\|, \frac{c}{1 - t} \right\}.$$
Then, $\|d_k\| \le M_1$ holds for any positive integer $k$.

Lemma 10. Suppose that Assumptions 5, 6, and 7 hold. Let $\{x_k\}$ and $\{d_k\}$ be two sequences generated by Algorithm 1. Then, the line search rule (17) in Step 2 of Algorithm 1 is well-defined.

Proof. Our aim is to show that the line search rule (17) terminates finitely with a positive step length $\alpha_k$. By contradiction, suppose that, for some iterate index $k$, condition (17) does not hold at any trial step length $\gamma \rho^i$. As a result, for all nonnegative integers $i$, the reversed inequality of (17) holds at $\alpha = \gamma \rho^i$; denote it by (39). From (18) and the termination condition of Algorithm 1, it follows that the right-hand side of (39) tends to zero as $i \to \infty$. By taking the limit as $i \to \infty$ in both sides of (39) and using the continuity of $F$, we have
$$-F(x_k)^T d_k \le 0. \quad (41)$$
Inequality (41) contradicts the fact that $F_k^T d_k = -\|F_k\|^2 < 0$ for all $k$, which follows from Proposition 1 and $\|F_k\| \ne 0$ before termination. That is to say, the line search rule terminates finitely with a positive step length $\alpha_k$; i.e., the line search step of Algorithm 1 is well-defined.

With the above preparation, we now state the convergence result of Algorithm 1.

Theorem 11. Suppose that Assumptions 5, 6, and 7 hold. Let $\{x_k\}$ be a sequence generated by Algorithm 1. Then,
$$\liminf_{k \to \infty} \|F(x_k)\| = 0. \quad (42)$$

Proof. For the sake of contradiction, we suppose that the conclusion is not true. Then, by the definition of the inferior limit, there exists a constant $\varepsilon_0 > 0$ such that, for any $k$,
$$\|F(x_k)\| \ge \varepsilon_0. \quad (43)$$
Consequently, from (12) and the Cauchy-Schwarz inequality,
$$\|d_k\| \ge \frac{|F_k^T d_k|}{\|F_k\|} = \|F_k\|, \quad (44)$$
it follows that $\|d_k\| \ge \varepsilon_0$ for any $k$.
From (7) and the line search rule (17), we get
$$\|x_{k+1} - x_k\| = \frac{F(z_k)^T (x_k - z_k)}{\|F(z_k)\|} = \frac{\alpha_k \left( -F(z_k)^T d_k \right)}{\|F(z_k)\|}. \quad (45)$$
Combining (24) and (45) with the boundedness of $\{\|F(z_k)\|\}$, we obtain
$$\lim_{k \to \infty} \alpha_k \|d_k\|^2 = 0. \quad (46)$$
Since $\|d_k\| \ge \varepsilon_0$ for any $k$, we have
$$\lim_{k \to \infty} \alpha_k = 0. \quad (47)$$
Hence, for all sufficiently large $k$, the trial step $\alpha_k / \rho$ does not satisfy (17); that is, the reversed inequality of (17) holds at $\alpha = \alpha_k / \rho$, which we denote by (48). By Lemmas 8 and 9, we know that the two sequences $\{x_k\}$ and $\{d_k\}$ are bounded. Without loss of generality, we choose an infinite index set $K$ such that
$$\lim_{k \to \infty, \, k \in K} x_k = \bar{x}, \qquad \lim_{k \to \infty, \, k \in K} d_k = \bar{d}. \quad (49)$$
Taking the limit in the two sides of (48) as $k \to \infty$ ($k \in K$) and using (47), it holds that
$$-F(\bar{x})^T \bar{d} \le 0. \quad (50)$$
On the other hand, from (12) and (43), we know that
$$F(x_k)^T d_k = -\|F(x_k)\|^2 \le -\varepsilon_0^2. \quad (51)$$
By taking the limit in the two sides of (51) for $k \in K$, we get
$$F(\bar{x})^T \bar{d} \le -\varepsilon_0^2 < 0. \quad (52)$$
This contradicts (50). Thus, the proof of Theorem 11 is completed.

Remark 12. The above results are proved only for the case in which the weighting factor is generated by (19). When it is determined by (18), the proofs are similar.

Since the proof of Theorem 11 involves neither the differentiability of $F$ nor, a fortiori, the nonsingularity of its Jacobian matrix, we know that the following result holds.

Corollary 13. For a nonsmooth function $F$, or one whose Jacobian is singular at a solution, let $\{x_k\}$ be the sequence generated when Algorithm 1 is used to solve problem (1). Under Assumptions 5, 6, and 7, it holds that
$$\liminf_{k \to \infty} \|F(x_k)\| = 0.$$

It should be pointed out that the global convergence of Algorithm 1 depends on the monotonicity assumption on $F$. For a non-monotone function $F$, Algorithm 1 may not be applicable.

4. Numerical Experiments

In this section, by numerical experiments, we study the effectiveness and robustness of Algorithm 1 for solving large-scale systems of equations.

We first collect some benchmark test problems available in the literature.

Problem 14 (see [5]). The elements of $F(x)$ are given as in [5].

Problem 15 (see [5]). The elements of $F(x)$ are given as in [5].

Problem 16 (see [15]). The elements of $F(x)$ are given as in [15].

Problem 17 (see [28]). The elements of $F(x)$ are given as in [28]. Clearly, Problem 17 is a linear system of equations, which is used to test Algorithm 1 in this special case.

Problem 18 (see [15]). The elements of $F(x)$ are given as in [15].

Problem 19 (see [15]). The elements of $F(x)$ are given as in [15].

Clearly, apart from Problem 17, all the others are nonlinear systems of equations. Problem 15 is nonsmooth. The size of all the test problems is variable; if this size is larger than 1000, a problem can be regarded as large-scale. We solve all the test problems with different sizes by Algorithm 1, in comparison with seven existing similar algorithms, such as those developed very recently in [10, 13–15, 28].

All the algorithms are coded in MATLAB R2010b and run on a personal computer with a 2.2 GHz CPU, 8 GB of memory, and the Windows 10 operating system. The relevant parameters $\gamma$, $\rho$, and $\sigma$ of the algorithms are fixed across all runs, and the termination condition is $\|F(x_k)\| \le \varepsilon$ for a prescribed tolerance $\varepsilon$.

Numerical performance of all the algorithms is reported in Tables 1, 2, and 3. Table 1 shows the numerical performance of all the algorithms with fixed initial points (one choice for Problems 14, 18, and 19; another for Problems 15 and 17; and a third for Problem 16). Table 2 demonstrates the numerical performance of all the algorithms with initial points randomly generated by MATLAB's command rand(n,1). Table 3 shows the numerical performance of our algorithm with dimension 1000000.


Table 1
Problem | Dim | Method | CPU-time | Ni | Nf
P1 | 10000 | MPPRP-M | 0.049106 | 19 | 60
P1 | 10000 | MPPRP-W | 0.083986 | 28 | 132
P1 | 10000 | MSDFPB | 0.207396 | 49 | 357
P1 | 10000 | PRP | 0.210513 | 57 | 373
P1 | 10000 | MPRP | 0.225433 | 49 | 357
P1 | 10000 | TPRP | 0.205193 | 49 | 357
P1 | 10000 | DFPB1 | 0.208191 | 52 | 360
P1 | 10000 | DFPB2 | 0.180871 | 41 | 333
P1 | 10000 | MHS | 0.201207 | 49 | 357
P1 | 20000 | MPPRP-M | 0.097751 | 20 | 63
P1 | 20000 | MPPRP-W | 0.202214 | 34 | 187
P1 | 20000 | MSDFPB | 0.538910 | 63 | 522
P1 | 20000 | PRP | 0.601340 | 72 | 544
P1 | 20000 | MPRP | 0.578528 | 63 | 522
P1 | 20000 | TPRP | 0.567042 | 63 | 522
P1 | 20000 | DFPB1 | 0.564317 | 67 | 534
P1 | 20000 | DFPB2 | 0.538403 | 56 | 503
P1 | 20000 | MHS | 0.568248 | 63 | 522
P1 | 50000 | MPPRP-M | 0.189582 | 20 | 63
P1 | 50000 | MPPRP-W | 0.776513 | 47 | 317
P1 | 50000 | MSDFPB | 2.349394 | 93 | 913
P1 | 50000 | PRP | 2.426823 | 101 | 926
P1 | 50000 | MPRP | 2.373229 | 93 | 913
P1 | 50000 | TPRP | 2.32399 | 93 | 913
P1 | 50000 | DFPB1 | 2.421556 | 97 | 925
P1 | 50000 | DFPB2 | 2.364173 | 86 | 894
P1 | 50000 | MHS | 2.601235 | 93 | 913
P1 | 100000 | MPPRP-M | 0.519087 | 21 | 66
P1 | 100000 | MPPRP-W | 2.762991 | 61 | 478
P1 | 100000 | MSDFPB | 9.002493 | 127 | 1384
P1 | 100000 | PRP | 9.108308 | 135 | 1402
P1 | 100000 | MPRP | 9.448788 | 127 | 1384
P1 | 100000 | TPRP | 8.986777 | 127 | 1384
P1 | 100000 | DFPB1 | 9.100524 | 130 | 1391
P1 | 100000 | DFPB2 | 8.819755 | 119 | 1360
P1 | 100000 | MHS | 9.419715 | 127 | 1384
P2 | 10000 | MPPRP-M | 0.054535 | 19 | 60
P2 | 10000 | MPPRP-W | 0.088811 | 28 | 132
P2 | 10000 | MSDFPB | 0.217590 | 49 | 357
P2 | 10000 | PRP | 0.218645 | 57 | 373
P2 | 10000 | MPRP | 0.226537 | 49 | 357
P2 | 10000 | TPRP | 0.226429 | 49 | 357
P2 | 10000 | DFPB1 | 0.210470 | 52 | 360
P2 | 10000 | DFPB2 | 0.204578 | 41 | 333
P2 | 10000 | MHS | 0.234442 | 49 | 357
P2 | 20000 | MPPRP-M | 0.089917 | 20 | 63
P2 | 20000 | MPPRP-W | 0.198822 | 34 | 187
P2 | 20000 | MSDFPB | 0.563358 | 63 | 522
P2 | 20000 | PRP | 0.604772 | 72 | 544
P2 | 20000 | MPRP | 0.670294 | 63 | 522
P2 | 20000 | TPRP | 0.677819 | 63 | 522
P2 | 20000 | DFPB1 | 0.679165 | 67 | 534
P2 | 20000 | DFPB2 | 0.607357 | 56 | 503
P2 | 20000 | MHS | 0.589872 | 63 | 522
P2 | 50000 | MPPRP-M | 0.191132 | 20 | 63
P2 | 50000 | MPPRP-W | 0.720108 | 47 | 317
P2 | 50000 | MSDFPB | 2.248779 | 93 | 913
P2 | 50000 | PRP | 2.464597 | 101 | 926
P2 | 50000 | MPRP | 2.353173 | 93 | 913
P2 | 50000 | TPRP | 2.342777 | 93 | 913
P2 | 50000 | DFPB1 | 2.363392 | 97 | 925
P2 | 50000 | DFPB2 | 2.279782 | 86 | 894
P2 | 50000 | MHS | 2.384060 | 93 | 913
P2 | 100000 | MPPRP-M | 0.516081 | 21 | 66
P2 | 100000 | MPPRP-W | 2.734834 | 61 | 478
P2 | 100000 | MSDFPB | 9.004299 | 127 | 1384
P2 | 100000 | PRP | 8.763470 | 135 | 1402
P2 | 100000 | MPRP | 8.647855 | 127 | 1384
P2 | 100000 | TPRP | 8.626593 | 127 | 1384
P2 | 100000 | DFPB1 | 8.626593 | 130 | 1391
P2 | 100000 | DFPB2 | 8.476836 | 119 | 1360
P2 | 100000 | MHS | 8.905454 | 127 | 1384
P3 | 5000 | MPPRP-M | 0.077881 | 22 | 69
P3 | 5000 | MPPRP-W | 0.955814 | 127 | 1360
P3 | 5000 | MSDFPB | 2.155000 | 242 | 3141
P3 | 5000 | PRP | 2.346950 | 251 | 3167
P3 | 5000 | MPRP | 2.200878 | 242 | 3141
P3 | 5000 | TPRP | 2.204338 | 242 | 3141
P3 | 5000 | DFPB1 | 2.246450 | 245 | 3148
P3 | 5000 | DFPB2 | 2.209059 | 234 | 3117
P3 | 5000 | MHS | 2.236389 | 242 | 3141
P3 | 10000 | MPPRP-M | 0.145481 | 23 | 72
P3 | 10000 | MPPRP-W | 2.590862 | 174 | 2059
P3 | 10000 | MSDFPB | 6.749566 | 337 | 4751
P3 | 10000 | PRP | 6.892930 | 346 | 4767
P3 | 10000 | MPRP | 6.843273 | 337 | 4751
P3 | 10000 | TPRP | 6.885266 | 337 | 4751
P3 | 10000 | DFPB1 | 7.222119 | 341 | 4763
P3 | 10000 | DFPB2 | 6.881919 | 330 | 4732
P3 | 10000 | MHS | 6.992060 | 337 | 4751
P3 | 15000 | MPPRP-M | 0.190225 | 23 | 72
P3 | 15000 | MPPRP-W | 4.894470 | 212 | 2656
P3 | 15000 | MSDFPB | 12.242514 | 411 | 6059
P3 | 15000 | PRP | 13.494894 | 420 | 6078
P3 | 15000 | MPRP | 13.547676 | 411 | 6059
P3 | 15000 | TPRP | 12.255626 | 411 | 6059
P3 | 15000 | DFPB1 | 12.473808 | 415 | 6071
P3 | 15000 | DFPB2 | 12.734751 | 404 | 6040
P3 | 15000 | MHS | 12.425026 | 411 | 6059
P3 | 20000 | MPPRP-M | 0.239615 | 23 | 72
P3 | 20000 | MPPRP-W | 7.505229 | 242 | 3126
P3 | 20000 | MSDFPB | 18.305460 | 472 | 7144
P3 | 20000 | PRP | 19.195911 | 481 | 7169
P3 | 20000 | MPRP | 19.482789 | 472 | 7144
P3 | 20000 | TPRP | 19.011112 | 472 | 7144
P3 | 20000 | DFPB1 | 19.165800 | 475 | 7151
P3 | 20000 | DFPB2 | 19.428512 | 464 | 7120
P3 | 20000 | MHS | 18.928398 | 472 | 7144
P4 | 10000 | MPPRP-M | 0.061549 | 16 | 102
P4 | 10000 | MPPRP-W | 0.113153 | 19 | 131
P4 | 10000 | MSDFPB | 0.289696 | 68 | 555
P4 | 10000 | PRP | 0.187397 | 39 | 391
P4 | 10000 | MPRP | 0.233670 | 48 | 462
P4 | 10000 | TPRP | 0.358863 | 89 | 658
P4 | 10000 | DFPB1 | 0.657537 | 185 | 1142
P4 | 10000 | DFPB2 | 0.213256 | 40 | 422
P4 | 10000 | MHS | 0.414446 | 62 | 803
P4 | 20000 | MPPRP-M | 0.125760 | 17 | 108
P4 | 20000 | MPPRP-W | 0.256772 | 31 | 203
P4 | 20000 | MSDFPB | 0.703061 | 79 | 730
P4 | 20000 | PRP | 0.558050 | 52 | 565
P4 | 20000 | MPRP | 0.663444 | 59 | 635
P4 | 20000 | TPRP | 0.822848 | 100 | 833
P4 | 20000 | DFPB1 | 1.442003 | 203 | 1352
P4 | 20000 | DFPB2 | 0.565771 | 51 | 596
P4 | 20000 | MHS | 0.674281 | 60 | 724
P4 | 50000 | MPPRP-M | 0.295365 | 17 | 108
P4 | 50000 | MPPRP-W | 0.961757 | 38 | 273
P4 | 50000 | MSDFPB | 2.972641 | 102 | 1109
P4 | 50000 | PRP | 2.655601 | 73 | 943
P4 | 50000 | MPRP | 2.738925 | 81 | 1007
P4 | 50000 | TPRP | 3.209823 | 122 | 1207
P4 | 50000 | DFPB1 | 4.783178 | 234 | 1771
P4 | 50000 | DFPB2 | 2.644189 | 75 | 981
P4 | 50000 | MHS | 6.106110 | 158 | 2717
P4 | 100000 | MPPRP-M | 0.768978 | 17 | 108
P4 | 100000 | MPPRP-W | 3.113887 | 45 | 357
P4 | 100000 | MSDFPB | 10.575505 | 129 | 1568
P4 | 100000 | PRP | 9.164701 | 101 | 1406
P4 | 100000 | MPRP | 9.756561 | 108 | 1465
P4 | 100000 | TPRP | 11.151034 | 146 | 1651
P4 | 100000 | DFPB1 | 16.812871 | 275 | 2301
P4 | 100000 | DFPB2 | 9.292510 | 99 | 1423
P4 | 100000 | MHS | 21.792651 | 179 | 3305
P5 | 100 | MPPRP-M | 0.012253 | 18 | 77
P5 | 100 | MPPRP-W | 0.018540 | 62 | 555
P5 | 100 | MSDFPB | 0.049702 | 129 | 1522
P5 | 100 | PRP | 0.033290 | 144 | 1558
P5 | 100 | MPRP | 0.033017 | 139 | 1560
P5 | 100 | TPRP | 0.034758 | 128 | 1518
P5 | 100 | DFPB1 | 0.026313 | 150 | 1601
P5 | 100 | DFPB2 | 0.029466 | 131 | 1538
P5 | 100 | MHS | 0.040165 | 133 | 1539
P5 | 500 | MPPRP-M | 0.016447 | 22 | 93
P5 | 500 | MPPRP-W | 0.231399 | 544 | 9066
P5 | 500 | MSDFPB | 0.772128 | 1313 | 25232
P5 | 500 | PRP | 0.727796 | 1332 | 25274
P5 | 500 | MPRP | 0.585951 | 1372 | 25394
P5 | 500 | TPRP | 0.577368 | 1315 | 25238
P5 | 500 | DFPB1 | 0.578794 | 1337 | 25315
P5 | 500 | DFPB2 | 0.589438 | 1317 | 25247
P5 | 500 | MHS | 0.595316 | 1315 | 25241
P5 | 1000 | MPPRP-M | 0.017438 | 23 | 97
P5 | 1000 | MPPRP-W | 1.072332 | 1505 | 29740
P5 | 1000 | MSDFPB | 3.218997 | 3680 | 81797
P5 | 1000 | PRP | 2.889154 | 3696 | 81825
P5 | 1000 | MPRP | 2.963704 | 3735 | 81946
P5 | 1000 | TPRP | 2.948045 | 3682 | 81802
P5 | 1000 | DFPB1 | 3.040253 | 3704 | 81880
P5 | 1000 | DFPB2 | 3.045603 | 3683 | 81803
P5 | 1000 | MHS | 3.220028 | 3678 | 81791
P5 | 2000 | MPPRP-M | 0.021613 | 25 | 106
P5 | 2000 | MPPRP-W | 5.944706 | 4206 | 95732
P5 | 2000 | MSDFPB | 17.227153 | 10341 | 260347
P5 | 2000 | PRP | 16.073363 | 10355 | 260379
P5 | 2000 | MPRP | 16.960973 | 10394 | 260490
P5 | 2000 | TPRP | 16.826907 | 10343 | 260352
P5 | 2000 | DFPB1 | 16.257104 | 10364 | 260424
P5 | 2000 | DFPB2 | 16.739385 | 10345 | 260356
P5 | 2000 | MHS | 17.503252 | 10339 | 260342
P6 | 100 | MPPRP-M | 0.217224 | 1182 | 3717
P6 | 100 | MPPRP-W | 0.193153 | 1148 | 3605
P6 | 100 | MSDFPB | 0.237793 | 1186 | 3706
P6 | 100 | PRP | 0.220389 | 1207 | 3766
P6 | 100 | MPRP | 0.224211 | 1186 | 3658
P6 | 100 | TPRP | 0.248104 | 1195 | 3722
P6 | 100 | DFPB1 | 0.236804 | 1201 | 3750
P6 | 100 | DFPB2 | 0.235470 | 1170 | 3620
P6 | 100 | MHS | 0.236250 | 1198 | 3720
P6 | 500 | MPPRP-M | 0.944828 | 1348 | 5050
P6 | 500 | MPPRP-W | 0.794555 | 1278 | 4482
P6 | 500 | MSDFPB | 0.797832 | 1185 | 3968
P6 | 500 | PRP | 0.911934 | 1322 | 4267
P6 | 500 | MPRP | 0.845336 | 1258 | 3983
P6 | 500 | TPRP | 0.912343 | 1326 | 4335
P6 | 500 | DFPB1 | 0.903852 | 1321 | 4300
P6 | 500 | DFPB2 | 1.020228 | 1299 | 4143
P6 | 500 | MHS | 0.918611 | 1307 | 4147
P6 | 1000 | MPPRP-M | 2.337405 | 1495 | 6667
P6 | 1000 | MPPRP-W | 1.680275 | 1381 | 4951
P6 | 1000 | MSDFPB | 1.536904 | 1331 | 4320
P6 | 1000 | PRP | 1.851282 | 1389 | 4553
P6 | 1000 | MPRP | 1.647356 | 1277 | 4040
P6 | 1000 | TPRP | 1.686418 | 1273 | 4109
P6 | 1000 | DFPB1 | 1.792886 | 1360 | 4409
P6 | 1000 | DFPB2 | 1.812678 | 1343 | 4330
P6 | 1000 | MHS | 1.775351 | 1341 | 4283
P6 | 2000 | MPPRP-M | 6.738143 | 1737 | 11016
P6 | 2000 | MPPRP-W | 3.257538 | 1440 | 4936
P6 | 2000 | MSDFPB | 3.065903 | 1381 | 4524
P6 | 2000 | PRP | 3.574894 | 1390 | 4522
P6 | 2000 | MPRP | 3.381889 | 1316 | 4247
P6 | 2000 | TPRP | 3.565741 | 1378 | 4510
P6 | 2000 | DFPB1 | 3.609619 | 1377 | 4511
P6 | 2000 | DFPB2 | 3.647662 | 984 | 3232
P6 | 2000 | MHS | 3.608315 | 1362 | 4397


Table 2
Problem | Dim | Method | CPU-time | Ni | Nf
P1 | 100000 | MPPRP-M | 0.489058 | 20 | 63
P1 | 100000 | MPPRP-W | 2.005818 | 42 | 268
P1 | 100000 | MSDFPB | 4.565133 | 79 | 717
P1 | 100000 | PRP | 4.757480 | 87 | 825
P1 | 100000 | MPRP | 4.561151 | 79 | 796
P1 | 100000 | TPRP | 4.631983 | 79 | 796
P1 | 100000 | DFPB1 | 4.763311 | 82 | 806
P1 | 100000 | DFPB2 | 4.610199 | 74 | 776
P1 | 100000 | MHS | 4.879031 | 79 | 796
P2 | 100000 | MPPRP-M | 0.483632 | 20 | 63
P2 | 100000 | MPPRP-W | 2.251160 | 42 | 268
P2 | 100000 | MSDFPB | 4.375745 | 79 | 717
P2 | 100000 | PRP | 4.561437 | 87 | 825
P2 | 100000 | MPRP | 4.830143 | 79 | 796
P2 | 100000 | TPRP | 4.560123 | 79 | 796
P2 | 100000 | DFPB1 | 4.427558 | 82 | 806
P2 | 100000 | DFPB2 | 4.550071 | 74 | 776
P2 | 100000 | MHS | 4.428999 | 79 | 796
P3 | 100000 | MPPRP-M | 1.187410 | 20 | 63
P3 | 100000 | MPPRP-W | 5.215137 | 46 | 306
P3 | 100000 | MSDFPB | 10.136405 | 79 | 710
P3 | 100000 | PRP | 10.808303 | 87 | 816
P3 | 100000 | MPRP | 10.124779 | 79 | 789
P3 | 100000 | TPRP | 10.068535 | 79 | 789
P3 | 100000 | DFPB1 | 10.372113 | 82 | 799
P3 | 100000 | DFPB2 | 9.458892 | 71 | 757
P3 | 100000 | MHS | 10.137711 | 79 | 789
P4 | 100000 | MPPRP-M | 9.152258 | 253 | 1306
P4 | 100000 | MPPRP-W | 11.371204 | 260 | 1423
P4 | 100000 | MSDFPB | 13.374780 | 293 | 1837
P4 | 100000 | PRP | 8.352463 | 170 | 1352
P4 | 100000 | MPRP | 9.223706 | 184 | 1474
P4 | 100000 | TPRP | 14.279675 | 302 | 2184
P4 | 100000 | DFPB1 | 15.329979 | 336 | 2386
P4 | 100000 | DFPB2 | 11.244725 | 231 | 1770
P4 | 100000 | MHS | 17.738408 | 216 | 2902
P5 | 2000 | MPPRP-M | 0.019793 | 25 | 106
P5 | 2000 | MPPRP-W | 7.148874 | 4207 | 95776
P5 | 2000 | MSDFPB | 16.293374 | 10362 | 260980
P5 | 2000 | PRP | 16.689685 | 10375 | 271376
P5 | 2000 | MPRP | 16.112906 | 10415 | 271538
P5 | 2000 | TPRP | 16.006655 | 10364 | 271349
P5 | 2000 | DFPB1 | 16.272868 | 10385 | 271442
P5 | 2000 | DFPB2 | 17.018004 | 10366 | 271355
P5 | 2000 | MHS | 19.200453 | 10360 | 271335
P6 | 2000 | MPPRP-M | 9.722610 | 3415 | 15767
P6 | 2000 | MPPRP-W | 8.031480 | 3293 | 12678
P6 | 2000 | MSDFPB | 8.254000 | 3395 | 12524
P6 | 2000 | PRP | 9.187302 | 3298 | 15178
P6 | 2000 | MPRP | 9.667492 | 3366 | 15503
P6 | 2000 | TPRP | 9.727760 | 3363 | 15708
P6 | 2000 | DFPB1 | 9.565437 | 3431 | 15990
P6 | 2000 | DFPB2 | 9.316936 | 3329 | 15590
P6 | 2000 | MHS | 8.893926 | 3393 | 14502


Table 3
Problem | Dim | Method | CPU-time | Ni | Nf
P1 | 1000000 | MPPRP-M | 4.831098 | 22 | 69
P2 | 1000000 | MPPRP-M | 5.150450 | 22 | 69
P3 | 1000000 | MPPRP-M | 13.954404 | 26 | 81
P4 | 1000000 | MPPRP-M | 8.408822 | 19 | 120
P5 | 1000000 | MPPRP-M | 9.018623 | 36 | 154

For simplification of statement, we use the following notations:

Dim: the dimension of test problems.

Ni: the number of iterations.

Nf: the number of function evaluations.

Pk: the k-th test problem above; for example, P1 denotes Problem 14, and P7 denotes Problem 20 below.

MPPRP-M: the developed algorithm with the weighting factor determined by (19) in this paper.

MPPRP-W: the developed algorithm with the weighting factor generated by (18) in this paper.

MSDFPB: the modified spectral derivative-free projection-based algorithm in [13].

PRP: the PRP conjugate gradient derivative-free projection-based method in [28].

MPRP: the modified PRP conjugate gradient derivative-free projection-based method in [10].

TPRP: the two-term PRP conjugate gradient derivative-free projection-based method in [10].

DFPB1: the derivative-free projection-based method in [14] with the first choice of search direction proposed there.

DFPB2: the derivative-free projection-based method in [14] with the alternative choice of the direction parameter proposed there.

MHS: the MHS-PRP conjugate gradient derivative-free projection-based method in [15].

From the results in Tables 1, 2, and 3, it follows that our algorithm (MPPRP) outperforms the other seven algorithms regardless of how the initial points are chosen (see the MPPRP rows in the tables). In particular, it appears to be especially efficient for large-scale test problems. Actually, Table 3 shows that MPPRP-M can solve the first five problems with dimension 1000000 in less time, compared with the other algorithms.

In order to further measure the efficiency differences among all the algorithms, we calculate the average number of iterations, the average consumed CPU time, and their standard deviations. In Table 4, A-Ni and Std-Ni stand for the average number of iterations and its standard deviation, respectively; A-CT and Std-CT represent the average consumed CPU time and its standard deviation, respectively; and the average number of function evaluations and its standard deviation are denoted by A-Nf and Std-Nf, respectively. Clearly, Std-Ni, Std-Nf, and Std-CT indicate the robustness of the algorithms.


Table 4
Method | (A-CT, Std-CT) | (A-Ni, Std-Ni) | (A-Nf, Std-Nf)
MPPRP-M | (1.1657, 2.5775) | (330.7, 763.88) | (1513.5, 3652.3)
MPPRP-W | (2.5725, 2.9258) | (689.4, 1199.35) | (9205.9, 24233.7)
MSDFPB | (5.4010, 5.6632) | (1244.56, 2638.61) | (23143.6, 66309.9)
PRP | (5.3178, 5.5599) | (1247.73, 2641.15) | (23583.0, 67602.4)
MPRP | (5.3717, 5.6475) | (1243.36, 2654.63) | (23574.3, 67649.2)
TPRP | (5.5309, 5.7401) | (1249.63, 2636.94) | (23633.4, 67579.6)
DFPB1 | (5.8718, 6.0331) | (1275.9, 2636.85) | (23751.2, 67568.5)
DFPB2 | (5.3586, 5.6541) | (1223.93, 2642.11) | (23543.4, 67612.0)
MHS | (6.2549, 6.7712) | (1248.8, 2637.78) | (23719.2, 67539.8)


Table 5
Problem | Dim | Method | CPU-time | Ni | Nf
P7 | 500 | MPPRP-M | 0.907980 | 6684 | 20055
P7 | 500 | MPPRP-W | 0.823281 | 6684 | 20055
P7 | 500 | MSDFPB | 1.063594 | 6684 | 20055
P7 | 500 | PRP | 0.801892 | 6695 | 20088
P7 | 500 | MPRP | 0.856992 | 6684 | 20055
P7 | 500 | TPRP | 0.892188 | 6684 | 20055
P7 | 500 | DFPB1 | 0.906388 | 6690 | 20073
P7 | 500 | DFPB2 | 0.894474 | 6684 | 20055
P7 | 500 | MHS | 1.045179 | 6684 | 20055
P7 | 1000 | MPPRP-M | 1.742640 | 8424 | 25275
P7 | 1000 | MPPRP-W | 1.787419 | 8424 | 25275
P7 | 1000 | MSDFPB | 2.140070 | 8424 | 25275
P7 | 1000 | PRP | 1.773320 | 8435 | 25308
P7 | 1000 | MPRP | 1.878275 | 8424 | 25275
P7 | 1000 | TPRP | 1.856212 | 8424 | 25275
P7 | 1000 | DFPB1 | 1.855210 | 8430 | 25293
P7 | 1000 | DFPB2 | 1.869769 | 8424 | 25275
P7 | 1000 | MHS | 2.085814 | 8424 | 25275
P7 | 2000 | MPPRP-M | 3.712167 | 10616 | 31851
P7 | 2000 | MPPRP-W | 4.182644 | 10616 | 31851
P7 | 2000 | MSDFPB | 4.622078 | 10617 | 31855
P7 | 2000 | PRP | 4.465109 | 10628 | 31888
P7 | 2000 | MPRP | 4.716474 | 10617 | 31855
P7 | 2000 | TPRP | 5.772379 | 10617 | 31855
P7 | 2000 | DFPB1 | 5.357838 | 10622 | 31870
P7 | 2000 | DFPB2 | 4.663553 | 10616 | 31852
P7 | 2000 | MHS | 5.737888 | 10617 | 31855
P7 | 5000 | MPPRP-M | 20.734981 | 14412 | 43239
P7 | 5000 | MPPRP-W | 21.112604 | 14413 | 43243
P7 | 5000 | MSDFPB | 23.963617 | 14414 | 43252
P7 | 5000 | PRP | 22.231907 | 14425 | 43283
P7 | 5000 | MPRP | 23.381419 | 14414 | 43252
P7 | 5000 | TPRP | 23.871918 | 14414 | 43252
P7 | 5000 | DFPB1 | 23.222197 | 14420 | 43269
P7 | 5000 | DFPB2 | 23.300951 | 14414 | 43252
P7 | 5000 | MHS | 25.432421 | 14414 | 43252

The results in Table 4 demonstrate that both MPPRP-M and MPPRP-W outperform the other seven algorithms.

At the end of this section, we use our algorithm to solve a singular system of equations. The next test problem is a modified version of Problem 14.

Problem 20. The elements of $F(x)$ are given by a modification of those in Problem 14 such that the Jacobian of $F$ is singular at the solution. The initial point is fixed.

Note that Problem 20 is singular since zero is its solution and the Jacobian matrix there is singular. We implement Algorithm 1 to solve Problem 20 with different dimensions to test whether it can find the singular solution or not.

The results in Table 5 indicate that Algorithm 1 is also efficient in solving the singular system of equations.

5. Preliminary Application in Compressed Sensing

In this section, we apply our algorithm to solve an engineering problem originating from the compressed sensing of sparse signals.

Let $A \in \mathbb{R}^{m \times n}$ ($m \ll n$) be a linear operator, let $\bar{x} \in \mathbb{R}^n$ be a sparse or nearly sparse original signal, and let $b \in \mathbb{R}^m$ be an observed value satisfying the linear equations
$$b = A \bar{x}.$$
The original signal $\bar{x}$ is to be reconstructed from the linear equations $b = A \bar{x}$. Unfortunately, in practice this linear system is often underdetermined or ill-conditioned and has infinitely many solutions. From the fundamental principles of compressed sensing, a reasonable approach is to seek, among all the solutions, the sparsest one, i.e., the one containing the fewest nonzero components.

In [2], it was shown that the compressed sensing of sparse signals can be formulated as the following nonlinear system of equations:
$$F(z) = \min \{ z, Hz + c \} = 0, \quad (66)$$
where the minimum is taken componentwise,
$$H = \begin{pmatrix} A^T A & -A^T A \\ -A^T A & A^T A \end{pmatrix}, \qquad c = \tau e + \begin{pmatrix} -A^T b \\ A^T b \end{pmatrix},$$
$z = (u; v) \in \mathbb{R}^{2n}$ with $x = u - v$, $u_i \ge 0$ and $v_i \ge 0$ for all $i$, $\tau > 0$ is a regularization parameter, and $e$ is a $2n$-dimensional vector with all elements being one. Clearly, (66) is nonsmooth.
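For illustration, here is a minimal sketch of building the residual map of (66) in NumPy, assuming the reformulation of [2, 35]; the function name and the explicit regularization parameter `tau` are our notation:

```python
import numpy as np

def make_cs_residual(A, b, tau):
    """Return F(z) = min(z, Hz + c) from (66), with z = (u; v) and
    x = u - v, following the reformulation used in [2, 35]."""
    AtA, Atb = A.T @ A, A.T @ b
    n = A.shape[1]
    c = tau * np.ones(2 * n) + np.concatenate([-Atb, Atb])

    def F(z):
        u, v = z[:n], z[n:]
        w = AtA @ (u - v)              # the two block products in Hz
        Hz = np.concatenate([w, -w])
        return np.minimum(z, Hz + c)   # componentwise minimum

    return F
```

As shown in [35], such an $F$ is monotone because $H$ is positive semidefinite, so Algorithm 1 applies even though $F$ is nonsmooth.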

We implement Algorithm 1 to solve the compressed sensing problem of sparse signals with different compression ratios. In this experiment, we consider a typical compressive sensing scenario, where the goal is to reconstruct an $n$-length sparse signal from $m$ observations. We test three algorithms (two popular algorithms, CGD in [34] and SGCS in [35], together with MPPRP-M) under three CS ratios (CS-R): $0.125$, $0.25$, and $0.5$, corresponding to different numbers of measurements $m$ for $n = 2048$. The original sparse signal contains 32 (or 64) randomly placed nonzero elements. The measurement $b$ is disturbed by noise, i.e., $b = A \bar{x} + \omega$, where $\omega$ is Gaussian noise with a small variance. $A$ is the Gaussian matrix generated by the command randn(m, n) in MATLAB. The quality of restoration is measured by the mean squared error (MSE) relative to the original signal $\bar{x}$; that is,
$$\mathrm{MSE} = \frac{1}{n} \| \tilde{x} - \bar{x} \|^2,$$
where $\tilde{x}$ is the restored signal. The iterative process starts at the measurement image, i.e., $x_0 = A^T b$, and terminates when the relative change between successive iterates falls below a prescribed tolerance, with the remaining parameter chosen as suggested in [36]. The other parameters in this experiment are the same as those in the numerical experiments of Section 4. Numerical efficiency is shown in Table 6.
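The two measurements just described translate directly into code; the tolerance value below is illustrative:

```python
import numpy as np

def mse(x_restored, x_original):
    # Mean squared error to the original signal: (1/n) * ||x~ - x_bar||^2.
    return np.mean((x_restored - x_original) ** 2)

def small_relative_change(x_new, x_old, tol=1e-5):
    # Stop once the relative change between successive iterates is < tol.
    return np.linalg.norm(x_new - x_old) / max(np.linalg.norm(x_old), 1e-16) < tol
```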


Table 6
CS-R | CGD (Iter, Time, MSE) | SGCS (Iter, Time, MSE) | MPPRP-M (Iter, Time, MSE)
0.125 | 1047, 4.97s, 7.96e-006 | 752, 3.44s, 4.40e-006 | 607, 2.58s, 4.18e-006
      | (985, 4.86s, 5.11e-003) | (1018, 4.19s, 4.29e-003) | (886, 3.63s, 4.16e-003)
0.25 | 263, 2.05s, 2.87e-006 | 241, 1.92s, 2.75e-006 | 178, 1.38s, 2.64e-006
     | (425, 3.34s, 7.12e-006) | (359, 2.84s, 5.70e-006) | (274, 2.20s, 5.54e-006)
0.5 | 97, 1.33s, 2.46e-006 | 135, 1.75s, 1.59e-006 | 85, 1.16s, 1.54e-006
    | (95, 1.33s, 1.14e-005) | (146, 2.03s, 5.52e-006) | (93, 1.30s, 5.37e-006)

Clearly, from the results in Table 6, it follows that, for any CS ratio, MPPRP-M recovers the sparse signals more efficiently without reduction of recovery quality (see the MPPRP-M columns in Table 6). If the sparsity level 32 is replaced by 64 in the 2048-length original signal, the corresponding results are also presented in Table 6 as the parenthesized values.

6. Conclusions

In this paper, we have presented a modified spectral PRP conjugate gradient derivative-free projection-based method for solving large-scale nonlinear monotone equations, where the search direction is proved to be sufficiently descent for any line search rule and the step length is chosen by a line search that overcomes the difficulty of choosing an appropriate weight.

Under mild assumptions, a global convergence theory has been established for the developed algorithm. Since our algorithm does not involve computing the Jacobian matrix or its approximation, both the information storage and the computational cost of the algorithm are low. That is to say, our algorithm is well suited to the solution of large-scale problems. In addition, it has been shown that our algorithm is also applicable to nonsmooth or singular systems of equations.

Numerical tests have demonstrated that our algorithm outperforms the others by requiring fewer function evaluations, fewer iterations, or less CPU time to find a solution with the same tolerance, especially in comparison with similar algorithms available in the literature. The efficiency of our algorithm has also been shown by reconstructing sparse signals in compressed sensing.

In future research, owing to its satisfactory numerical efficiency, the proposed method can be extended to solve more large-scale nonlinear systems of equations arising from many fields of science and engineering.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest regarding the submission and publication of this paper.

Authors’ Contributions

Zhong Wan conceived and designed the research plan and wrote the paper. Jie Guo performed the mathematical modelling and numerical analysis and wrote the paper.

Acknowledgments

This research is supported by the National Natural Science Foundation of China (Grant no. 71671190) and Opening Foundation from State Key Laboratory of Developmental Biology of Freshwater Fish, Hunan Normal University.

References

1. S. Huang and Z. Wan, “A new nonmonotone spectral residual method for nonsmooth nonlinear equations,” Journal of Computational and Applied Mathematics, vol. 313, pp. 82–101, 2017.
2. Z. Wan, J. Guo, J. Liu, and W. Liu, “A modified spectral conjugate gradient projection method for signal recovery,” Signal, Image and Video Processing, vol. 12, no. 8, pp. 1455–1462, 2018.
3. J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, SIAM, New York, NY, USA, 1970.
4. J. E. Dennis and R. B. Schnabel, “Numerical methods for nonlinear equations and unconstrained optimization,” Classics in Applied Mathematics, vol. 16, 1983.
5. W. Zhou and D. Li, “Limited memory BFGS method for nonlinear monotone equations,” Journal of Computational Mathematics, vol. 25, no. 1, pp. 89–96, 2007.
6. J. M. Martínez, “A family of quasi-Newton methods for nonlinear equations with direct secant updates of matrix factorizations,” SIAM Journal on Numerical Analysis, vol. 27, no. 4, pp. 1034–1049, 1990.
7. G. Fasano, F. Lampariello, and M. Sciandrone, “A truncated nonmonotone Gauss-Newton method for large-scale nonlinear least-squares problems,” Computational Optimization and Applications, vol. 34, no. 3, pp. 343–358, 2006.
8. D. Li and M. Fukushima, “A globally and superlinearly convergent Gauss-Newton-based BFGS method for symmetric nonlinear equations,” SIAM Journal on Numerical Analysis, vol. 37, no. 1, pp. 152–172, 1999.
9. Z. Papp and S. Rapajić, “FR type methods for systems of large-scale nonlinear monotone equations,” Applied Mathematics and Computation, vol. 269, pp. 816–823, 2015.
10. Q. Li and D.-H. Li, “A class of derivative-free methods for large-scale nonlinear monotone equations,” IMA Journal of Numerical Analysis, vol. 31, no. 4, pp. 1625–1635, 2011.
11. L. Zhang, W. Zhou, and D. H. Li, “A descent modified Polak-Ribière-Polyak conjugate gradient method and its global convergence,” IMA Journal of Numerical Analysis, vol. 26, no. 4, pp. 629–640, 2006.
12. W. La Cruz, J. M. Martínez, and M. Raydan, “Spectral residual method without gradient information for solving large-scale nonlinear systems of equations,” Mathematics of Computation, vol. 75, no. 255, pp. 1429–1448, 2006.
13. Z. Wan, W. Liu, and C. Wang, “A modified spectral conjugate gradient projection method for solving nonlinear monotone symmetric equations,” Pacific Journal of Optimization, vol. 12, no. 3, pp. 603–622, 2016.
14. M. Ahookhosh, K. Amini, and S. Bahrami, “Two derivative-free projection approaches for systems of large-scale nonlinear monotone equations,” Numerical Algorithms, vol. 64, no. 1, pp. 21–42, 2013.
15. Q.-R. Yan, X.-Z. Peng, and D.-H. Li, “A globally convergent derivative-free method for solving large-scale nonlinear monotone equations,” Journal of Computational and Applied Mathematics, vol. 234, no. 3, pp. 649–657, 2010.
16. K. Amini, A. Kamandi, and S. Bahrami, “A double-projection-based algorithm for large-scale nonlinear systems of monotone equations,” Numerical Algorithms, vol. 68, no. 2, pp. 213–228, 2015.
17. Y. Li, Z. Wan, and J. Liu, “Bi-level programming approach to optimal strategy for vendor-managed inventory problems under random demand,” The ANZIAM Journal, vol. 59, no. 2, pp. 247–270, 2017.
18. T. Li and Z. Wan, “New adaptive Barzilai-Borwein step size and its application in solving large-scale optimization problems,” The ANZIAM Journal, vol. 61, no. 1, pp. 76–98, 2019.
19. X. J. Tong and L. Qi, “On the convergence of a trust-region method for solving constrained nonlinear equations with degenerate solutions,” Journal of Optimization Theory and Applications, vol. 123, no. 1, pp. 187–211, 2004.
20. K. Levenberg, “A method for the solution of certain non-linear problems in least squares,” Quarterly of Applied Mathematics, vol. 2, pp. 164–168, 1944.
21. D. Marquardt, “An algorithm for least-squares estimation of nonlinear parameters,” SIAM Journal on Applied Mathematics, vol. 11, no. 2, pp. 431–441, 1963.
22. C. Kanzow, N. Yamashita, and M. Fukushima, “Levenberg-Marquardt methods with strong local convergence properties for solving nonlinear equations with convex constraints,” Journal of Computational and Applied Mathematics, vol. 173, no. 2, pp. 321–343, 2005.
23. J. K. Liu and S. J. Li, “A projection method for convex constrained monotone nonlinear equations with applications,” Computers & Mathematics with Applications, vol. 70, no. 10, pp. 2442–2453, 2015.
24. G. Yuan and M. Zhang, “A three-terms Polak-Ribière-Polyak conjugate gradient algorithm for large-scale nonlinear equations,” Journal of Computational and Applied Mathematics, vol. 286, pp. 186–195, 2015.
25. G. Yuan, Z. Meng, and Y. Li, “A modified Hestenes and Stiefel conjugate gradient algorithm for large-scale nonsmooth minimizations and nonlinear equations,” Journal of Optimization Theory and Applications, vol. 168, no. 1, pp. 129–152, 2016.
26. G. Yuan and W. Hu, “A conjugate gradient algorithm for large-scale unconstrained optimization problems and nonlinear equations,” Journal of Inequalities and Applications, vol. 2018, no. 1, article 113, 2018.
27. L. Zhang and W. Zhou, “Spectral gradient projection method for solving nonlinear monotone equations,” Journal of Computational and Applied Mathematics, vol. 196, no. 2, pp. 478–484, 2006.
28. W. Cheng, “A PRP type method for systems of monotone equations,” Mathematical and Computer Modelling, vol. 50, no. 1-2, pp. 15–20, 2009.
29. M. V. Solodov and B. F. Svaiter, “A globally convergent inexact Newton method for systems of monotone equations,” in Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods, pp. 355–369, Springer, Boston, MA, USA, 1998.
30. Z. Wan, Z. Yang, and Y. Wang, “New spectral PRP conjugate gradient method for unconstrained optimization,” Applied Mathematics Letters, vol. 24, no. 1, pp. 16–22, 2011.
31. Y. Ou and J. Li, “A new derivative-free SCG-type projection method for nonlinear monotone equations with convex constraints,” Journal of Applied Mathematics and Computing, vol. 56, no. 1-2, pp. 195–216, 2018.
32. W. Zhou and D. Li, “On the Q-linear convergence rate of a class of methods for monotone nonlinear equations,” Pacific Journal of Optimization, vol. 14, pp. 723–737, 2018.
33. B. T. Polyak, Introduction to Optimization, Optimization Software, New York, NY, USA, 1987.
34. Y. Xiao and H. Zhu, “A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing,” Journal of Mathematical Analysis and Applications, vol. 405, no. 1, pp. 310–319, 2013.
35. Y. Xiao, Q. Wang, and Q. Hu, “Non-smooth equations based method for l1-norm problems with applications to compressed sensing,” Nonlinear Analysis: Theory, Methods & Applications, vol. 74, no. 11, pp. 3570–3577, 2011.
36. S. Kim, K. Koh, M. Lustig et al., “A method for large-scale l1-regularized least squares problems with applications in signal processing and statistics,” Technical Report, Department of Electrical Engineering, Stanford University, Stanford, CA, USA, 2007.

Copyright © 2019 Jie Guo and Zhong Wan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

