
A Modified Conjugate Gradient Method for Solving Large-Scale Nonlinear Equations

Hongbo Guan and Sheng Wang

Mathematical Problems in Engineering, vol. 2021, Article ID 9919595, https://doi.org/10.1155/2021/9919595

Academic Editor: Zhifeng Dai
Received: 26 March 2021; Revised: 24 May 2021; Accepted: 5 June 2021; Published: 24 June 2021

Abstract

In this paper, we propose a modified Polak–Ribière–Polyak (PRP) conjugate gradient method for solving large-scale nonlinear equations. Under weaker conditions, we show that the proposed method is globally convergent. We also carry out some numerical experiments to test the proposed method. The results show that the proposed method is efficient and stable.

1. Introduction

Solving nonlinear equations is an important problem which appears in various models of science and engineering, such as computer vision, computational geometry, signal processing, computational chemistry, and robotics. More specifically, the subproblems in generalized proximal algorithms with Bregman distances are monotone nonlinear equations [1], and $\ell_1$-norm regularized optimization problems can be reformulated as monotone nonlinear equations [2]. Due to these wide applications, the study of numerical methods for solving monotone nonlinear equations has received much attention [3–10]. In this paper, we are interested in numerical methods for solving monotone nonlinear equations with convex constraints:
$$F(x) = 0, \quad x \in \Omega, \tag{1}$$
where $F: \mathbb{R}^n \to \mathbb{R}^n$ is a continuous function and $\Omega \subseteq \mathbb{R}^n$ is a nonempty, closed, and convex set. The monotonicity of the mapping $F$ means that
$$\left(F(x) - F(y)\right)^T (x - y) \ge 0, \quad \forall x, y \in \mathbb{R}^n. \tag{2}$$
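As a simple concrete example of (2), an affine mapping $F(x) = Ax + b$ with a positive semidefinite matrix $A$ is monotone. The following short Python sketch (illustrative only; not one of the test problems of this paper) checks the monotonicity inequality numerically on random pairs of points.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T                      # positive semidefinite, hence F(x) = Ax + b is monotone
b = rng.standard_normal(n)
F = lambda x: A @ x + b

# Check (F(x) - F(y))^T (x - y) >= 0 on randomly sampled pairs (x, y).
ok = all((F(p[0]) - F(p[1])) @ (p[0] - p[1]) >= -1e-12
         for p in (rng.standard_normal((2, n)) for _ in range(1000)))
print("monotonicity held on all sampled pairs:", ok)
```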

The methods for solving the monotone nonlinear equations (1) are closely related to the methods for solving the following optimization problem:
$$\min_{x \in \mathbb{R}^n} f(x). \tag{3}$$

Notice that if $f$ is strictly convex, then $\nabla f$ is strictly monotone, which means that $(\nabla f(x) - \nabla f(y))^T (x - y) > 0$ for all $x \ne y$. It is well known that a strictly convex function $f$ has a unique minimizer $x^*$, which satisfies $\nabla f(x^*) = 0$. To sum up, if there is a convex function $f$ satisfying $\nabla f(x) = F(x)$, then solving the optimization problem (3) is equivalent to solving the monotone nonlinear equations (1). So, a natural idea for solving the monotone nonlinear equations (1) is to use the existing efficient methods for solving the optimization problem (3). There are many methods for solving the optimization problem (3), such as the Newton method, the quasi-Newton method, the trust region method, and the conjugate gradient method. Among these methods, the conjugate gradient method is very effective for solving the optimization problem (3) due to its simplicity and low storage. A conjugate gradient method generates a sequence of iterates
$$x_{k+1} = x_k + \alpha_k d_k, \quad k = 0, 1, 2, \ldots, \tag{4}$$
where $\alpha_k > 0$ is the step length and the direction $d_k$ is defined by
$$d_k = \begin{cases} -g_k, & k = 0, \\ -g_k + \beta_k d_{k-1}, & k \ge 1, \end{cases} \tag{5}$$
where $\beta_k$ is a parameter and $g_k = \nabla f(x_k)$ is the gradient of the objective function $f$ at $x_k$. The choice of $\beta_k$ determines different conjugate gradient methods [11–17]. We are interested in the PRP conjugate gradient method, in which the parameter is defined by
$$\beta_k^{PRP} = \frac{g_k^T y_{k-1}}{\|g_{k-1}\|^2}, \tag{6}$$
where $y_{k-1} = g_k - g_{k-1}$. Based on the idea of [18, 19], Zhang et al. [20] proposed a new modified nonlinear PRP method in which $\beta_k$ is defined by (7), in which two constants are involved. There is a mistake in that definition of $\beta_k$: with it, Lemma 1 in [20] cannot be proved. The corrected definition is given in (8).
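For readers unfamiliar with the classical scheme, the following minimal Python sketch implements the PRP iteration (4)–(6) for unconstrained minimization with a simple Armijo backtracking rule; the test function, the parameter values, and the tolerance are illustrative assumptions and are not the settings used later in this paper.

```python
import numpy as np

def prp_cg(f, grad, x0, tol=1e-6, max_iter=1000):
    # Classical PRP conjugate gradient method (4)-(6) with Armijo backtracking.
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                          # d_0 = -g_0
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        # Backtracking Armijo rule: f(x + a d) <= f(x) + c1 * a * g^T d.
        alpha, c1 = 1.0, 1e-4
        while f(x + alpha * d) > f(x) + c1 * alpha * (g @ d) and alpha > 1e-12:
            alpha *= 0.5
        x_new = x + alpha * d                       # iterate update (4)
        g_new = grad(x_new)
        y = g_new - g                               # y_{k-1} = g_k - g_{k-1}
        beta = (g_new @ y) / (g @ g)                # PRP parameter (6)
        d = -g_new + beta * d                       # direction update (5)
        x, g = x_new, g_new
    return x

# Usage on a strictly convex quadratic (the minimizer solves A x = b).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
print(prp_cg(lambda x: 0.5 * x @ A @ x - b @ x, lambda x: A @ x - b, np.zeros(2)))
```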

There are many conjugate gradient methods for solving the nonlinear equations (1). Zhang and Zhou [4] proposed a spectral gradient method by combining a modified spectral gradient with the projection method, which can be applied to solve nonsmooth equations. Xiao and Zhou [10] extended the CG-DESCENT method to solve large-scale nonlinear monotone equations and applied it to decoding a sparse signal in compressive sensing. Dai and Zhu [21] proposed a derivative-free method for solving large-scale nonlinear monotone equations together with a new line search for the derivative-free setting. Other related works can be found in [3, 5–8, 10, 22–30]. In this paper, we combine the projection method [3], the modified nonlinear PRP conjugate gradient method for unconstrained optimization [20], and the iterative framework of [10], and we propose a modified nonlinear conjugate gradient method for solving large-scale nonlinear monotone equations with convex constraints.

This paper is organized as follows. In Section 2, we propose a modified nonlinear PRP method for solving monotone nonlinear equations with convex constraints. Under reasonable conditions, we prove its global convergence. In Section 3, we make some improvement to the proposed method and give the convergence theorem of the improved method. In Section 4, we do some numerical experiments to test the proposed methods. The results show that our methods are efficient and promising. Furthermore, we use the proposed methods to solve practical problems in compressed sensing.

2. A Modified Nonlinear PRP Method

In this section, we develop a modified nonlinear PRP method for solving the nonlinear equations with convex constraints. Based on the modified nonlinear PRP method [20], we now introduce our method for solving (1). Inspired by (8), we define the search direction $d_k$ by (9), where $F_k$ denotes $F(x_k)$ and $y_{k-1} = F_k - F_{k-1}$. The parameter $\beta_k$ in (9) is computed by (10), in which two constants are involved.
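The displayed formulas (9) and (10) give the authors' exact construction. As a hedged illustration only, the sketch below shows one common way to build a modified PRP-type direction for equations, namely a three-term variant in the spirit of [18–20] that automatically satisfies a sufficient descent condition of the form appearing in Lemma 1; it is not claimed to coincide with (9)–(10), and the helper name and the safeguard constant are assumptions.

```python
import numpy as np

def modified_prp_direction(F_k, F_prev, d_prev):
    # Three-term PRP-type direction for nonlinear equations:
    #   d_k = -F_k + beta_k * d_{k-1} - theta_k * y_{k-1},
    # which gives F_k^T d_k = -||F_k||^2 (a sufficient descent condition as in Lemma 1).
    if d_prev is None:                              # first iteration: d_0 = -F_0
        return -F_k
    y_prev = F_k - F_prev
    denom = max(float(F_prev @ F_prev), 1e-30)      # safeguard against division by zero
    beta = (F_k @ y_prev) / denom
    theta = (F_k @ d_prev) / denom
    return -F_k + beta * d_prev - theta * y_prev
```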

The lemma below shows a good property of the direction $d_k$. The steps of the method are given in Algorithm 1.

Lemma 1. Let $\{d_k\}$ be generated by Algorithm 1. If $F_k \ne 0$, then there exists a constant $c > 0$ such that
$$F_k^T d_k \le -c\|F_k\|^2. \tag{11}$$

Proof. For $k = 0$, the inequality follows immediately from the definition of $d_0$ in (9). For $k \ge 1$, substituting the definition (10) of $\beta_k$ into (9) and estimating $F_k^T d_k$ yields a bound of the same form, with a constant determined by the two constants appearing in (10). Letting $c$ be the smaller of the constants obtained in the two cases, inequality (11) is satisfied.
Next, we establish the global convergence of the proposed method. Unless otherwise specified, we always suppose that the solution set of equation (1) is nonempty and that the following assumption holds.

Assumption 1.
(i) The mapping $F$ is Lipschitz continuous; that is, there exists a constant $L > 0$ such that
$$\|F(x) - F(y)\| \le L\|x - y\|, \quad \forall x, y \in \mathbb{R}^n.$$
(ii) The projection operator $P_\Omega$ is nonexpansive, i.e.,
$$\|P_\Omega[x] - P_\Omega[y]\| \le \|x - y\|, \quad \forall x, y \in \mathbb{R}^n.$$
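For simple feasible sets the projection $P_\Omega$ in Assumption 1(ii) has a closed form; for instance, for a box (or the nonnegative orthant) it is componentwise clipping, which is nonexpansive. A minimal sketch, with illustrative helper names:

```python
import numpy as np

def project_box(x, lower, upper):
    # Euclidean projection onto the box {z : lower <= z <= upper}; nonexpansive.
    return np.minimum(np.maximum(x, lower), upper)

def project_nonneg(x):
    # Euclidean projection onto the nonnegative orthant R^n_+.
    return np.maximum(x, 0.0)
```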

Lemma 2. Suppose that Assumption 1 holds, $x^*$ is a solution of (1), and the sequences $\{x_k\}$ and $\{z_k\}$ are generated by Algorithm 1. Then, the sequences $\{x_k\}$, $\{F(x_k)\}$, and $\{z_k\}$ are bounded.

Algorithm 1.
Initial. Given a small constant $\varepsilon > 0$ and constants $s > 0$, $\sigma > 0$, $\rho \in (0, 1)$. Choose an initial point $x_0 \in \Omega$. Let $k := 0$.
Step 1. Stop if $\|F(x_k)\| \le \varepsilon$.
Step 2. Compute $d_k$ by (9).
Step 3. Let $\alpha_k = \max\{s\rho^i : i = 0, 1, 2, \ldots\}$ satisfying
$-F(x_k + \alpha_k d_k)^T d_k \ge \sigma \alpha_k \|d_k\|^2$;
denote $z_k = x_k + \alpha_k d_k$.
Step 4. Compute
$x_{k+1} = P_\Omega\left[x_k - \xi_k F(z_k)\right]$,
where $\xi_k = F(z_k)^T (x_k - z_k)/\|F(z_k)\|^2$,
and $P_\Omega$ is a projection operator, defined by $P_\Omega[x] = \arg\min\{\|y - x\| : y \in \Omega\}$.
Step 5. Let $k := k + 1$ and go to Step 1.
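A compact Python sketch of the whole iteration is given below. The hyperplane projection step in Step 4 follows the standard framework of [3, 10]; the direction used here is the generic three-term PRP-type sketch from above rather than the authors' exact (9)–(10), and the tolerance, the line search constants, and the test mapping in the usage example are illustrative assumptions.

```python
import numpy as np

def solve_monotone_equation(F, project, x0, eps=1e-6, s=1.0, rho=0.5,
                            sigma=1e-4, max_iter=500):
    # Derivative-free projection method in the spirit of Algorithm 1.
    x = project(np.asarray(x0, dtype=float))
    Fx = F(x)
    F_prev, d = None, None
    for _ in range(max_iter):
        if np.linalg.norm(Fx) <= eps:                    # Step 1: stopping test
            break
        if d is None:
            d = -Fx                                      # d_0 = -F_0
        else:                                            # Step 2: PRP-type direction (sketch)
            y = Fx - F_prev
            denom = max(float(F_prev @ F_prev), 1e-30)
            beta = (Fx @ y) / denom
            theta = (Fx @ d) / denom
            d = -Fx + beta * d - theta * y
        # Step 3: backtracking line search, alpha_k = s * rho^i with
        #   -F(x_k + alpha_k d_k)^T d_k >= sigma * alpha_k * ||d_k||^2.
        alpha = s
        while -(F(x + alpha * d) @ d) < sigma * alpha * (d @ d) and alpha > 1e-12:
            alpha *= rho
        z = x + alpha * d
        Fz = F(z)
        # Step 4: hyperplane projection step followed by projection onto Omega.
        xi = (Fz @ (x - z)) / max(float(Fz @ Fz), 1e-30)
        x_next = project(x - xi * Fz)
        F_prev, x = Fx, x_next                           # Step 5: next iteration
        Fx = F(x)
    return x

# Usage: F(x) = x + sin(x) on the nonnegative orthant (monotone, solution x = 0).
print(solve_monotone_equation(lambda x: x + np.sin(x),
                              lambda x: np.maximum(x, 0.0),
                              np.full(5, 2.0)))
```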

Proof. We first show that $\{x_k\}$ is bounded. From the monotonicity of the function $F$ and the construction of the projection step, we obtain $\|x_{k+1} - x^*\| \le \|x_k - x^*\|$ for all $k$. The last inequality implies that the sequence $\{\|x_k - x^*\|\}$ is nonincreasing, so the sequence $\{x_k\}$ is bounded; that is, there is a constant $\kappa > 0$ such that $\|x_k\| \le \kappa$ for all $k$. Next, we show that $\{F(x_k)\}$ is bounded. Since $F$ is Lipschitz continuous, we obtain $\|F(x_k)\| = \|F(x_k) - F(x^*)\| \le L\|x_k - x^*\|$, and the boundedness of $\{x_k\}$ then implies that $\{F(x_k)\}$ is bounded. At last, we prove that the sequence $\{z_k\}$ is bounded. From (2), the boundedness of $\{F(x_k)\}$, and the construction of Algorithm 1, the distance $\|z_k - x_k\|$ is bounded, which together with the boundedness of $\{x_k\}$ implies that the sequence $\{z_k\}$ is bounded.

Lemma 3. Suppose that Assumption 1 holds, and the sequences $\{F_k\}$ and $\{d_k\}$ are generated by Algorithm 1. Then, there exists a constant $M > 0$ such that
$$c\|F_k\| \le \|d_k\| \le M, \tag{26}$$
where $c$ is the constant from Lemma 1.

Proof. We first prove the right side of inequality (26). For $k = 0$, the bound follows directly from the definition of $d_0$ in (9) and the boundedness of $\{F_k\}$ established in Lemma 2. For $k \ge 1$, by the definition of $\beta_k$ and the boundedness of $\{F_k\}$, the term $\beta_k d_{k-1}$ in (9) can be estimated, and combining this estimate with (9) gives an upper bound on $\|d_k\|$ that is independent of $k$; letting $M$ be this bound, the right side of (26) holds.
Now, we turn to prove the left side of the inequality. It follows from (11) and the Cauchy–Schwarz inequality that $c\|F_k\|^2 \le -F_k^T d_k \le \|F_k\|\,\|d_k\|$. Therefore, we have $c\|F_k\| \le \|d_k\|$.

Lemma 4. Suppose that Assumption 1 holds. Then, the step length $\alpha_k$ produced by the line search in Algorithm 1 satisfies
$$\alpha_k \ge \min\left\{s, \frac{\rho c\|F_k\|^2}{(L + \sigma)\|d_k\|^2}\right\}.$$

Proof. If Algorithm 1 terminates in a finite number of steps, then there is an index $k_0$ such that $x_{k_0}$ is a solution of equation (1) and $F(x_{k_0}) = 0$. From now on, we assume that $F_k \ne 0$ for any $k$. It is easy to see from (11) that $d_k \ne 0$.
If $\alpha_k \ne s$, then by the line search process the trial step $\alpha_k/\rho$ does not satisfy the line search condition of Algorithm 1, that is,
$$-F\left(x_k + (\alpha_k/\rho) d_k\right)^T d_k < \sigma (\alpha_k/\rho) \|d_k\|^2.$$
It follows from (11) and Assumption 1 that
$$c\|F_k\|^2 \le -F_k^T d_k = \left(F(x_k + (\alpha_k/\rho) d_k) - F_k\right)^T d_k - F(x_k + (\alpha_k/\rho) d_k)^T d_k \le \frac{\alpha_k}{\rho}(L + \sigma)\|d_k\|^2.$$
Rearranging the last inequality, we obtain $\alpha_k \ge \rho c\|F_k\|^2/\left((L + \sigma)\|d_k\|^2\right)$ whenever $\alpha_k \ne s$. Hence, the claimed lower bound holds.

Lemma 5. Suppose that Assumption 1 holds, and the sequences $\{x_k\}$ and $\{z_k\}$ are generated by Algorithm 1. Then, we have
$$\lim_{k \to \infty} \alpha_k \|d_k\| = 0.$$

Proof. Notice that, by the projection step of Algorithm 1 and the line search condition, $\|x_{k+1} - x^*\|^2$ decreases at each iteration by at least a quantity proportional to $\alpha_k^4\|d_k\|^4/\|F(z_k)\|^2$, where $x^*$ is a solution of (1). Since the function $F$ is continuous and the sequence $\{z_k\}$ is bounded, the sequence $\{F(z_k)\}$ is bounded; that is, there exists a positive constant $\kappa_1$ such that $\|F(z_k)\| \le \kappa_1$ for all $k$. Summing the decrease over all $k$, we obtain $\sum_{k=0}^{\infty} \alpha_k^4\|d_k\|^4 < \infty$. So, we have $\lim_{k \to \infty} \alpha_k \|d_k\| = 0$.

Theorem 1. Suppose that Assumption 1 holds, and the sequence $\{x_k\}$ is generated by Algorithm 1. Then, we have
$$\liminf_{k \to \infty} \|F(x_k)\| = 0. \tag{41}$$

Proof. Suppose that (41) does not hold; then, there exists $\varepsilon_0 > 0$ such that $\|F_k\| \ge \varepsilon_0$ for all $k$. From (26) and the last inequality, it is easy to see that $\|d_k\| \ge c\varepsilon_0 > 0$ for all $k$. Combining the lower bound on $\|F_k\|$, the upper bound $\|d_k\| \le M$ from (26), and Lemma 4, we obtain that $\alpha_k$ is bounded away from zero, and hence so is $\alpha_k\|d_k\|$. The last conclusion yields a contradiction with Lemma 5, so (41) is satisfied.

3. An Improvement

In this section, we make some improvements to the modified nonlinear PRP method proposed in Section 2. In Algorithm 1, the projection step in Step 4 uses the step length $\xi_k$. Is there a better choice for this step length? Answering this question is the purpose of our improvement of Algorithm 1. While ensuring the convergence of the algorithm and preserving the related properties and results, we modify Algorithm 1 in order to obtain better numerical results.

From Algorithm 1, to make the inequality $\|x_{k+1} - x^*\| \le \|x_k - x^*\|$ hold, we only need the step length $\xi$ in the projection step to satisfy
$$2\xi F(z_k)^T (x_k - z_k) - \xi^2 \|F(z_k)\|^2 \ge 0.$$

By solving the last inequality, we have
$$0 \le \xi \le \frac{2 F(z_k)^T (x_k - z_k)}{\|F(z_k)\|^2}.$$

It is easy to see that $\xi_k = F(z_k)^T (x_k - z_k)/\|F(z_k)\|^2$ is the minimum point of the quadratic function that bounds $\|x_k - \xi F(z_k) - x^*\|^2$ from above. This is the reason why Algorithm 1 takes $\xi_k$ as its projection step length. Under reasonable conditions, we hope to obtain a larger step length than the one used in Algorithm 1. So, we choose a new step length within the admissible interval above, larger than $\xi_k$.

Based on the above arguments, we propose an improved version of Algorithm 1 (Algorithm 2 below), which differs from Algorithm 1 only in the step length used in the projection step.

Similar to the proof of Theorem 1, we have the following results.

Theorem 2. Suppose that Assumption 1 holds, and the sequence $\{x_k\}$ is generated by Algorithm 2. Then, we have
$$\liminf_{k \to \infty} \|F(x_k)\| = 0.$$
The iterative process of the improved method is stated as follows.

Algorithm 2.
Initial. Given a small constant $\varepsilon > 0$ and constants $s > 0$, $\sigma > 0$, $\rho \in (0, 1)$. Choose an initial point $x_0 \in \Omega$. Let $k := 0$.
Step 1. Stop if $\|F(x_k)\| \le \varepsilon$.
Step 2. Compute $d_k$ by (9).
Step 3. Let $\alpha_k = \max\{s\rho^i : i = 0, 1, 2, \ldots\}$ satisfying
$-F(x_k + \alpha_k d_k)^T d_k \ge \sigma \alpha_k \|d_k\|^2$;
denote $z_k = x_k + \alpha_k d_k$.
Step 4. Compute
$x_{k+1} = P_\Omega\left[x_k - \xi_k F(z_k)\right]$,
where $\xi_k$ is the enlarged step length chosen from the admissible interval derived above.
Step 5. Let $k := k + 1$ and go to Step 1.
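As a hedged illustration of the idea behind Algorithm 2, under the assumption that the improved step length is an enlarged multiple of the Algorithm 1 choice within the admissible interval derived above (the authors' exact formula may differ), the projection step can be over-relaxed as in the following Python sketch, where the relaxation factor gamma is an illustrative parameter with $0 < \gamma < 2$ and the inputs are NumPy arrays.

```python
def relaxed_projection_step(x, z, Fz, project, gamma=1.9):
    # Projection step with step length gamma * xi_k (0 < gamma < 2) instead of xi_k.
    # Any such gamma keeps 2*xi*Fz^T(x - z) - xi^2*||Fz||^2 >= 0, so the
    # Fejer-type decrease ||x_{k+1} - x*|| <= ||x_k - x*|| is preserved.
    xi = (Fz @ (x - z)) / max(float(Fz @ Fz), 1e-30)
    return project(x - gamma * xi * Fz)
```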

4. Numerical Results

In this section, we report numerical experiments to test the performance of the proposed methods. We implemented our methods in MATLAB R2020b and ran the codes on a personal computer with a 2.3 GHz CPU and 16 GB of RAM.

We first solve Problems 1 and 2.

Problem 1 (see [8]). The mapping $F$ is taken as in [8].

Problem 2 (see [4]). The mapping $F$ is taken as in [4].
The stopping criterion of the algorithms is that $\|F(x_k)\|$ falls below a prescribed tolerance or that the number of iterations reaches 500; the latter case means that the method fails on the test problem. We test both problems with the dimension of the variables ranging from 1,000 to 10,000,000. Starting from different initial points, we list all results in Tables 1 and 2. We compare the performance of the proposed methods with the classical Newton method and the efficient algorithm CGD [10] in terms of the total number of iterations and the computational time. The meaning of each column is given below.
'Init': the initial point
'n': the dimension of the problem
'Iter': the total number of iterations
'Time': the CPU time (in seconds) used by the method
'NM': the Newton method
'CGD': the conjugate gradient method in [10]
'MNPRP': the modified nonlinear PRP method
'IMNPRP': the improved modified nonlinear PRP method
The results in Tables 1 and 2 show that our methods perform very well both in the number of iterations and in CPU time. IMNPRP performs best among these methods. It is worth noting that the number of iterations does not increase significantly as n increases. Hence, the proposed methods are very suitable for solving large-scale problems. Because of the lack of memory, the Newton method could not solve the problems with dimension 100,000 or larger.
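A minimal benchmarking harness of the kind used to produce Tables 1 and 2 is sketched below in Python; solver and make_problem are hypothetical handles with an assumed signature (the solver returns the final iterate and its iteration count), the iteration cap of 500 follows the protocol described above, and the residual tolerance shown is an illustrative value.

```python
import time
import numpy as np

def run_experiments(solver, make_problem, dims=(1_000, 10_000, 100_000, 1_000_000)):
    # Record iterations and CPU time per dimension; a run counts as failed
    # if the residual tolerance is not reached within 500 iterations.
    for n in dims:
        F, project, x0 = make_problem(n)             # hypothetical problem builder
        t0 = time.perf_counter()
        x, iters = solver(F, project, x0, tol=1e-6, max_iter=500)
        elapsed = time.perf_counter() - t0
        ok = np.linalg.norm(F(x)) <= 1e-6
        print(f"n={n:>10d}  Iter={iters:4d}  Time={elapsed:9.4f}s  {'ok' if ok else 'failed'}")
```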
The following example is a signal reconstruction problem from compressed sensing.


Table 1

Init  n           NM                CGD              MNPRP            IMNPRP
                  Iter  Time        Iter  Time       Iter  Time       Iter  Time

      1000        5     0.0345      10    0.0006     4     0.0003     1     0.0002
      10000       5     2.0438      11    0.0030     4     0.0015     1     0.0009
      100000      Out of memory     11    0.0229     4     0.0119     1     0.0073
      1000000     Out of memory     12    0.4002     5     0.1595     1     0.0921
      10000000    Out of memory     13    4.1112     5     1.7468     1     0.9706

      1000        6     0.0325      12    0.0005     5     0.0004     1     0.0003
      10000       6     2.4640      13    0.0041     5     0.0030     1     0.0023
      100000      Out of memory     13    0.0299     6     0.0299     1     0.0175
      1000000     Out of memory     14    0.4547     6     0.2838     1     0.1970
      10000000    Out of memory     15    5.1400     7     3.4031     1     2.0876

      1000        4     0.0226      10    0.0004     5     0.0002     1     0.0001
      10000       4     1.6051      11    0.0031     5     0.0012     1     0.0006
      100000      Out of memory     12    0.0247     5     0.0088     1     0.0040
      1000000     Out of memory     12    0.3757     6     0.1368     1     0.0531
      10000000    Out of memory     13    4.1184     6     1.5107     1     0.5842

      1000        5     0.0335      10    0.0004     4     0.0002     1     0.0001
      10000       5     2.0122      11    0.0027     4     0.0015     1     0.0010
      100000      Out of memory     11    0.0187     4     0.0118     1     0.0071
      1000000     Out of memory     12    0.3797     5     0.1575     1     0.0882
      10000000    Out of memory     13    4.2217     5     1.7638     1     1.0510


Table 2

Init  n           NM                CGD              MNPRP            IMNPRP
                  Iter  Time        Iter  Time       Iter  Time       Iter  Time

      1000        3     0.0187      11    0.0006     4     0.0002     1     0.0001
      10000       3     1.1675      12    0.0033     4     0.0009     1     0.0004
      100000      Out of memory     12    0.0245     4     0.0058     1     0.0028
      1000000     Out of memory     13    0.4156     5     0.1036     1     0.0350
      10000000    Out of memory     14    4.3595     5     1.1379     1     0.3650

      1000        4     0.0207      11    0.0005     5     0.0003     1     0.0002
      10000       4     1.6028      12    0.0036     5     0.0016     1     0.0009
      100000      Out of memory     12    0.0312     6     0.0124     1     0.0061
      1000000     Out of memory     13    0.4153     6     0.1628     1     0.0771
      10000000    Out of memory     14    4.5275     6     1.7940     1     0.8300

      1000        3     0.0226      11    0.0009     4     0.0002     1     0.0001
      10000       3     1.1711      11    0.0030     4     0.0007     1     0.0002
      100000      Out of memory     12    0.0248     4     0.0049     1     0.0015
      1000000     Out of memory     13    0.3998     5     0.0936     1     0.0236
      10000000    Out of memory     14    4.3801     5     0.9911     1     0.2697

      1000        3     0.0188      11    0.0005     4     0.0002     1     0.0001
      10000       3     1.1775      11    0.0029     4     0.0008     1     0.0003
      100000      Out of memory     12    0.0234     4     0.0044     1     0.0023
      1000000     Out of memory     13    0.3964     4     0.0828     1     0.0320
      10000000    Out of memory     14    4.2691     5     1.1491     1     0.3430

Problem 3 (see [10]). Consider a typical compressive sensing scenario, where we aim to reconstruct a length-$n$ sparse signal $x$ from $m$ observations $b$. In this test, the measurement $b$ contains noise:
$$b = Ax + \omega,$$
where $\omega$ is Gaussian noise. The random matrix $A \in \mathbb{R}^{m \times n}$ is a Gaussian matrix generated by the command randn(m, n) in MATLAB. The merit function is
$$f(x) = \frac{1}{2}\|Ax - b\|_2^2 + \tau\|x\|_1,$$
where the value of $\tau$ is forced to decrease as in [31]. The iterative process starts at the measurement image and terminates when the relative change of the merit function between successive iterates falls below a prescribed tolerance, i.e.,
$$\frac{|f(x_k) - f(x_{k-1})|}{|f(x_{k-1})|} < \text{tol},$$
where $f(x_k)$ denotes the function value at $x_k$. By the discussion in [10], we know that the $\ell_1$-norm problem can be transformed into a monotone nonlinear equation. Hence, it can be solved by Algorithms 1 and 2.
Due to the storage limitations of the PC, we test a small-size signal, in which the original signal contains randomly placed nonzero elements. The quality of restoration is measured by the mean of squared error (MSE) with respect to the original signal $x^*$, that is,
$$\mathrm{MSE} = \frac{1}{n}\|\tilde{x} - x^*\|^2,$$
where $\tilde{x}$ is the restored signal. The same parameter settings are used for CGD, MNPRP, and IMNPRP.
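The Python sketch below generates a test instance of the form described in Problem 3 (sparse $x^*$, Gaussian $A$, noisy observations $b = Ax^* + \omega$) and evaluates a recovered signal by the MSE and the $\ell_1$-regularized merit function; the sizes, the sparsity level, the noise level, and the value of $\tau$ are illustrative assumptions, and the recovery routine itself is left to one of the solvers compared here.

```python
import numpy as np

def make_cs_instance(m=256, n=1024, k=32, noise_std=1e-3, seed=0):
    # Compressive sensing test data: b = A x* + omega with Gaussian A and k-sparse x*.
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, n))
    x_true = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x_true[support] = rng.standard_normal(k)
    b = A @ x_true + noise_std * rng.standard_normal(m)
    return A, b, x_true

def mse(x_restored, x_true):
    # Mean of squared error between the restored and the original signal.
    return float(np.mean((x_restored - x_true) ** 2))

def merit(x, A, b, tau):
    # l1-regularized least-squares merit function: 0.5*||Ax - b||^2 + tau*||x||_1.
    r = A @ x - b
    return 0.5 * float(r @ r) + tau * float(np.sum(np.abs(x)))
```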
In order to test the effectiveness of the proposed methods, we compare them with the CGD method [10] and the solver SGCS, which is specially designed to solve monotone equations for recovering a large sparse signal in compressive sensing. The results are shown in Figures 1 and 2.
It can be seen from Figures 1 and 2 that all methods have recovered the original sparse signal almost exactly. Among these methods, the IMNPRP method performs best.

5. Conclusions

In this paper, a modified conjugate gradient method and an improved variant are proposed for solving large-scale nonlinear equations. Under some assumptions, the global convergence of the proposed methods is established. Numerical results show that the proposed methods are very efficient and competitive.

Data Availability

All data generated or analysed during this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research was funded by the Education Department of Hunan Province (Grant no. 20C0559).

References

1. A. N. Iusem and M. V. Solodov, “Newton-type methods with generalized distances for constrained optimization,” Optimization, vol. 41, no. 3, pp. 257–278, 1997.
2. M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems,” IEEE Journal of Selected Topics in Signal Processing, vol. 1, no. 4, pp. 586–597, 2007.
3. M. V. Solodov and B. F. Svaiter, “A globally convergent inexact Newton method for systems of monotone equations,” in Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods, pp. 355–369, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1998.
4. L. Zhang and W. Zhou, “Spectral gradient projection method for solving nonlinear monotone equations,” Journal of Computational and Applied Mathematics, vol. 196, no. 2, pp. 478–484, 2006.
5. G. Zhou and K. C. Toh, “Superlinear convergence of a Newton-type algorithm for monotone equations,” Journal of Optimization Theory and Applications, vol. 125, no. 1, pp. 205–221, 2005.
6. W.-J. Zhou and D.-H. Li, “A globally convergent BFGS method for nonlinear monotone equations without any merit functions,” Mathematics of Computation, vol. 77, no. 264, pp. 2231–2240, 2008.
7. W. Zhou and D. Li, “Limited memory BFGS method for nonlinear monotone equations,” Journal of Computational Mathematics, vol. 25, pp. 89–96, 2007.
8. C. Wang, Y. Wang, and C. Xu, “A projection method for a system of nonlinear monotone equations with convex constraints,” Mathematical Methods of Operations Research, vol. 66, no. 1, pp. 33–46, 2007.
9. Z. Yu, J. Lin, J. Sun, Y. Xiao, L. Liu, and Z. Li, “Spectral gradient projection method for monotone nonlinear equations with convex constraints,” Applied Numerical Mathematics, vol. 59, no. 10, pp. 2416–2423, 2009.
10. Y. Xiao and H. Zhu, “A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing,” Journal of Mathematical Analysis and Applications, vol. 405, no. 1, pp. 310–319, 2013.
11. E. Polak and G. Ribière, “Note sur la convergence de méthodes de directions conjuguées,” Revue Française d'Informatique et de Recherche Opérationnelle, Série Rouge, vol. 3, no. 16, pp. 35–43, 1969.
12. B. T. Polyak, “The conjugate gradient method in extremal problems,” USSR Computational Mathematics and Mathematical Physics, vol. 9, no. 4, pp. 94–112, 1969.
13. M. R. Hestenes and E. Stiefel, “Methods of conjugate gradients for solving linear systems,” Journal of Research of the National Bureau of Standards, vol. 49, no. 6, pp. 409–436, 1952.
14. Y. Liu and C. Storey, “Efficient generalized conjugate gradient algorithms, part 1: theory,” Journal of Optimization Theory and Applications, vol. 69, no. 1, pp. 129–137, 1991.
15. Y. H. Dai and Y. Yuan, “A nonlinear conjugate gradient method with a strong global convergence property,” SIAM Journal on Optimization, vol. 10, no. 1, pp. 177–182, 1999.
16. R. Fletcher and C. M. Reeves, “Function minimization by conjugate gradients,” The Computer Journal, vol. 7, no. 2, pp. 149–154, 1964.
17. R. Fletcher, Practical Methods of Optimization, Vol. 1: Unconstrained Optimization, Wiley, New York, NY, USA, 1980.
18. G. Yu, L. Guan, and W. Chen, “Spectral conjugate gradient methods with sufficient descent property for large-scale unconstrained optimization,” Optimization Methods and Software, vol. 23, no. 2, pp. 275–293, 2008.
19. M. Li and A. Qu, “Some sufficient descent conjugate gradient methods and their global convergence,” Computational and Applied Mathematics, vol. 33, no. 2, pp. 333–347, 2014.
20. M. Zhang, Y. Zhou, and S. Wang, “A modified nonlinear conjugate gradient method with the Armijo line search and its application,” Mathematical Problems in Engineering, vol. 2020, Article ID 6210965, 14 pages, 2020.
21. Z. Dai and H. Zhu, “A modified Hestenes-Stiefel-type derivative-free method for large-scale nonlinear monotone equations,” Mathematics, vol. 8, no. 168, 2020.
22. N. Andrei, “A scaled BFGS preconditioned conjugate gradient algorithm for unconstrained optimization,” Applied Mathematics Letters, vol. 20, no. 6, pp. 645–650, 2007.
23. L. Zheng, L. Yang, and Y. Liang, “A conjugate gradient projection method for solving equations with convex constraints,” Journal of Computational and Applied Mathematics, vol. 375, Article ID 112781, 2020.
24. A. H. Ibrahim, P. Kumam, A. B. Abubakar, W. Jirakitpuwapat, and J. Abubakar, “A hybrid conjugate gradient algorithm for constrained monotone equations with application in compressive sensing,” Heliyon, vol. 6, no. 3, Article ID e03466, 2020.
25. A. M. Awwal, P. Kumam, and A. B. Abubakar, “A modified conjugate gradient method for monotone nonlinear equations with convex constraints,” Applied Numerical Mathematics, vol. 145, pp. 507–520, 2019.
26. W. La Cruz, J. M. Martínez, and M. Raydan, “Spectral residual method without gradient information for solving large-scale nonlinear systems,” Mathematics of Computation, vol. 75, pp. 1449–1466, 2006.
27. W. La Cruz, “A spectral algorithm for large-scale systems of nonlinear monotone equations,” Numerical Algorithms, vol. 76, no. 4, pp. 1109–1130, 2017.
28. Z. Dai, H. Zhou, J. Kang et al., “The skewness of oil price returns and equity premium predictability,” Energy Economics, vol. 94, Article ID 105069, 2021.
29. Z. Dai and J. Kang, “Some new efficient mean-variance portfolio selection models,” International Journal of Finance and Economics, vol. 7, pp. 1–13, 2021.
30. Z. F. Dai and H. Zhu, “Stock return predictability from a mixed model perspective,” Pacific-Basin Finance Journal, vol. 60, Article ID 101267, 2020.
31. M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems,” IEEE Journal of Selected Topics in Signal Processing, vol. 1, no. 4, pp. 586–597, 2007.

Copyright © 2021 Hongbo Guan and Sheng Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
