Research Article | Open Access
Auwal Bala Abubakar, Kanikar Muangchoo, Abdulkarim Hassan Ibrahim, Sunday Emmanuel Fadugba, Kazeem Olalekan Aremu, Lateef Olakunle Jolaoso, "A Modified Scaled Spectral-Conjugate Gradient-Based Algorithm for Solving Monotone Operator Equations", Journal of Mathematics, vol. 2021, Article ID 5549878, 9 pages, 2021. https://doi.org/10.1155/2021/5549878
A Modified Scaled Spectral-Conjugate Gradient-Based Algorithm for Solving Monotone Operator Equations
This paper proposes a modified scaled spectral-conjugate gradient-based algorithm for finding solutions to monotone operator equations. The algorithm modifies the work of Li and Zheng in the sense that the uniform monotonicity assumption on the operator is relaxed to mere monotonicity. Furthermore, unlike in the work of Li and Zheng, the search directions of the proposed algorithm are shown to be descent and bounded independent of the monotonicity assumption. Moreover, global convergence is established under appropriate assumptions. Finally, numerical examples on some test problems are provided to show the efficiency of the proposed algorithm compared to that of Li and Zheng.
1. Introduction

We desire in this work to propose an algorithm to solve the problem: find x ∈ C such that
F(x) = 0, (1)
where F : R^n → R^n is monotone and Lipschitz continuous and C ⊆ R^n is nonempty, closed, and convex.
Solving problems of the form (1) has attracted growing interest in recent years due to its appearance in many areas of science, engineering, and economics, for example, in forecasting of financial markets [1], constrained neural networks [2], economic and chemical equilibrium problems [3, 4], signal and image processing [5, 6], phase retrieval [7, 8], power flow equations [9], nonnegative matrix factorisation [10, 11], and many more.
Some notable methods for finding solutions to (1) are Newton's method, the quasi-Newton method, the Gauss-Newton method, the Levenberg-Marquardt method, and their variants [12–15]. These methods are prominent due to their fast convergence. However, their convergence is local, and they require computing and storing the Jacobian matrix at each iteration. In addition, a linear system must be solved at each iteration. These and other reasons make them unattractive, especially for large-scale problems. To avoid these drawbacks, methods that are globally convergent and do not require computing and storing the Jacobian matrix were introduced. Examples of such methods are the spectral gradient (SG) and conjugate gradient (CG) methods. However, SG and CG methods for solving (1) are usually combined with the projection method proposed in [16]. For instance, Zhang and Zhou [17] extended the work of Birgin and Martínez [18] for unconstrained optimization problems by combining it with the projection method and proposed a spectral gradient projection-based algorithm for solving (1). Dai et al. [19] extended the modified Perry's CG method [20] for unconstrained optimization problems to solve (1) by combining it with the projection method. Liu and Li [21] incorporated the Dai-Yuan (DY) CG method [22] into the projection method and proposed a spectral Dai-Yuan (SDY) projection method for solving nonlinear monotone equations. The method was shown to be globally convergent under appropriate assumptions. Furthermore, to popularize and boost the efficiency of the DY CG method, Liu and Feng [23] proposed a spectral DY-type CG projection method (PDY), where the spectral parameter is derived such that the direction is descent. It is worth mentioning that all the methods mentioned above require the operator in (1) to be monotone. Recently, Li and Zheng [24] proposed scaled three-term derivative-free methods for solving (1). The method is an extension of the method proposed by Bojari and Eslahchi [25].
However, to establish the convergence of the method, Li and Zheng [24] assumed that the operator is uniformly monotone, which is a stronger condition than monotonicity. Some other related ideas on spectral gradient-type and spectral conjugate gradient-type methods for finding solutions to (1) were studied in [26–41] and the references therein.
In this work, motivated by the strong condition imposed on the operator by Li and Zheng [24], we seek to relax the condition on the operator from uniformly monotone to monotone. This is achieved by modifying the two search directions defined by Li and Zheng. In addition, global convergence is established under the assumption that the operator is monotone and Lipschitz continuous. Numerical examples supporting the theoretical results are also given.
Notations: unless otherwise stated, the symbol ||.|| stands for the Euclidean norm on R^n, and F(x_k) is abbreviated to F_k. Furthermore, P_C is the projection mapping from R^n onto C given by P_C(x) = argmin{||x - y|| : y ∈ C}, for a nonempty, closed, and convex set C.
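For intuition, when the constraint set C is a box, the projection P_C has a componentwise closed form. A minimal sketch in Python (the box constraint and the NumPy helper are illustrative choices, not prescribed by the paper):

```python
import numpy as np

def project_box(x, lo, hi):
    # Euclidean projection onto the box {z : lo <= z <= hi};
    # for such sets, P_C reduces to componentwise clipping.
    return np.clip(x, lo, hi)

# Example: project a point onto the nonnegative orthant C = [0, inf)^n.
x = np.array([-1.0, 2.0, -0.5])
p = project_box(x, 0.0, np.inf)  # -> [0.0, 2.0, 0.0]
```

For a general nonempty, closed, convex C, evaluating P_C may itself require an optimization subroutine; the box case is the one most often used in numerical tests.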
2. Motivation and Algorithm
In this section, we begin by recalling a three-term spectral-conjugate gradient method for solving (1). Given an initial point x_0 ∈ C, the method generates a sequence {x_k} via the formula
x_{k+1} = x_k + alpha_k d_k, (2)
where x_k and x_{k+1} are the current and next points, respectively, alpha_k > 0 is the stepsize obtained via a line search, and d_k is the search direction defined as
d_0 = -F_0, d_k = -theta_k F_k + beta_k d_{k-1} + gamma_k y_{k-1}, k >= 1, (3)
where theta_k, beta_k, and gamma_k are parameters and y_{k-1} = F_k - F_{k-1}.
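To make the generic three-term update concrete, the following sketch computes a direction of the form d_k = -theta_k F_k + beta_k d_{k-1} + gamma_k y_{k-1}; the scalar values used below are arbitrary placeholders, not the parameter formulas derived in this paper:

```python
import numpy as np

def three_term_direction(F_k, F_prev, d_prev, theta, beta, gamma):
    # Generic three-term spectral-conjugate direction:
    #   d_k = -theta * F_k + beta * d_{k-1} + gamma * y_{k-1},
    # where y_{k-1} = F_k - F_{k-1}. The scalars theta, beta, gamma
    # are placeholders here; specific methods derive formulas for them.
    y_prev = F_k - F_prev
    return -theta * F_k + beta * d_prev + gamma * y_prev

# The first direction is d_0 = -F_0; later ones combine three terms.
F_k = np.array([1.0, -2.0])
F_prev = np.array([2.0, -1.0])
d_prev = np.array([-2.0, 1.0])
d = three_term_direction(F_k, F_prev, d_prev, theta=1.0, beta=0.1, gamma=0.05)
```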
Based on the three-term direction above, we propose modified scaled three-term derivative-free algorithms for solving (1). The algorithms are modifications of the two algorithms proposed by Li and Zheng [24]. The aim of the modification is to relax the uniform monotonicity assumption on the operator. The search directions defined in [24] were shown to be bounded under the uniform monotonicity assumption. Our main interest is to modify the search directions defined in [24] and prove their boundedness without requiring that assumption. The directions in [24], denoted STDF1 and STDF2, are given by (4) and (5), respectively.
To obtain a lower bound for a key term appearing in (4) and (5), Li and Zheng used the uniform monotonicity assumption. In order to relax this condition, we replace that term, together with the related quantities in (4) and (5), with quantities that can be bounded under monotonicity alone. Hence, we define the new directions, denoted PSTDF1 and PSTDF2, by (7) and (8), respectively.
Assumption 1. The constraint set C is nonempty, closed, and convex.
Assumption 2. The operator F is monotone; that is, for all x, y ∈ R^n, (F(x) - F(y))^T (x - y) >= 0.

Assumption 3. The operator F is Lipschitz continuous; that is, there exists L > 0 such that ||F(x) - F(y)|| <= L||x - y|| for all x, y ∈ R^n.
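The monotonicity in Assumption 2 can be probed numerically: it requires (F(x) - F(y))^T (x - y) >= 0 for every pair of points. The sketch below checks the inequality on random pairs; the sample operator F(x) = x + sin(x) is an illustrative choice, not one of the paper's test problems, and passing such a check does not prove monotonicity, it only fails to refute it:

```python
import numpy as np

def monotone_on_samples(F, points, tol=1e-12):
    # Check <F(x) - F(y), x - y> >= 0 over all sampled pairs.
    for x in points:
        for y in points:
            if np.dot(F(x) - F(y), x - y) < -tol:
                return False
    return True

# F(x) = x + sin(x) (componentwise) is monotone: its Jacobian
# diag(1 + cos(x_i)) is positive semidefinite everywhere.
F = lambda x: x + np.sin(x)
rng = np.random.default_rng(0)
pts = [rng.standard_normal(3) for _ in range(20)]
ok = monotone_on_samples(F, pts)  # True for this monotone operator
```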
Algorithm 1. PSTDF. Input. Choose an initial guess x_0 ∈ C, line search parameters sigma > 0 and rho ∈ (0, 1), a tolerance epsilon > 0, and set k = 0. Step 1. If ||F(x_k)|| <= epsilon, terminate. Else move to Step 2. Step 2. Compute d_k using (7) or (8). Step 3. Compute the stepsize alpha_k = rho^{m_k}, where m_k is the least nonnegative integer m satisfying the line search condition (14). Step 4. Set the trial point z_k = x_k + alpha_k d_k. If z_k ∈ C and ||F(z_k)|| <= epsilon, then stop. Else, compute x_{k+1} = P_C[x_k - zeta_k F(z_k)], where zeta_k = F(z_k)^T (x_k - z_k) / ||F(z_k)||^2. Step 5. Let k = k + 1 and repeat from Step 1.
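The overall structure above follows the standard derivative-free projection framework. The sketch below mirrors that framework with a plain steepest-descent placeholder direction d = -F(x) in place of the scaled three-term directions (7) and (8), and with illustrative parameter values; it is a sketch of the framework under those assumptions, not the paper's exact method:

```python
import numpy as np

def df_projection_solver(F, x0, project, sigma=1e-4, rho=0.5,
                         tol=1e-6, max_iter=1000):
    # Derivative-free projection framework (Solodov-Svaiter style):
    # a backtracking line search produces a trial point, then the
    # iterate is projected onto a separating hyperplane and onto C.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:           # stopping test
            return x
        d = -Fx                                 # placeholder direction
        t = 1.0                                 # backtracking line search:
        while -F(x + t * d) @ d < sigma * t * (d @ d):
            t *= rho
            if t < 1e-12:
                break
        z = x + t * d                           # trial point
        Fz = F(z)
        if np.linalg.norm(Fz) <= tol:
            return z
        # Hyperplane projection step, then projection onto C.
        zeta = (Fz @ (x - z)) / (Fz @ Fz)
        x = project(x - zeta * Fz)
    return x

# Usage: solve x + sin(x) = 0 on C = R^n (projection = identity).
sol = df_projection_solver(lambda x: x + np.sin(x), np.ones(3), lambda v: v)
```

The placeholder direction is enough to see the role of each step; the paper's contribution lies precisely in replacing it with directions whose descent and boundedness hold under monotonicity alone.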
3. Theoretical Results
In this section, we establish the convergence analysis of the proposed algorithm. We first require some important lemmas; the following one shows that the proposed directions are descent.
Proof. Multiplying both sides of (7) by F_k^T gives the descent inequality for PSTDF1. Also, multiplying both sides of (8) by F_k^T gives the corresponding inequality for PSTDF2. Hence, for all k >= 0, the directions defined by (7) and (8) satisfy the descent condition.

The lemma below shows that the line search (14) is well-defined and that the stepsize is bounded away from zero.
Lemma 2 (see [24]). Suppose Assumptions 1–3 are satisfied. If {d_k}, {x_k}, and {z_k} are the sequences defined by (7), (13), and (15), respectively, then (i) for all k >= 0, there is a stepsize alpha_k satisfying (14) for some sigma > 0 and rho ∈ (0, 1); (ii) the stepsize alpha_k obtained via (14) is bounded below by a positive constant.
Remark 2. Since {x_k} is bounded from Lemma 3 and F is continuous from Assumption 3, {F(x_k)} is also bounded. That is, there exists kappa > 0 such that, for all k >= 0, ||F(x_k)|| <= kappa. All is now set to establish the convergence of the proposed algorithm.
Proof. Suppose, for contradiction, that lim inf_{k→∞} ||F(x_k)|| > 0; then there is a positive constant c such that, for all k >= 0, ||F(x_k)|| >= c. By (17), (18), and the Cauchy–Schwarz inequality, we obtain a corresponding bound for all k >= 0. To complete the proof of the theorem, we need to show that the search directions defined by (7) and (8) are bounded.
For k = 0, we have ||d_0|| = ||F_0||. Now, for k >= 1, using (7), (10), (12), and (26), we obtain a bound on ||d_k||. Similarly, a bound follows from (8). Letting M denote the larger of the two bounds, we conclude that ||d_k|| <= M for all k >= 0, since ||F_k|| <= kappa.
Multiplying (20) by an appropriate positive factor, we obtain an inequality that contradicts (21); hence lim inf_{k→∞} ||F(x_k)|| = 0.
Because F is continuous and (24) holds, the sequence {x_k} has some accumulation point, say x*, for which F(x*) = 0; that is, x* is a solution of (1). From (22), it holds that {||x_k - x*||} converges, and since x* is an accumulation point of {x_k}, the whole sequence {x_k} converges to x*.
4. Numerical Examples on Monotone Operator Equations
This section demonstrates the computational efficiency of the PSTDF algorithm relative to the STDF algorithm [24]. For the PSTDF algorithm, PSTDF1 corresponds to the direction defined by (7) and PSTDF2 to the one defined by (8). Similarly, for the STDF algorithm, STDF1 and STDF2 correspond to (4) and (5), respectively. The parameters for the implementation of the PSTDF algorithm are as stated in the input of Algorithm 1, while the parameters for the STDF algorithm are chosen as reported in [24]. The metrics considered are the number of iterations (NOI), the number of function evaluations (NFE), and the CPU time (TIME). We used eight test problems with five choices of dimension (including 5000) and five initial points. The algorithms were coded in MATLAB R2019a and run on a PC with an Intel(R) Core(TM) i3-7100U processor, 8 GB of RAM, and a 2.40 GHz CPU. The iteration process is stopped whenever the stopping criterion of Algorithm 1 is satisfied. Failure is declared if this condition is not satisfied after 1000 iterations.
Table 1 lists the test problems considered, together with the corresponding operators.
The results of the experiments, in tabular form, can be found at https://documentcloud.adobe.com/link/review?uri=urn:aaid:scds:US:77a9a900-2156-4344-a9d9-b42e3a3dc8e5. It can be observed from the results that the algorithms successfully solved all the problems considered without a single failure. However, to better illustrate the performance of each algorithm, we employ the Dolan and Moré [47] performance profiles and plot Figures 1–3, which show the performance of the algorithms based on NOI, NFE, and TIME, respectively. In terms of NOI (Figure 1), the best performing algorithm is PSTDF2, followed by PSTDF1, with STDF1 and STDF2 recording the lowest success rates. Based on NFE (Figure 2), the best performing algorithm is PSTDF1, followed closely by PSTDF2, again ahead of STDF1 and STDF2. Lastly, in terms of TIME (Figure 3), PSTDF2 performs best, followed by PSTDF1, with STDF1 and STDF2 behind. Overall, we can conclude that PSTDF1 and PSTDF2 outperform STDF1 and STDF2 based on the metrics considered.
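For readers who wish to reproduce such comparisons, the Dolan and Moré profile is straightforward to compute from a cost table. A minimal sketch (the cost matrix below is made-up illustrative data, not the paper's results):

```python
import numpy as np

def performance_profile(T, taus):
    # Dolan-More performance profile. T[p, s] is the cost (e.g. NOI,
    # NFE, or TIME) of solver s on problem p, with np.inf for failure.
    # Returns rho[s, j] = fraction of problems solver s solves within
    # a factor taus[j] of the best solver on that problem.
    best = T.min(axis=1, keepdims=True)    # best cost per problem
    ratios = T / best                      # performance ratios r_{p,s}
    return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                     for s in range(T.shape[1])])

# Illustrative cost matrix: 4 problems x 2 solvers (made-up numbers).
T = np.array([[10.0, 12.0],
              [ 5.0, 20.0],
              [ 8.0,  8.0],
              [30.0, 15.0]])
rho = performance_profile(T, taus=[1.0, 2.0, 4.0])
# rho[s, 0] is the fraction of problems on which solver s is (tied) best.
```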
5. Conclusion

In this paper, a modified scaled algorithm based on the spectral-conjugate gradient method for solving nonlinear monotone operator equations was proposed. The algorithm replaces the stronger assumption of uniform monotonicity on the operator in the work of Li and Zheng (2020) with plain monotonicity, which is weaker. Interestingly, the search directions were shown to be descent independent of the line search and without any monotonicity assumption (unlike in the work of Li and Zheng). Furthermore, the convergence results were established under monotonicity and Lipschitz continuity assumptions on the operator. Numerical experiments on some benchmark problems were conducted to illustrate the good performance of the proposed algorithm.
Data Availability

No data were used to support this study.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Authors' Contributions

All authors contributed equally to the manuscript and read and approved the final manuscript.
Acknowledgments

The first, fifth, and sixth authors acknowledge with thanks the Department of Mathematics and Applied Mathematics at Sefako Makgatho Health Sciences University. The second author was financially supported by the Rajamangala University of Technology Phra Nakhon (RMUTP) Research Scholarship.
References

- Z. Dai, X. Dong, J. Kang, and L. Hong, “Forecasting stock market returns: new technical indicators and two-step economic constraint method,” The North American Journal of Economics and Finance, vol. 53, Article ID 101216, 2020.
- J. Chorowski and J. M. Zurada, “Learning understandable neural networks with nonnegative weight constraints,” IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 1, pp. 62–69, 2014.
- S. P. Dirkse and M. C. Ferris, “Mcplib: a collection of nonlinear mixed complementarity problems,” Optimization Methods and Software, vol. 5, no. 4, pp. 319–345, 1995.
- K. Meintjes and A. P. Morgan, “A methodology for solving chemical equilibrium systems,” Applied Mathematics and Computation, vol. 22, no. 4, pp. 333–361, 1987.
- A. B. Abubakar, P. Kumam, and H. Mohammad, “A note on the spectral gradient projection method for nonlinear monotone equations with applications,” Computational and Applied Mathematics, vol. 39, no. 129, 2020.
- A. B. Abubakar, P. Kumam, H. Mohammad, and A. M. Awwal, “A Barzilai-Borwein gradient projection method for sparse signal and blurred image restoration,” Journal of the Franklin Institute, vol. 357, no. 11, pp. 7266–7285, 2020.
- E. J. Candes, X. Li, and M. Soltanolkotabi, “Phase retrieval via Wirtinger flow: theory and algorithms,” IEEE Transactions on Information Theory, vol. 61, no. 4, pp. 1985–2007, 2015.
- H. Zhang, Y. Zhou, Y. Liang, and Y. Chi, “A nonconvex approach for phase retrieval: reshaped Wirtinger flow and incremental algorithms,” The Journal of Machine Learning Research, vol. 18, no. 141, pp. 1–35, 2017.
- A. J. Wood, B. F. Wollenberg, and G. B. Sheblé, Power Generation, Operation, and Control, John Wiley & Sons, Hoboken, NJ, USA, 2013.
- M. W. Berry, M. Browne, A. N. Langville, V. P. Pauca, and R. J. Plemmons, “Algorithms and applications for approximate nonnegative matrix factorization,” Computational Statistics & Data Analysis, vol. 52, no. 1, pp. 155–173, 2007.
- D. D. Lee and H. S. Seung, “Algorithms for non-negative matrix factorization,” in Proceedings of the 13th International Conference on Neural Information Processing Systems, pp. 556–562, Cambridge, MA, USA, January 2000.
- J. E. Dennis and J. J. Moré, “A characterization of superlinear convergence and its application to quasi-Newton methods,” Mathematics of Computation, vol. 28, no. 126, pp. 549–560, 1974.
- D. Li and M. Fukushima, “A globally and superlinearly convergent Gauss-Newton-based BFGS method for symmetric nonlinear equations,” SIAM Journal on Numerical Analysis, vol. 37, no. 1, pp. 152–172, 1999.
- G. Zhou and K. C. Toh, “Superlinear convergence of a Newton-type algorithm for monotone equations,” Journal of Optimization Theory and Applications, vol. 125, no. 1, pp. 205–221, 2005.
- W.-J. Zhou and D.-H. Li, “A globally convergent BFGS method for nonlinear monotone equations without any merit functions,” Mathematics of Computation, vol. 77, no. 264, pp. 2231–2240, 2008.
- M. V. Solodov and B. F. Svaiter, “A globally convergent inexact Newton method for systems of monotone equations,” in Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods, pp. 355–369, Springer, Berlin, Germany, 1998.
- L. Zhang and W. Zhou, “Spectral gradient projection method for solving nonlinear monotone equations,” Journal of Computational and Applied Mathematics, vol. 196, no. 2, pp. 478–484, 2006.
- E. G. Birgin and J. M. Martínez, “A spectral conjugate gradient method for unconstrained optimization,” Applied Mathematics and Optimization, vol. 43, no. 2, pp. 117–128, 2001.
- Z. Dai, X. Chen, and F. Wen, “A modified Perry’s conjugate gradient method-based derivative-free method for solving large-scale nonlinear monotone equations,” Applied Mathematics and Computation, vol. 270, pp. 378–386, 2015.
- I. E. Livieris and P. Pintelas, “Globally convergent modified Perry’s conjugate gradient method,” Applied Mathematics and Computation, vol. 218, no. 18, pp. 9197–9207, 2012.
- J. Liu and S. Li, “Spectral DY-type projection method for nonlinear monotone systems of equations,” Journal of Computational Mathematics, vol. 33, no. 4, pp. 341–355, 2015.
- Y. H. Dai and Y. Yuan, “A nonlinear conjugate gradient method with a strong global convergence property,” SIAM Journal on Optimization, vol. 10, no. 1, pp. 177–182, 1999.
- J. Liu and Y. Feng, “A derivative-free iterative method for nonlinear monotone equations with convex constraints,” Numerical Algorithms, vol. 82, no. 1, pp. 245–262, 2019.
- Q. Li and B. Zheng, “Scaled three-term derivative-free methods for solving large-scale nonlinear monotone equations,” Numerical Algorithms, pp. 1–25, 2020.
- S. Bojari and M. R. Eslahchi, “Two families of scaled three-term conjugate gradient methods with sufficient descent property for nonconvex optimization,” Numerical Algorithms, vol. 83, no. 3, pp. 901–933, 2020.
- A. B. Abubakar, P. Kumam, and A. M. Awwal, “A descent Dai-Liao projection method for convex constrained nonlinear monotone equations with applications,” Thai Journal of Mathematics, vol. 17, no. 1, pp. 128–152, 2018.
- A. B. Abubakar, P. Kumam, A. H. Ibrahim, and J. Rilwan, “Derivative-free HS-DY-type method for solving nonlinear equations and image restoration,” Heliyon, vol. 6, no. 11, Article ID e05400, 2020.
- A. B. Abubakar, P. Kumam, M. Hassan, and A. H. Ibrahim, “PRP-like algorithm for monotone operator equations,” Japan Journal of Industrial and Applied Mathematics, pp. 1–18, 2021.
- A. B. Abubakar, K. Muangchoo, A. H. Ibrahim, J. Abubakar, and S. A. Rano, “FR-type algorithm for finding approximate solutions to nonlinear monotone operator equations,” Arabian Journal of Mathematics, pp. 1–10, 2021.
- A. B. Abubakar, K. Muangchoo, A. H. Ibrahim, A. B. Muhammad, L. O. Jolaoso, and K. O. Aremu, “A new three-term Hestenes-Stiefel type method for nonlinear monotone operator equations and image restoration,” IEEE Access, vol. 9, pp. 18262–18277, 2021.
- A. H. Ibrahim, P. Kumam, A. B. Abubakar, J. Abubakar, and A. B. Muhammad, “Least-square-based three-term conjugate gradient projection method for l1-norm problems with application to compressed sensing,” Mathematics, vol. 8, no. 4, p. 602, 2020.
- A. H. Ibrahim, P. Kumam, A. B. Abubakar, W. Jirakitpuwapat, and J. Abubakar, “A hybrid conjugate gradient algorithm for constrained monotone equations with application in compressive sensing,” Heliyon, vol. 6, no. 3, Article ID e03466, 2020.
- A. H. Ibrahim, P. Kumam, A. B. Abubakar, U. B. Yusuf, and J. Rilwan, “Derivative-free conjugate residual algorithms for convex constraints nonlinear monotone equations and signal recovery,” Journal of Nonlinear and Convex Analysis, vol. 21, no. 9, pp. 1959–1972, 2020.
- A. H. Ibrahim, P. Kumam, A. B. Abubakar, U. B. Yusuf, S. E. Yimer, and K. O. Aremu, “An efficient gradient-free projection algorithm for constrained nonlinear equations and image restoration,” AIMS Mathematics, vol. 6, no. 1, p. 235, 2020.
- A. H. Ibrahim, K. Muangchoo, A. B. Abubakar, A. D. Adedokun, and H. Mohammed, “Spectral conjugate gradient like method for signal reconstruction,” Thai Journal of Mathematics, vol. 18, no. 4, pp. 2013–2022, 2020.
- A. H. Ibrahim, K. Muangchoo, N. S. Mohamed, and A. B. Abubakar, “Derivative-free SMR conjugate gradient method for constraint nonlinear equations,” Journal of Mathematics and Computer Science, vol. 24, no. 2, pp. 147–164, 2022.
- L. Liu, “Shrinking projection method for solving zero point and fixed point problems in Banach spaces,” Journal of Nonlinear and Variational Analysis, vol. 4, no. 3, pp. 439–454, 2020.
- A. Mayowa and G. Igor, “Augmented Lagrangian fast projected gradient algorithm with working set selection for training support vector machines,” Journal of Applied and Numerical Optimization, vol. 3, pp. 3–20, 2021.
- H. Mohammad and A. B. Abubakar, “A positive spectral gradient-like method for large-scale nonlinear monotone equations,” Bulletin of Computational and Applied Mathematics, vol. 5, no. 1, pp. 99–115, 2017.
- H. Mohammad and A. Bala Abubakar, “A descent derivative-free algorithm for nonlinear monotone equations with convex constraints,” RAIRO-Operations Research, vol. 54, no. 2, pp. 489–505, 2020.
- O. K. Oyewole and O. T. Mewomo, “A subgradient extragradient algorithm for solving split equilibrium and fixed point problems in reflexive Banach spaces,” Journal of Nonlinear Functional Analysis, vol. 2020, Article ID 19, 2020.
- W. La Cruz, J. M. Martínez, and M. Raydan, “Spectral residual method without gradient information for solving large-scale nonlinear systems of equations,” Mathematics of Computation, vol. 75, no. 255, pp. 1429–1449, 2006.
- W. Zhou and D. H. Li, “Limited memory BFGS method for nonlinear monotone equations,” Journal of Computational Mathematics, vol. 25, no. 1, pp. 89–96, 2007.
- B. Yang and L. Gao, “An efficient implementation of Merrill’s method for sparse or partially separable systems of nonlinear equations,” SIAM Journal on Optimization, vol. 1, no. 2, pp. 206–221, 1991.
- Z. Yu, J. Lin, J. Sun, Y. Xiao, L. Liu, and Z. Li, “Spectral gradient projection method for monotone nonlinear equations with convex constraints,” Applied Numerical Mathematics, vol. 59, no. 10, pp. 2416–2423, 2009.
- Y. Ding, Y. Xiao, and J. Li, “A class of conjugate gradient methods for convex constrained monotone equations,” Optimization, vol. 66, no. 12, pp. 2309–2328, 2017.
- E. D. Dolan and J. J. Moré, “Benchmarking optimization software with performance profiles,” Mathematical Programming, vol. 91, no. 2, pp. 201–213, 2002.
Copyright © 2021 Auwal Bala Abubakar et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.