Journal of Mathematics, Volume 2021, Article ID 5549878, 9 pages, https://doi.org/10.1155/2021/5549878

Research Article | Open Access

A Modified Scaled Spectral-Conjugate Gradient-Based Algorithm for Solving Monotone Operator Equations

Auwal Bala Abubakar, Kanikar Muangchoo, Abdulkarim Hassan Ibrahim, Sunday Emmanuel Fadugba, Kazeem Olalekan Aremu, Lateef Olakunle Jolaoso

Academic Editor: Jen-Chih Yao
Received 13 Jan 2021; Revised 11 Apr 2021; Accepted 13 Apr 2021; Published 26 Apr 2021

Abstract

This paper proposes a modified scaled spectral-conjugate gradient-based algorithm for finding solutions to monotone operator equations. The algorithm is a modification of the work of Li and Zheng in the sense that the uniform monotonicity assumption on the operator is relaxed to mere monotonicity. Furthermore, unlike in the work of Li and Zheng, the search directions of the proposed algorithm are shown to be descent and bounded independently of the monotonicity assumption. Moreover, global convergence is established under appropriate assumptions. Finally, numerical examples on some test problems are provided to show the efficiency of the proposed algorithm compared to that of Li and Zheng.

1. Introduction

In this work, we propose an algorithm to solve the problem
$$F(x) = 0, \quad x \in \mathcal{C}, \qquad (1)$$
where $F: \mathbb{R}^n \to \mathbb{R}^n$ is monotone and Lipschitz continuous and $\mathcal{C} \subseteq \mathbb{R}^n$ is nonempty, closed, and convex.
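As a concrete instance of (1), consider (for illustration only, under our reading of the test set in [42]) the logarithmic function from Table 1 below, paired with the nonnegative orthant as constraint set; a minimal Python sketch:

```python
import numpy as np

def F_log(x):
    """Logarithmic test map, our reading of the problem in La Cruz et al. [42]:
    F_i(x) = ln(x_i + 1) - x_i / n, which is monotone and Lipschitz
    continuous on the region of interest in the nonnegative orthant."""
    n = x.size
    return np.log(x + 1.0) - x / n
```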

Problems of the form (1) have attracted growing interest in recent years due to their appearance in many areas of science, engineering, and economics, for example, in forecasting of financial markets [1], constrained neural networks [2], economic and chemical equilibrium problems [3, 4], signal and image processing [5, 6], phase retrieval [7, 8], power flow equations [9], nonnegative matrix factorisation [10, 11], and many more.

Some notable methods for finding a solution to (1) are Newton's method, the quasi-Newton method, the Gauss–Newton method, the Levenberg–Marquardt method, and their variants [12–15]. These methods are prominent due to their fast convergence. However, their convergence is only local, and they require computing and storing the Jacobian matrix at each iteration; in addition, a linear system must be solved at each iteration. These and other reasons make them unattractive, especially for large-scale problems. To avoid the above drawbacks, methods that are globally convergent and do not require computing and storing the Jacobian matrix were introduced, such as the spectral gradient (SG) and conjugate gradient (CG) methods. SG and CG methods for solving (1) are usually combined with the projection method proposed in [16]. For instance, Zhang and Zhou [17] extended the work of Birgin and Martínez [18] for unconstrained optimization problems by combining it with the projection method and proposed a spectral gradient projection-based algorithm for solving (1). Dai et al. [19] extended the modified Perry's CG method [20] for unconstrained optimization to solve (1) by combining it with the projection method. Liu and Li [21] incorporated the Dai–Yuan (DY) CG method [22] into the projection framework and proposed a spectral Dai–Yuan (SDY) projection method for solving nonlinear monotone equations; the method was shown to be globally convergent under appropriate assumptions. Furthermore, to popularize and boost the efficiency of the DY CG method, Liu and Feng [23] proposed a spectral DY-type CG projection method (PDY), where the spectral parameter is derived such that the direction is descent. It is worth mentioning that all the methods above require the operator in (1) to be monotone. Recently, Li and Zheng [24] proposed scaled three-term derivative-free methods for solving (1), extending the method proposed by Bojari and Eslahchi [25]. However, to establish the convergence of the method, Li and Zheng assume that the operator is uniformly monotone, which is a stronger condition. Some other related ideas on spectral gradient-type and spectral conjugate gradient-type methods for finding solutions to (1) were studied in [26–41] and the references therein.

In this work, motivated by the strong condition imposed on the operator by Li and Zheng [24], we seek to relax the condition on the operator from uniformly monotone to monotone. This is achieved by modifying the two search directions defined by Li and Zheng. In addition, the global convergence is established under the assumption that the operator is monotone and Lipschitz continuous. Numerical examples to support the theoretical results are also given.

Notations: unless otherwise stated, the symbol $\|\cdot\|$ stands for the Euclidean norm on $\mathbb{R}^n$, and $F(x_k)$ is abbreviated to $F_k$. Furthermore, $P_{\mathcal{C}}$ is the projection mapping from $\mathbb{R}^n$ onto $\mathcal{C}$, given by $P_{\mathcal{C}}(x) = \arg\min\{\|x - y\| : y \in \mathcal{C}\}$, for a nonempty, closed, and convex set $\mathcal{C} \subseteq \mathbb{R}^n$.
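For simple constraint sets the projection operator is cheap to evaluate. A minimal sketch, assuming (purely for illustration) that $\mathcal{C}$ is the nonnegative orthant, where the projection reduces to a componentwise clip:

```python
import numpy as np

def project(x):
    """Euclidean projection onto C = {x : x >= 0} (an assumed example
    constraint set): argmin over y >= 0 of ||x - y|| is the componentwise
    maximum with zero."""
    return np.maximum(x, 0.0)
```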

2. Motivation and Algorithm

In this section, we begin by recalling a three-term spectral-conjugate gradient method for solving (1). Given an initial point $x_0$, the method generates a sequence $\{x_k\}$ via the formula
$$x_{k+1} = x_k + \alpha_k d_k, \qquad (2)$$
where $x_{k+1}$ and $x_k$ are the new and current points, respectively, $\alpha_k > 0$ is the stepsize obtained via a line search, and $d_k$ is the search direction defined as
$$d_k = \begin{cases} -F(x_k), & k = 0, \\ -\theta_k F(x_k) + \beta_k d_{k-1} + \gamma_k y_{k-1}, & k \geq 1, \end{cases} \qquad (3)$$
where $\theta_k$, $\beta_k$, and $\gamma_k$ are parameters and $y_{k-1} = F(x_k) - F(x_{k-1})$.
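The generic update (3) costs one operator evaluation plus vector arithmetic. A sketch of the three-term direction, with the method-specific scalar parameters passed in by the caller (their formulas are what the directions below specialize):

```python
import numpy as np

def three_term_direction(Fx, Fx_prev, d_prev, theta, beta, gamma):
    """Generic three-term spectral-conjugate direction of eq. (3):
        d_k = -theta_k * F(x_k) + beta_k * d_{k-1} + gamma_k * y_{k-1},
    with y_{k-1} = F(x_k) - F(x_{k-1}). theta, beta, gamma are
    method-specific scalars supplied by the caller."""
    y_prev = Fx - Fx_prev
    return -theta * Fx + beta * d_prev + gamma * y_prev
```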

Based on the three-term direction above, we propose modified scaled three-term derivative-free algorithms for solving (1). The algorithms are modifications of the two algorithms proposed by Li and Zheng [24]. The aim of the modification is to relax the uniformly monotone assumption on the operator: the search directions defined in [24] were shown to be bounded only under that assumption, and our main interest is to modify them so that boundedness can be proved without it. The directions in [24] are defined as follows:

STDF1: (4)

STDF2: (5)

where the quantities appearing in (4) and (5) are defined in (6).

To obtain a lower bound for the relevant term, Li and Zheng used the uniformly monotone assumption. In order to relax this condition, we replace this term in the directions defined by (4) and (5), along with several of the other quantities appearing there, by the modified counterparts defined below. Hence, we define the new directions as follows:

PSTDF1: (7)

PSTDF2: (8)

where the modified quantities are defined in (9) and (10).

Remark 1. From (10), a lower bound for the relevant term is obtained without any assumption on the operator $F$.

Let $\mathcal{S}$ denote the solution set of (1), and assume that the following holds.

Assumption 1. The constraint set $\mathcal{C} \subseteq \mathbb{R}^n$ is nonempty, closed, and convex.

Assumption 2. The operator $F$ is monotone; that is, for all $x, y \in \mathbb{R}^n$,
$$(F(x) - F(y))^T (x - y) \geq 0. \qquad (11)$$

Assumption 3. The operator $F$ is $L$-Lipschitz continuous on $\mathbb{R}^n$; that is, there exists a constant $L > 0$ such that, for all $x, y \in \mathbb{R}^n$,
$$\|F(x) - F(y)\| \leq L \|x - y\|. \qquad (12)$$

In the following algorithm (Algorithm 1), we generate approximate solutions to problem (1) under Assumptions 1–3.

Algorithm 1. PSTDF.
Input. Choose an initial guess $x_0 \in \mathcal{C}$, line-search parameters $\xi > 0$, $\rho \in (0, 1)$, and $\sigma > 0$, a stopping tolerance $\epsilon > 0$, the remaining method parameters, and set $k := 0$.
Step 1. If $\|F(x_k)\| \leq \epsilon$, terminate. Else, move to Step 2.
Step 2. Compute the search direction $d_k$ using (7) or (8).
Step 3. Compute $z_k = x_k + \alpha_k d_k$ (13), where $\alpha_k = \xi \rho^{m_k}$ and $m_k$ is the least nonnegative integer $m$ satisfying the line-search condition (14).
Step 4. If $z_k \in \mathcal{C}$ and $\|F(z_k)\| \leq \epsilon$, then stop. Else, compute
$$x_{k+1} = P_{\mathcal{C}}[x_k - \zeta_k F(z_k)], \qquad (15)$$
where
$$\zeta_k = \frac{F(z_k)^T (x_k - z_k)}{\|F(z_k)\|^2}.$$
Step 5. Let $k := k + 1$ and repeat from Step 1.
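For readers who want to experiment, the following is a minimal sketch of the derivative-free projection framework that Algorithm 1 instantiates (the projection step of [16]). It is not the paper's implementation: the direction is simplified to the plain residual $d_k = -F(x_k)$ where the paper would use (7) or (8), the line search is one common choice of condition (14), and all parameter values are illustrative assumptions.

```python
import numpy as np

def solve_projection(F, project, x0, xi=1.0, rho=0.5, sigma=1e-4,
                     eps=1e-6, max_iter=1000):
    """Sketch of the derivative-free projection framework underlying
    Algorithm 1. The direction below is a placeholder for (7)/(8);
    parameter values are illustrative, not the paper's."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= eps:                   # Step 1: stopping test
            return x, k
        d = -Fx                                          # Step 2 (placeholder for (7)/(8))
        alpha = xi                                       # Step 3: backtracking search, cf. (14)
        while True:
            z = x + alpha * d
            Fz = F(z)
            if -Fz @ d >= sigma * alpha * np.linalg.norm(Fz) * (d @ d):
                break
            alpha *= rho
        if np.linalg.norm(Fz) <= eps:                    # Step 4: stop at z if solved
            return z, k
        zeta = (Fz @ (x - z)) / (Fz @ Fz)                # projection stepsize
        x = project(x - zeta * Fz)                       # projection step (15)
    return x, max_iter
```

For example, `solve_projection(F_log, project, np.ones(5000))` drives the residual norm below the tolerance on the logarithmic map sketched earlier.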

3. Theoretical Results

In this section, we establish the convergence analysis of the proposed algorithm. We first require the following important lemmas; the first shows that the proposed directions are descent directions.

Lemma 1. The search directions defined by (7) and (8) satisfy the sufficient descent condition.

Proof. Multiplying both sides of (7) by $F_k^T$ yields (17), and multiplying both sides of (8) by $F_k^T$ yields (18). Hence, for all $k \geq 0$, the directions defined by (7) and (8) satisfy the sufficient descent condition
$$F_k^T d_k \leq -c \|F_k\|^2 \qquad (19)$$
for some constant $c > 0$.
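As a quick sanity check of the sufficient descent condition, the snippet below verifies it numerically for the $k = 0$ residual direction, for which it holds with $c = 1$; the directions (7) and (8) would be checked the same way:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.abs(rng.standard_normal(100))       # a point in the nonnegative orthant
Fx = F_log(x)                              # sample monotone map from the earlier sketch
d = -Fx                                    # k = 0 direction; (7)/(8) would go here
c = 1.0
assert Fx @ d <= -c * (Fx @ Fx) + 1e-12    # F_k^T d_k <= -c ||F_k||^2
```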

The lemma below shows that the line search (14) is well defined and that the stepsize is bounded away from zero.

Lemma 2 (see [5]). Suppose Assumptions 1–3 are satisfied. If $\{d_k\}$, $\{z_k\}$, and $\{x_k\}$ are the sequences defined by (7), (13), and (15), respectively, then:
(i) for all $k \geq 0$, there is a stepsize $\alpha_k$ satisfying (14) for some $\xi > 0$ and $\rho \in (0, 1)$;
(ii) the stepsize $\alpha_k$ obtained via (14) satisfies the positive lower bound (20).

Lemma 3 (see [5]). Suppose Assumptions 1–3 are fulfilled; then the sequences $\{z_k\}$ and $\{x_k\}$ defined by (13) and (15) are bounded. Furthermore,
$$\lim_{k \to \infty} \|x_k - z_k\| = 0. \qquad (21)$$

Lemma 4 (see [5]). From Lemma 3, we have that, for any solution $\bar{x}$ of problem (1),
$$\|x_{k+1} - \bar{x}\| \leq \|x_k - \bar{x}\| \quad \text{for all } k \geq 0. \qquad (22)$$

Remark 2. Since $\{x_k\}$ is bounded by Lemma 3 and $F$ is continuous by Assumption 3, $\{F(x_k)\}$ is also bounded; that is, there exists $\kappa > 0$ such that, for all $k \geq 0$,
$$\|F(x_k)\| \leq \kappa. \qquad (23)$$

All is now set to establish the convergence of the proposed algorithm.

Theorem 1. Suppose Assumptions 1 and 2 are satisfied. If $\{x_k\}$ is the sequence defined by (15), then
$$\liminf_{k \to \infty} \|F(x_k)\| = 0. \qquad (24)$$
Furthermore, the sequence $\{x_k\}$ converges to a solution of problem (1).

Proof. Suppose that (24) does not hold; then there is a positive constant $r$ such that, for all $k \geq 0$,
$$\|F(x_k)\| \geq r. \qquad (25)$$
By (17), (18), and the Cauchy–Schwarz inequality, we have that, for all $k \geq 0$,
$$\|d_k\| \geq \frac{|F_k^T d_k|}{\|F_k\|} \geq c \|F_k\| \geq c r. \qquad (26)$$
To complete the proof of the theorem, we need to show that the search directions defined by (7) and (8) are bounded.
For $k = 0$, we have $\|d_0\| = \|F(x_0)\| \leq \kappa$. Now, for $k \geq 1$, using (7), (10), (12), and (26), we obtain an upper bound, say $M_1$, on $\|d_k\|$; similarly, from (8), we obtain an upper bound $M_2$. Letting $M := \max\{\kappa, M_1, M_2\}$, it follows that, for all $k \geq 0$, $\|d_k\| \leq M$.
Multiplying the lower bound (20) by $\|d_k\|$ and using (25) and (26), we obtain a positive lower bound on $\alpha_k \|d_k\| = \|z_k - x_k\|$ that holds for all $k$. This contradicts (21), and hence (24) holds.
Because $F$ is a continuous function and (24) holds, the sequence $\{x_k\}$ has some accumulation point, say $\bar{x}$, for which $F(\bar{x}) = 0$; that is, $\bar{x}$ is a solution of (1). From (22), it follows that $\{\|x_k - \bar{x}\|\}$ converges, and since $\bar{x}$ is an accumulation point of $\{x_k\}$, $\{x_k\}$ converges to $\bar{x}$.

4. Numerical Examples on Monotone Operator Equations

This section demonstrates the computational efficiency of the PSTDF algorithm relative to the STDF algorithm [24]. For the PSTDF algorithm, PSTDF1 corresponds to the direction defined by (7) and PSTDF2 to the one defined by (8). Similarly, for the STDF algorithm, STDF1 and STDF2 correspond to (4) and (5), respectively. The parameters for the implementation of the PSTDF algorithm are fixed throughout the experiments, and the parameters for the STDF algorithm are chosen as reported in [24]. The metrics considered are the number of iterations (NOI), the number of function evaluations (NFE), and the CPU time (TIME). We used eight test problems, each with five problem dimensions (among them $n = 5000$) and five initial points. The algorithms were coded in MATLAB R2019a and run on a PC with an Intel(R) Core(TM) i3-7100U processor, 8 GB of RAM, and a CPU speed of 2.40 GHz. The iterative process is stopped whenever $\|F(x_k)\|$ falls below the chosen tolerance. Failure is declared if this condition is not satisfied after 1000 iterations.
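To make the protocol concrete, here is a small illustrative harness (it reuses the earlier sketches, not the authors' MATLAB code) that records NOI- and TIME-style metrics for one solver-problem pair; NFE would additionally require a counting wrapper around F:

```python
import time
import numpy as np

def run_trial(F, project, x0):
    """Run the sketched solver once and report (NOI, TIME, final residual)."""
    t0 = time.perf_counter()
    x, noi = solve_projection(F, project, x0)   # sketch from Section 2
    return noi, time.perf_counter() - t0, np.linalg.norm(F(x))

for x0 in (np.ones(5000), 0.1 * np.ones(5000)):  # two sample initial points
    print(run_trial(F_log, project, x0))
```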

Table 1 lists the test problems considered, where the operator is written componentwise as $F(x) = (f_1(x), f_2(x), \ldots, f_n(x))^T$ with $x = (x_1, x_2, \ldots, x_n)^T$.


Table 1: Test problems.

S/N | Problem and reference
1 | Modified exponential function 2 [42]
2 | Logarithmic function [42]
3 | Nonsmooth function [43]
4 | Strictly convex function I [42]
5 | Tridiagonal exponential function [44]
6 | Nonsmooth function [45]
7 | Problem 4 in [46]
8 | Problem 9 in [32]

The results of the experiments in tabular form can be found at https://documentcloud.adobe.com/link/review?uri=urn:aaid:scds:US:77a9a900-2156-4344-a9d9-b42e3a3dc8e5. It can be observed from the results that the algorithms successfully solved all the problems considered without a single failure. However, to better illustrate the performance of each algorithm, we employ the Dolan and Moré [47] performance profiles and plot Figures 1–3, which show the performance of the algorithms based on NOI, NFE, and TIME, respectively. In terms of NOI (Figure 1), the best performing algorithm is PSTDF2, followed by PSTDF1, with STDF1 and STDF2 recording the lowest success rates. Based on NFE (Figure 2), the best performing algorithm is PSTDF1, followed by PSTDF2, and then STDF1 and STDF2. Lastly, in terms of TIME (Figure 3), PSTDF2 performs best, followed by PSTDF1, and then STDF1 and STDF2. Overall, we can conclude that PSTDF1 and PSTDF2 outperform STDF1 and STDF2 on all the metrics considered.
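The Dolan and Moré profile [47] plots, for each solver $s$, the fraction of problems it solves within a factor $\tau$ of the best solver's cost. A compact sketch (here `T` is an assumed problems-by-solvers cost matrix of NOI, NFE, or TIME values, with `np.inf` marking failures):

```python
import numpy as np
import matplotlib.pyplot as plt

def performance_profile(T, labels):
    """Dolan-More performance profiles [47]. T[p, s] is the cost of
    solver s on problem p (np.inf on failure)."""
    ratios = T / T.min(axis=1, keepdims=True)           # performance ratios r_{p,s}
    taus = np.sort(np.unique(ratios[np.isfinite(ratios)]))
    for s, name in enumerate(labels):
        # rho_s(tau): fraction of problems solved within factor tau of the best
        rho = [(ratios[:, s] <= tau).mean() for tau in taus]
        plt.step(taus, rho, where="post", label=name)
    plt.xlabel("tau"); plt.ylabel("fraction of problems"); plt.legend()
    plt.show()
```

Calling `performance_profile(T, ["PSTDF1", "PSTDF2", "STDF1", "STDF2"])` would produce plots in the style of Figures 1–3.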

5. Conclusions

In this paper, a modified scaled algorithm based on the spectral-conjugate gradient method for solving nonlinear monotone operator equations was proposed. The algorithm replaces the stronger assumption of uniform monotonicity on the operator in the work of Li and Zheng (2020) with plain monotonicity, which is weaker. Interestingly, the search directions were shown to be descent independently of the line search and without any monotonicity assumption (unlike in the work of Li and Zheng). Furthermore, the convergence results were established under monotonicity and Lipschitz continuity assumptions on the operator. Numerical experiments on some benchmark problems were conducted to illustrate the good performance of the proposed algorithm.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors’ Contributions

All authors contributed equally to the manuscript and read and approved the final manuscript.

Acknowledgments

The first, fifth, and sixth authors acknowledge with thanks the Department of Mathematics and Applied Mathematics at the Sefako Makgatho Health Sciences University. The second author was financially supported by the Rajamangala University of Technology Phra Nakhon (RMUTP) Research Scholarship.

References

  1. Z. Dai, X. Dong, J. Kang, and L. Hong, "Forecasting stock market returns: new technical indicators and two-step economic constraint method," The North American Journal of Economics and Finance, vol. 53, Article ID 101216, 2020.
  2. J. Chorowski and J. M. Zurada, "Learning understandable neural networks with nonnegative weight constraints," IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 1, pp. 62–69, 2014.
  3. S. P. Dirkse and M. C. Ferris, "MCPLIB: a collection of nonlinear mixed complementarity problems," Optimization Methods and Software, vol. 5, no. 4, pp. 319–345, 1995.
  4. K. Meintjes and A. P. Morgan, "A methodology for solving chemical equilibrium systems," Applied Mathematics and Computation, vol. 22, no. 4, pp. 333–361, 1987.
  5. A. B. Abubakar, P. Kumam, and H. Mohammad, "A note on the spectral gradient projection method for nonlinear monotone equations with applications," Computational and Applied Mathematics, vol. 39, article 129, 2020.
  6. A. B. Abubakar, P. Kumam, H. Mohammad, and A. M. Awwal, "A Barzilai–Borwein gradient projection method for sparse signal and blurred image restoration," Journal of the Franklin Institute, vol. 357, no. 11, pp. 7266–7285, 2020.
  7. E. J. Candès, X. Li, and M. Soltanolkotabi, "Phase retrieval via Wirtinger flow: theory and algorithms," IEEE Transactions on Information Theory, vol. 61, no. 4, pp. 1985–2007, 2015.
  8. H. Zhang, Y. Zhou, Y. Liang, and Y. Chi, "A nonconvex approach for phase retrieval: reshaped Wirtinger flow and incremental algorithms," The Journal of Machine Learning Research, vol. 18, no. 141, pp. 1–35, 2017.
  9. A. J. Wood, B. F. Wollenberg, and G. B. Sheblé, Power Generation, Operation, and Control, John Wiley & Sons, Hoboken, NJ, USA, 2013.
  10. M. W. Berry, M. Browne, A. N. Langville, V. P. Pauca, and R. J. Plemmons, "Algorithms and applications for approximate nonnegative matrix factorization," Computational Statistics & Data Analysis, vol. 52, no. 1, pp. 155–173, 2007.
  11. D. D. Lee and H. S. Seung, "Algorithms for non-negative matrix factorization," in Proceedings of the 13th International Conference on Neural Information Processing Systems, pp. 556–562, Cambridge, MA, USA, January 2000.
  12. J. E. Dennis and J. J. Moré, "A characterization of superlinear convergence and its application to quasi-Newton methods," Mathematics of Computation, vol. 28, no. 126, pp. 549–560, 1974.
  13. D. Li and M. Fukushima, "A globally and superlinearly convergent Gauss–Newton-based BFGS method for symmetric nonlinear equations," SIAM Journal on Numerical Analysis, vol. 37, no. 1, pp. 152–172, 1999.
  14. G. Zhou and K. C. Toh, "Superlinear convergence of a Newton-type algorithm for monotone equations," Journal of Optimization Theory and Applications, vol. 125, no. 1, pp. 205–221, 2005.
  15. W.-J. Zhou and D.-H. Li, "A globally convergent BFGS method for nonlinear monotone equations without any merit functions," Mathematics of Computation, vol. 77, no. 264, pp. 2231–2240, 2008.
  16. M. V. Solodov and B. F. Svaiter, "A globally convergent inexact Newton method for systems of monotone equations," in Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods, pp. 355–369, Springer, Berlin, Germany, 1998.
  17. L. Zhang and W. Zhou, "Spectral gradient projection method for solving nonlinear monotone equations," Journal of Computational and Applied Mathematics, vol. 196, no. 2, pp. 478–484, 2006.
  18. E. G. Birgin and J. M. Martínez, "A spectral conjugate gradient method for unconstrained optimization," Applied Mathematics and Optimization, vol. 43, no. 2, pp. 117–128, 2001.
  19. Z. Dai, X. Chen, and F. Wen, "A modified Perry's conjugate gradient method-based derivative-free method for solving large-scale nonlinear monotone equations," Applied Mathematics and Computation, vol. 270, pp. 378–386, 2015.
  20. I. E. Livieris and P. Pintelas, "Globally convergent modified Perry's conjugate gradient method," Applied Mathematics and Computation, vol. 218, no. 18, pp. 9197–9207, 2012.
  21. J. Liu and S. Li, "Spectral DY-type projection method for nonlinear monotone systems of equations," Journal of Computational Mathematics, vol. 33, no. 4, pp. 341–355, 2015.
  22. Y. H. Dai and Y. Yuan, "A nonlinear conjugate gradient method with a strong global convergence property," SIAM Journal on Optimization, vol. 10, no. 1, pp. 177–182, 1999.
  23. J. Liu and Y. Feng, "A derivative-free iterative method for nonlinear monotone equations with convex constraints," Numerical Algorithms, vol. 82, no. 1, pp. 245–262, 2019.
  24. Q. Li and B. Zheng, "Scaled three-term derivative-free methods for solving large-scale nonlinear monotone equations," Numerical Algorithms, pp. 1–25, 2020.
  25. S. Bojari and M. R. Eslahchi, "Two families of scaled three-term conjugate gradient methods with sufficient descent property for nonconvex optimization," Numerical Algorithms, vol. 83, no. 3, pp. 901–933, 2020.
  26. A. B. Abubakar, P. Kumam, and A. M. Awwal, "A descent Dai–Liao projection method for convex constrained nonlinear monotone equations with applications," Thai Journal of Mathematics, vol. 17, no. 1, pp. 128–152, 2018.
  27. A. B. Abubakar, P. Kumam, A. H. Ibrahim, and J. Rilwan, "Derivative-free HS-DY-type method for solving nonlinear equations and image restoration," Heliyon, vol. 6, no. 11, Article ID e05400, 2020.
  28. A. B. Abubakar, P. Kumam, M. Hassan, and A. H. Ibrahim, "PRP-like algorithm for monotone operator equations," Japan Journal of Industrial and Applied Mathematics, pp. 1–18, 2021.
  29. A. B. Abubakar, K. Muangchoo, A. H. Ibrahim, J. Abubakar, and S. A. Rano, "FR-type algorithm for finding approximate solutions to nonlinear monotone operator equations," Arabian Journal of Mathematics, pp. 1–10, 2021.
  30. A. B. Abubakar, K. Muangchoo, A. H. Ibrahim, A. B. Muhammad, L. O. Jolaoso, and K. O. Aremu, "A new three-term Hestenes–Stiefel-type method for nonlinear monotone operator equations and image restoration," IEEE Access, vol. 9, pp. 18262–18277, 2021.
  31. A. H. Ibrahim, P. Kumam, A. B. Abubakar, J. Abubakar, and A. B. Muhammad, "Least-square-based three-term conjugate gradient projection method for $\ell_1$-norm problems with application to compressed sensing," Mathematics, vol. 8, no. 4, p. 602, 2020.
  32. A. H. Ibrahim, P. Kumam, A. B. Abubakar, W. Jirakitpuwapat, and J. Abubakar, "A hybrid conjugate gradient algorithm for constrained monotone equations with application in compressive sensing," Heliyon, vol. 6, no. 3, Article ID e03466, 2020.
  33. A. H. Ibrahim, P. Kumam, A. B. Abubakar, U. B. Yusuf, and J. Rilwan, "Derivative-free conjugate residual algorithms for convex constraints nonlinear monotone equations and signal recovery," Journal of Nonlinear and Convex Analysis, vol. 21, no. 9, pp. 1959–1972, 2020.
  34. A. H. Ibrahim, P. Kumam, A. B. Abubakar, U. B. Yusuf, S. E. Yimer, and K. O. Aremu, "An efficient gradient-free projection algorithm for constrained nonlinear equations and image restoration," AIMS Mathematics, vol. 6, no. 1, p. 235, 2020.
  35. A. H. Ibrahim, K. Muangchoo, A. B. Abubakar, A. D. Adedokun, and H. Mohammed, "Spectral conjugate gradient like method for signal reconstruction," Thai Journal of Mathematics, vol. 18, no. 4, pp. 2013–2022, 2020.
  36. A. H. Ibrahim, K. Muangchoo, N. S. Mohamed, and A. B. Abubakar, "Derivative-free SMR conjugate gradient method for constraint nonlinear equations," Journal of Mathematics and Computer Science, vol. 24, no. 2, pp. 147–164, 2022.
  37. L. Liu, "Shrinking projection method for solving zero point and fixed point problems in Banach spaces," Journal of Nonlinear and Variational Analysis, vol. 4, no. 3, pp. 439–454, 2020.
  38. A. Mayowa and G. Igor, "Augmented Lagrangian fast projected gradient algorithm with working set selection for training support vector machines," Journal of Applied and Numerical Optimization, vol. 3, pp. 3–20, 2021.
  39. H. Mohammad and A. B. Abubakar, "A positive spectral gradient-like method for large-scale nonlinear monotone equations," Bulletin of Computational and Applied Mathematics, vol. 5, no. 1, pp. 99–115, 2017.
  40. H. Mohammad and A. B. Abubakar, "A descent derivative-free algorithm for nonlinear monotone equations with convex constraints," RAIRO-Operations Research, vol. 54, no. 2, pp. 489–505, 2020.
  41. O. K. Oyewole and O. T. Mewomo, "A subgradient extragradient algorithm for solving split equilibrium and fixed point problems in reflexive Banach spaces," Journal of Nonlinear Functional Analysis, vol. 2020, article 19, 2020.
  42. W. La Cruz, J. M. Martínez, and M. Raydan, "Spectral residual method without gradient information for solving large-scale nonlinear systems of equations," Mathematics of Computation, vol. 75, no. 255, pp. 1429–1449, 2006.
  43. W. Zhou and D. H. Li, "Limited memory BFGS method for nonlinear monotone equations," Journal of Computational Mathematics, vol. 25, no. 1, pp. 89–96, 2007.
  44. B. Yang and L. Gao, "An efficient implementation of Merrill's method for sparse or partially separable systems of nonlinear equations," SIAM Journal on Optimization, vol. 1, no. 2, pp. 206–221, 1991.
  45. Z. Yu, J. Lin, J. Sun, Y. Xiao, L. Liu, and Z. Li, "Spectral gradient projection method for monotone nonlinear equations with convex constraints," Applied Numerical Mathematics, vol. 59, no. 10, pp. 2416–2423, 2009.
  46. Y. Ding, Y. Xiao, and J. Li, "A class of conjugate gradient methods for convex constrained monotone equations," Optimization, vol. 66, no. 12, pp. 2309–2328, 2017.
  47. E. D. Dolan and J. J. Moré, "Benchmarking optimization software with performance profiles," Mathematical Programming, vol. 91, no. 2, pp. 201–213, 2002.

Copyright © 2021 Auwal Bala Abubakar et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
