A Modified Scaled Spectral-Conjugate Gradient-Based Algorithm for Solving Monotone Operator Equations
This paper proposes a modified scaled spectral-conjugate gradient-based algorithm for finding solutions to monotone operator equations. The algorithm modifies the work of Li and Zheng in the sense that the uniformly monotone assumption on the operator is relaxed to mere monotonicity. Furthermore, unlike in the work of Li and Zheng, the search directions of the proposed algorithm are shown to be descent and bounded independently of the monotonicity assumption. Moreover, global convergence is established under appropriate assumptions. Finally, numerical examples on some test problems are provided to show the efficiency of the proposed algorithm compared to that of Li and Zheng.
1. Introduction
In this work, we propose an algorithm to solve the problem
\[ F(x) = 0, \quad x \in \Omega, \tag{1} \]
where $F: \mathbb{R}^n \to \mathbb{R}^n$ is monotone and Lipschitz continuous and $\Omega \subseteq \mathbb{R}^n$ is nonempty, closed, and convex.
Problems of the form (1) have become increasingly interesting in recent years due to their appearance in many areas of science, engineering, and economics, for example, in forecasting of financial markets, constrained neural networks, economic and chemical equilibrium problems [3, 4], signal and image processing [5, 6], phase retrieval [7, 8], power flow equations, nonnegative matrix factorisation [10, 11], and many more.
Some notable methods for finding solutions to (1) are Newton's method, the quasi-Newton method, the Gauss–Newton method, the Levenberg–Marquardt method, and their variants [12–15]. These methods are prominent due to their fast convergence. However, their convergence is only local, and they require computing and storing the Jacobian matrix at each iteration. In addition, a system of linear equations must be solved at each iteration. These and other reasons make them unattractive, especially for large-scale problems. To avoid these drawbacks, methods that are globally convergent and do not require computing and storing the Jacobian matrix were introduced. Examples of such methods are the spectral gradient (SG) and conjugate gradient (CG) methods. However, SG and CG methods for solving (1) are usually combined with the projection method proposed by Solodov and Svaiter. For instance, Zhang and Zhou extended the work of Birgin and Martínez for unconstrained optimization problems by combining it with the projection method and proposed a spectral gradient projection-based algorithm for solving (1). Dai et al. extended the modified Perry CG method for solving unconstrained optimization problems to solve (1) by combining it with the projection method. Liu and Li incorporated the Dai–Yuan (DY) CG method into the projection method and proposed a spectral Dai–Yuan (SDY) projection method for solving nonlinear monotone equations. The method was shown to be globally convergent under appropriate assumptions. Furthermore, to popularize and boost the efficiency of the DY CG method, Liu and Feng proposed a spectral DY-type CG projection method (PDY), where the spectral parameter is derived such that the direction is descent. It is worth mentioning that all the methods mentioned above require the operator in (1) to be monotone. Recently, Li and Zheng proposed scaled three-term derivative-free methods for solving (1), extending the method proposed by Bojari and Eslahchi.
However, to establish the convergence of the method, Li and Zheng assumed that the operator is uniformly monotone, which is a strong condition. Some other related ideas on spectral gradient-type and spectral conjugate gradient-type methods for finding solutions to (1) were studied in [26–41] and the references therein.
In this work, motivated by the strong condition imposed on the operator by Li and Zheng, we seek to relax the condition on the operator from uniformly monotone to monotone. This is achieved by modifying the two search directions defined by Li and Zheng. In addition, global convergence is established under the assumption that the operator is monotone and Lipschitz continuous. Numerical examples to support the theoretical results are also given.
Notations: unless otherwise stated, the symbol $\|\cdot\|$ stands for the Euclidean norm on $\mathbb{R}^n$, and $F(x_k)$ is abbreviated to $F_k$. Furthermore, $P_\Omega$ is the projection mapping from $\mathbb{R}^n$ onto $\Omega$, given by $P_\Omega(x) = \arg\min\{\|x - y\| : y \in \Omega\}$, for a nonempty, closed, and convex set $\Omega \subseteq \mathbb{R}^n$.
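For the simple feasible sets common in experiments of this kind (the nonnegative orthant or a box), the projection $P_\Omega$ has a closed componentwise form. The following is an illustrative NumPy sketch, not code from the paper:

```python
import numpy as np

def project_nonneg(x):
    # Euclidean projection onto the nonnegative orthant R^n_+:
    # each coordinate is clipped at zero.
    return np.maximum(x, 0.0)

def project_box(x, lo, hi):
    # Euclidean projection onto the box {y : lo <= y <= hi},
    # applied componentwise.
    return np.clip(x, lo, hi)
```

Both maps are the unique minimizers of $\|x - y\|$ over the respective sets, so they realize $P_\Omega$ exactly for these choices of $\Omega$.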
2. Motivation and Algorithm
In this section, we begin by recalling a three-term spectral-conjugate gradient method for solving (1). Given an initial point $x_0 \in \Omega$, the method generates a sequence $\{x_k\}$ via the following formula:
\[ x_{k+1} = x_k + \alpha_k d_k, \]
where $x_{k+1}$ and $x_k$ are the next and current points, respectively, $\alpha_k > 0$ is the stepsize obtained via a line search, and $d_k$ is the search direction defined as
\[ d_k = \begin{cases} -F_k, & k = 0, \\ -\theta_k F_k + \beta_k s_{k-1} + \gamma_k y_{k-1}, & k \ge 1, \end{cases} \]
where $\theta_k$, $\beta_k$, and $\gamma_k$ are parameters, $s_{k-1} = x_k - x_{k-1}$, and $y_{k-1} = F_k - F_{k-1}$.
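In code, one iteration of such a scheme has the generic shape below. The formulas for $\theta_k$, $\beta_k$, and $\gamma_k$ are left as inputs, since the specific choices of this paper and of Li and Zheng differ; this is a structural sketch only:

```python
import numpy as np

def three_term_direction(F_k, s_prev, y_prev, theta, beta, gamma):
    # Generic three-term spectral-conjugate direction:
    #   d_k = -theta_k * F(x_k) + beta_k * s_{k-1} + gamma_k * y_{k-1},
    # with s_{k-1} = x_k - x_{k-1} and y_{k-1} = F(x_k) - F(x_{k-1}).
    return -theta * F_k + beta * s_prev + gamma * y_prev

def update_iterate(x_k, d_k, alpha_k):
    # Iterate update x_{k+1} = x_k + alpha_k * d_k.
    return x_k + alpha_k * d_k
```

Setting $\theta_k = 1$ and $\beta_k = \gamma_k = 0$ recovers the steepest-descent-like direction $d_k = -F_k$ used at $k = 0$.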
Based on the three-term direction above, we propose modified scaled three-term derivative-free algorithms for solving (1). The algorithms are a modification of the two algorithms proposed by Li and Zheng. The aim of the modification is to relax the uniformly monotone assumption on the operator. The search directions defined by Li and Zheng were shown to be bounded only under the uniformly monotone assumption; our main interest is to modify these directions and prove their boundedness without requiring that assumption. The two directions of Li and Zheng are denoted STDF1 and STDF2 and are given by (4) and (5), respectively.
To obtain a lower bound for the term in question, Li and Zheng used the uniformly monotone assumption. To relax this condition, we replace that term in the directions defined by (4) and (5) with a quantity that admits a lower bound under plain monotonicity, together with the corresponding replacements of the remaining terms in (4) and (5). Hence, we define the new directions PSTDF1, given by (7), and PSTDF2, given by (8).
Assumption 1. The constraint set $\Omega$ is nonempty, closed, and convex.
Assumption 2. The operator $F$ is monotone, that is, for all $x, y \in \mathbb{R}^n$:
\[ (F(x) - F(y))^T (x - y) \ge 0. \]
Assumption 3. The operator $F$ is Lipschitz continuous, that is, there exists a constant $L > 0$ such that $\|F(x) - F(y)\| \le L \|x - y\|$ for all $x, y \in \mathbb{R}^n$.
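Monotonicity is easy to probe numerically. As an illustration (a hypothetical operator, not one of the paper's test problems), an affine map $F(x) = Ax + b$ is monotone exactly when $A + A^T$ is positive semidefinite, so sampled pairs should never yield a negative inner product:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, -1.0], [1.0, 2.0]])  # A + A^T = 4*I is positive definite
b = np.array([1.0, -1.0])
F = lambda x: A @ x + b

# Empirically check (F(x) - F(y))^T (x - y) >= 0 on random pairs.
monotone = all(
    (F(x) - F(y)) @ (x - y) >= -1e-12
    for _ in range(100)
    for x, y in [(rng.standard_normal(2), rng.standard_normal(2))]
)
print(monotone)  # True for this choice of A
```

Note that this $F$ is monotone but not uniformly monotone in the sense required by Li and Zheng only if the symmetric part of $A$ were singular; here the check merely illustrates the inequality in Assumption 2.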
Algorithm 1. PSTDF.
Input. Choose an initial guess $x_0 \in \Omega$, a tolerance $\epsilon > 0$, and line-search parameters $\xi > 0$, $\sigma > 0$, and $\rho \in (0, 1)$. Set $k = 0$.
Step 1. If $\|F(x_k)\| \le \epsilon$, terminate. Else move to Step 2.
Step 2. Compute $d_k$ using (7) or (8).
Step 3. Compute $\alpha_k = \xi \rho^{i_k}$, where $i_k$ is the least nonnegative integer $i$ satisfying (14).
Step 4. Set $z_k = x_k + \alpha_k d_k$. If $z_k \in \Omega$ and $\|F(z_k)\| \le \epsilon$, then stop. Else, compute
\[ x_{k+1} = P_\Omega(x_k - \lambda_k F(z_k)), \]
where
\[ \lambda_k = \frac{F(z_k)^T (x_k - z_k)}{\|F(z_k)\|^2}. \]
Step 5. Let $k = k + 1$ and repeat from Step 1.
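For concreteness, here is a hedged sketch of a loop with this step structure, assuming the line search and projection step take the standard Solodov–Svaiter derivative-free form; the names `sigma`, `rho`, `xi` and the black-box `direction` are assumptions for illustration, not the paper's exact choices:

```python
import numpy as np

def pstdf_like(F, project, x0, direction, sigma=1e-4, rho=0.5, xi=1.0,
               tol=1e-6, max_iter=1000):
    # Sketch of a derivative-free projection method with the step structure
    # of Algorithm 1; `direction` is a black box standing in for (7) or (8).
    x = np.asarray(x0, dtype=float)
    x_prev = F_prev = None
    for k in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:              # Step 1: x_k solves (1)
            return x, k
        d = direction(x, Fx, x_prev, F_prev)       # Step 2: search direction
        alpha = xi                                 # Step 3: backtracking search
        while -F(x + alpha * d) @ d < sigma * alpha * np.linalg.norm(d) ** 2:
            alpha *= rho
        z = x + alpha * d                          # trial point z_k
        Fz = F(z)
        if np.linalg.norm(Fz) <= tol:              # Step 4: z_k solves (1)
            return z, k
        lam = (Fz @ (x - z)) / (Fz @ Fz)           # hyperplane projection step
        x_prev, F_prev = x, Fx
        x = project(x - lam * Fz)                  # project back onto Omega
    return x, max_iter
```

With the monotone operator $F(x) = x$, the identity projection, and the plain direction $d_k = -F_k$, this loop drives $\|F(x_k)\|$ below the tolerance by halving the iterate at each step.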
3. Theoretical Results
In this section, we establish the convergence analysis of the proposed algorithm. We first require some important lemmas. The following lemma shows that the proposed directions are descent.
Lemma 1. For all $k \ge 0$, the directions $d_k$ defined by (7) and (8) satisfy the descent condition $F_k^T d_k \le -c \|F_k\|^2$ for some constant $c > 0$.
Proof. Multiplying both sides of (7) by $F_k^T$ gives the stated bound for the direction (7). Also, multiplying both sides of (8) by $F_k^T$ gives the stated bound for the direction (8). Hence, for all $k \ge 0$, the directions defined by (7) and (8) satisfy $F_k^T d_k \le -c \|F_k\|^2$.
The lemma below shows that the line search (14) is well-defined and that the stepsize is bounded away from zero.
Lemma 2. Suppose Assumptions 1–3 are satisfied. If $\{d_k\}$, $\{x_k\}$, and $\{z_k\}$ are the sequences defined by (7), (13), and (15), respectively, then: (i) for all $k \ge 0$, there is a stepsize $\alpha_k$ satisfying (14) for some $\xi > 0$ and $\rho \in (0, 1)$; (ii) the stepsize $\alpha_k$ obtained via (14) is bounded away from zero.
Remark 2. Since $\{x_k\}$ is bounded by Lemma 3 and $F$ is continuous by Assumption 3, $\{F(x_k)\}$ is also bounded. That is, there exists $M > 0$ such that, for all $k \ge 0$, $\|F(x_k)\| \le M$. We are now in a position to establish the convergence of the proposed algorithm.
Theorem 1. Suppose Assumptions 1–3 hold and let $\{x_k\}$ be the sequence generated by Algorithm 1. Then $\liminf_{k \to \infty} \|F(x_k)\| = 0$.
Proof. Suppose that $\liminf_{k \to \infty} \|F(x_k)\| \ne 0$; then there is a positive constant $r$ such that, for all $k \ge 0$, $\|F(x_k)\| \ge r$. By (17), (18), and the Cauchy–Schwarz inequality, we have the corresponding bound for all $k \ge 0$. To complete the proof of the theorem, we need to show that the search directions defined by (7) and (8) are bounded.
For $k = 0$, we have $\|d_0\| = \|F_0\|$. Now, for $k \ge 1$, using (7), (10), (12), and (26), we bound $\|d_k\|$ for the direction (7). Similarly, from (8), we bound $\|d_k\|$ for the direction (8). Combining the two cases, it follows that $\|d_k\|$ is bounded for all $k \ge 0$.
Multiplying (20) by the term in question, we obtain an inequality that contradicts (21), and hence $\liminf_{k \to \infty} \|F(x_k)\| = 0$.
Because $F$ is a continuous function and (24) holds, the sequence $\{x_k\}$ has some accumulation point, say $\bar{x}$, for which $F(\bar{x}) = 0$; that is, $\bar{x}$ is a solution of (1). From (22), it holds that $\{\|x_k - \bar{x}\|\}$ converges, and since $\bar{x}$ is an accumulation point of $\{x_k\}$, $\{x_k\}$ converges to $\bar{x}$.
4. Numerical Examples on Monotone Operator Equations
This section demonstrates the computational efficiency of the PSTDF algorithm relative to the STDF algorithm of Li and Zheng. For the PSTDF algorithm, we have PSTDF1, which corresponds to the direction defined by (7), and PSTDF2, corresponding to the one defined by (8). Similarly, for the STDF algorithm, we have STDF1 and STDF2, corresponding to (4) and (5), respectively. The parameters for the STDF algorithm are chosen as reported by Li and Zheng. The metrics considered are the number of iterations (NOI), the number of function evaluations (NFE), and the CPU time (TIME). We used eight test problems with five dimensions (including $n = 5000$) and five initial points. The algorithms were coded in MATLAB R2019a and run on a PC with an Intel(R) Core(TM) i3-7100U processor, 8 GB of RAM, and a 2.40 GHz CPU. The iteration process is stopped whenever $\|F(x_k)\|$ falls below the chosen tolerance. Failure is declared if this condition is not satisfied after 1000 iterations.
Table 1 lists the test problems considered, where the function is written as $F(x) = (f_1(x), f_2(x), \ldots, f_n(x))^T$ and $x = (x_1, x_2, \ldots, x_n)^T$.
The results of the experiments in tabular form can be found at the link https://documentcloud.adobe.com/link/review?uri=urn:aaid:scds:US:77a9a900-2156-4344-a9d9-b42e3a3dc8e5. It can be observed from the results that the algorithms successfully solved all the problems considered without a single failure. However, to better illustrate the performance of each algorithm, we employ the Dolan and Moré performance profiles and plot Figures 1–3, which represent the performance of the algorithms based on NOI, NFE, and TIME, respectively. In terms of NOI (Figure 1), the best-performing algorithm is PSTDF2, followed by PSTDF1, with STDF1 and STDF2 recording markedly lower success rates. Based on NFE (Figure 2), the best-performing algorithm is PSTDF1, followed by PSTDF2, again ahead of STDF1 and STDF2. Lastly, in terms of TIME (Figure 3), PSTDF2 performs best, followed by PSTDF1, with STDF1 and STDF2 behind. Overall, we can conclude that PSTDF1 and PSTDF2 outperform STDF1 and STDF2 based on the metrics considered.
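The Dolan and Moré profile behind Figures 1–3 is simple to compute: for each solver $s$ and problem $p$, form the ratio $r_{p,s} = t_{p,s} / \min_s t_{p,s}$ of the cost to the best cost, and plot the fraction of problems with $r_{p,s} \le \tau$. A minimal sketch on hypothetical data (not the paper's results):

```python
import numpy as np

def performance_profile(T, taus):
    # T: (n_problems, n_solvers) array of costs (NOI, NFE, or time),
    # with np.inf marking failures. Returns rho[s, i] = fraction of
    # problems solver s solves within a factor taus[i] of the best.
    ratios = T / T.min(axis=1, keepdims=True)
    return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                     for s in range(T.shape[1])])

# Hypothetical costs: 3 problems, 2 solvers.
T = np.array([[10.0, 20.0],
              [8.0, 4.0],
              [5.0, 5.0]])
rho = performance_profile(T, taus=[1.0, 2.0])
```

The value of $\rho_s$ at $\tau = 1$ is the fraction of problems on which solver $s$ is (tied for) the best, which is what the "success" percentages in Figures 1–3 report.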
5. Conclusions
In this paper, a modified scaled algorithm based on the spectral-conjugate gradient method for solving nonlinear monotone operator equations was proposed. The algorithm replaces the strong assumption of uniform monotonicity on the operator in the work of Li and Zheng (2020) with plain monotonicity, which is weaker. Interestingly, the search directions were shown to be descent independently of the line search and without the monotonicity assumption (unlike in the work of Li and Zheng). Furthermore, the convergence results were established under monotonicity and Lipschitz continuity assumptions on the operator. Numerical experiments on some benchmark problems were conducted to illustrate the good performance of the proposed algorithm.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Authors' Contributions
All authors contributed equally to the manuscript and read and approved the final manuscript.
Acknowledgments
The first, fifth, and sixth authors acknowledge with thanks the Department of Mathematics and Applied Mathematics at the Sefako Makgatho Health Sciences University. The second author was financially supported by the Rajamangala University of Technology Phra Nakhon (RMUTP) Research Scholarship.
References
M. Keith and A. P. Morgan, "A methodology for solving chemical equilibrium systems," Applied Mathematics and Computation, vol. 22, no. 4, pp. 333–361, 1987.
H. Zhang, Y. Zhou, Y. Liang, and Y. Chi, "A nonconvex approach for phase retrieval: reshaped Wirtinger flow and incremental algorithms," The Journal of Machine Learning Research, vol. 18, no. 141, pp. 1–35, 2017.
A. J. Wood, B. F. Wollenberg, and G. B. Sheblé, Power Generation, Operation, and Control, John Wiley & Sons, Hoboken, NJ, USA, 2013.
D. D. Lee and H. S. Seung, "Algorithms for non-negative matrix factorization," in Proceedings of the 13th International Conference on Neural Information Processing Systems, pp. 556–562, Cambridge, MA, USA, January 2000.
M. V. Solodov and B. F. Svaiter, "A globally convergent inexact Newton method for systems of monotone equations," in Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods, pp. 355–369, Springer, Berlin, Germany, 1998.
A. B. Abubakar, P. Kumam, and A. M. Awwal, "A descent Dai-Liao projection method for convex constrained nonlinear monotone equations with applications," Thai Journal of Mathematics, vol. 17, no. 1, pp. 128–152, 2018.
A. B. Abubakar, K. Muangchoo, A. H. Ibrahim, A. B. Muhammad, L. O. Jolaoso, and K. O. Aremu, "A new three-term Hestenes-Stiefel type method for nonlinear monotone operator equations and image restoration," IEEE Access, vol. 9, pp. 18262–18277, 2021.
A. H. Ibrahim, P. Kumam, A. B. Abubakar, J. Abubakar, and A. B. Muhammad, "Least-square-based three-term conjugate gradient projection method for $\ell_1$-norm problems with application to compressed sensing," Mathematics, vol. 8, no. 4, p. 602, 2020.
A. H. Ibrahim, P. Kumam, A. B. Abubakar, U. B. Yusuf, and J. Rilwan, "Derivative-free conjugate residual algorithms for convex constraints nonlinear monotone equations and signal recovery," Journal of Nonlinear and Convex Analysis, vol. 21, no. 9, pp. 1959–1972, 2020.
A. H. Ibrahim, K. Muangchoo, A. B. Abubakar, A. D. Adedokun, and H. Mohammed, "Spectral conjugate gradient like method for signal reconstruction," Thai Journal of Mathematics, vol. 18, no. 4, pp. 2013–2022, 2020.
A. H. Ibrahim, K. Muangchoo, N. S. Mohamed, and A. B. Abubakar, "Derivative-free SMR conjugate gradient method for constraint nonlinear equations," Journal of Mathematics and Computer Science, vol. 24, no. 2, pp. 147–164, 2022.
A. Mayowa and G. Igor, "Augmented Lagrangian fast projected gradient algorithm with working set selection for training support vector machines," Journal of Applied and Numerical Optimization, vol. 3, no. 3–20, 2021.
H. Mohammad and A. B. Abubakar, "A positive spectral gradient-like method for large-scale nonlinear monotone equations," Bulletin of Computational and Applied Mathematics, vol. 5, no. 1, pp. 99–115, 2017.
O. K. Oyewole and O. T. Mewomo, "A subgradient extragradient algorithm for solving split equilibrium and fixed point problems in reflexive Banach spaces," Journal of Nonlinear Functional Analysis, vol. 2020, Article ID 19, 2020.
W. Zhou and D. H. Li, "Limited memory BFGS method for nonlinear monotone equations," Journal of Computational Mathematics, vol. 25, no. 1, pp. 89–96, 2007.