## Special Issue: Machine Learning and its Applications in Image Restoration

Research Article | Open Access


Haishan Feng, Tingting Li, "An Accelerated Conjugate Gradient Algorithm for Solving Nonlinear Monotone Equations and Image Restoration Problems", Mathematical Problems in Engineering, vol. 2020, Article ID 7945467, 12 pages, 2020. https://doi.org/10.1155/2020/7945467

# An Accelerated Conjugate Gradient Algorithm for Solving Nonlinear Monotone Equations and Image Restoration Problems

Guest Editor: Wenjie Liu
Accepted: 25 Aug 2020
Published: 05 Oct 2020

#### Abstract

Combining the three-term conjugate gradient method of Yuan and Zhang and the acceleration step length of Andrei with the hyperplane projection method of Solodov and Svaiter, we propose an accelerated conjugate gradient algorithm for solving nonlinear monotone equations. The presented algorithm has the following properties: (i) all search directions generated by the algorithm satisfy the sufficient descent and trust region properties independently of the line search technique; (ii) a derivative-free line search technique is used along the direction $d_k$ to obtain the step length $\alpha_k$; (iii) when the acceleration coefficient is well defined, an acceleration scheme modifies the step length in a multiplicative manner and creates a trial point; (iv) if the trial point satisfies the given condition, it becomes the next iteration point; otherwise, the hyperplane projection technique is used to obtain the next iteration point; (v) the global convergence of the proposed algorithm is established under suitable conditions. Numerical comparisons with other conjugate gradient algorithms show that the accelerated computing scheme is more competitive. In addition, the presented algorithm can also be applied to image restoration.

#### 1. Introduction

In this paper, the following nonlinear equation is considered:

$$F(x) = 0, \quad x \in \mathbb{R}^{n}, \tag{1}$$

where $F : \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$ is continuous and monotone; that is, $F$ satisfies

$$(F(x) - F(y))^{T}(x - y) \geq 0, \quad \forall x, y \in \mathbb{R}^{n}. \tag{2}$$

It is not difficult to show that the solution set of monotone equation (1), unless empty, is convex. This problem has many significant applications in applied mathematics, economics, and engineering. For example, the economic equilibrium problem [1] can be transformed into problem (1). Generally, an iteration formula generates the next iteration point by

$$x_{k+1} = x_k + \alpha_k d_k, \tag{3}$$

where $\alpha_k > 0$ is the step length and $d_k$ is a search direction; these are the two important factors for solving nonlinear equations. Some derivative-free line search techniques [2–5] were proposed to search for the step length $\alpha_k$. Li and Li [6] presented a derivative-free line search to find $\alpha_k$ such that

$$-F(x_k + \alpha_k d_k)^{T} d_k \geq \sigma \alpha_k \|F(x_k + \alpha_k d_k)\| \|d_k\|^{2}, \tag{4}$$

where $\sigma > 0$ and $\alpha_k = \max\{\beta \rho^{i} : i = 0, 1, 2, \ldots\}$ with $\beta > 0$ and $\rho \in (0, 1)$; $\|\cdot\|$ represents the Euclidean norm. The line search technique (4) is different from other existing derivative-free line search techniques because it does not use a merit function. If $d_k$ satisfies $F(x_k)^{T} d_k < 0$, the inequality (4) holds for all sufficiently small $\alpha > 0$. As a result, $\alpha_k$ can be obtained by some backtracking process.
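As an illustration of how a step length satisfying a condition of type (4) can be computed by backtracking, consider the following Python sketch; the function name and default parameter values are our own assumptions, not code from the paper.

```python
import numpy as np

def derivative_free_line_search(F, x, d, sigma=1e-4, beta=1.0, rho=0.5, max_backtracks=50):
    """Backtrack alpha = beta * rho^i until the merit-function-free condition
    -F(x + alpha*d)^T d >= sigma * alpha * ||F(x + alpha*d)|| * ||d||^2 holds."""
    alpha = beta
    for _ in range(max_backtracks):
        Fz = F(x + alpha * d)
        if -Fz @ d >= sigma * alpha * np.linalg.norm(Fz) * np.linalg.norm(d) ** 2:
            return alpha
        alpha *= rho
    return alpha
```

Because the condition only evaluates $F$, no derivative or merit function is needed, matching the derivative-free character of the search.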

For problem (1), it is well known that Newton methods [7, 8], quasi-Newton methods [9–13], spectral gradient methods [14, 15], and conjugate gradient methods [16, 17] can deal with large-scale nonlinear equations. For solving large-scale optimization problems, the conjugate gradient methods are quite effective since they only calculate and store the gradient value of the objective function. Many scholars have applied conjugate gradient theory to solve nonlinear monotone equations and have achieved good results [17–19]. Classical conjugate gradient methods include the HS method [20], FR method [21], PRP method [22, 23], LS method [24], CD method [25], and DY method [26]. In particular, the PRP method, as one of the most effective methods, has the following feature: if it generates a small step near a minimum point, then the next search direction automatically approaches the negative gradient direction, which avoids continuously generating small steps. However, the global convergence of the PRP method is not established under inexact line search techniques for general functions. Many scholars have performed continuous research and have reached satisfactory conclusions. Zhang et al. [27] proposed the MPRP method, where $d_k$ is designed as follows:

$$d_k = \begin{cases} -g_k, & k = 0, \\ -g_k + \dfrac{g_k^{T} y_{k-1} d_{k-1} - g_k^{T} d_{k-1} y_{k-1}}{\|g_{k-1}\|^{2}}, & k \geq 1, \end{cases} \tag{5}$$

where $g_k = \nabla f(x_k)$ denotes the gradient of the objective function $f$ at $x_k$ and $y_{k-1} = g_k - g_{k-1}$. It is easy to obtain from (5) that

$$g_k^{T} d_k = -\|g_k\|^{2}. \tag{6}$$

The above equation indicates that $d_k$ is a descent direction of $f$ at $x_k$. If the exact line search is used, then we have $g_k^{T} d_{k-1} = 0$. Consequently, formula (5) reduces to the standard PRP method. Under some mild conditions, the MPRP method is globally convergent under the Armijo-type line search, but global convergence cannot be established under the weak Wolfe–Powell line search. The main reason is that the MPRP method does not satisfy the trust region property. Inspired by the above discussions, Yuan and Zhang [28] proposed a three-term PRP (TTPRP) method in which $d_k$ is defined by

$$d_k = \begin{cases} -F(x_k), & k = 0, \\ -F(x_k) + \dfrac{F(x_k)^{T} y_{k-1} d_{k-1} - F(x_k)^{T} d_{k-1} y_{k-1}}{\max\{\mu \|d_{k-1}\| \|y_{k-1}\|, \|F(x_{k-1})\|^{2}\}}, & k \geq 1, \end{cases} \tag{7}$$

where $\mu > 0$ is a constant and $y_{k-1} = F(x_k) - F(x_{k-1})$. It is worth noting that the denominator $\|g_{k-1}\|^{2}$ in the formula of the MPRP method is adjusted to $\max\{\mu \|d_{k-1}\| \|y_{k-1}\|, \|F(x_{k-1})\|^{2}\}$ in the formula of the TTPRP method. The TTPRP method automatically maintains the trust region property. Its global convergence is also established under certain conditions. Numerical results show that the TTPRP method is effective for large-scale nonlinear monotone equations.
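The structure of such a three-term direction with a safeguarded denominator can be sketched in Python as follows (the notation and the default value of `mu` are our assumptions). The key point is that the two correction terms cancel when multiplied by $F_k$, which is what yields the sufficient descent property regardless of the line search.

```python
import numpy as np

def ttprp_direction(Fk, Fk_prev, d_prev, mu=0.1):
    """Three-term PRP-type direction: d = -F_k plus two correction terms whose
    shared denominator is safeguarded by max{mu*||d||*||y||, ||F_prev||^2},
    giving F_k^T d = -||F_k||^2 and ||d|| <= (1 + 2/mu) * ||F_k||."""
    if d_prev is None:                      # first iteration: steepest-descent-like
        return -Fk
    y = Fk - Fk_prev
    denom = max(mu * np.linalg.norm(d_prev) * np.linalg.norm(y),
                np.linalg.norm(Fk_prev) ** 2)
    return -Fk + ((Fk @ y) * d_prev - (Fk @ d_prev) * y) / denom
```

The descent and trust-region bounds can be checked numerically for arbitrary inputs, which mirrors the claim that they hold independently of the line search.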

In addition, the hyperplane projection method [17, 29–32] is one of the most effective methods for solving large-scale nonlinear monotone equations. The concept of projection was first proposed by Goldstein [33] for convex programming in a Hilbert space. Furthermore, Solodov and Svaiter [34] proposed the hyperplane projection method for solving systems of monotone equations. The specific process of the hyperplane projection method is as follows: let $x_k$ be the current iteration point, and obtain a point $z_k = x_k + \alpha_k d_k$ along a certain search direction such that $F(z_k)^{T}(x_k - z_k) > 0$. Due to the monotonicity of $F$, for any point $x^{*}$ that satisfies $F(x^{*}) = 0$, it can be deduced that

$$F(z_k)^{T}(x^{*} - z_k) \leq 0 < F(z_k)^{T}(x_k - z_k). \tag{8}$$

Obviously, the hyperplane $H_k = \{x \in \mathbb{R}^{n} \mid F(z_k)^{T}(x - z_k) = 0\}$ strictly separates the current iteration point $x_k$ from the solution set of equation (1). The point $x_k$ is projected onto the hyperplane $H_k$ to obtain the next iteration point $x_{k+1}$, i.e.,

$$x_{k+1} = x_k - \frac{F(z_k)^{T}(x_k - z_k)}{\|F(z_k)\|^{2}} F(z_k). \tag{9}$$

The hyperplane projection method has been proved to possess good theoretical properties and numerical performance for nonlinear monotone equations [6, 34].
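In code, the projection step is essentially a one-liner. The following Python sketch (our notation) projects the current point onto the separating hyperplane:

```python
import numpy as np

def project_onto_hyperplane(x, z, Fz):
    """Project x onto the hyperplane {u : F(z)^T (u - z) = 0}:
    x_next = x - (F(z)^T (x - z) / ||F(z)||^2) * F(z)."""
    return x - (Fz @ (x - z)) / (Fz @ Fz) * Fz
```

Because only the component of $x - z$ along $F(z_k)$ is subtracted, the new point lies exactly on the hyperplane, which can be verified numerically.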

Furthermore, we know that the search directions tend to be poorly scaled in conjugate gradient methods. Consequently, in the line search, more function evaluations must be carried out to obtain an appropriate step length $\alpha_k$. Andrei [35] presented an acceleration scheme that modifies the step length in a multiplicative manner to improve the reduction of the function values along the iterations. With $z_k = x_k + \alpha_k d_k$, define

$$a_k = \alpha_k g_k^{T} d_k, \quad b_k = \alpha_k (g(z_k) - g_k)^{T} d_k. \tag{10}$$

The accelerated step length is then defined as follows:

$$\tilde{\alpha}_k = \eta_k \alpha_k, \quad \eta_k = -\frac{a_k}{b_k}. \tag{11}$$

If $b_k = 0$, let $\eta_k = 1$. A numerical comparison with some conjugate gradient algorithms shows that this computational scheme is effective.
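The multiplicative idea can be sketched as follows (a Python illustration under our own notation, with the gradient replaced by a generic mapping `F`). For a quadratic model along the search direction, the rescaled step lands on the one-dimensional minimizer.

```python
import numpy as np

def accelerated_step(F, x, d, alpha):
    """One multiplicative acceleration step in the spirit of Andrei's scheme:
    fit a quadratic model along d from F-values at x and x + alpha*d, then
    rescale alpha by the model minimizer (a sketch, not the paper's exact code)."""
    Fx = F(x)
    z = x + alpha * d
    Fz = F(z)
    a = alpha * (Fx @ d)               # slope-like term (negative for descent d)
    b = alpha * ((Fz - Fx) @ d)        # curvature-like term (>= 0 for monotone F)
    eta = -a / b if b != 0 else 1.0    # multiplicative factor; keep alpha if b == 0
    return x + eta * alpha * d
```

On the quadratic $f(v) = \tfrac{1}{2}\|v\|^2$ (so `F` is the identity map), a half step is rescaled to the exact minimizer.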

Inspired by the above discussions, we propose an accelerated conjugate gradient algorithm that combines the TTPRP method, the acceleration step length, and the hyperplane projection method. The main contributions of the algorithm are as follows:

- An accelerated conjugate gradient algorithm is introduced for solving nonlinear monotone equations
- All search directions of the algorithm satisfy the sufficient descent condition
- All search directions of the algorithm belong to a trust region
- The global convergence of the presented algorithm is proved
- The numerical results show that the proposed algorithm is more effective for nonlinear monotone equations
- The algorithm can be applied to restore an original image from an image damaged by impulse noise

This paper is organized as follows: in the next section, we discuss the ATTPRP algorithm and global convergence analysis. In Section 3, we report the preliminary numerical experiments to show that the algorithm is efficient for nonlinear monotone equations and applicable to image restoration problems. In Section 4, the conclusion regarding the proposed algorithm is given.

#### 2. Accelerated Algorithm and Convergence Analysis

In this section, we will propose an accelerated algorithm and prove its global convergence. The steps of the given algorithm are as follows.

##### 2.1. Accelerated Three-Term PRP Conjugate Gradient (ATTPRP) Algorithm

Step 0: choose any $x_0 \in \mathbb{R}^{n}$ as the initial point and constants $\sigma > 0$, $\beta > 0$, $\rho \in (0, 1)$, $\mu > 0$, and $\varepsilon > 0$; let $k := 0$.
Step 1: stop if $\|F(x_k)\| \leq \varepsilon$. Otherwise, compute $d_k$ by using formula (7).
Step 2: choose $\alpha_k$ satisfying the inequality (4).
Step 3: if $b_k \neq 0$, then set $\alpha_k := \eta_k \alpha_k$ by the acceleration scheme of Andrei.
Step 4: let the trial point be $z_k = x_k + \alpha_k d_k$.
Step 5: if $\|F(z_k)\| \leq \varepsilon$, stop and let $x_{k+1} = z_k$. Otherwise, determine $x_{k+1}$ by using formula (9).
Step 6: let $k := k + 1$. Go to Step 1.
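Putting the steps together, the whole loop can be sketched compactly in Python; this is an illustrative reading of Steps 0–6 (our notation and parameter defaults), not the authors' MATLAB implementation.

```python
import numpy as np

def attprp_solve(F, x0, eps=1e-5, sigma=1e-4, rho=0.5, mu=0.1, max_iter=1000):
    """Illustrative ATTPRP-style loop: three-term PRP direction, derivative-free
    backtracking line search, multiplicative acceleration, hyperplane projection."""
    x = np.asarray(x0, dtype=float)
    Fx = F(x)
    F_prev = d_prev = None
    for _ in range(max_iter):
        if np.linalg.norm(Fx) <= eps:               # Step 1: stopping test
            return x
        if d_prev is None:                          # Step 1: search direction
            d = -Fx
        else:
            y = Fx - F_prev
            denom = max(mu * np.linalg.norm(d_prev) * np.linalg.norm(y),
                        np.linalg.norm(F_prev) ** 2)
            d = -Fx + ((Fx @ y) * d_prev - (Fx @ d_prev) * y) / denom
        alpha = 1.0                                 # Step 2: backtracking on (4)
        for _ in range(60):
            Fz = F(x + alpha * d)
            if -Fz @ d >= sigma * alpha * np.linalg.norm(Fz) * np.linalg.norm(d) ** 2:
                break
            alpha *= rho
        b = alpha * ((Fz - Fx) @ d)                 # Step 3: acceleration factor
        if b != 0:
            alpha *= -(alpha * (Fx @ d)) / b
        z = x + alpha * d                           # Step 4: trial point
        Fz = F(z)
        if np.linalg.norm(Fz) <= eps:               # Step 5: stop at z, or project
            return z
        x_new = x - (Fz @ (x - z)) / (Fz @ Fz) * Fz
        F_prev, d_prev = Fx, d
        x, Fx = x_new, F(x_new)
    return x
```

For example, on the monotone mappings $F(x) = x$ and $F(x) = e^{x} - 1$ the sketch drives $\|F(x_k)\|$ below the tolerance in a handful of iterations.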

The following lemma shows that the search direction designed by using formula (7) has not only the sufficient descent property but also the trust region property independent of the line search.

Lemma 1. Let $d_k$ be defined by formula (7); then, we obtain

$$F(x_k)^{T} d_k = -\|F(x_k)\|^{2}, \quad \|F(x_k)\| \leq \|d_k\| \leq \left(1 + \frac{2}{\mu}\right) \|F(x_k)\|. \tag{12}$$

Proof. If $k = 0$, then $d_0 = -F(x_0)$, and formulas (7) and (12) are obviously true. If $k \geq 1$, we obtain from formula (7) that

$$F(x_k)^{T} d_k = -\|F(x_k)\|^{2} + \frac{F(x_k)^{T} y_{k-1} \, F(x_k)^{T} d_{k-1} - F(x_k)^{T} d_{k-1} \, F(x_k)^{T} y_{k-1}}{\max\{\mu \|d_{k-1}\| \|y_{k-1}\|, \|F(x_{k-1})\|^{2}\}} = -\|F(x_k)\|^{2}. \tag{13}$$

In addition, by formula (7) and the Cauchy–Schwarz inequality, we get $\|d_k\| \geq \|F(x_k)\|$ and

$$\|d_k\| \leq \|F(x_k)\| + \frac{2 \|F(x_k)\| \|y_{k-1}\| \|d_{k-1}\|}{\mu \|d_{k-1}\| \|y_{k-1}\|} = c \|F(x_k)\|, \tag{14}$$

where $c = 1 + 2/\mu$. Then, the proof is completed.
The following assumption needs to be satisfied in order to study some properties of the ATTPRP algorithm:

Assumption 1. (i) The solution set of problem (1) is nonempty. (ii) The function $F$ is Lipschitz continuous on $\mathbb{R}^{n}$; that is, there exists a positive constant $L$ satisfying

$$\|F(x) - F(y)\| \leq L \|x - y\|, \quad \forall x, y \in \mathbb{R}^{n}. \tag{15}$$

Remark 1. Assumption 1(ii) implies that $\{\|F(x_k)\|\}$ is bounded; then, there exists a constant $\kappa > 0$ such that

$$\|F(x_k)\| \leq \kappa, \quad \forall k \geq 0. \tag{16}$$

In the remainder of this paper, if not specifically stated, we always assume that the conditions in Assumption 1 hold.

Lemma 2. Let $d_k$ and $x_k$ be generated by using the ATTPRP algorithm. The step length $\alpha_k$ generated by the ATTPRP algorithm satisfies

$$\alpha_k \geq \min\left\{\beta, \frac{\rho \|F(x_k)\|^{2}}{(L + \sigma \kappa) \|d_k\|^{2}}\right\}, \tag{17}$$

where $L$ is the Lipschitz constant in (15) and $\kappa$ is the bound in (16).

Proof. By the line search (4), assuming $\alpha_k \neq \beta$, let $\alpha_k' = \alpha_k / \rho$; by the definition of $\alpha_k$, the step $\alpha_k'$ does not satisfy the line search (4). That is,

$$-F(x_k + \alpha_k' d_k)^{T} d_k < \sigma \alpha_k' \|F(x_k + \alpha_k' d_k)\| \|d_k\|^{2}. \tag{18}$$

Since $F$ is Lipschitz continuous, by using formulas (12) and (16), we have

$$\|F(x_k)\|^{2} = -F(x_k)^{T} d_k = \left(F(x_k + \alpha_k' d_k) - F(x_k)\right)^{T} d_k - F(x_k + \alpha_k' d_k)^{T} d_k \leq (L + \sigma \kappa) \alpha_k' \|d_k\|^{2}, \tag{19}$$

namely,

$$\alpha_k = \rho \alpha_k' \geq \frac{\rho \|F(x_k)\|^{2}}{(L + \sigma \kappa) \|d_k\|^{2}}.$$

This leads to the desired inequality (17). The proof is completed.
The following lemma is similar to Lemma 1 in the study of Solodov and Svaiter [34], which also holds for the ATTPRP algorithm. Therefore, we only state it as follows but omit its proof.

Lemma 3. Let the sequence $\{x_k\}$ be generated by using the ATTPRP algorithm. Suppose that $x^{*}$ is a solution of problem (1) with $F(x^{*}) = 0$. We obtain

$$\|x_{k+1} - x^{*}\|^{2} \leq \|x_k - x^{*}\|^{2} - \|x_{k+1} - x_k\|^{2}. \tag{20}$$

In particular, the sequence $\{x_k\}$ is bounded, and

$$\sum_{k=0}^{\infty} \|x_{k+1} - x_k\|^{2} < \infty. \tag{21}$$

Remark 2. The above lemma reveals that the distance from the iterative points to the solution set of problem (1) decreases along the iterations. Moreover, for any $k \geq 0$, it follows from formulas (9) and (4) that

$$\|x_{k+1} - x_k\| = \frac{F(z_k)^{T}(x_k - z_k)}{\|F(z_k)\|} \geq \sigma \alpha_k^{2} \|d_k\|^{2}. \tag{23}$$

Particularly, by (21), we obtain

$$\lim_{k \to \infty} \alpha_k \|d_k\| = 0. \tag{24}$$

In the following part, the global convergence and the strong global convergence properties of the ATTPRP algorithm will be proven.

Theorem 1. Let $\{x_k\}$ be generated by using the ATTPRP algorithm. Then, we have

$$\liminf_{k \to \infty} \|F(x_k)\| = 0. \tag{25}$$

Proof. We prove this theorem by contradiction. Supposing that equation (25) does not hold, there exists a constant $\varepsilon_0 > 0$ such that $\|F(x_k)\| \geq \varepsilon_0$ holds for all $k \geq 0$. From formula (12), we have

$$\|d_k\| \geq \|F(x_k)\| \geq \varepsilon_0, \quad \forall k \geq 0. \tag{26}$$

According to Lemma 3 and equation (24), the sequences $\{x_k\}$ and $\{z_k\}$ are bounded. By formulas (7) and (16), for all $k \geq 0$, we obtain

$$\|d_k\| \leq \left(1 + \frac{2}{\mu}\right) \|F(x_k)\| \leq \left(1 + \frac{2}{\mu}\right) \kappa = M, \tag{27}$$

where $M = (1 + 2/\mu)\kappa$; the last inequality shows that $\{d_k\}$ is bounded. Thus, from formulas (16) and (17), we obtain

$$\alpha_k \|d_k\| \geq \min\left\{\beta \varepsilon_0, \frac{\rho \varepsilon_0^{2}}{(L + \sigma \kappa) M}\right\} > 0, \quad \forall k \geq 0.$$

This contradicts formula (24). Consequently, the proof is completed.
The following theorem indicates the strong global convergence of the ATTPRP algorithm, which is similar to Theorem 1 in [6]. We also give a specific proof for convenience of understanding.

Theorem 2. Let $\{x_k\}$ be generated by using the ATTPRP algorithm. Then, the whole sequence $\{x_k\}$ converges to a solution of problem (1).

Proof. Theorem 1 and the continuity of $F$ show that there exists a subsequence of $\{x_k\}$ converging to a solution $\bar{x}$ of problem (1). On the other hand, it follows from Lemma 3 that the sequence $\{\|x_k - \bar{x}\|\}$ converges. Therefore, the whole sequence $\{x_k\}$ converges to $\bar{x}$.

#### 3. Numerical Experiments

In this section, the numerical experiments will be divided into two parts for illustration. The first subsection involves normal nonlinear equations, and the second subsection describes image restoration problems. All tests in this section are coded in MATLAB R2017a, run on a PC with Intel (R) Core (TM) i5-4460 3.20 GHz, 8.00 GB of SDRAM memory, and Windows 7 operating system.

##### 3.1. Normal Nonlinear Equations

In this subsection, we perform some numerical experiments to show the effectiveness of the ATTPRP algorithm. Some test problems and their relevant initial points are listed as follows:

Function 1. Exponential Function 1:Initial guess: .

Function 2. Exponential Function 2:Initial guess: .

Function 3. Singular function:Initial guess: .

Function 4. Logarithmic function:Initial guess: .

Function 5. Broyden tridiagonal function:Initial guess: .

Function 6. Trigexp function:Initial guess: .

Function 7. Strictly convex Function 1: is the gradient of .Initial guess: .

Function 8. Variable dimensioned function:Initial guess: .

Function 9. Tridiagonal system:Initial guess: .

Function 10. Five-diagonal system:Initial guess: .

Function 11. Extended Freudenstein and Roth function ( is even):
For i = 1, 2, …,Initial guess: .

Function 12. Brent problem:Initial guess: .
To test the numerical performance of the ATTPRP algorithm, we also perform the experiments with the LS algorithm and the TTPRP algorithm. The columns of Tables 1–3 have the following meanings:

- No.: the serial number of the problem
- Dim: the variable dimension
- NI: the number of iterations
- NF: the number of function value evaluations
- CPU: the calculation time in seconds
- GN: the final norm of the function when the program stops

Initialization: the parameters are chosen as in Step 0 of the ATTPRP algorithm. Stop rule: we stop the process when the condition $\|F(x_k)\| \leq 10^{-5}$ is satisfied or NI reaches 19999.

From Tables 1–3, it is obvious that the three methods can successfully solve most of the test problems within the allowed NI. However, for Function 3 with 9000 and 90000 variables, the TTPRP and LS algorithms cannot solve the problem, whereas the proposed algorithm can. To show the methods' performance more directly, we use the drawing tool of Dolan and Moré [36], which produces the performance profiles of the methods. Using this tool, we obtain Figures 1–3, which correspond to the NI, NF, and CPU columns in Tables 1–3. In Figure 1, the ATTPRP algorithm solves all test problems at a smaller performance ratio than the LS and TTPRP algorithms, so the ATTPRP algorithm performs slightly better than the other two. In Figure 2, the presented algorithm again solves all test problems first, while the TTPRP and LS algorithms require larger ratios to solve all of the test problems; thus, it is not difficult to see that the ATTPRP algorithm is more competitive than the other two methods. In Figure 3, the curve of the ATTPRP algorithm is above those of the TTPRP and LS algorithms, which indicates that the proposed algorithm is more robust than the other two algorithms in terms of CPU time. In summary, the enhancement of the presented method is noticeable.
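For reference, the performance profile of Dolan and Moré [36] can be computed in a few lines of Python; the function name and the layout of the cost matrix `T` are our own illustrative choices.

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profile: T[p, s] is the cost (e.g. NI, NF, or CPU)
    of solver s on problem p (np.inf for a failure). Returns rho[s, i], the
    fraction of problems solved by solver s within factor taus[i] of the best."""
    ratios = T / T.min(axis=1, keepdims=True)       # performance ratios r_{p,s}
    return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                     for s in range(T.shape[1])])
```

Plotting each row of the result against the factors `taus` reproduces the kind of curves shown in Figures 1–3.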

**Table 1.** Numerical results of the ATTPRP algorithm.

| No. | Dim | NI/NF | CPU | GN |
| --- | --- | --- | --- | --- |
| 1 | 3000 | 123/124 | 0.608404 | 9.97E−06 |
| | 9000 | 88/89 | 0.733205 | 9.86E−06 |
| | 30000 | 57/58 | 1.092007 | 9.96E−06 |
| | 90000 | 38/39 | 1.310408 | 9.85E−06 |
| 2 | 3000 | 28/514 | 0.686404 | 9.50E−06 |
| | 9000 | 18/386 | 1.372809 | 9.22E−06 |
| | 30000 | 15/371 | 3.432022 | 9.83E−06 |
| | 90000 | 10/276 | 3.946825 | 9.95E−06 |
| 3 | 3000 | 14016/20709 | 107.172687 | 1.00E−05 |
| | 9000 | 15769/23986 | 331.158923 | 9.99E−06 |
| | 30000 | 15609/27514 | 574.09928 | 1.00E−05 |
| | 90000 | 17136/35208 | 1503.022835 | 1.00E−05 |
| 4 | 3000 | 63/558 | 1.029607 | 5.24E−07 |
| | 9000 | 106/1096 | 5.241634 | 8.39E−07 |
| | 30000 | 195/2337 | 24.258155 | 1.20E−07 |
| | 90000 | 336/4521 | 88.031364 | 2.21E−08 |
| 5 | 3000 | 90/615 | 0.780005 | 8.52E−06 |
| | 9000 | 96/759 | 2.464816 | 7.76E−06 |
| | 30000 | 130/1194 | 11.154072 | 8.62E−06 |
| | 90000 | 176/1984 | 29.250188 | 8.82E−06 |
| 6 | 3000 | 91/1223 | 1.918812 | 8.09E−06 |
| | 9000 | 136/2093 | 8.845257 | 9.86E−06 |
| | 30000 | 221/3905 | 38.282645 | 9.97E−06 |
| | 90000 | 379/7177 | 125.814807 | 9.33E−06 |
| 7 | 3000 | 48/335 | 0.468003 | 9.66E−06 |
| | 9000 | 71/636 | 2.480416 | 8.57E−06 |
| | 30000 | 122/1339 | 13.135284 | 9.31E−06 |
| | 90000 | 203/2562 | 43.63348 | 8.53E−06 |
| 8 | 3000 | 1/2 | 0.000001 | 0.00E+00 |
| | 9000 | 1/2 | 0.0468 | 0.00E+00 |
| | 30000 | 1/2 | 0.000001 | 0.00E+00 |
| | 90000 | 1/2 | 0.0156 | 0.00E+00 |
| 9 | 3000 | 6014/88059 | 100.885847 | 9.93E−06 |
| | 9000 | 6437/103883 | 312.345202 | 9.91E−06 |
| | 30000 | 7345/137142 | 1209.803355 | 9.96E−06 |
| | 90000 | 8950/196012 | 2763.386114 | 9.99E−06 |
| 10 | 3000 | 1811/19670 | 25.693365 | 9.99E−06 |
| | 9000 | 1947/22554 | 76.36249 | 9.82E−06 |
| | 30000 | 2202/28399 | 261.644877 | 9.95E−06 |
| | 90000 | 2638/38968 | 577.983705 | 1.00E−05 |
| 11 | 3000 | 351/4205 | 5.382034 | 9.78E−06 |
| | 9000 | 409/5194 | 16.645307 | 9.44E−06 |
| | 30000 | 510/7139 | 64.475213 | 9.64E−06 |
| | 90000 | 683/10789 | 161.351834 | 9.52E−06 |
| 12 | 3000 | 184/188 | 0.156001 | 9.98E−06 |
| | 9000 | 184/188 | 0.577204 | 9.98E−06 |
| | 30000 | 184/188 | 1.107607 | 9.98E−06 |
| | 90000 | 184/188 | 3.07322 | 9.98E−06 |
**Table 2.** Numerical results of the TTPRP algorithm.

| No. | Dim | NI/NF | CPU | GN |
| --- | --- | --- | --- | --- |
| 1 | 3000 | 129/130 | 0.436803 | 9.97E−06 |
| | 9000 | 89/90 | 0.873606 | 9.96E−06 |
| | 30000 | 59/60 | 1.232408 | 9.96E−06 |
| | 90000 | 41/42 | 1.435209 | 9.73E−06 |
| 2 | 3000 | 46/1055 | 1.435209 | 9.99E−06 |
| | 9000 | 10/267 | 0.951606 | 6.76E−06 |
| | 30000 | 9/278 | 2.667617 | 9.83E−06 |
| | 90000 | 8/279 | 4.040426 | 9.00E−06 |
| 3 | 3000 | 17401/18908 | 109.481502 | 9.98E−06 |
| | 9000 | 19999/23227 | 355.089476 | 1.37E−05 |
| | 30000 | 19431/26598 | 612.241525 | 9.99E−06 |
| | 90000 | 19999/34409 | 1542.678289 | 2.56E−05 |
| 4 | 3000 | 70/662 | 1.123207 | 3.49E−06 |
| | 9000 | 113/1326 | 6.099639 | 2.14E−06 |
| | 30000 | 196/2809 | 28.766584 | 6.47E−07 |
| | 90000 | 334/5473 | 101.884253 | 3.11E−06 |
| 5 | 3000 | 54/464 | 0.514803 | 3.38E−06 |
| | 9000 | 72/711 | 2.152814 | 3.75E−06 |
| | 30000 | 100/1225 | 10.99807 | 7.38E−06 |
| | 90000 | 160/2256 | 31.403001 | 5.61E−06 |
| 6 | 3000 | 84/1417 | 2.246414 | 6.94E−06 |
| | 9000 | 127/2473 | 10.311666 | 9.41E−06 |
| | 30000 | 211/4669 | 45.349491 | 8.60E−06 |
| | 90000 | 346/8524 | 141.773709 | 8.53E−06 |
| 7 | 3000 | 45/366 | 0.546003 | 2.67E−06 |
| | 9000 | 68/729 | 2.808018 | 1.23E−06 |
| | 30000 | 117/1558 | 14.617294 | 8.34E−07 |
| | 90000 | 195/3039 | 48.219909 | 7.08E−07 |
| 8 | 3000 | 1/2 | 0.0624 | 0.00E+00 |
| | 9000 | 1/2 | 0.000001 | 0.00E+00 |
| | 30000 | 1/2 | 0.0624 | 0.00E+00 |
| | 90000 | 1/2 | 0.0624 | 0.00E+00 |
| 9 | 3000 | 6521/123863 | 139.277693 | 9.81E−06 |
| | 9000 | 6827/141140 | 425.679929 | 9.97E−06 |
| | 30000 | 7711/182289 | 1559.916399 | 9.99E−06 |
| | 90000 | 9107/251132 | 3460.398582 | 9.85E−06 |
| 10 | 3000 | 4904/65319 | 77.750898 | 1.00E−05 |
| | 9000 | 5271/71854 | 229.820673 | 9.73E−06 |
| | 30000 | 5280/75948 | 689.150018 | 9.97E−06 |
| | 90000 | 5655/88105 | 1269.302137 | 9.99E−06 |
| 11 | 3000 | 300/4529 | 4.898431 | 9.88E−06 |
| | 9000 | 387/6163 | 17.846514 | 9.61E−06 |
| | 30000 | 472/8397 | 72.836867 | 9.83E−06 |
| | 90000 | 620/12490 | 168.652681 | 9.27E−06 |
| 12 | 3000 | 193/198 | 0.124801 | 9.99E−06 |
| | 9000 | 193/198 | 0.405603 | 9.99E−06 |
| | 30000 | 193/198 | 0.826805 | 9.99E−06 |
| | 90000 | 193/198 | 2.589617 | 9.99E−06 |
**Table 3.** Numerical results of the LS algorithm.

| No. | Dim | NI/NF | CPU | GN |
| --- | --- | --- | --- | --- |
| 1 | 3000 | 174/175 | 0.982806 | 9.96E−06 |
| | 9000 | 94/95 | 0.670804 | 9.88E−06 |
| | 30000 | 60/61 | 1.107607 | 9.93E−06 |
| | 90000 | 41/42 | 1.248008 | 9.97E−06 |
| 2 | 3000 | 61/1405 | 1.762811 | 9.85E−06 |
| | 9000 | 34/916 | 3.05762 | 9.98E−06 |
| | 30000 | 15/470 | 3.946825 | 9.46E−06 |
| | 90000 | 13/456 | 6.864044 | 9.12E−06 |
| 3 | 3000 | 19999/21506 | 125.721206 | 1.43E−05 |
| | 9000 | 19999/23222 | 351.267452 | 1.44E−05 |
| | 30000 | 19999/27153 | 634.003664 | 2.24E−05 |
| | 90000 | 19999/34389 | 1582.942147 | 1.76E−05 |
| 4 | 3000 | 75/666 | 1.185608 | 7.17E−06 |
| | 9000 | 118/1331 | 6.24004 | 6.87E−06 |
| | 30000 | 202/2813 | 30.186194 | 7.82E−06 |
| | 90000 | 338/5467 | 96.159016 | 7.65E−06 |
| 5 | 3000 | 136/1145 | 1.404009 | 8.63E−06 |
| | 9000 | 150/1358 | 4.305628 | 9.81E−06 |
| | 30000 | 171/1806 | 16.801308 | 5.69E−06 |
| | 90000 | 234/2865 | 42.08907 | 9.58E−06 |
| 6 | 3000 | 109/1695 | 2.698817 | 7.38E−06 |
| | 9000 | 147/2694 | 11.263272 | 8.98E−06 |
| | 30000 | 235/4934 | 47.502305 | 8.67E−06 |
| | 90000 | 364/8724 | 147.062143 | 7.57E−06 |
| 7 | 3000 | 69/400 | 0.530403 | 6.22E−06 |
| | 9000 | 94/765 | 2.948419 | 3.29E−06 |
| | 30000 | 142/1590 | 15.085297 | 1.79E−06 |
| | 90000 | 217/3070 | 50.450723 | 4.55E−07 |
| 8 | 3000 | 1/2 | 0.0624 | 0.00E+00 |
| | 9000 | 1/2 | 0.000001 | 0.00E+00 |
| | 30000 | 1/2 | 0.0468 | 0.00E+00 |
| | 90000 | 1/2 | 0.0156 | 0.00E+00 |
| 9 | 3000 | 5643/107964 | 117.484353 | 9.84E−06 |
| | 9000 | 6131/128776 | 373.934397 | 9.86E−06 |
| | 30000 | 6997/169540 | 1461.323767 | 9.99E−06 |
| | 90000 | 8449/239554 | 3276.130201 | 9.88E−06 |
| 10 | 3000 | 4171/55825 | 65.47362 | 9.93E−06 |
| | 9000 | 4213/58137 | 183.113974 | 1.00E−05 |
| | 30000 | 4616/67344 | 608.747102 | 9.88E−06 |
| | 90000 | 4965/79166 | 1150.008172 | 9.92E−06 |
| 11 | 3000 | 3484/49132 | 52.073134 | 9.92E−06 |
| | 9000 | 371/5921 | 16.504906 | 9.85E−06 |
| | 30000 | 3625/52579 | 454.727315 | 9.93E−06 |
| | 90000 | 3802/57077 | 801.049535 | 9.87E−06 |
| 12 | 3000 | 194/199 | 0.109201 | 9.96E−06 |
| | 9000 | 194/199 | 0.405603 | 9.96E−06 |
| | 30000 | 194/199 | 0.998406 | 9.96E−06 |
| | 90000 | 194/199 | 2.776818 | 9.96E−06 |
##### 3.2. Image Restoration Problems

The purpose of this subsection is to recover the original image from an image damaged by impulse noise, a task of practical significance in the optimization field. The selection of parameters and the stop condition are similar to those in the above subsection. For the experiments, the Cameraman, Barbara, and Man images are chosen as the test images. We also perform experiments to compare the ATTPRP algorithm with the TTPRP algorithm, where the step length of the former is generated by Step 2 and Step 3 in the ATTPRP algorithm. More detailed performance results are shown in Figures 4–6. It is not difficult to see that both the ATTPRP and TTPRP algorithms succeed in restoring the three images. The CPU times are listed in Table 4 to compare the ATTPRP algorithm with the TTPRP algorithm.

**Table 4.** CPU time (in seconds) for restoring the test images.

| Noise level | Algorithm | Cameraman | Barbara | Man | Total |
| --- | --- | --- | --- | --- | --- |
| 30% | ATTPRP | 2.184 | 4.789 | 20.514 | 27.487 |
| 30% | TTPRP | 2.230 | 5.179 | 20.748 | 28.157 |
| 50% | ATTPRP | 3.276 | 9.142 | 35.475 | 47.893 |
| 50% | TTPRP | 3.307 | 9.204 | 35.677 | 48.188 |
| 70% | ATTPRP | 3.619 | 13.073 | 58.812 | 75.504 |
| 70% | TTPRP | 3.978 | 13.400 | 59.249 | 76.627 |

From Figures 4–6, we can see that both algorithms satisfactorily restore a noisy image with 30%, 50%, and 70% salt-and-pepper noise. In addition, the results in Table 4 show that the ATTPRP algorithm and the TTPRP algorithm restore these images in approximately the same CPU time, with the presented algorithm slightly faster than the TTPRP algorithm for the 30%, 50%, and 70% noise problems.

#### 4. Conclusions

In this paper, an accelerated conjugate gradient algorithm that combines the TTPRP method, the acceleration step length, and the hyperplane projection technique is proposed. All search directions generated by the algorithm automatically have the sufficient descent and trust region properties. The global convergence of the proposed algorithm is established under suitable conditions. The numerical results show that the proposed algorithm is effective, and the image restoration experiments demonstrate that it is also successful in practice.

For future research, we have the following ideas: (i) If the acceleration scheme is introduced into the quasi-Newton method, does it retain these good properties? (ii) Can the acceleration scheme be introduced into the trust region method to solve unconstrained optimization problems and nonlinear equations? (iii) Can the proposed algorithm be applied to machine learning?

#### Data Availability

The data used to support the findings of this study are included within the article.

#### Conflicts of Interest

The authors declare that they have no conflicts of interest.

#### Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant No. 11661009), the High Level Innovation Teams and Excellent Scholars Program in Guangxi Institutions of Higher Education (Grant No. (2019)52), and the Guangxi Natural Science Key Fund (No. 2017GXNSFDA198046).

#### References

1. S. P. Dirkse and M. C. Ferris, "MCPLIB: a collection of nonlinear mixed complementarity problems," Optimization Methods and Software, vol. 5, no. 4, pp. 319–345, 1995.
2. A. Griewank, "The 'global' convergence of Broyden-like methods with suitable line search," The Journal of the Australian Mathematical Society. Series B. Applied Mathematics, vol. 28, no. 1, pp. 75–92, 1986.
3. L. Grippo and M. Sciandrone, "Nonmonotone derivative-free methods for nonlinear equations," Computational Optimization and Applications, vol. 37, no. 3, pp. 297–328, 2007.
4. D. Li and M. Fukushima, "A derivative-free line search and DFP method for symmetric equations with global and superlinear convergence," Numerical Functional Analysis and Optimization, vol. 20, no. 1-2, pp. 59–77, 1999.
5. D. Li and M. Fukushima, "A globally and superlinearly convergent Gauss–Newton-based BFGS method for symmetric nonlinear equations," SIAM Journal on Numerical Analysis, vol. 37, no. 1, pp. 152–172, 1999.
6. Q. Li and D.-H. Li, "A class of derivative-free methods for large-scale nonlinear monotone equations," IMA Journal of Numerical Analysis, vol. 31, no. 4, pp. 1625–1635, 2011.
7. S. Kiefer, M. Luttenberger, and J. Esparza, "On the convergence of Newton's method for monotone systems of polynomial equations," in Proceedings of the 39th Annual ACM Symposium on Theory of Computing, San Diego, CA, USA, June 2007.
8. G. N. Silva, "Local convergence of Newton's method for solving generalized equations with monotone operator," Applicable Analysis, vol. 97, no. 7, pp. 1094–1105, 2018.
9. G. Yuan, Z. Wei, and S. Lu, "Limited memory BFGS method with backtracking for symmetric nonlinear equations," Mathematical and Computer Modelling, vol. 54, no. 1-2, pp. 367–377, 2011.
10. B. Zhang and Z. Zhu, "A modified quasi-Newton diagonal update algorithm for total variation denoising problems and nonlinear monotone equations with applications in compressive sensing," Numerical Linear Algebra with Applications, vol. 22, no. 3, pp. 500–522, 2015.
11. W.-J. Zhou and D.-H. Li, "A globally convergent BFGS method for nonlinear monotone equations without any merit functions," Mathematics of Computation, vol. 77, no. 264, pp. 2231–2240, 2008.
12. W. Zhou and D. Li, "Limited memory BFGS method for nonlinear monotone equations," Journal of Computational and Applied Mathematics, vol. 25, no. 1, pp. 89–96, 2007.
13. G. Zhou and K. C. Toh, "Superlinear convergence of a Newton-type algorithm for monotone equations," Journal of Optimization Theory and Applications, vol. 125, no. 1, pp. 205–221, 2005.
14. Y. Qiu, C. Ying, and L. Lei, "Multivariate spectral conjugate gradient projection method for nonlinear monotone equations," in Proceedings of the 2013 Fourth International Conference on Emerging Intelligent Data and Web Technologies (EIDWT), Xi'an, China, September 2013.
15. L. Zhang and W. Zhou, "Spectral gradient projection method for solving nonlinear monotone equations," Journal of Computational and Applied Mathematics, vol. 196, no. 2, pp. 478–484, 2006.
16. Y. Hu and Z. Wei, "Wei–Yao–Liu conjugate gradient projection algorithm for nonlinear monotone equations with convex constraints," International Journal of Computer Mathematics, vol. 92, no. 11, pp. 1–12, 2015.
17. X. Y. Wang, S. J. Li, and X. P. Kou, "A self-adaptive three-term conjugate gradient method for monotone nonlinear equations with convex constraints," Calcolo, vol. 53, no. 2, pp. 133–145, 2016.
18. S.-Y. Liu, Y.-Y. Huang, and H.-W. Jiao, "Sufficient descent conjugate gradient methods for solving convex constrained nonlinear monotone equations," Abstract and Applied Analysis, vol. 2014, no. 1, pp. 1–12, 2014.
19. G. Yuan and W. Hu, "A conjugate gradient algorithm for large-scale unconstrained optimization problems and nonlinear equations," Journal of Inequalities and Applications, vol. 2018, no. 1, p. 113, 2018.
20. M. R. Hestenes and E. Stiefel, "Methods of conjugate gradients for solving linear systems," Journal of Research of the National Bureau of Standards, vol. 49, no. 6, pp. 409–436, 1952.
21. R. Fletcher and C. Reeves, "Function minimization by conjugate gradients," The Computer Journal, vol. 7, no. 2, pp. 149–154, 1964.
22. E. Polak and G. Ribière, "Note sur la convergence de méthodes de directions conjuguées," Revue française d'informatique et de recherche opérationnelle. Série rouge, vol. 3, no. 16, pp. 35–43, 1969.
23. B. T. Polyak, "The conjugate gradient method in extremal problems," USSR Computational Mathematics and Mathematical Physics, vol. 9, no. 4, pp. 94–112, 1969.
24. Y. Liu and C. Storey, "Efficient generalized conjugate gradient algorithms, part 1: theory," Journal of Optimization Theory and Applications, vol. 69, no. 1, pp. 129–137, 1991.
25. R. Fletcher, Practical Method of Optimization Vol. I: Unconstrained Optimization, John Wiley and Sons, New York, NY, USA, 1987.
26. Y. H. Dai and Y. Yuan, "A nonlinear conjugate gradient method with a strong global convergence property," SIAM Journal on Optimization, vol. 10, no. 1, pp. 177–182, 1999.
27. L. Zhang, W. Zhou, and D.-H. Li, "A descent modified Polak–Ribière–Polyak conjugate gradient method and its global convergence," IMA Journal of Numerical Analysis, vol. 26, no. 4, pp. 629–640, 2006.
28. G. Yuan and M. Zhang, "A three-terms Polak–Ribière–Polyak conjugate gradient algorithm for large-scale nonlinear equations," Journal of Computational and Applied Mathematics, vol. 286, pp. 186–195, 2015.
29. M. Ahookhosh, K. Amini, and S. Bahrami, "Two derivative-free projection approaches for systems of large-scale nonlinear monotone equations," Numerical Algorithms, vol. 64, no. 1, pp. 21–42, 2013.
30. M. Koorapetse and P. Kaelo, "Globally convergent three-term conjugate gradient projection methods for solving nonlinear monotone equations," Arabian Journal of Mathematics, vol. 7, no. 1, pp. 1–13, 2018.
31. J. K. Liu and S. J. Li, "A projection method for convex constrained monotone nonlinear equations with applications," Computers & Mathematics with Applications, vol. 70, no. 10, pp. 2442–2453, 2015.
32. J. Liu, S. Li, and S. Li, "Multivariate spectral DY-type projection method for convex constrained nonlinear monotone equations," Journal of Industrial & Management Optimization, vol. 13, no. 1, pp. 283–295, 2017.
33. A. A. Goldstein, "Convex programming in Hilbert space," Bulletin of the American Mathematical Society, vol. 70, no. 5, pp. 709–711, 1964.
34. M. Solodov and B. Svaiter, "A globally convergent inexact Newton method for systems of monotone equations," in Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods, pp. 355–369, Kluwer Academic Publishers, New York, NY, USA, 1998.
35. N. Andrei, "Another conjugate gradient algorithm with guaranteed descent and conjugacy conditions for large-scale unconstrained optimization," Journal of Optimization Theory and Applications, vol. 159, no. 1, pp. 159–182, 2013.
36. E. D. Dolan and J. J. Moré, "Benchmarking optimization software with performance profiles," Mathematical Programming, vol. 91, no. 2, pp. 201–213, 2002.

Copyright © 2020 Haishan Feng and Tingting Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.