Mathematical Problems in Engineering

Special Issue: Machine Learning and its Applications in Image Restoration

Research Article | Open Access

Volume 2020 |Article ID 7945467 | https://doi.org/10.1155/2020/7945467

Haishan Feng, Tingting Li, "An Accelerated Conjugate Gradient Algorithm for Solving Nonlinear Monotone Equations and Image Restoration Problems", Mathematical Problems in Engineering, vol. 2020, Article ID 7945467, 12 pages, 2020. https://doi.org/10.1155/2020/7945467

An Accelerated Conjugate Gradient Algorithm for Solving Nonlinear Monotone Equations and Image Restoration Problems

Guest Editor: Wenjie Liu
Received: 06 Jul 2020
Accepted: 25 Aug 2020
Published: 05 Oct 2020

Abstract

Combining the three-term conjugate gradient method of Yuan and Zhang and the acceleration step length of Andrei with the hyperplane projection method of Solodov and Svaiter, we propose an accelerated conjugate gradient algorithm for solving nonlinear monotone equations in this paper. The presented algorithm has the following properties: (i) all search directions generated by the algorithm satisfy the sufficient descent and trust region properties independently of the line search technique; (ii) a derivative-free line search is used along the direction d_k to obtain the step length α_k; (iii) if the acceleration condition holds, an acceleration scheme modifies the step length in a multiplicative manner and creates a trial point; (iv) if the trial point satisfies the given condition, it is taken as the next iterate; otherwise, the hyperplane projection technique is used to obtain the next iterate; (v) the global convergence of the proposed algorithm is established under suitable conditions. Numerical comparisons with other conjugate gradient algorithms show that the accelerated computing scheme is more competitive. In addition, the presented algorithm can also be applied to image restoration.

1. Introduction

In this paper, the following nonlinear equation is considered:

F(x) = 0, (1)

where F: R^n → R^n is continuous and monotone; that is, F satisfies

(F(x) − F(y))^T (x − y) ≥ 0 for all x, y ∈ R^n. (2)

It is not difficult to show that the solution set of monotone equation (1), unless empty, is convex. This problem has many significant applications in applied mathematics, economics, and engineering. For example, the economic equilibrium problem [1] can be transformed into problem (1). Generally, an iterative method generates the next iteration point by

x_{k+1} = x_k + α_k d_k, (3)

where α_k > 0 is the step length and d_k is a search direction; these are the two important factors in solving nonlinear equations. Some derivative-free line search techniques [2–5] were proposed to determine the step length α_k. Li and Li [6] presented a derivative-free line search that finds α_k = max{ξ ρ^i : i = 0, 1, 2, …} such that

−F(x_k + α_k d_k)^T d_k ≥ σ α_k ‖F(x_k + α_k d_k)‖ ‖d_k‖^2, (4)

where σ > 0, ξ > 0, and ρ ∈ (0, 1), and ‖·‖ denotes the Euclidean norm. The line search technique (4) differs from other existing derivative-free line searches because it does not use a merit function. If d_k satisfies F(x_k)^T d_k < 0, inequality (4) holds for all sufficiently small α_k > 0. As a result, α_k can be obtained by a finite backtracking process.
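For illustration, here is a minimal Python/NumPy sketch of such a backtracking process; the parameter names xi, rho, and sigma and the residual function F are placeholders chosen for this sketch, not values taken from the paper.

```python
import numpy as np

def derivative_free_line_search(F, x, d, xi=1.0, rho=0.5, sigma=1e-4, max_backtracks=60):
    """Backtracking search for alpha = xi * rho**i satisfying the derivative-free
    condition (4): -F(x + alpha*d)^T d >= sigma * alpha * ||F(x + alpha*d)|| * ||d||^2."""
    alpha = xi
    d_norm_sq = float(np.dot(d, d))
    for _ in range(max_backtracks):
        F_trial = F(x + alpha * d)
        if -np.dot(F_trial, d) >= sigma * alpha * np.linalg.norm(F_trial) * d_norm_sq:
            return alpha, F_trial
        alpha *= rho
    return alpha, F(x + alpha * d)  # fall back to the smallest trial step
```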

For problem (1), it is well known that Newton methods [7, 8], quasi-Newton methods [9–13], spectral gradient methods [14, 15], and conjugate gradient methods [16, 17] can deal with large-scale nonlinear equations. For large-scale optimization problems, conjugate gradient methods are quite effective since they only compute and store gradient values of the objective function. Many scholars have applied conjugate gradient theory to nonlinear monotone equations and have achieved good results [17–19]. Classical conjugate gradient methods include the HS method [20], FR method [21], PRP method [22, 23], LS method [24], CD method [25], and DY method [26]. In particular, the PRP method, one of the most effective methods, has the property that when a small step is generated near a minimum point, the next search direction automatically approaches the negative gradient direction, which prevents the method from continuously generating small steps. However, the global convergence of the PRP method has not been established under inexact line search techniques for general functions. Many scholars have performed continuous research on this issue and have reached satisfactory conclusions. Zhang et al. [27] proposed the MPRP method, where the search direction d_k is designed as follows:

d_k = −g_k + β_k^{PRP} d_{k−1} − θ_k y_{k−1} for k ≥ 1, d_0 = −g_0, (5)

where β_k^{PRP} = g_k^T y_{k−1}/‖g_{k−1}‖^2, θ_k = g_k^T d_{k−1}/‖g_{k−1}‖^2, y_{k−1} = g_k − g_{k−1}, and g_k denotes the gradient of the objective function f at x_k. It is easy to obtain from (5) that

g_k^T d_k = −‖g_k‖^2. (6)

The above equation indicates that d_k is a descent direction of f at x_k. If the exact line search is used, then we have g_k^T d_{k−1} = 0, and formula (5) reduces to the standard PRP method. Under some mild conditions, the MPRP method is globally convergent with the Armijo-type line search, but global convergence cannot be established under the weak Wolfe–Powell line search. The main reason is that the MPRP method does not satisfy the trust region property. Inspired by the above discussions, Yuan and Zhang [28] proposed a three-term PRP (TTPRP) method in which d_k is defined by

d_k = −F_k + (F_k^T y_{k−1} d_{k−1} − F_k^T d_{k−1} y_{k−1}) / max{μ ‖d_{k−1}‖ ‖y_{k−1}‖, ‖F_{k−1}‖^2} for k ≥ 1, d_0 = −F_0, (7)

where μ > 0 is a constant, F_k = F(x_k), and y_{k−1} = F_k − F_{k−1}. It is worth noting that the denominator ‖F_{k−1}‖^2 in the formula of the MPRP method is adjusted to max{μ ‖d_{k−1}‖ ‖y_{k−1}‖, ‖F_{k−1}‖^2} in the formula of the TTPRP method. The TTPRP method therefore automatically maintains the trust region property. Its global convergence is also established under certain conditions. Numerical results show that the TTPRP method is effective for large-scale nonlinear monotone equations.
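A short sketch of the direction computation, assuming the TTPRP form written in (7) above (the constant mu and the function name are illustrative):

```python
import numpy as np

def ttprp_direction(F_k, F_prev, d_prev, mu=0.1):
    """Three-term PRP search direction in the form sketched in (7):
    d_k = -F_k + (F_k^T y * d_prev - F_k^T d_prev * y) / max(mu*||d_prev||*||y||, ||F_prev||^2),
    with y = F_k - F_prev.  It gives F_k^T d_k = -||F_k||^2 and ||d_k|| <= (1 + 2/mu)*||F_k||."""
    y = F_k - F_prev
    denom = max(mu * np.linalg.norm(d_prev) * np.linalg.norm(y),
                np.linalg.norm(F_prev) ** 2)
    if denom == 0.0:  # degenerate case (e.g., F_prev = 0 and y = 0): steepest-descent-like direction
        return -F_k
    return -F_k + (np.dot(F_k, y) * d_prev - np.dot(F_k, d_prev) * y) / denom
```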

In addition, the hyperplane projection method [17, 29–32] is one of the most effective methods for solving large-scale nonlinear monotone equations. The concept of projection was first proposed by Goldstein [33] for convex programming in a Hilbert space. Furthermore, Solodov and Svaiter [34] proposed the hyperplane projection method for systems of monotone equations. The specific process of the hyperplane projection method is as follows: let x_k be the current iteration point, and obtain a trial point z_k = x_k + α_k d_k along a certain search direction such that F(z_k)^T (x_k − z_k) > 0. Due to the monotonicity of F, for any point x* that satisfies F(x*) = 0, it can be deduced that

F(z_k)^T (x_k − x*) ≥ F(z_k)^T (x_k − z_k) > 0. (8)

Obviously, the hyperplane {x ∈ R^n : F(z_k)^T (x − z_k) = 0} strictly separates the current iteration point x_k from the solution set of equation (1). The point x_k is projected onto this hyperplane to obtain the next iteration point x_{k+1}, i.e.,

x_{k+1} = x_k − [F(z_k)^T (x_k − z_k) / ‖F(z_k)‖^2] F(z_k). (9)

The hyperplane projection method has been proved to possess good theoretical properties and numerical performance for nonlinear monotone equations [6, 34].
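A minimal sketch of this projection step (the function and variable names are illustrative):

```python
import numpy as np

def hyperplane_projection(x_k, z_k, F_z):
    """Project x_k onto the hyperplane {x : F(z_k)^T (x - z_k) = 0}, as in (9)."""
    return x_k - (np.dot(F_z, x_k - z_k) / np.dot(F_z, F_z)) * F_z
```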

Furthermore, it is known that the search directions tend to be poorly scaled in conjugate gradient methods. Consequently, in the line search, more function evaluations must be carried out to obtain an appropriate step length α_k. Andrei [35] presented an acceleration scheme that modifies the step length in a multiplicative manner to improve the reduction of the function values along the iterations. The acceleration factor γ_k is defined as follows:

γ_k = −a_k / b_k, (10)

where a_k = α_k g_k^T d_k, b_k = −α_k y_k^T d_k, and y_k = g_k − g(x_k + α_k d_k). If b_k ≠ 0, let α_k = γ_k α_k. A numerical comparison with some conjugate gradient algorithms shows that this computational scheme is effective.
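The following is a sketch of one plausible adaptation of this acceleration to the present setting, with the residual F playing the role of the gradient; the exact quantities used by the authors may differ, so the formulas below are an assumption made only for illustration.

```python
import numpy as np

def accelerate_step(F_k, F_trial, d, alpha):
    """Multiplicative acceleration in the spirit of Andrei's scheme (assumed adaptation):
    a = alpha*F_k^T d, b = -alpha*(F_k - F_trial)^T d, and alpha <- (-a/b)*alpha if b != 0,
    where F_trial is the residual at the trial point x_k + alpha*d."""
    a = alpha * np.dot(F_k, d)
    b = -alpha * np.dot(F_k - F_trial, d)
    if b != 0.0:
        return (-a / b) * alpha
    return alpha
```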

Inspired by the above discussions, we propose an accelerated conjugate gradient algorithm that combines the TTPRP method, the acceleration step length, and the hyperplane projection method. The main contributions of the algorithm are as follows:
(i) An accelerated conjugate gradient algorithm is introduced for solving nonlinear monotone equations.
(ii) All search directions of the algorithm satisfy the sufficient descent condition.
(iii) All search directions of the algorithm belong to a trust region.
(iv) The global convergence of the presented algorithm is proved.
(v) The numerical results show that the proposed algorithm is more effective for nonlinear monotone equations.
(vi) The algorithm can be applied to restore an original image from an image damaged by impulse noise.

This paper is organized as follows: in the next section, we discuss the ATTPRP algorithm and global convergence analysis. In Section 3, we report the preliminary numerical experiments to show that the algorithm is efficient for nonlinear monotone equations and applicable to image restoration problems. In Section 4, the conclusion regarding the proposed algorithm is given.

2. Accelerated Algorithm and Convergence Analysis

In this section, we will propose an accelerated algorithm and prove its global convergence. The steps of the given algorithm are as follows.

2.1. Accelerated Three-Term PRP Conjugate Gradient (ATTPRP) Algorithm

Step 0: choose an initial point x_0 ∈ R^n and the constants required by (4), (7), and (10); let k := 0.
Step 1: stop if ‖F(x_k)‖ = 0. Otherwise, compute the search direction d_k by formula (7).
Step 2: choose a step length α_k satisfying inequality (4).
Step 3: if b_k ≠ 0, then set α_k := γ_k α_k.
Step 4: let the trial point be z_k = x_k + α_k d_k.
Step 5: if ‖F(z_k)‖ = 0, stop and let x_{k+1} = z_k. Otherwise, determine x_{k+1} by the projection formula (9).
Step 6: let k := k + 1. Go to Step 1.
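Putting the pieces together, the following compact Python sketch of the whole iteration reuses the helper sketches given earlier (derivative_free_line_search, ttprp_direction, accelerate_step, and hyperplane_projection); all names, defaults, and the acceleration formula are illustrative assumptions, not the authors' code.

```python
import numpy as np

def attprp(F, x0, tol=1e-5, max_iter=20000, mu=0.1):
    """Sketch of the accelerated three-term PRP iteration: direction (7),
    derivative-free line search (4), multiplicative acceleration, and projection (9)."""
    x = np.asarray(x0, dtype=float)
    F_x = F(x)
    F_prev = d_prev = None
    for _ in range(max_iter):
        if np.linalg.norm(F_x) <= tol:
            break
        # Step 1: search direction (d_0 = -F_0, then the three-term formula)
        d = -F_x if F_prev is None else ttprp_direction(F_x, F_prev, d_prev, mu)
        # Step 2: derivative-free line search (4)
        alpha, F_trial = derivative_free_line_search(F, x, d)
        # Step 3: acceleration of the step length
        alpha = accelerate_step(F_x, F_trial, d, alpha)
        # Step 4: trial point
        z = x + alpha * d
        F_z = F(z)
        # Step 5: accept the trial point or project onto the separating hyperplane (9)
        x_next = z if np.linalg.norm(F_z) <= tol else hyperplane_projection(x, z, F_z)
        # Step 6: move to the next iteration
        F_prev, d_prev = F_x, d
        x = x_next
        F_x = F(x)
    return x
```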

The following lemma shows that the search direction designed by using formula (7) has not only the sufficient descent property but also the trust region property independent of the line search.

Lemma 1. Let d_k be defined by formula (7). Then, we obtain

F_k^T d_k = −‖F_k‖^2, (11)

‖F_k‖ ≤ ‖d_k‖ ≤ (1 + 2/μ) ‖F_k‖. (12)

Proof. If k = 0, then d_0 = −F_0, and formulas (11) and (12) obviously hold. If k ≥ 1, we obtain from formula (7) that

F_k^T d_k = −‖F_k‖^2 + (F_k^T y_{k−1} F_k^T d_{k−1} − F_k^T d_{k−1} F_k^T y_{k−1}) / max{μ ‖d_{k−1}‖ ‖y_{k−1}‖, ‖F_{k−1}‖^2} = −‖F_k‖^2,

which is (11). In addition, by (11) and the Cauchy–Schwarz inequality, we get ‖F_k‖^2 = |F_k^T d_k| ≤ ‖F_k‖ ‖d_k‖, and hence ‖d_k‖ ≥ ‖F_k‖. By formula (7), we also get

‖d_k‖ ≤ ‖F_k‖ + 2 ‖F_k‖ ‖y_{k−1}‖ ‖d_{k−1}‖ / (μ ‖d_{k−1}‖ ‖y_{k−1}‖) = (1 + 2/μ) ‖F_k‖,

where the inequality uses max{μ ‖d_{k−1}‖ ‖y_{k−1}‖, ‖F_{k−1}‖^2} ≥ μ ‖d_{k−1}‖ ‖y_{k−1}‖. Then, the proof is completed.
The following assumptions need to hold in order to study the properties of the ATTPRP algorithm:

Assumption 1. (i) The solution set of problem (1) is nonempty. (ii) The function F is Lipschitz continuous on R^n; that is, there exists a positive constant L satisfying

‖F(x) − F(y)‖ ≤ L ‖x − y‖ for all x, y ∈ R^n.

Remark 1. Assumption 1(ii) implies that {‖F(x_k)‖} is bounded; that is, there exists a constant κ > 0 such that

‖F(x_k)‖ ≤ κ for all k ≥ 0. (16)

In the remainder of this paper, if not specifically stated, we always assume that the conditions in Assumption 1 hold.

Lemma 2. Let α_k and d_k be generated by the ATTPRP algorithm. Then the step length α_k satisfies

α_k ≥ min{ξ, ρ ‖F(x_k)‖^2 / ((L + σ κ) ‖d_k‖^2)}, (17)

where L is the Lipschitz constant from Assumption 1 and κ is the constant from (16).

Proof. If α_k = ξ, then (17) holds trivially. Otherwise, by the line search (4), α'_k = α_k/ρ does not satisfy (4); that is,

−F(x_k + α'_k d_k)^T d_k < σ α'_k ‖F(x_k + α'_k d_k)‖ ‖d_k‖^2.

Since F is Lipschitz continuous, and by using formula (11), we have

‖F(x_k)‖^2 = −F(x_k)^T d_k = (F(x_k + α'_k d_k) − F(x_k))^T d_k − F(x_k + α'_k d_k)^T d_k ≤ L α'_k ‖d_k‖^2 + σ α'_k ‖F(x_k + α'_k d_k)‖ ‖d_k‖^2 ≤ (L + σ κ) α'_k ‖d_k‖^2,

namely,

α_k = ρ α'_k ≥ ρ ‖F(x_k)‖^2 / ((L + σ κ) ‖d_k‖^2).

This leads to the desired inequality (17). The proof is completed.
The following lemma is similar to Lemma 1 in the study of Solodov and Svaiter [34], which also holds for the ATTPRP algorithm. Therefore, we only state it as follows but omit its proof.

Lemma 3. Let the sequence {x_k} be generated by the ATTPRP algorithm. Suppose that x̄ is a solution of problem (1), i.e., F(x̄) = 0. Then we obtain

‖x_{k+1} − x̄‖^2 ≤ ‖x_k − x̄‖^2 − ‖x_{k+1} − x_k‖^2.

In particular, the sequence {x_k} is bounded, and

Σ_{k=0}^{∞} ‖x_{k+1} − x_k‖^2 < ∞.

Remark 2. The above lemma reveals that the distance from the iteration points to the solution set of problem (1) decreases along the iterations. Moreover, for any solution x̄ of problem (1), it follows from formulas (9) and (4) that

‖x_{k+1} − x̄‖^2 ≤ ‖x_k − x̄‖^2 − σ^2 α_k^4 ‖d_k‖^4.

In particular, we obtain

lim_{k→∞} α_k ‖d_k‖ = 0. (24)

In the following part, the global convergence and the strong global convergence properties of the ATTPRP algorithm will be proven.

Theorem 1. Let {x_k} be generated by the ATTPRP algorithm. Then, we have

lim inf_{k→∞} ‖F(x_k)‖ = 0. (25)

Proof. We prove this theorem by contradiction. Suppose that (25) does not hold; then, there exists a constant ε_0 > 0 such that ‖F(x_k)‖ ≥ ε_0 holds for all k. From formula (12), we have ‖d_k‖ ≥ ‖F(x_k)‖ ≥ ε_0 for all k. According to Lemma 3 and equation (24), the sequences {x_k} and {z_k} are bounded. By formulas (12) and (16), we obtain

‖d_k‖ ≤ (1 + 2/μ) ‖F(x_k)‖ ≤ (1 + 2/μ) κ,

so {d_k} is bounded. Thus, from formulas (16) and (17), we obtain

α_k ‖d_k‖ ≥ min{ξ, ρ ε_0^2 / ((L + σ κ)(1 + 2/μ)^2 κ^2)} ε_0 > 0 for all k.

This contradicts formula (24). Consequently, the proof is completed.
The following theorem indicates the strong global convergence of the ATTPRP algorithm, which is similar to Theorem 1 in [6]. We also give a specific proof for convenience of understanding.

Theorem 2. Let {x_k} be generated by the ATTPRP algorithm. Then, the whole sequence {x_k} converges to a solution x* of problem (1).

Proof. Theorem 1 shows that there exists a subsequence of {x_k} converging to a solution x* of problem (1). On the other hand, it follows from Lemma 3 that the sequence {‖x_k − x*‖} converges. Therefore, the whole sequence {x_k} converges to x*.

3. Numerical Experiments

In this section, the numerical experiments are divided into two parts. The first subsection involves normal nonlinear equations, and the second subsection describes image restoration problems. All tests in this section are coded in MATLAB R2017a and run on a PC with an Intel(R) Core(TM) i5-4460 3.20 GHz CPU, 8.00 GB of SDRAM, and the Windows 7 operating system.

3.1. Normal Nonlinear Equations

In this subsection, we perform some numerical experiments to show the effectiveness of the ATTPRP algorithm. Some test problems and their relevant initial points are listed as follows:

Function 1. Exponential Function 1:Initial guess: .

Function 2. Exponential Function 2:Initial guess: .

Function 3. Singular function:Initial guess: .

Function 4. Logarithmic function:Initial guess: .

Function 5. Broyden tridiagonal function:Initial guess: .

Function 6. Trigexp function:Initial guess: .

Function 7. Strictly convex Function 1: F(x) is the gradient of f(x) = Σ_{i=1}^{n} (e^{x_i} − x_i), i.e., F_i(x) = e^{x_i} − 1, i = 1, 2, …, n. Initial guess: .

Function 8. Variable dimensioned function:Initial guess: .

Function 9. Tridiagonal system:Initial guess: .

Function 10. Five-diagonal system:Initial guess: .

Function 11. Extended Freudenstein and Roth function (n is even):
For i = 1, 2, …, . Initial guess: .

Function 12. Brent problem:Initial guess: .
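To make the sketches above runnable end to end, here is one common exponential-type monotone test mapping from this literature, together with a usage example; we do not claim that it coincides exactly with any of the functions listed above.

```python
import numpy as np

def exp_residual(x):
    """A simple monotone test mapping: F_i(x) = exp(x_i) - 1, whose unique solution is x = 0."""
    return np.exp(x) - 1.0

# Example usage with the ATTPRP sketch defined earlier:
# x = attprp(exp_residual, np.full(3000, 0.5))
# print(np.linalg.norm(exp_residual(x)))
```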
To test the numerical performance of the ATTPRP algorithm, we also perform the experiments with the LS algorithm and the TTPRP algorithm. The columns of Tables 1–3 have the following meanings:
No.: the serial number of the problem
Dim: the variable dimension
NI: the number of iterations
NF: the number of function evaluations
CPU: the calculation time in seconds
GN: the final function norm when the program stops
Initialization: the parameters are chosen as , , , , and .
Stop rule: we stop the process when ‖F(x_k)‖ is below the tolerance or NI reaches the iteration limit.
From Tables 1–3, it is obvious that the three algorithms can successfully solve most of the test problems within the iteration limit. However, for Function 3 with 9000 and 90000 variables, the TTPRP and LS algorithms cannot solve the problem before the iteration limit is reached, whereas the proposed algorithm can. To show the methods' performance more directly, we use the drawing tool of Dolan and Moré [36], which produces performance profiles of the methods. Using this tool, we obtain Figures 1–3, which correspond to the NI, NF, and CPU columns of Tables 1–3. In Figure 1, the ATTPRP algorithm solves all test problems at a smaller performance ratio than the LS and TTPRP algorithms, so the ATTPRP algorithm performs slightly better than the other two algorithms in terms of NI. In Figure 2, the presented algorithm solves all test problems, while the TTPRP and LS algorithms solve smaller fractions of the test problems at the same performance ratio; thus, it is not difficult to see that the ATTPRP algorithm is more competitive than the other two methods in terms of NF. In Figure 3, the curve of the ATTPRP algorithm is above those of the TTPRP and LS algorithms, which indicates that the proposed algorithm is more robust than the other two algorithms in terms of CPU time. In summary, the improvement achieved by the presented method is noticeable.
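For reference, a small sketch of how such performance profiles can be computed, following Dolan and Moré [36]; the array layout and names are illustrative.

```python
import numpy as np

def performance_profile(T, tau_grid):
    """T[i, s] is the cost (NI, NF, or CPU) of solver s on problem i (np.inf for a failure).
    Returns rho[s, j], the fraction of problems solved by solver s within a factor
    tau_grid[j] of the best cost on each problem."""
    T = np.asarray(T, dtype=float)
    best = np.min(T, axis=1, keepdims=True)   # best cost per problem
    ratios = T / best                          # performance ratios
    n_problems, n_solvers = T.shape
    rho = np.empty((n_solvers, len(tau_grid)))
    for j, tau in enumerate(tau_grid):
        rho[:, j] = np.sum(ratios <= tau, axis=0) / n_problems
    return rho
```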


Table 1. Numerical results of the ATTPRP algorithm.

No.  Dim    NI/NF        CPU          GN
1    3000   123/124      0.608404     9.97E-06
     9000   88/89        0.733205     9.86E-06
     30000  57/58        1.092007     9.96E-06
     90000  38/39        1.310408     9.85E-06
2    3000   28/514       0.686404     9.50E-06
     9000   18/386       1.372809     9.22E-06
     30000  15/371       3.432022     9.83E-06
     90000  10/276       3.946825     9.95E-06
3    3000   14016/20709  107.172687   1.00E-05
     9000   15769/23986  331.158923   9.99E-06
     30000  15609/27514  574.09928    1.00E-05
     90000  17136/35208  1503.022835  1.00E-05
4    3000   63/558       1.029607     5.24E-07
     9000   106/1096     5.241634     8.39E-07
     30000  195/2337     24.258155    1.20E-07
     90000  336/4521     88.031364    2.21E-08
5    3000   90/615       0.780005     8.52E-06
     9000   96/759       2.464816     7.76E-06
     30000  130/1194     11.154072    8.62E-06
     90000  176/1984     29.250188    8.82E-06
6    3000   91/1223      1.918812     8.09E-06
     9000   136/2093     8.845257     9.86E-06
     30000  221/3905     38.282645    9.97E-06
     90000  379/7177     125.814807   9.33E-06
7    3000   48/335       0.468003     9.66E-06
     9000   71/636       2.480416     8.57E-06
     30000  122/1339     13.135284    9.31E-06
     90000  203/2562     43.63348     8.53E-06
8    3000   1/2          0.000001     0.00E+00
     9000   1/2          0.0468       0.00E+00
     30000  1/2          0.000001     0.00E+00
     90000  1/2          0.0156       0.00E+00
9    3000   6014/88059   100.885847   9.93E-06
     9000   6437/103883  312.345202   9.91E-06
     30000  7345/137142  1209.803355  9.96E-06
     90000  8950/196012  2763.386114  9.99E-06
10   3000   1811/19670   25.693365    9.99E-06
     9000   1947/22554   76.36249     9.82E-06
     30000  2202/28399   261.644877   9.95E-06
     90000  2638/38968   577.983705   1.00E-05
11   3000   351/4205     5.382034     9.78E-06
     9000   409/5194     16.645307    9.44E-06
     30000  510/7139     64.475213    9.64E-06
     90000  683/10789    161.351834   9.52E-06
12   3000   184/188      0.156001     9.98E-06
     9000   184/188      0.577204     9.98E-06
     30000  184/188      1.107607     9.98E-06
     90000  184/188      3.07322      9.98E-06


Table 2. Numerical results of the TTPRP algorithm.

No.  Dim    NI/NF        CPU          GN
1    3000   129/130      0.436803     9.97E-06
     9000   89/90        0.873606     9.96E-06
     30000  59/60        1.232408     9.96E-06
     90000  41/42        1.435209     9.73E-06
2    3000   46/1055      1.435209     9.99E-06
     9000   10/267       0.951606     6.76E-06
     30000  9/278        2.667617     9.83E-06
     90000  8/279        4.040426     9.00E-06
3    3000   17401/18908  109.481502   9.98E-06
     9000   19999/23227  355.089476   1.37E-05
     30000  19431/26598  612.241525   9.99E-06
     90000  19999/34409  1542.678289  2.56E-05
4    3000   70/662       1.123207     3.49E-06
     9000   113/1326     6.099639     2.14E-06
     30000  196/2809     28.766584    6.47E-07
     90000  334/5473     101.884253   3.11E-06
5    3000   54/464       0.514803     3.38E-06
     9000   72/711       2.152814     3.75E-06
     30000  100/1225     10.99807     7.38E-06
     90000  160/2256     31.403001    5.61E-06
6    3000   84/1417      2.246414     6.94E-06
     9000   127/2473     10.311666    9.41E-06
     30000  211/4669     45.349491    8.60E-06
     90000  346/8524     141.773709   8.53E-06
7    3000   45/366       0.546003     2.67E-06
     9000   68/729       2.808018     1.23E-06
     30000  117/1558     14.617294    8.34E-07
     90000  195/3039     48.219909    7.08E-07
8    3000   1/2          0.0624       0.00E+00
     9000   1/2          0.000001     0.00E+00
     30000  1/2          0.0624       0.00E+00
     90000  1/2          0.0624       0.00E+00
9    3000   6521/123863  139.277693   9.81E-06
     9000   6827/141140  425.679929   9.97E-06
     30000  7711/182289  1559.916399  9.99E-06
     90000  9107/251132  3460.398582  9.85E-06
10   3000   4904/65319   77.750898    1.00E-05
     9000   5271/71854   229.820673   9.73E-06
     30000  5280/75948   689.150018   9.97E-06
     90000  5655/88105   1269.302137  9.99E-06
12   3000   300/4529     4.898431     9.88E-06
     9000   387/6163     17.846514    9.61E-06
     30000  472/8397     72.836867    9.83E-06
     90000  620/12490    168.652681   9.27E-06
13   3000   193/198      0.124801     9.99E-06
     9000   193/198      0.405603     9.99E-06
     30000  193/198      0.826805     9.99E-06
     90000  193/198      2.589617     9.99E-06


Table 3. Numerical results of the LS algorithm.

No.  Dim    NI/NF        CPU          GN
1    3000   174/175      0.982806     9.96E-06
     9000   94/95        0.670804     9.88E-06
     30000  60/61        1.107607     9.93E-06
     90000  41/42        1.248008     9.97E-06
2    3000   61/1405      1.762811     9.85E-06
     9000   34/916       3.05762      9.98E-06
     30000  15/470       3.946825     9.46E-06
     90000  13/456       6.864044     9.12E-06
3    3000   19999/21506  125.721206   1.43E-05
     9000   19999/23222  351.267452   1.44E-05
     30000  19999/27153  634.003664   2.24E-05
     90000  19999/34389  1582.942147  1.76E-05
4    3000   75/666       1.185608     7.17E-06
     9000   118/1331     6.24004      6.87E-06
     30000  202/2813     30.186194    7.82E-06
     90000  338/5467     96.159016    7.65E-06
5    3000   136/1145     1.404009     8.63E-06
     9000   150/1358     4.305628     9.81E-06
     30000  171/1806     16.801308    5.69E-06
     90000  234/2865     42.08907     9.58E-06
6    3000   109/1695     2.698817     7.38E-06
     9000   147/2694     11.263272    8.98E-06
     30000  235/4934     47.502305    8.67E-06
     90000  364/8724     147.062143   7.57E-06
7    3000   69/400       0.530403     6.22E-06
     9000   94/765       2.948419     3.29E-06
     30000  142/1590     15.085297    1.79E-06
     90000  217/3070     50.450723    4.55E-07
8    3000   1/2          0.0624       0.00E+00
     9000   1/2          0.000001     0.00E+00
     30000  1/2          0.0468       0.00E+00
     90000  1/2          0.0156       0.00E+00
9    3000   5643/107964  117.484353   9.84E-06
     9000   6131/128776  373.934397   9.86E-06
     30000  6997/169540  1461.323767  9.99E-06
     90000  8449/239554  3276.130201  9.88E-06
10   3000   4171/55825   65.47362     9.93E-06
     9000   4213/58137   183.113974   1.00E-05
     30000  4616/67344   608.747102   9.88E-06
     90000  4965/79166   1150.008172  9.92E-06
11   3000   3484/49132   52.073134    9.92E-06
     9000   371/5921     16.504906    9.85E-06
     30000  3625/52579   454.727315   9.93E-06
     90000  3802/57077   801.049535   9.87E-06
12   3000   194/199      0.109201     9.96E-06
     9000   194/199      0.405603     9.96E-06
     30000  194/199      0.998406     9.96E-06
     90000  194/199      2.776818     9.96E-06

3.2. Image Restoration Problems

The purpose of this subsection is to recover the original image from an image damaged by impulse noise, a task of practical significance in optimization fields. The selection of parameters and the stopping criteria are similar to those in the above subsection. For the experiments, Cameraman, Barbara, and Man are chosen as the test images. We also perform experiments to compare the ATTPRP algorithm with the TTPRP algorithm, where the step length of the ATTPRP algorithm is generated by Step 2 and Step 3 of the algorithm. More detailed performance results are shown in Figures 4–6. It is not difficult to see that both the ATTPRP and TTPRP algorithms are successful in restoring the three images. The CPU times are listed in Table 4 to compare the ATTPRP algorithm with the TTPRP algorithm.
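As a rough illustration of the degradation model used in such experiments, here is a generic salt-and-pepper corruption routine; it is a sketch for illustration only, not the authors' code.

```python
import numpy as np

def add_salt_and_pepper(image, ratio, seed=None):
    """Corrupt a grayscale image (values in [0, 255]) with salt-and-pepper impulse noise:
    a fraction `ratio` of the pixels is set to 0 or 255 at random."""
    rng = np.random.default_rng(seed)
    noisy = image.copy()
    corrupted = rng.random(image.shape) < ratio
    salt = rng.random(image.shape) < 0.5
    noisy[corrupted & salt] = 255
    noisy[corrupted & ~salt] = 0
    return noisy
```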


Table 4. CPU time (in seconds) of the ATTPRP and TTPRP algorithms for image restoration.

30% noise          Cameraman  Barbara  Man     Total
ATTPRP algorithm   2.184      4.789    20.514  27.487
TTPRP algorithm    2.23       5.179    20.748  28.157
50% noise          Cameraman  Barbara  Man     Total
ATTPRP algorithm   3.276      9.142    35.475  47.893
TTPRP algorithm    3.307      9.204    35.677  48.188
70% noise          Cameraman  Barbara  Man     Total
ATTPRP algorithm   3.619      13.073   58.812  75.504
TTPRP algorithm    3.978      13.4     59.249  76.627

From Figures 4–6, we can clearly observe that both algorithms restore noisy images with 30%, 50%, and 70% salt-and-pepper noise well. In addition, the results in Table 4 show that the ATTPRP algorithm and the TTPRP algorithm both succeed in restoring these images with similar CPU times, and the presented algorithm is slightly faster than the TTPRP algorithm for the 30%, 50%, and 70% noise problems.

4. Conclusions

In this paper, an accelerated conjugate gradient algorithm that combines the TTPRP method, the acceleration step length, and the hyperplane projection technique is proposed. All search directions generated by the algorithm automatically have the sufficient descent and trust region properties. The global convergence of the proposed algorithm is established under suitable conditions. The numerical results show that the proposed algorithm is effective, and the image restoration experiments demonstrate that it is also successful in that application.

For future research, we have the following ideas: (i) If the acceleration scheme is introduced into the quasi-Newton method, does the resulting method retain good theoretical properties? (ii) Can the acceleration scheme be introduced into the trust region method to solve unconstrained optimization problems and nonlinear equations? (iii) Can the proposed algorithm be applied to machine learning?

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant No. 11661009), the High Level Innovation Teams and Excellent Scholars Program in Guangxi Institutions of Higher Education (Grant No. (2019)52), and the Guangxi Natural Science Key Fund (No. 2017GXNSFDA198046).

References

1. S. P. Dirkse and M. C. Ferris, “MCPLIB: a collection of nonlinear mixed complementarity problems,” Optimization Methods and Software, vol. 5, no. 4, pp. 319–345, 1995.
2. A. Griewank, “The “global” convergence of Broyden-like methods with suitable line search,” The Journal of the Australian Mathematical Society. Series B. Applied Mathematics, vol. 28, no. 1, pp. 75–92, 1986.
3. L. Grippo and M. Sciandrone, “Nonmonotone derivative-free methods for nonlinear equations,” Computational Optimization and Applications, vol. 37, no. 3, pp. 297–328, 2007.
4. D. Li and M. Fukushima, “A derivative-free line search and DFP method for symmetric equations with global and superlinear convergence,” Numerical Functional Analysis and Optimization, vol. 20, no. 1-2, pp. 59–77, 1999.
5. D. Li and M. Fukushima, “A globally and superlinearly convergent Gauss–Newton-based BFGS method for symmetric nonlinear equations,” SIAM Journal on Numerical Analysis, vol. 37, no. 1, pp. 152–172, 1999.
6. Q. Li and D.-H. Li, “A class of derivative-free methods for large-scale nonlinear monotone equations,” IMA Journal of Numerical Analysis, vol. 31, no. 4, pp. 1625–1635, 2011.
7. S. Kiefer, M. Luttenberger, and J. Esparza, “On the convergence of Newton’s method for monotone systems of polynomial equations,” in Proceedings of the 39th Annual ACM Symposium on Theory of Computing, San Diego, CA, USA, June 2007.
8. G. N. Silva, “Local convergence of Newton’s method for solving generalized equations with monotone operator,” Applicable Analysis, vol. 97, no. 7, pp. 1094–1105, 2018.
9. G. Yuan, Z. Wei, and S. Lu, “Limited memory BFGS method with backtracking for symmetric nonlinear equations,” Mathematical and Computer Modelling, vol. 54, no. 1-2, pp. 367–377, 2011.
10. B. Zhang and Z. Zhu, “A modified quasi-Newton diagonal update algorithm for total variation denoising problems and nonlinear monotone equations with applications in compressive sensing,” Numerical Linear Algebra with Applications, vol. 22, no. 3, pp. 500–522, 2015.
11. W.-J. Zhou and D.-H. Li, “A globally convergent BFGS method for nonlinear monotone equations without any merit functions,” Mathematics of Computation, vol. 77, no. 264, pp. 2231–2240, 2008.
12. W. Zhou and D. Li, “Limited memory BFGS method for nonlinear monotone equations,” Journal of Computational and Applied Mathematics, vol. 25, no. 1, pp. 89–96, 2007.
13. G. Zhou and K. C. Toh, “Superlinear convergence of a Newton-type algorithm for monotone equations,” Journal of Optimization Theory and Applications, vol. 125, no. 1, pp. 205–221, 2005.
14. Y. Qiu, C. Ying, and L. Lei, “Multivariate spectral conjugate gradient projection method for nonlinear monotone equations,” in Proceedings of the 2013 Fourth International Conference on Emerging Intelligent Data and Web Technologies (EIDWT), Xi’an, China, September 2013.
15. L. Zhang and W. Zhou, “Spectral gradient projection method for solving nonlinear monotone equations,” Journal of Computational and Applied Mathematics, vol. 196, no. 2, pp. 478–484, 2006.
16. Y. Hu and Z. Wei, “Wei–Yao–Liu conjugate gradient projection algorithm for nonlinear monotone equations with convex constraints,” International Journal of Computer Mathematics, vol. 92, no. 11, pp. 1–12, 2015.
17. X. Y. Wang, S. J. Li, and X. P. Kou, “A self-adaptive three-term conjugate gradient method for monotone nonlinear equations with convex constraints,” Calcolo, vol. 53, no. 2, pp. 133–145, 2016.
18. S.-Y. Liu, Y.-Y. Huang, and H.-W. Jiao, “Sufficient descent conjugate gradient methods for solving convex constrained nonlinear monotone equations,” Abstract and Applied Analysis, vol. 2014, no. 1, pp. 1–12, 2014.
19. G. Yuan and W. Hu, “A conjugate gradient algorithm for large-scale unconstrained optimization problems and nonlinear equations,” Journal of Inequalities and Applications, vol. 2018, no. 1, p. 113, 2018.
20. M. R. Hestenes and E. Stiefel, “Methods of conjugate gradients for solving linear systems,” Journal of Research of the National Bureau of Standards, vol. 49, no. 6, pp. 409–436, 1952.
21. R. Fletcher and C. Reeves, “Function minimization by conjugate gradients,” The Computer Journal, vol. 7, no. 2, pp. 149–154, 1964.
22. E. Polak and G. Ribière, “Note sur la convergence de méthodes de directions conjuguées,” Revue française d'informatique et de recherche opérationnelle. Série rouge, vol. 3, no. 16, pp. 35–43, 1969.
23. B. T. Polyak, “The conjugate gradient method in extremal problems,” USSR Computational Mathematics and Mathematical Physics, vol. 9, no. 4, pp. 94–112, 1969.
24. Y. Liu and C. Storey, “Efficient generalized conjugate gradient algorithms, part 1: theory,” Journal of Optimization Theory and Applications, vol. 69, no. 1, pp. 129–137, 1991.
25. R. Fletcher, Practical Methods of Optimization Vol. I: Unconstrained Optimization, John Wiley and Sons, New York, NY, USA, 1987.
26. Y. H. Dai and Y. Yuan, “A nonlinear conjugate gradient method with a strong global convergence property,” SIAM Journal on Optimization, vol. 10, no. 1, pp. 177–182, 1999.
27. L. Zhang, W. Zhou, and D.-H. Li, “A descent modified Polak–Ribière–Polyak conjugate gradient method and its global convergence,” IMA Journal of Numerical Analysis, vol. 26, no. 4, pp. 629–640, 2006.
28. G. Yuan and M. Zhang, “A three-terms Polak–Ribière–Polyak conjugate gradient algorithm for large-scale nonlinear equations,” Journal of Computational and Applied Mathematics, vol. 286, pp. 186–195, 2015.
29. M. Ahookhosh, K. Amini, and S. Bahrami, “Two derivative-free projection approaches for systems of large-scale nonlinear monotone equations,” Numerical Algorithms, vol. 64, no. 1, pp. 21–42, 2013.
30. M. Koorapetse and P. Kaelo, “Globally convergent three-term conjugate gradient projection methods for solving nonlinear monotone equations,” Arabian Journal of Mathematics, vol. 7, no. 1, pp. 1–13, 2018.
31. J. K. Liu and S. J. Li, “A projection method for convex constrained monotone nonlinear equations with applications,” Computers & Mathematics with Applications, vol. 70, no. 10, pp. 2442–2453, 2015.
32. J. Liu and S. Li, “Multivariate spectral DY-type projection method for convex constrained nonlinear monotone equations,” Journal of Industrial & Management Optimization, vol. 13, no. 1, pp. 283–295, 2017.
33. A. A. Goldstein, “Convex programming in Hilbert space,” Bulletin of the American Mathematical Society, vol. 70, no. 5, pp. 709–711, 1964.
34. M. Solodov and B. Svaiter, “A globally convergent inexact Newton method for systems of monotone equations,” in Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods, pp. 355–369, Kluwer Academic Publishers, New York, NY, USA, 1998.
35. N. Andrei, “Another conjugate gradient algorithm with guaranteed descent and conjugacy conditions for large-scale unconstrained optimization,” Journal of Optimization Theory and Applications, vol. 159, no. 1, pp. 159–182, 2013.
36. E. D. Dolan and J. J. Moré, “Benchmarking optimization software with performance profiles,” Mathematical Programming, vol. 91, no. 2, pp. 201–213, 2002.

Copyright © 2020 Haishan Feng and Tingting Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

