Abstract

Combining the three-term conjugate gradient method of Yuan and Zhang and the acceleration step length of Andrei with the hyperplane projection method of Solodov and Svaiter, we propose an accelerated conjugate gradient algorithm for solving nonlinear monotone equations. The presented algorithm has the following properties: (i) all search directions generated by the algorithm satisfy the sufficient descent and trust region properties independently of the line search technique; (ii) a derivative-free line search technique is used along the search direction to obtain the step length; (iii) if the acceleration condition holds, an acceleration scheme modifies the step length in a multiplicative manner and generates a trial point; (iv) if the trial point satisfies the given condition, it is taken as the next iterate; otherwise, the hyperplane projection technique is used to obtain the next iterate; (v) the global convergence of the proposed algorithm is established under suitable conditions. Numerical comparisons with other conjugate gradient algorithms show that the accelerated computing scheme is more competitive. In addition, the presented algorithm can also be applied to image restoration.

1. Introduction

In this paper, the following nonlinear equation is considered:

$$F(x) = 0, \quad x \in \mathbb{R}^n, \tag{1}$$

where $F: \mathbb{R}^n \rightarrow \mathbb{R}^n$ is continuous and monotone; that is, $F$ satisfies

$$(F(x) - F(y))^T (x - y) \geq 0, \quad \forall\, x, y \in \mathbb{R}^n. \tag{2}$$

It is not difficult to show that the solution set of the monotone equation (1), unless empty, is convex. This problem has many significant applications in applied mathematics, economics, and engineering. For example, the economic equilibrium problem [1] can be transformed into problem (1). Generally, an iteration formula generates the next iteration point by

$$x_{k+1} = x_k + \alpha_k d_k, \tag{3}$$

where $\alpha_k$ is the step length and $d_k$ is a search direction; these are the two important factors for solving nonlinear equations. Some derivative-free line search techniques [2–5] were proposed to determine the step length $\alpha_k$. Li and Li [6] presented a derivative-free line search to find $\alpha_k = \max\{\beta \rho^i \mid i = 0, 1, 2, \ldots\}$ such that

$$-F(x_k + \alpha_k d_k)^T d_k \geq \sigma \alpha_k \|F(x_k + \alpha_k d_k)\| \|d_k\|^2, \tag{4}$$

where $\beta > 0$, $\rho \in (0, 1)$, and $\sigma > 0$; $\|\cdot\|$ denotes the Euclidean norm. The line search technique (4) differs from other existing derivative-free line search techniques because it does not use a merit function. If $d_k$ satisfies $F(x_k)^T d_k < 0$, inequality (4) holds for all sufficiently small $\alpha_k > 0$. As a result, $\alpha_k$ can be obtained by a backtracking process.
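For illustration, the backtracking process for (4) can be sketched in MATLAB as follows (a minimal sketch under our own naming conventions; the parameter maxTries is ours, not from [6]):

```matlab
% Derivative-free backtracking line search of Li-Li type (a sketch;
% function/variable names and the iteration cap are illustrative).
% F : function handle mapping R^n -> R^n; x, d : current point/direction.
function alpha = linesearch(F, x, d, beta, rho, sigma, maxTries)
    alpha = beta;
    for i = 1:maxTries
        Fz = F(x + alpha*d);
        % Accept alpha when -F(z)'*d >= sigma*alpha*||F(z)||*||d||^2.
        if -Fz'*d >= sigma*alpha*norm(Fz)*norm(d)^2
            return;
        end
        alpha = rho*alpha;   % backtrack; returns last trial if cap is hit
    end
end
```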

For problem (1), it is well known that Newton methods [7, 8], quasi-Newton methods [9–13], spectral gradient methods [14, 15], and conjugate gradient methods [16, 17] can deal with large-scale nonlinear equations. For solving large-scale optimization problems, the conjugate gradient methods are quite effective since they only calculate and store the gradient value of the objective function. Many scholars have applied conjugate gradient theory to solve nonlinear monotone equations and have achieved good results [17–19]. Classical conjugate gradient methods include the HS method [20], FR method [21], PRP method [22, 23], LS method [24], CD method [25], and DY method [26]. In particular, for the PRP method, one of the most effective methods, when a small step is generated near a minimum point, the next search direction automatically approaches the negative gradient direction, which prevents the method from continuously taking small steps. However, the global convergence of the PRP method has not been established under inexact line search techniques for general functions. Many scholars have performed continuous research and have reached satisfactory conclusions. Zhang [27] proposed the MPRP method, where the direction $d_k$ is designed as follows:

$$d_k = \begin{cases} -g_k, & \text{if } k = 0, \\ -g_k + \beta_k^{PRP} d_{k-1} - \theta_k y_{k-1}, & \text{if } k \geq 1, \end{cases} \tag{5}$$

where $\beta_k^{PRP} = \frac{g_k^T y_{k-1}}{\|g_{k-1}\|^2}$, $\theta_k = \frac{g_k^T d_{k-1}}{\|g_{k-1}\|^2}$, $y_{k-1} = g_k - g_{k-1}$, $g_k = \nabla f(x_k)$, and $f$ is the objective function. It is easy to obtain from (5) that

$$g_k^T d_k = -\|g_k\|^2. \tag{6}$$

The above equation indicates that $d_k$ is a descent direction of $f$ at $x_k$. If the exact line search is used, then we have $g_k^T d_{k-1} = 0$; consequently, formula (5) reduces to the standard PRP method. Under some mild conditions, the MPRP method is globally convergent under the Armijo-type line search, but global convergence cannot be established under the weak Wolfe–Powell line search. The main reason is that the MPRP method does not satisfy the trust region property. Inspired by the above discussions, Yuan and Zhang [28] proposed a three-term PRP (TTPRP) method in which $d_k$ is defined by

$$d_k = \begin{cases} -g_k, & \text{if } k = 0, \\ -g_k + \dfrac{(g_k^T y_{k-1}) d_{k-1} - (g_k^T d_{k-1}) y_{k-1}}{\max\{\vartheta \|d_{k-1}\| \|y_{k-1}\|, \|g_{k-1}\|^2\}}, & \text{if } k \geq 1, \end{cases} \tag{7}$$

where $\vartheta > 0$ is a constant. It is worth noting that the denominator $\|g_{k-1}\|^2$ in formula (5) of the MPRP method is adjusted to $\max\{\vartheta \|d_{k-1}\| \|y_{k-1}\|, \|g_{k-1}\|^2\}$ in formula (7) of the TTPRP method. As a result, the TTPRP method automatically maintains the trust region property. Its global convergence has also been established under suitable conditions. Numerical results show that the TTPRP method is effective for large-scale nonlinear monotone equations.
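To make the update concrete, the following minimal MATLAB sketch evaluates the TTPRP direction (7); in the monotone-equation setting used later in this paper, the gradient $g_k$ is replaced by $F_k = F(x_k)$ (identifiers are ours):

```matlab
% Three-term PRP (TTPRP) direction, formula (7), with g_k replaced by
% F_k = F(x_k) as in the monotone-equation setting (a sketch).
% Fk, Fkm1 : F at current/previous iterate; dkm1 : previous direction.
function d = ttprp_direction(Fk, Fkm1, dkm1, vartheta)
    if isempty(dkm1)                 % k = 0: steepest-descent-like start
        d = -Fk;
        return;
    end
    y = Fk - Fkm1;                   % y_{k-1}
    denom = max(vartheta*norm(dkm1)*norm(y), norm(Fkm1)^2);
    d = -Fk + ((Fk'*y)*dkm1 - (Fk'*dkm1)*y) / denom;
end
```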

In addition, the hyperplane projection method [17, 29–32] is among the most effective methods for solving large-scale nonlinear monotone equations. The concept of projection was first proposed by Goldstein [33] for convex programming in a Hilbert space. Furthermore, Solodov and Svaiter [34] proposed the hyperplane projection method for solving optimization problems. The specific process of the hyperplane projection method is as follows: let $x_k$ be the current iteration point, and obtain a point $z_k = x_k + \alpha_k d_k$ along a certain line search direction such that $F(z_k)^T (x_k - z_k) > 0$. Due to the monotonicity of $F$, for any point $x^*$ that satisfies $F(x^*) = 0$, it can be deduced that

$$F(z_k)^T (x^* - z_k) \leq 0. \tag{8}$$

Obviously, the hyperplane $H_k = \{x \in \mathbb{R}^n \mid F(z_k)^T (x - z_k) = 0\}$ strictly separates the current iteration point $x_k$ from the solution set of equation (1). The point $x_k$ is projected onto the hyperplane $H_k$ to obtain the next iteration point $x_{k+1}$, i.e.,

$$x_{k+1} = x_k - \frac{F(z_k)^T (x_k - z_k)}{\|F(z_k)\|^2} F(z_k). \tag{9}$$

The hyperplane projection method has been proved to possess good theoretical properties and numerical performance for nonlinear monotone equations [6, 34].
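A minimal MATLAB sketch of the projection step (9) (function and variable names are ours) is:

```matlab
% Hyperplane projection step, formula (9) (a sketch).
% x, z : current iterate and trial point; Fz : F(z).
function xnew = project_step(x, z, Fz)
    xnew = x - (Fz'*(x - z)) / norm(Fz)^2 * Fz;
end
```

Note that only the already computed value $F(z_k)$ appears in (9), so the projection step itself adds no extra function evaluations.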

Furthermore, we know that the search directions tend to be poorly scaled in conjugate gradient methods. Consequently, in the line search, more function evaluations must be carried out to obtain an appropriate step length $\alpha_k$. Andrei [35] presented an acceleration scheme that modifies the step length in a multiplicative manner to improve the reduction of the function values along the iterations. The accelerated step length is defined as follows:

$$\bar{\alpha}_k = \eta_k \alpha_k, \quad \eta_k = \frac{a_k}{b_k}, \tag{10}$$

where $a_k = \alpha_k g_k^T d_k$ and $b_k = -\alpha_k (g_z - g_k)^T d_k$, with $g_z = \nabla f(x_k + \alpha_k d_k)$. If $b_k \neq 0$, let $x_{k+1} = x_k + \bar{\alpha}_k d_k$. A numerical comparison with some conjugate gradient algorithms shows that this computational scheme is effective.
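Under the substitution $g \to F$ used later in the ATTPRP algorithm, a minimal MATLAB sketch of this acceleration step (our reading of the scheme; identifiers are ours) is:

```matlab
% Andrei-type acceleration of the step length, adapted to equations
% by replacing the gradient with F (a sketch).
function alpha = accelerate(F, x, d, alpha)
    Fk = F(x);
    Fz = F(x + alpha*d);             % one extra function evaluation
    a  = alpha*(Fk'*d);
    b  = -alpha*((Fz - Fk)'*d);
    if b ~= 0
        alpha = (a/b)*alpha;         % multiplicative modification (10)
    end
end
```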

Inspired by the above discussions, we propose an accelerated conjugate gradient algorithm that combines the TTPRP method, the acceleration step length, and the hyperplane projection method. The main contributions of the algorithm are as follows:

(i) An accelerated conjugate gradient algorithm is introduced for solving nonlinear monotone equations.
(ii) All search directions of the algorithm satisfy the sufficient descent condition.
(iii) All search directions of the algorithm belong to a trust region.
(iv) The global convergence of the presented algorithm is proved.
(v) The numerical results show that the proposed algorithm is more effective for nonlinear monotone equations.
(vi) The algorithm can be applied to restore an original image from an image damaged by impulse noise.

This paper is organized as follows: in the next section, we discuss the ATTPRP algorithm and global convergence analysis. In Section 3, we report the preliminary numerical experiments to show that the algorithm is efficient for nonlinear monotone equations and applicable to image restoration problems. In Section 4, the conclusion regarding the proposed algorithm is given.

2. Accelerated Algorithm and Convergence Analysis

In this section, we will propose an accelerated algorithm and prove its global convergence. The steps of the given algorithm are as follows.

2.1. Accelerated Three-Term PRP Conjugate Gradient (ATTPRP) Algorithm

Step 0: choose an initial point $x_0 \in \mathbb{R}^n$ and constants $\beta > 0$, $\rho \in (0, 1)$, $\sigma > 0$, $\vartheta > 0$, and $\varepsilon > 0$; let $k := 0$.
Step 1: stop if $\|F(x_k)\| \leq \varepsilon$. Otherwise, compute $d_k$ by using formula (7).
Step 2: choose $\alpha_k$ satisfying inequality (4).
Step 3: if $b_k \neq 0$, then set $\alpha_k := \eta_k \alpha_k$ by formula (10) (with the gradient replaced by $F$).
Step 4: compute the trial point $z_k = x_k + \alpha_k d_k$.
Step 5: if $\|F(z_k)\| \leq \varepsilon$, stop and let $x_{k+1} = z_k$. Otherwise, determine $x_{k+1}$ by using formula (9).
Step 6: let $k := k + 1$. Go to Step 1.
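Putting the steps together, a compact MATLAB sketch of the ATTPRP iteration, reusing the helper routines sketched above (all identifiers and the inner iteration cap are ours), could read:

```matlab
% ATTPRP driver (a sketch; reuses linesearch, ttprp_direction,
% accelerate, and project_step sketched earlier).
function x = attprp(F, x, beta, rho, sigma, vartheta, tol, maxIter)
    Fk = F(x); Fkm1 = []; dkm1 = [];
    for k = 0:maxIter
        if norm(Fk) <= tol, return; end              % Step 1: stop test
        d = ttprp_direction(Fk, Fkm1, dkm1, vartheta);
        alpha = linesearch(F, x, d, beta, rho, sigma, 50);  % Step 2
        alpha = accelerate(F, x, d, alpha);          % Step 3: acceleration
        z = x + alpha*d;                             % Step 4: trial point
        Fz = F(z);
        if norm(Fz) <= tol                           % Step 5: stop at z
            x = z; return;
        end
        Fkm1 = Fk; dkm1 = d;
        x = project_step(x, z, Fz);                  % projection (9)
        Fk = F(x);
    end
end
```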

The following lemma shows that the search direction designed by using formula (7), with the gradient $g_k$ replaced by $F_k = F(x_k)$ and $y_{k-1} = F_k - F_{k-1}$ in the setting of problem (1), has not only the sufficient descent property but also the trust region property, independent of the line search.

Lemma 1. Let $d_k$ be defined by using formula (7); then, we obtain

$$F_k^T d_k = -\|F_k\|^2 \tag{11}$$

and

$$\|F_k\| \leq \|d_k\| \leq \left(1 + \frac{2}{\vartheta}\right) \|F_k\|. \tag{12}$$

Proof. If $k = 0$, formulas (11) and (12) are obviously true. If $k \geq 1$, we obtain from formula (7) that

$$F_k^T d_k = -\|F_k\|^2 + \frac{(F_k^T y_{k-1})(F_k^T d_{k-1}) - (F_k^T d_{k-1})(F_k^T y_{k-1})}{\max\{\vartheta \|d_{k-1}\| \|y_{k-1}\|, \|F_{k-1}\|^2\}} = -\|F_k\|^2. \tag{13}$$

In addition, by formula (7), we get $\|F_k\|^2 = |F_k^T d_k| \leq \|F_k\| \|d_k\|$, so $\|d_k\| \geq \|F_k\|$, and

$$\|d_k\| \leq \|F_k\| + \frac{2 \|F_k\| \|y_{k-1}\| \|d_{k-1}\|}{\vartheta \|d_{k-1}\| \|y_{k-1}\|} = \left(1 + \frac{2}{\vartheta}\right) \|F_k\|, \tag{14}$$

where the Cauchy–Schwarz inequality and $\max\{\vartheta \|d_{k-1}\| \|y_{k-1}\|, \|F_{k-1}\|^2\} \geq \vartheta \|d_{k-1}\| \|y_{k-1}\|$ are used. Then, the proof is completed.
The following assumption needs to be established in order to study some properties of the ATTPRP algorithm:

Assumption 1. (i) The solution set of problem (1) is nonempty. (ii) The function $F(x)$ is Lipschitz continuous on $\mathbb{R}^n$; that is, there exists a positive constant $L$ satisfying

$$\|F(x) - F(y)\| \leq L \|x - y\|, \quad \forall\, x, y \in \mathbb{R}^n. \tag{15}$$

Remark 1. Assumption 1(ii), together with the boundedness of the iterates (see Lemma 3 below), implies that $\{\|F(x_k)\|\}$ is bounded; that is, there exists a constant $\kappa > 0$ such that

$$\|F(x_k)\| \leq \kappa, \quad \forall k \geq 0. \tag{16}$$

In the remainder of this paper, if not specifically stated, we always assume that the conditions in Assumption 1 hold.

Lemma 2. Let $\{x_k\}$ and $\{d_k\}$ be generated by the ATTPRP algorithm. The step length $\alpha_k$ generated by the ATTPRP algorithm satisfies

$$\alpha_k \geq \min\left\{\beta, \frac{\rho \|F(x_k)\|^2}{(L + \sigma \kappa) \|d_k\|^2}\right\}, \tag{17}$$

where $L$ is the Lipschitz constant from (15) and $\kappa$ is the constant from (16).

Proof. By line search (4), if $\alpha_k = \beta$, inequality (17) holds trivially. Assuming $\alpha_k \neq \beta$, let $\alpha_k' = \alpha_k / \rho$; by the definition of $\alpha_k$, $\alpha_k'$ does not satisfy the line search (4). That is,

$$-F(x_k + \alpha_k' d_k)^T d_k < \sigma \alpha_k' \|F(x_k + \alpha_k' d_k)\| \|d_k\|^2. \tag{18}$$

By using formula (11), the Lipschitz continuity of $F$, and the bound $\|F(x_k + \alpha_k' d_k)\| \leq \kappa$, we have

$$\|F(x_k)\|^2 = -F(x_k)^T d_k = \left(F(x_k + \alpha_k' d_k) - F(x_k)\right)^T d_k - F(x_k + \alpha_k' d_k)^T d_k < L \alpha_k' \|d_k\|^2 + \sigma \alpha_k' \|F(x_k + \alpha_k' d_k)\| \|d_k\|^2 \leq (L + \sigma \kappa) \alpha_k' \|d_k\|^2, \tag{19}$$

namely,

$$\alpha_k = \rho \alpha_k' > \frac{\rho \|F(x_k)\|^2}{(L + \sigma \kappa) \|d_k\|^2}. \tag{20}$$

This leads to the desired inequality (17). The proof is completed.
The following lemma is similar to Lemma 1 in the study of Solodov and Svaiter [34] and also holds for the ATTPRP algorithm. Therefore, we state it below but omit its proof.

Lemma 3. Let the sequence $\{x_k\}$ be generated by the ATTPRP algorithm. Suppose that $x^*$ is a solution of problem (1), that is, $F(x^*) = 0$. We obtain

$$\|x_{k+1} - x^*\|^2 \leq \|x_k - x^*\|^2 - \|x_{k+1} - x_k\|^2. \tag{21}$$

In particular, the sequence $\{x_k\}$ is bounded, and

$$\sum_{k=0}^{\infty} \|x_{k+1} - x_k\|^2 < \infty. \tag{22}$$

Remark 2. The above lemma reveals that the distance from the iterates to the solution set of problem (1) decreases along the iterations. Moreover, for any $k \geq 0$, it follows from formulas (9) and (4) that

$$\|x_{k+1} - x_k\| = \frac{F(z_k)^T (x_k - z_k)}{\|F(z_k)\|} \geq \sigma \alpha_k^2 \|d_k\|^2. \tag{23}$$

In particular, combining (22) and (23), we obtain

$$\lim_{k \to \infty} \alpha_k \|d_k\| = 0. \tag{24}$$

In the following part, the global convergence and the strong global convergence properties of the ATTPRP algorithm will be proven.

Theorem 1. Let $\{x_k\}$ be generated by the ATTPRP algorithm. Then, we have

$$\liminf_{k \to \infty} \|F(x_k)\| = 0. \tag{25}$$

Proof. We will prove this theorem by contradiction. Supposing that equation (25) does not hold, there exists a constant $\varepsilon_0 > 0$ such that $\|F(x_k)\| \geq \varepsilon_0$ holds for all $k \geq 0$. From formula (12), we have

$$\|d_k\| \geq \|F(x_k)\| \geq \varepsilon_0, \quad \forall k \geq 0. \tag{26}$$

According to Lemma 3 and equation (24), the sequence $\{x_k\}$ is bounded and $\alpha_k \|d_k\|$ converges to zero. By formulas (7) and (16), for all $k \geq 0$, we obtain

$$\|d_k\| \leq \left(1 + \frac{2}{\vartheta}\right) \|F(x_k)\| \leq \left(1 + \frac{2}{\vartheta}\right) \kappa = M, \tag{27}$$

where $M := (1 + 2/\vartheta)\kappa$; the last inequality shows that $\{d_k\}$ is bounded. Thus, from formulas (16), (17), (26), and (27), we obtain that

$$\alpha_k \|d_k\| \geq \min\left\{\beta \|d_k\|, \frac{\rho \|F(x_k)\|^2}{(L + \sigma \kappa) \|d_k\|}\right\} \geq \min\left\{\beta \varepsilon_0, \frac{\rho \varepsilon_0^2}{(L + \sigma \kappa) M}\right\} > 0. \tag{28}$$

This contradicts formula (24). Consequently, the proof is completed.
The following theorem indicates the strong global convergence of the ATTPRP algorithm; it is similar to Theorem 1 in [6]. We also give a specific proof for the convenience of the reader.

Theorem 2. Let $\{x_k\}$ be generated by the ATTPRP algorithm. Then, the whole sequence $\{x_k\}$ converges to a solution of problem (1).

Proof. Theorem 1, together with the continuity of $F$ and the boundedness of $\{x_k\}$, shows that there exists a subsequence of $\{x_k\}$ converging to a solution $\bar{x}$ of problem (1). On the other hand, it follows from Lemma 3 that the sequence $\{\|x_k - \bar{x}\|\}$ converges. Therefore, the whole sequence $\{x_k\}$ converges to $\bar{x}$.

3. Numerical Experiments

In this section, the numerical experiments are divided into two parts for illustration. The first subsection involves normal nonlinear equations, and the second subsection describes image restoration problems. All tests in this section are coded in MATLAB R2017a and run on a PC with an Intel(R) Core(TM) i5-4460 3.20 GHz CPU, 8.00 GB of SDRAM, and the Windows 7 operating system.

3.1. Normal Nonlinear Equations

In this subsection, we perform some numerical experiments to show the effectiveness of the ATTPRP algorithm. Some test problems and their relevant initial points are listed as follows:

Function 1. Exponential Function 1:Initial guess: .

Function 2. Exponential Function 2:Initial guess: .

Function 3. Singular function:Initial guess: .

Function 4. Logarithmic function:Initial guess: .

Function 5. Broyden tridiagonal function:Initial guess: .

Function 6. Trigexp function:Initial guess: .

Function 7. Strictly convex Function 1: is the gradient of .Initial guess: .

Function 8. Variable dimensioned function:Initial guess: .

Function 9. Tridiagonal system:Initial guess: .

Function 10. Five-diagonal system:Initial guess: .

Function 11. Extended Freudenstein and Roth function ( is even):
For i = 1, 2, …,Initial guess: .

Function 12. Brent problem:Initial guess: .
To test the numerical performance of the ATTPRP algorithm, we also perform the experiments with the LS algorithm and the TTPRP algorithm. The columns of Tables 1–3 have the following meanings:

NO: the serial number of the problem
Dim: the variable dimension
NI: the number of iterations
NF: the number of function evaluations
CPU: the calculation time in seconds
GN: the final norm of the function value when the program stops
Initialization: fixed values are assigned to the parameters $\beta$, $\rho$, $\sigma$, $\vartheta$, and $\varepsilon$ of Step 0
Stop rule: we stop the process when the condition $\|F(x_k)\| \leq \varepsilon$ holds or NI reaches the prescribed maximum

From Tables 1–3, it is obvious that the three methods can successfully solve most of the test problems within the NI limit. However, for Function 3 with 9000 and 90000 variables, the TTPRP and LS algorithms cannot handle the function, but the proposed algorithm solves it within the NI limit. To show the methods' performance more directly, we use the tool of Dolan and Moré [36], which draws the performance profiles of the methods. Using this tool, we obtain Figures 1–3, which correspond to the NI, NF, and CPU columns of Tables 1–3. In Figure 1, the ATTPRP algorithm solves all test problems at a smaller performance ratio than the LS and TTPRP algorithms; thus, the ATTPRP algorithm performs slightly better than the other two algorithms. In Figure 2, the presented algorithm solves all test problems, while the TTPRP and LS algorithms solve smaller fractions of the test problems; thus, it is not difficult to see that the ATTPRP algorithm is more competitive than the other two methods. In Figure 3, the curve of the ATTPRP algorithm lies above those of the TTPRP and LS algorithms, which indicates that the proposed algorithm is more robust than the other two algorithms in terms of CPU time. In summary, the improvement achieved by the presented method is noticeable.
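For readers who wish to reproduce such comparisons, a minimal MATLAB sketch of a Dolan–Moré performance profile (the interface and variable names are ours, not from [36]) is:

```matlab
% Dolan-More performance profile (a sketch).
% T(p,s) : cost (e.g., NI, NF, or CPU) of solver s on problem p;
% use Inf for failures. Plots the fraction of problems solved within
% a factor tau of the best solver.
function perfprofile(T, names)
    [np, ns] = size(T);
    R = T ./ min(T, [], 2);              % performance ratios
    taus = sort(unique(R(isfinite(R))));
    for s = 1:ns
        rho = arrayfun(@(t) sum(R(:,s) <= t)/np, taus);
        semilogx(taus, rho); hold on;    % profile of solver s
    end
    legend(names); xlabel('\tau'); ylabel('\rho(\tau)');
end
```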

3.2. Image Restoration Problems

The purpose of this subsection is to recover the original image from an image damaged by impulse noise, a task of practical significance in optimization fields. The selection of parameters is similar to that in the above subsection. The process stops when the prescribed tolerance is met or the iteration limit is reached. For the experiments, Cameraman, Barbara, and Man are chosen as the test images. We also perform experiments to compare the ATTPRP algorithm with the TTPRP algorithm, where the step length of the ATTPRP algorithm is generated by Step 2 and Step 3. More detailed performance results are shown in Figures 4–6. It is not difficult to see that both the ATTPRP and TTPRP algorithms succeed in restoring the three images. The CPU times are listed in Table 4 to compare the ATTPRP algorithm with the TTPRP algorithm.
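For reference, impulse (salt-and-pepper) noise of the densities used in these experiments can be generated with MATLAB's Image Processing Toolbox as follows (a sketch of the noise model only; the paper's restoration pipeline is not reproduced here):

```matlab
% Add salt-and-pepper noise to a test image and report PSNR (a sketch).
I = imread('cameraman.tif');                 % 256 x 256 test image
for density = [0.3 0.5 0.7]                  % 30%, 50%, 70% noise
    J = imnoise(I, 'salt & pepper', density);
    fprintf('noise %.0f%%: PSNR = %.2f dB\n', ...
            100*density, psnr(J, I));
end
```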

From Figures 4–6, we can note that both algorithms restore the noisy images with 30%, 50%, and 70% salt-and-pepper noise well. In addition, the results in Table 4 show that the ATTPRP algorithm and the TTPRP algorithm both succeed in restoring these images with comparable CPU times, with the presented algorithm being slightly more competitive than the TTPRP algorithm on the 30%, 50%, and 70% noise problems.

4. Conclusions

In this paper, an accelerated conjugate gradient algorithm that combines the TTPRP method, the acceleration step length, and the hyperplane projection technique is proposed. All search directions generated by the algorithm automatically have the sufficient descent and trust region properties. The global convergence of the proposed algorithm is established under suitable conditions. The numerical results show that the proposed algorithm is effective. The image restoration experiments also demonstrate that the proposed algorithm is successful.

For future research, we have some ideas as follows: (i) If the acceleration scheme is introduced into the quasi-Newton method, does it retain some good properties? (ii) Can the acceleration scheme be introduced into the trust region method to solve unconstrained optimization problems and nonlinear equations? (iii) Can the proposed algorithm be applied to machine learning?

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant No. 11661009), the High Level Innovation Teams and Excellent Scholars Program in Guangxi Institutions of Higher Education (Grant No. (2019)52), and the Guangxi Natural Science Key Fund (No. 2017GXNSFDA198046).