Special Issue: Machine Learning and its Applications in Image Restoration
Research Article | Open Access
Zhou Sheng, Dan Luo, "A Cauchy Point Direction Trust Region Algorithm for Nonlinear Equations", Mathematical Problems in Engineering, vol. 2020, Article ID 4072651, 9 pages, 2020. https://doi.org/10.1155/2020/4072651
A Cauchy Point Direction Trust Region Algorithm for Nonlinear Equations
In this paper, a Cauchy point direction trust region algorithm is presented to solve nonlinear equations. The search direction is an optimal convex combination of the trust region direction and the Cauchy point direction, with the sufficient descent property and the automatic trust region property. The global convergence of the proposed algorithm is proven under some conditions. Preliminary numerical results demonstrate that the proposed algorithm is promising and has better convergence behavior than two existing algorithms for solving large-scale nonlinear equations.
The system of nonlinear equations is one of the most important mathematical models, with extensive applications in chemical equilibrium problems [1], hydraulic circuits of power plants [2], L-shaped beam structures [3], and image restoration problems [4]. Some \(\ell_1\)-norm regularized optimization problems in compressive sensing [5] can also be reformulated as systems of nonlinear equations. In this paper, we consider the following system of nonlinear equations:

\[F(x) = 0, \quad x \in \mathbb{R}^n, \tag{1}\]

where \(F: \mathbb{R}^n \to \mathbb{R}^n\) is continuously differentiable. Let

\[f(x) = \frac{1}{2}\|F(x)\|^2, \tag{2}\]

where \(\|\cdot\|\) denotes the Euclidean norm. It is clear that the nonlinear system of equations (1) is equivalent to the following unconstrained optimization problem:

\[\min_{x \in \mathbb{R}^n} f(x). \tag{3}\]
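As a small numerical illustration of this equivalence (a sketch; the two-equation system below is invented purely for this purpose), a root of \(F\) is exactly a point where the merit function \(f\) vanishes:

```python
import numpy as np

# Hypothetical 2x2 system F(x) = 0, made up for illustration.
def F(x):
    return np.array([x[0]**2 + x[1] - 3.0,   # x1^2 + x2 = 3
                     x[0] - x[1] + 1.0])     # x1 - x2 = -1

def f(x):
    """Merit function f(x) = 0.5 * ||F(x)||^2 from the reformulation."""
    r = F(x)
    return 0.5 * r @ r

x_star = np.array([1.0, 2.0])   # F(x_star) = 0 by construction
print(f(x_star))                # the merit function vanishes at a root
```

Any minimizer of \(f\) with objective value zero is therefore a solution of the original system.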
A variety of numerical algorithms have been studied in the literature for solving the nonlinear system of equations, such as the conjugate gradient algorithm [6–9], the Levenberg–Marquardt algorithm [10–13], and the trust region algorithm [14–21]. The trust region method plays an important role in the area of nonlinear optimization. It was proposed by Powell [22] to solve nonlinear optimization problems by using an iterative structure. Recently, several trust region-based methods have been developed to solve nonlinear equations and have shown very promising results. For instance, Fan and Pan [16] discussed a regularized trust region subproblem with an adaptive trust region radius, which possesses quadratic convergence under the local error bound condition. Yuan et al. [17] proposed a trust region-based algorithm for nonlinear equations in which the Jacobian is updated by the BFGS formula; the algorithm is globally convergent with a quadratic convergence rate. Since it does not compute the Jacobian matrix at each iteration, the computational burden is significantly reduced. Esmaeili and Kimiaei studied an adaptive trust region algorithm with a moderate radius size by employing a nonmonotone technique, and global and quadratic local convergence was established; however, the maximal dimension of their test problems was only 300. Qi et al. [23] proposed an asymptotic Newton search direction, an adequate combination of the projected gradient direction and the projected trust region direction, for solving box-constrained nonsmooth equations and proved global and quadratic local convergence. Motivated by this search direction, Yuan et al. [24] applied the idea to nonsmooth unconstrained optimization problems and also showed global convergence.
Inspired by the fact that the search direction of Qi et al. [23] speeds up the local convergence rate compared with the gradient direction, we propose an efficient Cauchy point direction trust region algorithm to solve (1) in this paper. The search direction is an optimal convex combination of the trust region direction and the Cauchy point direction, with the automatic trust region property and the sufficient descent property, which can improve the numerical performance. We also show the global convergence of the proposed algorithm. Numerical results indicate that this algorithm has better convergence behavior than the classical trust region algorithm and the adaptive regularized trust region algorithm. However, a local convergence analysis is not given.
The remainder of this paper is organized as follows. In Section 2, we introduce our motivation and a new search direction. In Section 3, we give a Cauchy point direction trust region algorithm for solving problem (1). The global convergence of the proposed algorithm is proven in Section 4. Preliminary numerical results are reported in Section 5. We conclude our paper in Section 6.
Throughout this paper, unless otherwise specified, \(\|\cdot\|\) denotes the Euclidean norm of vectors and the induced norm of matrices.
2. Motivation and Search Direction
In this section, motivated by the definition of the Cauchy point, which is parallel to the steepest descent direction, we present a search direction based on the Cauchy point. Let \(\Delta_k\) be the trust region radius and \(g_k = \nabla f(x_k)\).
2.1. Cauchy Point Direction
Noting that \(B_k = J_k^{T} J_k\) is a positive semidefinite matrix, we define the Cauchy point direction at the current point \(x_k\) as

\[d_k^{C} = -\tau_k \frac{\Delta_k}{\|g_k\|}\, g_k, \tag{4}\]

where

\[\tau_k = \min\left\{1, \ \frac{\|g_k\|^{3}}{\Delta_k\, g_k^{T} B_k g_k}\right\}, \tag{5}\]

and \(J_k\) is the Jacobian of \(F\) at \(x_k\) or its approximation.
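For concreteness, the Cauchy point computation can be sketched in Python as follows. This follows the standard textbook construction (see, e.g., Conn, Gould, and Toint [26]); the matrix \(J\), residual \(F(x)\), and radius used below are invented for illustration, so the notation may differ in detail from the paper's:

```python
import numpy as np

def cauchy_point(g, B, delta):
    """Textbook Cauchy point for min g'd + 0.5 d'Bd, ||d|| <= delta.

    Here B = J'J is positive semidefinite, so the curvature g'Bg >= 0.
    """
    gnorm = np.linalg.norm(g)
    if gnorm == 0.0:
        return np.zeros_like(g)
    curv = g @ B @ g
    if curv <= 0.0:
        tau = 1.0                                  # zero curvature: step to the boundary
    else:
        tau = min(1.0, gnorm**3 / (delta * curv))  # clipped unconstrained minimizer
    return -tau * (delta / gnorm) * g

# Illustrating the automatic trust region property ||d_C|| <= delta:
J = np.array([[2.0, 1.0], [1.0, -1.0]])   # made-up Jacobian
Fx = np.array([0.5, -0.3])                # made-up residual F(x)
g, B, delta = J.T @ Fx, J.T @ J, 0.8
d_c = cauchy_point(g, B, delta)
print(np.linalg.norm(d_c) <= delta + 1e-12)   # True
```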
The following lemma gives a nice property of the Cauchy point direction .
Proposition 1 (automatic trust region property). Suppose that \(d_k^{C}\) is defined by (4); then \(\|d_k^{C}\| \le \Delta_k\).

Proof. From (4) and (5), we have \(\|d_k^{C}\| = \tau_k \Delta_k \le \Delta_k\), since \(\tau_k \le 1\). This completes the proof.
From the definition of \(d_k^{C}\) and the above proposition, the Cauchy point direction \(d_k^{C}\) possesses both the descent property and the automatic trust region property.
2.2. Trust Region Direction
Let the trial step \(d_k^{TR}\) be a solution of the following trust region subproblem:

\[\min_{d \in \mathbb{R}^n} \ m_k(d) = g_k^{T} d + \frac{1}{2} d^{T} B_k d, \quad \text{s.t.} \ \|d\| \le \Delta_k, \tag{6}\]

where \(B_k\) is the same as defined above.
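Since the numerical experiments later solve subproblem (6) approximately by the Steihaug method [27], a minimal sketch of Steihaug's truncated conjugate gradient iteration may be helpful; the parameter names and tolerances here are our own choices, not the paper's:

```python
import numpy as np

def steihaug_cg(g, B, delta, tol=1e-10, max_iter=100):
    """Steihaug truncated CG for min g'd + 0.5 d'Bd s.t. ||d|| <= delta.

    A standard sketch (after Steihaug, 1983).
    """
    d = np.zeros_like(g)
    r = g.copy()                    # residual of the system Bd = -g
    p = -r
    if np.linalg.norm(r) < tol:
        return d
    for _ in range(max_iter):
        Bp = B @ p
        curv = p @ Bp
        if curv <= 0.0:             # negative curvature: go to the boundary
            return d + _boundary_step(d, p, delta) * p
        alpha = (r @ r) / curv
        d_next = d + alpha * p
        if np.linalg.norm(d_next) >= delta:   # step leaves the region
            return d + _boundary_step(d, p, delta) * p
        r_next = r + alpha * Bp
        if np.linalg.norm(r_next) < tol:
            return d_next
        beta = (r_next @ r_next) / (r @ r)
        d, r = d_next, r_next
        p = -r + beta * p
    return d

def _boundary_step(d, p, delta):
    """Positive root tau of ||d + tau*p|| = delta."""
    a, b, c = p @ p, 2.0 * (d @ p), d @ d - delta**2
    return (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
```

With \(B = I\) and a large radius, the method returns the unconstrained minimizer \(-g\); with a small radius, it returns a step on the trust region boundary.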
From the famous result of Powell [22], we give the following lemma and omit its proof.

Lemma 1. Let \(d_k^{TR}\) be a solution of the trust region subproblem (6); then

\[m_k(0) - m_k(d_k^{TR}) \ge \frac{1}{2}\|g_k\| \min\left\{\Delta_k, \frac{\|g_k\|}{\|B_k\|}\right\}. \tag{7}\]
As mentioned by Qi et al. [23] and Yuan et al. [24], when \(x_k\) is far from the optimal solution, the trust region direction \(d_k^{TR}\) may not possess the descent property. By contrast, it follows from the definition of the Cauchy point direction that \(d_k^{C}\) always possesses the descent property. Consequently, if the search direction is defined as a combination of \(d_k^{TR}\) and \(d_k^{C}\), it will be a descent direction. Hence, we define a new search direction in convex combination form in the next subsection.
2.3. Search Direction
Let \(t_k\) be a solution of the following one-dimensional minimization problem:

\[\min_{t \in [0,1]} \ f\big(x_k + t\, d_k^{TR} + (1-t)\, d_k^{C}\big), \tag{8}\]

and the search direction is defined as

\[d_k = t_k\, d_k^{TR} + (1 - t_k)\, d_k^{C}. \tag{9}\]
For the above one-dimensional minimization problem, one can use the golden section method, the Fibonacci method, the parabolic method [25], etc. The idea of the search direction follows from Qi et al. [23]. However, in this paper we use the Cauchy point direction, which possesses the automatic trust region property, and obtain the optimal \(t_k\) by applying the golden section method to problem (8).
3. New Algorithm
In this section, let \(x^{*}\) be the optimal solution of (3). The actual reduction is defined as

\[\mathrm{Ared}_k = f(x_k) - f(x_k + d_k), \tag{10}\]

and the predicted reduction is

\[\mathrm{Pred}_k = m_k(0) - m_k(d_k). \tag{11}\]
Thus, the ratio \(r_k\) between \(\mathrm{Ared}_k\) and \(\mathrm{Pred}_k\) is defined by

\[r_k = \frac{\mathrm{Ared}_k}{\mathrm{Pred}_k}. \tag{12}\]
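Trust region methods use this ratio to accept or reject the trial step and to resize the radius. The following sketch shows the usual pattern; the threshold constants below are standard illustrative choices, not necessarily the values used in the paper:

```python
def update_radius(ratio, delta, step_norm, eta1=0.25, eta2=0.75,
                  shrink=0.5, grow=2.0, delta_max=1e6):
    """Generic trust region update driven by the ratio r = Ared/Pred.

    Returns (accepted, new_delta). All constants are illustrative defaults.
    """
    if ratio < eta1:                       # poor model agreement: reject and shrink
        return False, shrink * step_norm
    if ratio > eta2:                       # very good agreement: allow the radius to grow
        return True, min(grow * delta, delta_max)
    return True, delta                     # acceptable agreement: keep the radius

accepted, delta = update_radius(0.9, delta=1.0, step_norm=0.8)
print(accepted, delta)   # True 2.0
```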
We now list the detailed steps of the Cauchy point direction trust region algorithm in Algorithm 1.
Algorithm 1. Cauchy point direction trust region algorithm for solving (3).
Step 1. Given an initial point \(x_0 \in \mathbb{R}^n\), an initial radius \(\Delta_0 > 0\), and the algorithmic parameters, let \(k = 0\).
Step 2. If the termination condition is satisfied at the iterate \(x_k\), stop. Otherwise, go to Step 3.
Step 3. Solve the trust region subproblem (6) to obtain \(d_k^{TR}\), and compute \(d_k^{C}\) and \(\tau_k\) from (4) and (5), respectively.
Step 4. Solve the one-dimensional minimization problem (8) by Algorithm 2 to obtain \(t_k\), and calculate the search direction \(d_k\) from (9).
Step 5. Compute \(r_k\); if the acceptance conditions (13) and (14) are satisfied, let \(x_{k+1} = x_k + d_k\) and update the trust region radius by (15).
Step 6. If the trial step is rejected, reduce the radius and return to Step 3. Otherwise, let \(k = k + 1\) and return to Step 2.
In Step 4 of Algorithm 1, we use the golden section method as given in Algorithm 2.
Algorithm 2. Golden section algorithm [25] for solving problem (8), where \(\phi(t)\) denotes the objective function of (8).
Step 1. Given the initial interval \([a_1, b_1] = [0, 1]\) and a tolerance \(\delta > 0\), compute \(\lambda_1 = a_1 + 0.382(b_1 - a_1)\) and \(\mu_1 = a_1 + 0.618(b_1 - a_1)\). Let \(j = 1\).
Step 2. If \(\phi(\lambda_j) > \phi(\mu_j)\), go to Step 3. Otherwise, go to Step 4.
Step 3. If \(b_j - \lambda_j \le \delta\), stop and output \(\mu_j\). Otherwise, let \(a_{j+1} = \lambda_j\), \(b_{j+1} = b_j\), and \(\lambda_{j+1} = \mu_j\); compute \(\mu_{j+1} = a_{j+1} + 0.618(b_{j+1} - a_{j+1})\) and go to Step 5.
Step 4. If \(\mu_j - a_j \le \delta\), stop and output \(\lambda_j\). Otherwise, let \(a_{j+1} = a_j\), \(b_{j+1} = \mu_j\), and \(\mu_{j+1} = \lambda_j\); compute \(\lambda_{j+1} = a_{j+1} + 0.382(b_{j+1} - a_{j+1})\) and go to Step 5.
Step 5. Let \(j = j + 1\); return to Step 2.
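A compact implementation of the golden section search used in Algorithm 2 might look as follows. This is a generic sketch assuming a unimodal objective on the interval; the commented usage line at the end shows how it would be applied to the convex combination parameter, with hypothetical names:

```python
import numpy as np

GOLD = (np.sqrt(5.0) - 1.0) / 2.0   # golden ratio conjugate, ~0.618

def golden_section(phi, a=0.0, b=1.0, tol=1e-6):
    """Classical golden section search for a minimizer of phi on [a, b].

    Reuses one function evaluation per step, shrinking the interval by
    the factor ~0.618 each iteration.
    """
    x1 = b - GOLD * (b - a)
    x2 = a + GOLD * (b - a)
    f1, f2 = phi(x1), phi(x2)
    while b - a > tol:
        if f1 < f2:                      # minimizer lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - GOLD * (b - a)
            f1 = phi(x1)
        else:                            # minimizer lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + GOLD * (b - a)
            f2 = phi(x2)
    return 0.5 * (a + b)

# In Algorithm 1, the objective would be the merit function along the
# convex combination of the two directions (hypothetical names):
#   t_opt = golden_section(lambda t: f(x + t*d_tr + (1 - t)*d_c))
t_opt = golden_section(lambda t: (t - 0.3) ** 2)
print(t_opt)
```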
4. Global Convergence
Proof. From the definitions of \(d_k\) and \(\mathrm{Pred}_k\), together with Lemma 1, inequality (16) holds, and therefore, the proof is completed.
The next lemma indicates an important property of the Cauchy point direction \(d_k^{C}\): the sufficient descent property.
Lemma 3. The Cauchy point direction \(d_k^{C}\) satisfies the sufficient descent property for any \(k\).
Assumption 1. The Jacobian of \(F\) is bounded; i.e., there exists a positive constant \(M\) such that \(\|J(x)\| \le M\) for any \(x \in \mathbb{R}^n\).
In the sequel, we show that Steps 3–6 of Algorithm 1 do not cycle infinitely in the inner loop.
Lemma 4. Suppose that Assumption 1 holds, and let \(x_k\) be generated by Algorithm 1. If \(x_k\) is not a stationary point of problem (3), then Algorithm 1 is well defined; i.e., the inner cycle of Algorithm 1 terminates in a finite number of internal iterations.
Proof. Since \(x_k\) is not a stationary point, there exists a positive constant such that the lower bound (21) holds. Since Assumption 1 holds, there exists a positive constant such that the bound (22) holds. Then, from (4) and (22), we obtain (23); noting (21), we further have (24), and therefore, (13) holds. Next, we prove that (14) is also satisfied. By contradiction, assume that Steps 3–6 of Algorithm 1 cycle in the inner loop infinitely; then the trust region radius \(\Delta_k\) tends to zero.
From the definition of \(r_k\), we get (25). It follows from (13) and Lemma 3 that there exists a positive constant such that (26) holds. By the definition of \(d_k\), and noting Proposition 1, we further get (27). Thus, from (26) and (27), we have (28), and therefore (14) holds when the trust region radius is sufficiently small. However, the updating rule in Step 5 of Algorithm 1 contradicts the fact that \(\Delta_k \to 0\). Hence, the cycling between Step 3 and Step 6 terminates finitely. This completes the proof.
Based on the above lemmas, we can now establish the global convergence of Algorithm 1.

Theorem 1. Suppose that Assumption 1 holds and that the sequence \(\{x_k\}\) is generated by Algorithm 1. Then

\[\liminf_{k \to \infty} \|g_k\| = 0. \tag{29}\]
Proof. Suppose that the conclusion is not true. Then there exists a positive constant \(\epsilon\) such that, for any \(k\),

\[\|g_k\| \ge \epsilon. \tag{30}\]

From Lemma 2, Assumption 1, and the above inequality, we obtain (31). Noting the updating rule of \(\Delta_k\), (31), and Lemma 4, we deduce (32), which yields a contradiction. This means that (30) cannot hold; i.e., the result (29) of the theorem is true, and therefore, the proof is completed.
5. Numerical Results
In this section, the numerical results of the proposed algorithm for solving problem (1) are reported. We denote the proposed Algorithm 1 by CTR and compare it with two existing algorithms: the adaptive trust region algorithm of Fan and Pan [16], denoted ATR, and the classical trust region algorithm [26], denoted TTR. In the numerical experiments, the parameters of Algorithm 1 and the tolerance in Algorithm 2 were fixed for all test problems. The Steihaug method [27] is employed to obtain an approximate trial step for the trust region subproblem (6). The algorithms are terminated if the number of iterations exceeds 2000.
All codes were implemented in MATLAB R2010b. The numerical experiments were performed on a PC with an Intel Pentium (R) Dual-Core CPU at 3.20 GHz and 2.00 GB of RAM, running the Windows 7 operating system.
The test problems are listed in Table 1 and can also be found in [28]. In Tables 1–3, “No.” denotes the number of the test problem, “Problem name” denotes the name of the test problem, the next two columns give the initial point and the dimension of the problem, “NI” is the total number of iterations, “NF” is the number of function evaluations, and the last two columns report the optimal function value and the mean value of the convex combination parameter. A failure entry indicates that the algorithm stopped because the number of iterations exceeded the maximum while the termination condition was not yet satisfied.
Tables 2 and 3 show that the proposed algorithm efficiently solves these test problems and is competitive with the other two algorithms in NI and NF for most problems. For problems 1, 2, 16, 19, and 20, the mean combination parameter is close to zero, which implies that the trust region direction was used many times. Moreover, the number of iterations is the same as that of algorithm TTR, which suggests that our algorithm speeds up the local convergence of the iterates. For the other test problems, the mean combination parameter is not too large. However, for problems 4 and 8, algorithms ATR and TTR fail to obtain the optimal solution when the dimension is large. We also used the performance profiles of Dolan and Moré [29] to compare the algorithms with respect to NI and NF, as shown in Figures 1 and 2. Tables 2 and 3 and Figures 1 and 2 thus indicate that the proposed algorithm is promising and performs better on the test problems than the two existing algorithms. In order to investigate the convergence behavior of the three algorithms, we plot the error curve on problem 10 in Figure 3, where \(x^{*}\) is the optimal point. Figure 3 indicates that Algorithm 1 converges faster than the others.
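For reference, a performance profile in the sense of Dolan and Moré [29] can be computed from a cost matrix as sketched below; the small data matrix is invented purely for illustration:

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More profile rho_s(tau): the fraction of problems that
    solver s solves within a factor tau of the best solver.

    T is an (n_problems x n_solvers) cost matrix (e.g., NI or NF);
    np.inf marks failures.
    """
    best = T.min(axis=1, keepdims=True)        # best cost per problem
    ratios = T / best                          # performance ratios r_{p,s}
    return np.array([[np.mean(ratios[:, s] <= tau)
                      for s in range(T.shape[1])] for tau in taus])

# Made-up iteration counts: 3 problems x 2 solvers.
T = np.array([[10.0, 12.0],
              [ 8.0, 20.0],
              [np.inf, 5.0]])                  # solver 1 fails on problem 3
rho = performance_profile(T, taus=[1.0, 2.0])
print(rho)
```

Plotting \(\rho_s(\tau)\) against \(\tau\) for each solver yields curves such as those in Figures 1 and 2, where higher curves indicate better relative performance.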
In this paper, we presented a Cauchy point direction trust region algorithm for solving the nonlinear system of equations. This algorithm combines the trust region direction and the Cauchy point direction, which possesses the descent property and the automatic trust region property. The optimal convex combination parameter is determined by the golden section algorithm, an exact one-dimensional line search method. The new search direction was proven to have the descent property and the trust region property, and we showed the global convergence of the proposed algorithm under suitable conditions. More importantly, the numerical results demonstrated faster convergence of the proposed algorithm than two existing methods, which is promising for solving large-scale nonlinear equations. However, the convergence rate of the proposed algorithm remains unclear. Therefore, establishing the theoretical convergence rate of the proposed algorithm and applying it to image restoration and compressive sensing problems will be among the topics of our future study.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (11261006 and 11661009) and the Guangxi Natural Science Foundation (2020GXNSFAA159069).
References
[1] K. Meintjes and A. P. Morgan, “Chemical equilibrium systems as numerical test problems,” ACM Transactions on Mathematical Software, vol. 16, no. 2, pp. 143–151, 1990.
[2] A. A. Levin, V. Chistyakov, E. A. Tairov, and V. F. Chistyakov, “On application of the structure of the nonlinear equations system, describing hydraulic circuits of power plants, in computations,” Bulletin of the South Ural State University. Series “Mathematical Modelling, Programming and Computer Software,” vol. 9, no. 4, pp. 53–62, 2016.
[3] F. Georgiades, “Nonlinear equations of motion of L-shaped beam structures,” European Journal of Mechanics—A/Solids, vol. 65, pp. 91–122, 2017.
[4] G. Yuan, T. Li, and W. Hu, “A conjugate gradient algorithm for large-scale nonlinear equations and image restoration problems,” Applied Numerical Mathematics, vol. 147, pp. 129–141, 2020.
[5] Y. Xiao and H. Zhu, “A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing,” Journal of Mathematical Analysis and Applications, vol. 405, no. 1, pp. 310–319, 2013.
[6] Z. Papp and S. Rapajić, “FR type methods for systems of large-scale nonlinear monotone equations,” Applied Mathematics and Computation, vol. 269, pp. 816–823, 2015.
[7] W. Zhou and F. Wang, “A PRP-based residual method for large-scale monotone nonlinear equations,” Applied Mathematics and Computation, vol. 261, pp. 1–7, 2015.
[8] G. Yuan and M. Zhang, “A three-terms Polak-Ribière-Polyak conjugate gradient algorithm for large-scale nonlinear equations,” Journal of Computational and Applied Mathematics, vol. 286, pp. 186–195, 2015.
[9] G. Yuan, Z. Meng, and Y. Li, “A modified Hestenes and Stiefel conjugate gradient algorithm for large-scale nonsmooth minimizations and nonlinear equations,” Journal of Optimization Theory and Applications, vol. 168, no. 1, pp. 129–152, 2016.
[10] J. Fan and J. Pan, “Convergence properties of a self-adaptive Levenberg-Marquardt algorithm under local error bound condition,” Computational Optimization and Applications, vol. 34, no. 1, pp. 47–62, 2006.
[11] J. Fan, “Accelerating the modified Levenberg-Marquardt method for nonlinear equations,” Mathematics of Computation, vol. 83, no. 287, pp. 1173–1187, 2013.
[12] X. Yang, “A higher-order Levenberg-Marquardt method for nonlinear equations,” Applied Mathematics and Computation, vol. 219, no. 22, pp. 10682–10694, 2013.
[13] K. Amini and F. Rostami, “A modified two steps Levenberg-Marquardt method for nonlinear equations,” Journal of Computational and Applied Mathematics, vol. 288, pp. 341–350, 2015.
[14] J.-l. Zhang and Y. Wang, “A new trust region method for nonlinear equations,” Mathematical Methods of Operations Research (ZOR), vol. 58, no. 2, pp. 283–298, 2003.
[15] J. Fan, “Convergence rate of the trust region method for nonlinear equations under local error bound condition,” Computational Optimization and Applications, vol. 34, no. 2, pp. 215–227, 2006.
[16] J. Fan and J. Pan, “An improved trust region algorithm for nonlinear equations,” Computational Optimization and Applications, vol. 48, no. 1, pp. 59–70, 2011.
[17] G. Yuan, Z. Wei, and X. Lu, “A BFGS trust-region method for nonlinear equations,” Computing, vol. 92, no. 4, pp. 317–333, 2011.
[18] H. Esmaeili and M. Kimiaei, “A new adaptive trust-region method for system of nonlinear equations,” Applied Mathematical Modelling, vol. 38, no. 11-12, pp. 3003–3015, 2014.
[19] H. Esmaeili and M. Kimiaei, “An efficient adaptive trust-region method for systems of nonlinear equations,” International Journal of Computer Mathematics, vol. 92, no. 1, pp. 151–166, 2015.
[20] M. Kimiaei and H. Esmaeili, “A trust-region approach with novel filter adaptive radius for system of nonlinear equations,” Numerical Algorithms, vol. 73, no. 4, pp. 999–1016, 2016.
[21] H. Esmaeili and M. Kimiaei, “A trust-region method with improved adaptive radius for systems of nonlinear equations,” Mathematical Methods of Operations Research, vol. 83, no. 1, pp. 109–125, 2016.
[22] M. J. D. Powell, “Convergence properties of a class of minimization algorithms,” in Nonlinear Programming, pp. 1–27, Elsevier, Amsterdam, Netherlands, 1975.
[23] L. Qi, X. J. Tong, and D. H. Li, “Active-set projected trust-region algorithm for box-constrained nonsmooth equations,” Journal of Optimization Theory and Applications, vol. 120, no. 3, pp. 601–625, 2004.
[24] G. Yuan, Z. Wei, and Z. Wang, “Gradient trust region algorithm with limited memory BFGS update for nonsmooth convex minimization,” Computational Optimization and Applications, vol. 54, no. 1, pp. 45–64, 2013.
[25] Y. X. Yuan and W. Sun, Optimization Theory and Methods, Science Press, Beijing, China, 1997.
[26] A. R. Conn, N. I. M. Gould, and P. L. Toint, Trust-Region Methods, Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2000.
[27] T. Steihaug, “The conjugate gradient method and trust regions in large scale optimization,” SIAM Journal on Numerical Analysis, vol. 20, no. 3, pp. 626–637, 1983.
[28] W. La Cruz, J. M. Martínez, and M. Raydan, “Spectral residual method without gradient information for solving large-scale nonlinear systems of equations,” Mathematics of Computation, vol. 75, no. 255, pp. 1429–1449, 2006.
[29] E. D. Dolan and J. J. Moré, “Benchmarking optimization software with performance profiles,” Mathematical Programming, vol. 91, no. 2, pp. 201–213, 2002.
Copyright © 2020 Zhou Sheng and Dan Luo. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.