Advances in Operations Research


Research Article | Open Access


Gonglin Yuan, Shide Meng, Zengxin Wei, "A Trust-Region-Based BFGS Method with Line Search Technique for Symmetric Nonlinear Equations", Advances in Operations Research, vol. 2009, Article ID 909753, 22 pages, 2009.

A Trust-Region-Based BFGS Method with Line Search Technique for Symmetric Nonlinear Equations

Academic Editor: Khosrow Moshirvaziri
Received: 29 Apr 2009
Revised: 19 Aug 2009
Accepted: 28 Oct 2009
Published: 30 Nov 2009


A trust-region-based BFGS method is proposed for solving symmetric nonlinear equations. In the given algorithm, if the trial step is unsuccessful, a line search technique is used instead of repeatedly solving the subproblem of the normal trust-region method. We establish the global and superlinear convergence of the method under suitable conditions. Numerical results show that the given method is competitive with the normal trust-region method.

1. Introduction

Consider the following system of nonlinear equations:

$$F(x) = 0, \quad x \in \mathbb{R}^n, \tag{1.1}$$

where $F : \mathbb{R}^n \to \mathbb{R}^n$ is continuously differentiable and the Jacobian $\nabla F(x)$ of $F$ is symmetric for all $x \in \mathbb{R}^n$. Let $\theta$ be the norm function defined by $\theta(x) = \frac{1}{2}\|F(x)\|^2$. Then the nonlinear equations (1.1) are equivalent to the following global optimization problem:

$$\min \theta(x), \quad x \in \mathbb{R}^n. \tag{1.2}$$
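The problem setting above can be made concrete with a small numerical sketch. The system below (a linear symmetric part plus a sine term) is purely illustrative and not one of the paper's test problems; it only demonstrates the merit function $\theta(x) = \frac{1}{2}\|F(x)\|^2$ of (1.2):

```python
import numpy as np

# Illustrative sketch (not from the paper): a small symmetric system
# F(x) = Ax + sin(x) with A symmetric, so the Jacobian A + diag(cos(x))
# is symmetric, and the merit function theta(x) = (1/2)*||F(x)||^2 of (1.2).

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])   # symmetric part of the system

def F(x):
    return A @ x + np.sin(x)

def theta(x):
    Fx = F(x)
    return 0.5 * float(Fx @ Fx)

x_star = np.zeros(2)          # F(0) = A*0 + sin(0) = 0, so x* = 0 is a root
print(theta(x_star))          # 0.0 at the solution
print(theta(np.ones(2)) > 0)  # True: theta is positive away from a root
```

Minimizing $\theta$ to zero and solving $F(x) = 0$ are thus the same task, which is what makes the optimization viewpoint of (1.2) usable.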

There are two main classes of numerical methods for nonlinear equations: line search methods and trust-region methods. Line search methods often use the following iterative formula to solve (1.1):

$$x_{k+1} = x_k + \alpha_k d_k, \tag{1.3}$$

where $x_k$ is the $k$th iterate, $\alpha_k$ is a steplength, and $d_k$ is a search direction. To begin, we briefly review some methods for (1.1) based on line search techniques. First, we recall some techniques for choosing the steplength $\alpha_k$. Brown and Saad [1] proposed the following line search method to obtain the stepsize $\alpha_k$:

$$\theta(x_k + \alpha_k d_k) - \theta(x_k) \le \sigma \alpha_k \nabla\theta(x_k)^T d_k, \tag{1.4}$$

where $\sigma \in (0, 1)$. Based on this technique, Zhu [2] gave the nonmonotone line search technique:

$$\theta(x_k + \alpha_k d_k) - \max_{0 \le j \le m(k)} \theta(x_{k-j}) \le \sigma \alpha_k \nabla\theta(x_k)^T d_k, \tag{1.5}$$

where $m(0) = 0$, $0 \le m(k) \le \min\{m(k-1) + 1, M\}$, and $M$ is a nonnegative integer. From these two techniques (1.4) and (1.5), it is easy to see that the Jacobian matrix $\nabla F(x_k)$ must be computed at every iteration, which increases the workload, especially for large-scale problems or when this matrix is expensive to calculate. Considering these points, we [3] presented a new backtracking inexact technique to obtain the stepsize $\alpha_k$:

where $d_k$ is a solution of a system of linear equations. We established the global convergence and the superlinear convergence of this method. The numerical results showed that the new line search technique is more effective than the normal methods. Li and Fukushima [4] proposed an approximate monotone line search technique to obtain the step-size $\alpha_k$ satisfying

$$\theta(x_k + \alpha_k d_k) - \theta(x_k) \le -\sigma_1 \|\alpha_k d_k\|^2 - \sigma_2 \|\alpha_k F(x_k)\|^2 + \epsilon_k \theta(x_k), \tag{1.7}$$

where $\sigma_1$ and $\sigma_2$ are positive constants, $\alpha_k = r^{i_k}$ with $i_k$ the smallest nonnegative integer $i$ such that (1.7) holds for $\alpha = r^i$, and the positive sequence $\{\epsilon_k\}$ satisfies

$$\sum_{k=0}^{\infty} \epsilon_k < \infty. \tag{1.8}$$

Combining the line search (1.7) with a special BFGS update formula, they obtained some better results (see [4]). Inspired by their idea, Wei [5] and Yuan [6–8] presented several approximate methods. Further work can be found in [9].

Second, we present some techniques for determining the search direction $d_k$. One of the most effective methods is Newton's method. It normally requires the fewest function evaluations and is very good at handling ill-conditioning. However, its efficiency largely depends on the ability to efficiently solve the linear system that arises when computing the search direction at each iteration:

$$\nabla F(x_k) d_k = -F(x_k). \tag{1.9}$$

Moreover, the exact solution of the system (1.9) can be too burdensome to compute, and it is not necessary when $x_k$ is far from a solution [10]. Inexact Newton methods [2, 3, 10] represent the basic approach underlying most Newton-type large-scale algorithms. At each iteration, the current estimate of the solution is updated by approximately solving the linear system (1.9) using an iterative algorithm. The inner iteration is typically “truncated” before the exact solution of the linear system is reached. Griewank [11] first proposed Broyden's rank-one method for nonlinear equations and obtained global convergence. At present, many algorithms have been proposed for solving the two problems (1.1) and (1.2) (see [12–22], etc.).
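As a point of reference for the Newton-type methods discussed above, the following sketch runs the exact Newton iteration of (1.9) on a toy symmetric system. The test function and starting point are assumptions for illustration only, not the paper's test problems:

```python
import numpy as np

# Hedged sketch of the (exact) Newton iteration (1.9):
#   solve  J(x_k) d_k = -F(x_k)  and set x_{k+1} = x_k + d_k.
# The system F(x) = Ax + sin(x) has a symmetric Jacobian, matching the
# setting of (1.1); it is an illustrative assumption.

A = np.array([[2.0, -1.0], [-1.0, 2.0]])

def F(x):
    return A @ x + np.sin(x)

def J(x):
    return A + np.diag(np.cos(x))         # symmetric Jacobian of F

x = np.array([1.0, -0.5])
for _ in range(20):
    d = np.linalg.solve(J(x), -F(x))      # Newton direction from (1.9)
    x = x + d
    if np.linalg.norm(F(x)) <= 1e-12:
        break

print(np.linalg.norm(F(x)) <= 1e-10)      # True: the residual is driven to zero
```

An inexact Newton method would replace the direct `solve` with a truncated iterative solver; the outer update is the same.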

The trust-region method is an important and efficient class of methods in nonlinear optimization. It can be traced back to the works of Levenberg [17] and Marquardt [18] on nonlinear least-squares problems and the work of Goldfeld et al. [23] on unconstrained optimization. Powell [24] was the first to establish a convergence result for trust-region methods for unconstrained optimization. Fletcher [25, 26] first proposed trust-region algorithms for linearly constrained optimization problems and nonsmooth optimization problems, respectively. The method has been studied by many authors [15, 27–31] and has been applied to equality constrained problems [32–34]. Byrd et al. [35], Fan [36], Powell and Yuan [37], Vardi [38], Yuan [39, 40], Yuan et al. [41], and Zhang and Zhu [42] proposed various trust-region algorithms for constrained optimization problems and established their convergence. Fan [36], Yuan [39], and Zhang [43] presented trust-region algorithms for nonlinear equations and obtained some results.

The normal trust-region subproblem for nonlinear equations is to find the trial step $d_k$ such that

$$\min \|F_k + \nabla F(x_k) d\|, \quad \text{s.t.} \ \|d\| \le \Delta_k, \tag{1.10}$$

where $\Delta_k > 0$ is a scalar called the trust-region radius. Define the predicted descent of the objective function at the $k$th iteration by

$$Pred_k = \|F_k\|^2 - \|F_k + \nabla F(x_k) d_k\|^2, \tag{1.11}$$

the actual descent by

$$Ared_k = \|F_k\|^2 - \|F(x_k + d_k)\|^2, \tag{1.12}$$

and the ratio of actual descent to predicted descent:

$$r_k = \frac{Ared_k}{Pred_k}. \tag{1.13}$$

For the normal trust-region algorithm, if $r_k$ is sufficiently large (this case is called a successful iteration), the next iterate is $x_{k+1} = x_k + d_k$ and the algorithm proceeds to the next step; otherwise, the trust-region radius is reduced and the subproblem (1.10) is solved again. Sometimes this must be done many times, computing the Jacobian matrix $\nabla F(x_k)$ and $F(x_k)$ every time, which obviously increases the computing time and workload, especially for large-scale problems. Even more detrimental, the trust-region subproblem is not very easy to solve (see [36, 39], etc.) for most practical problems.
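The acceptance-and-radius logic described above can be sketched as follows. The thresholds (0.25, 0.75) and scaling factors (0.5, 2.0) are conventional textbook choices, not values taken from this paper:

```python
# Hedged sketch of the trust-region acceptance test built on the ratio
# r_k = Ared_k / Pred_k of (1.13): the ratio decides whether the trial
# step is taken and how the radius Delta_k changes. All constants here
# are conventional choices, assumed for illustration.

def trust_region_update(actual, predicted, delta, delta_max, rho0=1e-4):
    """Return (accept, new_delta) from Ared_k, Pred_k, and the radius."""
    r = actual / predicted if predicted != 0 else 0.0
    accept = r > rho0                        # successful iteration
    if r < 0.25:
        delta = 0.5 * delta                  # poor model fit: shrink the region
    elif r > 0.75:
        delta = min(2.0 * delta, delta_max)  # good fit: allow a longer step
    return accept, delta

print(trust_region_update(0.9, 1.0, 1.0, 10.0))   # (True, 2.0)
print(trust_region_update(-0.1, 1.0, 1.0, 10.0))  # (False, 0.5)
```

In the normal method, an unsuccessful iteration loops back to re-solve (1.10) with the reduced radius, which is exactly the repeated work the proposed algorithm avoids.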

In order to alleviate the situation that traditional algorithms must compute the Jacobian matrix $\nabla F(x_k)$ and $F(x_k)$ at every iteration while repeatedly re-solving the trust-region subproblem, in this paper we rewrite the trust-region subproblem as

$$\min \|F_k + B_k d\|, \quad \text{s.t.} \ \|d\| \le \Delta_k, \tag{1.14}$$

where the matrix $B_k$ is an approximation to the Jacobian matrix of $F$ at $x_k$. Due to the boundedness of the region $\{d : \|d\| \le \Delta_k\}$, (1.14) has a solution regardless of the definiteness of $B_k$ (see [43]). This implies that it is valid to adopt a BFGS update formula to generate $B_k$ for trust-region methods, and the BFGS update is presented as follows:

$$B_{k+1} = B_k - \frac{B_k s_k s_k^T B_k}{s_k^T B_k s_k} + \frac{y_k y_k^T}{y_k^T s_k}, \tag{1.15}$$

where $s_k = x_{k+1} - x_k$ and $y_k = F_{k+1} - F_k$. Define the predicted descent of the objective function at the $k$th iteration by

$$Pred_k = \|F_k\|^2 - \|F_k + B_k d_k\|^2, \tag{1.16}$$

the actual descent by

$$Ared_k = \|F_k\|^2 - \|F(x_k + d_k)\|^2, \tag{1.17}$$

and the ratio of actual descent to predicted descent:

$$r_k = \frac{Ared_k}{Pred_k}. \tag{1.18}$$

If $r_k \ge \rho$ (called a successful iteration), the next iterate is $x_{k+1} = x_k + d_k$. Otherwise, we use a line search technique to obtain the steplength $\alpha_k$ and let the next iterate be $x_{k+1} = x_k + \alpha_k d_k$. Motivated by the idea of the paper [4], we propose the following line search technique to obtain $\alpha_k$:

$$\theta(x_k + \alpha_k d_k) \le \theta(x_k) - \sigma_1 \|\alpha_k d_k\|^2 - \sigma_2 \|\alpha_k F(x_k)\|^2, \tag{1.19}$$

where $\sigma_1$ and $\sigma_2$ are some positive constants. In Section 3, we will show that (1.19) is well defined. Here and throughout this paper, $\|\cdot\|$ denotes the Euclidean norm of vectors or its induced matrix norm, and $F(x_k)$ is replaced by $F_k$.
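A backtracking loop of the kind just described might look as follows. The acceptance condition is modeled on the Li–Fukushima derivative-free decrease condition; its exact form, the constants, and the test system are assumptions for illustration:

```python
import numpy as np

# Hedged sketch of a backtracking line search in the spirit of (1.19).
# The acceptance test assumed here is the Li-Fukushima-style condition
#   theta(x + a d) <= theta(x) - sigma1*||a d||^2 - sigma2*||a F(x)||^2,
# with a = r**i for the smallest nonnegative integer i; constants are
# illustrative choices.

def backtracking(theta, F, x, d, r=0.5, sigma1=1e-4, sigma2=1e-4, max_i=30):
    th0, Fx = theta(x), F(x)
    a = 1.0
    for _ in range(max_i):
        lhs = theta(x + a * d)
        rhs = th0 - sigma1 * a**2 * float(d @ d) - sigma2 * a**2 * float(Fx @ Fx)
        if lhs <= rhs:
            return a
        a *= r                        # shrink the trial step
    return a

# Illustrative symmetric system (an assumption, not a paper test problem).
A = np.array([[2.0, -1.0], [-1.0, 2.0]])
F = lambda x: A @ x + np.sin(x)
theta = lambda x: 0.5 * float(F(x) @ F(x))

x = np.array([1.0, 1.0])
d = -F(x)                             # a descent direction for theta here
a = backtracking(theta, F, x, d)
print(theta(x + a * d) < theta(x))    # True: the accepted step decreases theta
```

The point of this fallback is that an unsuccessful trial step is salvaged by a cheap scalar search instead of re-solving the subproblem.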

In the next section, the proposed algorithm for solving (1.1) is given. The global and superlinear convergence of the presented algorithm are stated in Section 3 and Section 4, respectively. The numerical results of the method are reported in Section 5.

2. Algorithms

Algorithm 2.1.

Initial: Choose an initial point $x_0$, an initial symmetric positive definite matrix $B_0$, and constants $\rho, r, \sigma_1, \sigma_2 \in (0, 1)$ and $\Delta_0 > 0$. Let $k := 0$.
Step 1: Compute $F_k = F(x_k)$.
Step 2: If $\|F_k\|$ is sufficiently small, stop. Otherwise go to Step 3.
Step 3: Solve the subproblem (1.14) with $\Delta_k$ to get $d_k$.
Step 4: If $r_k < \rho$, go to Step 5; otherwise let $x_{k+1} = x_k + d_k$ and go to Step 6.
Step 5: Let $i_k$ be the smallest nonnegative integer $i$ such that (1.19) holds for $\alpha = r^i$. Let $\alpha_k = r^{i_k}$ and $x_{k+1} = x_k + \alpha_k d_k$.
Step 6: Update $B_k$ to get $B_{k+1}$ by (1.15). Let $k := k + 1$. Go to Step 2.

Here we also give a normal trust-region method for (1.1) and call it Algorithm 2.2.
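Step 3 of Algorithm 2.1 requires solving subproblem (1.14); Section 5 states that the dogleg method is used for this. The sketch below implements a classical dogleg path on the equivalent quadratic $q(d) = \frac{1}{2}\|F_k + B_k d\|^2$, assuming $B_k$ is nonsingular; the details are a conventional reconstruction, not code from the paper:

```python
import numpy as np

# Hedged dogleg sketch for subproblem (1.14):
#   min ||F_k + B_k d||  s.t.  ||d|| <= Delta_k,
# via the quadratic q(d) = (1/2)||F_k + B_k d||^2. The path runs from the
# Cauchy point toward the full (unconstrained) step; B_k nonsingular is
# assumed for illustration.

def dogleg(B, Fk, delta):
    g = B.T @ Fk                         # gradient of q at d = 0
    H = B.T @ B                          # Gauss-Newton Hessian of q
    d_full = np.linalg.solve(B, -Fk)     # unconstrained minimizer of q
    if np.linalg.norm(d_full) <= delta:
        return d_full
    d_c = -(float(g @ g) / float(g @ (H @ g))) * g   # Cauchy point
    if np.linalg.norm(d_c) >= delta:
        return -(delta / np.linalg.norm(g)) * g      # truncated steepest descent
    # otherwise follow the segment d_c + tau*(d_full - d_c) to the boundary
    p = d_full - d_c
    a, b, c = float(p @ p), 2 * float(d_c @ p), float(d_c @ d_c) - delta**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return d_c + tau * p

B = np.array([[2.0, -1.0], [-1.0, 2.0]])
Fk = np.array([1.0, 1.0])
d = dogleg(B, Fk, delta=0.5)
print(np.linalg.norm(d) <= 0.5 + 1e-12)  # True: the step respects the region
```

Only an approximate subproblem solution of this kind is needed, since an unsatisfactory step is handled by the line search of Step 5 rather than by re-solving (1.14).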

Algorithm 2.2 (the normal trust-region algorithm [44]).

Initial: Given a starting point $x_0$, an initial trust-region radius $\Delta_0$, and an upper bound $\Delta_{\max} > 0$ of the trust-region radius. Set $k := 0$.
Step 1: If $\|F_k\|$ is sufficiently small, stop. Otherwise, go to Step 2.
Step 2: Solve the trust-region subproblem (1.10) to obtain $d_k$.
Step 3: Compute $r_k$. If $r_k$ is small, reduce the trust-region radius; if $r_k$ is large and $\|d_k\| = \Delta_k$, enlarge the radius subject to $\Delta_{k+1} \le \Delta_{\max}$; otherwise, keep the radius unchanged.
Step 4: If the iteration is successful, let $x_{k+1} = x_k + d_k$ and go to Step 5; otherwise, go to Step 2 with the reduced radius.
Step 5: Set $k := k + 1$. Go to Step 1.

Remark 2.3. By the mean value theorem, we have the approximate relation $y_k \approx \nabla F(x_{k+1}) s_k$. Since $B_{k+1}$ satisfies the secant equation $B_{k+1} s_k = y_k$ and $\nabla F$ is symmetric, we have approximately $B_{k+1} s_k \approx \nabla F(x_{k+1}) s_k$. This means that $B_{k+1}$ approximates $\nabla F(x_{k+1})$ along direction $s_k$.
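The secant relation discussed in Remark 2.3 can be checked directly: after the BFGS update (1.15), $B_{k+1} s_k = y_k$ holds by construction and symmetry is preserved. The vectors below are illustrative data only, chosen so that $y^T s > 0$:

```python
import numpy as np

# The BFGS update (1.15):
#   B_{k+1} = B_k - (B_k s s^T B_k)/(s^T B_k s) + (y y^T)/(y^T s),
# which enforces the secant equation B_{k+1} s = y exactly.

def bfgs_update(B, s, y):
    Bs = B @ s
    return B - np.outer(Bs, Bs) / float(s @ Bs) + np.outer(y, y) / float(y @ s)

B = np.eye(3)                          # B_0 symmetric positive definite
s = np.array([1.0, 0.5, -0.2])         # illustrative step s_k
y = np.array([2.0, 1.0, 0.1])          # illustrative y_k with y^T s > 0
B_new = bfgs_update(B, s, y)

print(np.allclose(B_new @ s, y))       # True: the secant equation holds
print(np.allclose(B_new, B_new.T))     # True: symmetry is preserved
```

With $y^T s > 0$ (guaranteed in the paper's setting by the positive definiteness of the Jacobian), the update also keeps $B_{k+1}$ positive definite.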

3. The Global Convergence

In this section, we will establish the global convergence of Algorithm 2.1. Let $\Omega$ be the level set defined by

$$\Omega = \{x : \|F(x)\| \le \|F(x_0)\|\}, \tag{3.1}$$

which is bounded.

Assumption 1. (A) $F$ is continuously differentiable on an open convex set $\Omega_1$ containing $\Omega$.
(B) The Jacobian of $F$ is symmetric and bounded on $\Omega_1$; that is, there exists a positive constant $M$ such that $\|\nabla F(x)\| \le M$ for all $x \in \Omega_1$.
(C) $\nabla F(x)$ is positive definite on $\Omega_1$; that is, there is a constant $m > 0$ such that $d^T \nabla F(x) d \ge m \|d\|^2$ for all $x \in \Omega_1$ and $d \in \mathbb{R}^n$.
(D) $\theta$ is differentiable and its gradient satisfies $\|\nabla\theta(x) - \nabla\theta(y)\| \le L \|x - y\|$ for all $x, y \in \Omega_1$, where $L$ is the Lipschitz constant.

By Assumptions 1(A) and 1(B), it is not difficult to get the following inequality:

$$\|F(x) - F(y)\| \le M \|x - y\|, \quad \forall x, y \in \Omega_1. \tag{3.5}$$

According to Assumptions 1(A) and 1(C), we have

$$y_k^T s_k \ge m \|s_k\|^2, \tag{3.6}$$

which means that the update matrix $B_{k+1}$ is always positive definite. By (3.5) and (3.6), we have

$$\frac{\|y_k\|^2}{y_k^T s_k} \le \frac{M^2}{m}. \tag{3.7}$$

Lemma 3.1 (see Theorem 2.1 in [45]). Suppose that Assumption 1 holds. Let $B_k$ be updated by the BFGS formula (1.15), let $B_0$ be symmetric and positive definite, and let $y_k$ and $s_k$ satisfy (3.7) for any $k$. Then there exist positive constants $\beta_1$ and $\beta_2$ such that, for any positive integer $t$, the relations

$$\|B_i s_i\| \le \beta_1 \|s_i\|, \qquad s_i^T B_i s_i \ge \beta_2 \|s_i\|^2$$

hold for at least $\lceil t/2 \rceil$ values of $i \in \{1, \ldots, t\}$.

Considering the subproblem (1.14), we make the following assumption, which is similar to the one used in [2].

Assumption 2. $B_k$ is a good approximation to $\nabla F(x_k)$; that is, $B_k$ satisfies

$$\|(\nabla F(x_k) - B_k) d_k\| \le \epsilon \|F_k\|, \tag{3.10}$$

where $\epsilon \in (0, 1)$ is a small quantity.

Lemma 3.2. Let Assumption 2 hold. Then $d_k$ is a descent direction for $\theta$ at $x_k$; that is, $\nabla\theta(x_k)^T d_k < 0$.

Proof. Let be the residual associated with $d_k$, so that: So we have Therefore, taking the norm of the right-hand side of the above equality, from Assumption 2 we have Hence, the lemma is satisfied.

According to the above lemma, it is easy to deduce that the norm function $\theta$ decreases along the direction $d_k$, which means that $\theta(x_{k+1}) \le \theta(x_k)$ is true.

Lemma 3.3. Let $\{x_k\}$ be generated by Algorithm 2.1 and suppose that Assumption 2 holds. Then $\{x_k\} \subset \Omega$. Moreover, $\{\theta(x_k)\}$ converges.

Proof. By Lemma 3.2, we have $\theta(x_{k+1}) \le \theta(x_k)$. Then we conclude from Lemma 3.3 in [46] that $\{\theta(x_k)\}$ converges. Moreover, we have $\theta(x_k) \le \theta(x_0)$ for all $k$. This implies that $\{x_k\} \subset \Omega$.

Lemma 3.4. Let Assumption 1 hold. Then the following inequalities hold.

Proof. Since the update matrix is positive definite, problem (1.14) has a unique solution, which together with some multiplier satisfies the following equations: From (3.18), we can obtain By (3.19) and (3.8), we get (3.16), which also implies that the inequality (3.17) holds.

The next lemma will show that (1.19) is reasonable, and then Algorithm 2.1 is well defined.

Lemma 3.5. Let Assumptions 1(D) and 2 hold. Then there exists a step-size $\alpha_k$ such that (1.19) holds in a finite number of backtracking steps.

Proof. From Lemma 3.8 in [1] we have that, in a finite number of backtracking steps, must satisfy By (3.12) and (3.14), let and we have where the last inequality follows from (3.16) and (3.17). Then we obtain (1.19). The proof is complete.

Lemma 3.6. Let be generated by Algorithm 2.1. Suppose that Assumptions 1 and 2 hold. Then one has In particular, one has

Proof. By (3.8) and (3.19), we have From Step 4 of Algorithm 2.1, if is true, we get otherwise, if is true, by Step 5 of Algorithm 2.1, (3.8), and (3.26), we can obtain By Lemma 3.5, we know that (1.19) can be satisfied in a finite number of backtracking steps, which means that there exists a constant satisfying for all By (3.26) and (3.27), we have where According to (3.28), we get and by Lemma 3.3, we know that is convergent. Therefore, we deduce that (3.23) holds. According to (3.23), it is easy to deduce (3.24). The proof is complete.

Lemma 3.7. Suppose that Assumptions 1 and 2 hold. Then there are positive constants such that, for any , if , then the following inequalities hold:

Proof. We will prove this lemma in the following two cases.

Case 1. By (3.18), we have and . Together with (3.8) and (3.19), we get Then (3.30) holds with and .

Case 2. From (3.19) and (3.8), we have Then, we get By (3.10) and (3.8), it is easy to deduce that So we obtain Using (3.20), we have Therefore, (3.30) holds. The proof is complete.

In the next theorem, we establish the global convergence of Algorithm 2.1.

Theorem 3.8. Let $\{x_k\}$ be generated by Algorithm 2.1 and let the conditions in Assumptions 1 and 2 hold. Then one has

$$\lim_{k \to \infty} \|F_k\| = 0. \tag{3.35}$$

Proof. By Lemma 3.6, we have Combining (3.8) and (3.36), we get Together with (3.30), we obtain (3.35). The proof is complete.

4. The Superlinear Convergence Analysis

In this section, we will present the superlinear convergence of Algorithm 2.1.

Assumption 3. $\nabla F$ is Hölder continuous at $x^*$; that is, for every $x$ in a neighborhood of $x^*$, there are positive constants $c$ and $\gamma$ such that $\|\nabla F(x) - \nabla F(x^*)\| \le c \|x - x^*\|^\gamma$, where $x^*$ stands for the unique solution of (1.1) in $\Omega$.

Lemma 4.1. Let be generated by Algorithm 2.1 and the conditions in Assumptions 1 and 2 hold. Then, for any fixed , one has Moreover, one has where .

Proof. Using Assumption 1, we have the following inequality: By (3.8) and (3.30), we have Together with (3.28), we get and let Suppose that there exists a positive integer such that (3.8) holds. Then we obtain where This together with (4.4) shows that holds for all large enough. Therefore, for any we have (4.2). Notice that from (4.2), we can get (4.3).

Lemma 4.2. Let Assumptions 1, 2, and 3 hold. Then, for all sufficiently large, there exists a positive constant such that where .

Proof. From Theorem 3.8 and (4.4), it is not difficult to get Then (4.1) holds for all large enough. Using the mean value theorem, for all sufficiently large, we have where . Therefore, the inequality (4.8) holds.

Lemma 4.3. Let Assumptions 1, 2, and 3 hold and let be generated by Algorithm 2.1. Denote , . Then, for all large , there are positive constants , and such that where , is the Frobenius norm of a matrix, and is defined as follows: In particular, and are bounded.

Proof. From (1.15), we have where the last inequality follows from inequality (49) of [47]. Hence, (4.10) holds. By (4.8), in a way similar to that of [46], we can prove that (4.11) holds and that and are bounded. The proof is complete.

Lemma 4.4. Let be generated by Algorithm 2.1 and the conditions in Assumptions 1, 2 and 3 hold. Then where

Proof. In a similar way to [46], it is not difficult to obtain On the other hand, we have where the last inequality follows from (4.8). We know that and are bounded, and is positive definite. By (3.5), we get Combining (4.15) and (4.17), we conclude that (4.14) holds. The proof is complete.

Theorem 4.5. Let the conditions in Assumptions 1, 2, and 3 hold, and let $\epsilon \to 0$ in (3.10). Then the sequence $\{x_k\}$ generated by Algorithm 2.1 converges to $x^*$ superlinearly.

Proof. For all we get where the last inequality follows from (3.10). By (3.5), we have Dividing both sides by we get Substituting this into (4.18), we can obtain which means that Since and as by (4.14) and (3.10), we have Using (3.16), we get Considering (4.4), we have Therefore, we obtain the superlinear convergence result. The proof is complete.

5. Numerical Results

In this section, we test the proposed BFGS trust-region method on symmetric nonlinear equations and compare it with Algorithm 2.2. The following problems with various sizes will be solved.

Problem 1. The discretized two-point boundary value problem like the problem in [48] is where is the tridiagonal matrix given by and with
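Problems of this discretized boundary-value type can be sketched generically. The matrix $A = \mathrm{tridiag}(-1, 2, -1)$ and the nonlinearity below are assumptions for illustration and need not match the exact problem of [48]; the point is that the resulting Jacobian is symmetric, as the setting of (1.1) requires:

```python
import numpy as np

# Hedged sketch of a generic discretized two-point boundary value residual
#   F(x) = Ax + h^2 * phi(x),  A = tridiag(-1, 2, -1),  h = 1/(n+1).
# Its Jacobian A + h^2 * diag(phi'(x)) is symmetric. The choice phi = sin
# is an illustrative assumption, not the paper's exact problem.

def bvp_residual(x, phi=np.sin):
    n = x.size
    h = 1.0 / (n + 1)
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # tridiag(-1, 2, -1)
    return A @ x + h**2 * phi(x)

x = np.zeros(5)
print(np.allclose(bvp_residual(x), 0.0))  # True: x = 0 is a root since sin(0) = 0
```

Varying the dimension `n` gives the small- and large-scale instances of the kind reported in the tables below.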

Problem 2. The unconstrained optimization problem is with the Engval function [49] defined by The related symmetric nonlinear equation is where with

In the experiments, the parameters in Algorithm 2.1 were chosen as , , , and . We obtain $d_k$ from the subproblem (1.14) by the well-known dogleg method. The parameters in Algorithm 2.2 were chosen as and . Since the matrices can be singular, we solve (1.10) by to obtain . The program was coded in . We stopped the iteration when the condition was satisfied. If the iteration number exceeded one thousand, we also stopped the program, and the method was considered to have failed. For Algorithm 2.1, Tables 1(a) and 1(b) and Tables 2(a) and 2(b) show the performance of the method on Problem 1 and Problem 2, respectively. For Algorithm 2.2, Tables 1(c) and 1(d) and Tables 2(c) and 2(d) show the performance of the normal trust-region method on Problem 1 and Problem 2, respectively. The columns of the tables have the following meaning:

Dim: the dimension of the problem;
NI: the total number of iterations;
NG: the number of function evaluations;
EG: the norm of the function evaluations.

(a) (Small-scales). Test results for Algorithm 2.1.

x0: (1, …, 1)   (60, …, 60)   (600, …, 600)

13/24/2.406624e-07  14/25/2.272840e-07  17/30/3.104130e-07  13/24/2.449361e-07  14/25/2.398188e-07  17/30/4.593832e-07
48/101/2.189696e-07  49/102/4.009098e-07  50/103/2.147571e-07  48/101/2.181267e-07  49/102/4.008911e-07  50/103/2.120250e-07
82/171/6.794811e-07  89/188/6.345939e-07  91/190/7.804790e-07  82/171/8.358725e-07  89/188/6.367964e-07  91/190/7.801889e-07

x0: (1, 0, 1, 0, …)   (60, 0, 60, 0, …)   (600, 0, 600, 0, …)

21/42/7.364467e-07  22/43/3.922363e-07  22/45/4.894966e-07  21/44/3.463471e-08  22/43/3.860638e-07  22/45/4.895404e-07
72/153/9.350290e-07  86/181/4.420131e-07  88/185/7.620218e-07  70/151/6.776281e-07  86/181/4.420083e-07  49/83/8.003368e-07
73/156/9.013346e-07  88/185/7.631881e-07  88/191/6.856481e-07  74/161/9.918464e-07  88/185/7.368909e-07  88/191/6.856897e-07

(b) (Large-scales). Test results for Algorithm 2.1.

x0: (1, …, 1)   (60, …, 60)   (600, …, 600)

83/178/9.096568e-7  106/225/9.483206e-7  117/250/8.796828e-7  85/180/7.376219e-7  106/225/9.263058e-7  117/250/8.779599e-7
85/180/8.830573e-7  103/218/9.825658e-7  115/244/9.765194e-7  83/178/7.659650e-7  103/218/9.796118e-7  115/244/9.755827e-7
76/165/8.611337e-7  96/207/8.301215e-7  105/224/9.957816e-7  76/165/8.587066e-7  96/207/8.291876e-7  105/224/9.925005e-7

x0: (1, 0, 1, 0, …)   (60, 0, 60, 0, …)   (600, 0, 600, 0, …)

68/149/8.780047e-7  91/194/7.484521e-7  101/216/9.790557e-7  69/150/9.770900e-7  91/194/7.275693e-7  101/216/9.559911e-7
72/155/9.797645e-07  96/205/9.993161e-7  106/225/8.916405e-7  72/155/9.886969e-7  97/206/7.492841e-7  106/225/8.921008e-7
69/152/9.919863e-7  93/200/6.930976e-7  106/227/8.119328e-7  69/152/9.948500e-7  93/200/6.946308e-7  106/227/8.123102e-7

(c) (Small-scales). Test results for Algorithm 2.2.

x0: (1, …, 1)   (60, …, 60)   (600, …, 600)

54/107/8.039519e-7  67/133/7.624248e-7  74/147/8.167466e-7  54/107/8.061366e-7  67/133/7.624560e-7  74/147/8.167469e-7
60/119/7.614838e-7  73/145/8.350445e-7  80/159/9.679851e-7  60/119/7.615091e-7  73/145/8.350450e-7  80/159/9.679851e-7

x0: (1, 0, 1, 0, …)   (60, 0, 60, 0, …)   (600, 0, 600, 0, …)

52/103/7.605486e-7  64/127/9.929883e-7  72/143/7.732628e-7  52/103/7.646868e-7  64/127/9.930747e-7  72/143/7.732660e-7
56/111/8.896898e-7  69/137/9.690007e-7  77/153/8.223484e-7  56/111/8.899175e-7  69/137/9.690048e-7  77/153/8.223488e-7
57/113/9.598124e-7  71/141/7.734909e-7  78/155/8.965851e-7  57/113/9.598763e-7  71/141/7.734918e-7  78/155/8.965852e-7

(d) (Large-scales). Test results for Algorithm 2.2.

x0: (1, …, 1)   (60, …, 60)   (600, …, 600)

61/121/8.110467e-7  74/147/8.917908e-7  82/163/7.610549e-7  61/121/8.110534e-7  74/147/8.917909e-7  82/163/7.610549e-7
62/123/9.526492e-7  76/151/7.712044e-7  83/165/8.958279e-7  62/123/9.526504e-7  76/151/7.712044e-7  83/165/8.958279e-7
63/125/9.938699e-7  77/153/8.049274e-7  84/167/9.351920e-7  63/125/9.938703e-7  77/153/8.049274e-7  84/167/9.351920e-7

x0: (1, 0, 1, 0, …)   (60, 0, 60, 0, …)   (600, 0, 600, 0, …)

59/117/7.503172e-7  72/143/8.249912e-7  79/157/9.576414e-7  59/117/7.503296e-7  72/143/8.249914e-7  79/157/9.576414e-7
60/119/8.811245e-7  73/145/9.701366e-7  81/161/8.285552e-7  60/119/8.811269e-7  73/145/9.701367e-7  81/161/8.285552e-7
61/121/9.191890e-7  75/149/7.444393e-7  82/163/8.649128e-7  61/121/9.191896e-7  75/149/7.444393e-7  82/163/8.649128e-7

(a) (Small-scales). Test results for Algorithm 2.1.

x0: (0.5, …, 0.5)   (1, …, 1)   (3, …, 3)

25/44/9.720971e-07  21/32/4.889567e-07  92/103/2.475812e-08  21/32/8.691255e-07  46/59/5.860956e-07  86/105/6.348374e-08
37/56/9.950345e-07  39/56/8.776379e-07  113/139/9.587026e-07  40/63/6.984106e-07  69/96/9.523480e-07  103/125/9.404211e-07
42/59/9.725361e-07  41/60/7.374460e-07  113/135/7.909796e-07  40/55/8.380367e-07  117/489/9.805302e-07  97/129/7.975248e-07

x0: (0.5, 0, 0.5, 0, …)   (1, 0, 1, 0, …)   (3, 0, 3, 0, …)

24/35/4.711749e-07  21/30/3.147507e-07  44/65/3.529113e-07  27/48/4.004367e-07  39/76/8.503415e-07  29/42/7.623619e-07
36/57/8.776354e-07  36/57/8.287552e-07  54/77/8.491652e-07  41/64/9.492805e-07  42/69/9.029472e-07  58/77/9.752703e-07