#### Abstract

A new trust region method is presented, which combines a nonmonotone line search technique, a self-adaptive update rule for the trust region radius, and a weighting technique for the ratio between the actual reduction and the predicted reduction. Under reasonable assumptions, the global convergence of the method is established for unconstrained nonconvex optimization. Numerical results show that the new method is efficient and robust for solving unconstrained optimization problems.

#### 1. Introduction

Consider the following unconstrained optimization problem:
$$\min_{x \in \mathbb{R}^n} f(x), \tag{1}$$
where $f : \mathbb{R}^n \to \mathbb{R}$ is continuously differentiable.

Trust region methods compute a trial step $d_k$ by solving the following subproblem at each iteration:
$$\min_{d \in \mathbb{R}^n} \; m_k(d) = g_k^T d + \frac{1}{2} d^T B_k d \quad \text{s.t.} \quad \|d\| \le \Delta_k, \tag{2}$$
where $g_k = \nabla f(x_k)$, $B_k$ is a symmetric matrix approximating the Hessian of $f$ at $x_k$, and $\Delta_k$ is the trust region radius at iteration $k$. Throughout this paper, $\|\cdot\|$ denotes the Euclidean norm on $\mathbb{R}^n$. Define the ratio
$$\rho_k = \frac{f(x_k) - f(x_k + d_k)}{m_k(0) - m_k(d_k)} = \frac{\mathrm{Ared}_k}{\mathrm{Pred}_k}; \tag{3}$$
the numerator and the denominator are called the actual reduction and the predicted reduction, respectively.
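To make the ratio test concrete, the following Python sketch computes a Cauchy step for the quadratic model and the resulting reduction ratio. This is an illustrative fragment, not the authors' implementation; the function names are ours.

```python
import numpy as np

def cauchy_step(g, B, delta):
    """Cauchy point: minimize the quadratic model along -g inside the
    trust region of radius delta. Assumes g != 0."""
    gnorm = np.linalg.norm(g)
    gBg = g @ B @ g
    # Step length of the unconstrained minimizer along -g, clipped to the boundary.
    tau = min(gnorm**3 / (delta * gBg), 1.0) if gBg > 0 else 1.0
    return -(tau * delta / gnorm) * g

def reduction_ratio(f, x, d, g, B):
    """rho = actual reduction / predicted reduction for m(d) = g'd + d'Bd/2."""
    ared = f(x) - f(x + d)
    pred = -(g @ d + 0.5 * d @ B @ d)  # m(0) - m(d)
    return ared / pred
```

For a quadratic objective with $B_k$ equal to the exact Hessian, the model is exact and the ratio equals 1.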

The nonmonotone line search technique was first proposed by Grippo et al. [1] in a line search framework for Newton's method. At each iteration, the reference function value is taken as
$$f(x_{l(k)}) = \max_{0 \le j \le m(k)} f(x_{k-j}), \tag{4}$$
where $m(0) = 0$, $0 \le m(k) \le \min\{m(k-1) + 1, N\}$, and $N$ is a positive integer. Although many algorithms based on (4) work well in many cases, a "good" function value generated during the iteration process may be discarded because of the max function in (4), and the choice of $N$ is sometimes sensitive in numerical tests. To overcome these shortcomings, Zhang and Hager [2] proposed a new nonmonotone line search technique in which the maximum in (4) is replaced by the weighted average $C_k$, defined by
$$C_k = \frac{\eta_{k-1} Q_{k-1} C_{k-1} + f(x_k)}{Q_k}, \qquad Q_k = \eta_{k-1} Q_{k-1} + 1, \tag{5}$$
where $C_0 = f(x_0)$, $Q_0 = 1$, and $\eta_{k-1} \in [\eta_{\min}, \eta_{\max}] \subseteq [0, 1]$. Numerical tests have shown that the new nonmonotone algorithm is more effective.
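The two reference values can be contrasted in a few lines of Python. This is a hedged sketch of the Grippo et al. maximum and the Zhang–Hager averaged value; the variable names are our choice.

```python
def grippo_reference(f_history, N):
    """Grippo et al.: the reference value is the maximum of the most
    recent N+1 function values."""
    return max(f_history[-(N + 1):])

def zhang_hager_update(C_prev, Q_prev, f_new, eta):
    """Zhang-Hager: C_k is a weighted average of all past function values.
    eta in [0, 1]; eta = 0 recovers the monotone case C_k = f(x_k),
    eta = 1 gives the plain running average."""
    Q_new = eta * Q_prev + 1.0
    C_new = (eta * Q_prev * C_prev + f_new) / Q_new
    return C_new, Q_new
```

With `eta = 1` the average treats all past values equally, so a single "good" low value is never discarded the way the max in (4) discards it.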

Many researchers have proposed trust region methods that solve unconstrained optimization problems effectively by adjusting the ratio $\rho_k$ and the rule for updating the trust region radius. Dai and Xu [3] proposed the following weighting formula:
$$\hat{\rho}_k = \sum_{i=0}^{p} w_i \rho_{k-i}, \tag{6}$$
where $p$ is some positive integer and $w_i$ is the weight of $\rho_{k-i}$, such that $\sum_{i=0}^{p} w_i = 1$. Many self-adaptive adjustment strategies have been developed to update the trust region radius, such as [4–14]. In addition, many adaptive nonmonotone trust region methods have been proposed in the literature [15–21].
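Since the weighting technique combines recent reduction ratios with weights summing to one, a minimal Python sketch may help; the specific weights used by Dai and Xu [3] may differ from this illustration.

```python
def weighted_ratio(rho_history, weights):
    """Convex combination of the most recent reduction ratios
    (illustrative; the actual weights of Dai and Xu [3] may differ)."""
    assert abs(sum(weights) - 1.0) < 1e-12, "weights must sum to 1"
    recent = rho_history[-len(weights):]
    return sum(w * r for w, r in zip(weights, recent))
```

Averaging over several ratios smooths out a single unlucky $\rho_k$, so the radius update reacts to the recent trend rather than to one iteration.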

In this paper, we propose a new self-adaptive weighting trust region method based on the nonmonotone technique of Zhang and Hager [2], the weighting technique of Dai and Xu [3], and the R-function of [6].

The rest of the paper is organized as follows. In Section 2, we recall the R-function, introduce a new update rule, and present the new nonmonotone self-adaptive trust region algorithm. In Section 3, the convergence properties of the proposed algorithm are investigated. In Section 4, numerical results are given. In Section 5, conclusions are summarized.

#### 2. R-Function and the New Nonmonotone Self-Adaptive Trust Region Algorithm

To derive the new trust region radius update rules, we first recall the R-function from [6].

*Definition 1 (see [6]).* A function is called an R-function if it satisfies the following conditions: (1) it is nondecreasing in and nonincreasing in , , for ; (2) ; (3) ; (4) ; (5) ; (6) , for ; (7) ; where , , , , , , and are positive constants such that .

Now we describe the new nonmonotone self-adaptive trust region algorithm.

*Algorithm 2.*
*Step 1*. Given , , , , , and ; a given small positive integer ; ; , ; set $k = 0$.
*Step 2*. If , stop.
*Step 3*. Solve subproblem (2) to get the trial step.
*Step 4*. Compute $\rho_k$ and the weighted ratio by (6). If , go to Step 5; otherwise, go to Step 6.
*Step 5*. Choose some R-function, compute , and then go to Step 3.
*Step 6*. Set . Update the trust region radius .

*Step 7*. Compute and ; choose , set $k := k + 1$, and go to Step 2.
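Several constants and formulas of Algorithm 2 appear only symbolically above, so as a rough illustration the following Python sketch wires together a simple scaled-gradient step, the Zhang–Hager average, and a sigmoid-style radius multiplier standing in for the R-function. Every parameter value and the sigmoid form are our assumptions, not the paper's.

```python
import numpy as np

def r_function(rho, lo=0.25, hi=2.0, steep=5.0, center=0.5):
    """Illustrative sigmoid-style R-function: a smooth, nondecreasing
    radius multiplier taking values in (lo, hi). The actual R-function
    of [6] may differ."""
    return lo + (hi - lo) / (1.0 + np.exp(-steep * (rho - center)))

def trust_region_nm(f, grad, x0, delta0=1.0, eta=0.85, mu=0.1,
                    tol=1e-6, max_iter=500):
    """Sketch of a nonmonotone self-adaptive trust region loop:
    B_k = I for simplicity, acceptance measured against the
    Zhang-Hager average C_k instead of f(x_k)."""
    x, delta = np.asarray(x0, float), delta0
    C, Q = f(x), 1.0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:              # termination test (Step 2)
            break
        d = -min(delta / np.linalg.norm(g), 1.0) * g   # crude model step
        pred = -(g @ d + 0.5 * d @ d)             # m(0) - m(d) with B_k = I
        rho = (C - f(x + d)) / pred               # nonmonotone ratio vs C_k
        if rho >= mu:                             # accept the trial step
            x = x + d
            Q_new = eta * Q + 1.0
            C = (eta * Q * C + f(x)) / Q_new      # Zhang-Hager update (5)
            Q = Q_new
        delta *= r_function(rho)                  # self-adaptive radius update
    return x
```

On a simple convex quadratic this loop drives the gradient below the tolerance in a handful of iterations; the point of the sketch is only the control flow (ratio test, nonmonotone acceptance, multiplicative radius update), not numerical performance.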

#### 3. Convergence of Algorithm 2

In this section, we consider the convergence properties of Algorithm 2. We first give the following assumption.

*Assumption 3.* (i) The function $f$ is twice continuously differentiable and bounded on the level set $L(x_0) = \{x \in \mathbb{R}^n : f(x) \le f(x_0)\}$. (ii) The sequence $\{B_k\}$ is uniformly bounded in norm; that is, $\|B_k\| \le M$ for some constant $M > 0$ and all $k$. (iii) The solution $d_k$ of subproblem (2) satisfies
$$m_k(0) - m_k(d_k) \ge \beta \|g_k\| \min\left\{\Delta_k, \frac{\|g_k\|}{\|B_k\|}\right\},$$
where $\beta \in (0, 1)$.

Lemma 4. *Suppose that (i) and (ii) in Assumption 3 hold. Then
$$|\mathrm{Ared}_k - \mathrm{Pred}_k| \le \varepsilon_k \|d_k\|,$$
where $\varepsilon_k$ decreases arbitrarily as $\|d_k\|$ decreases.*

*Proof.* From Taylor's theorem, we have $f(x_k + d_k) = f(x_k) + g_k^T d_k + o(\|d_k\|)$. It then follows from the definition of the model $m_k$ in (2) that
$$|\mathrm{Ared}_k - \mathrm{Pred}_k| = |f(x_k) - f(x_k + d_k) + m_k(d_k)| = \left|\tfrac{1}{2} d_k^T B_k d_k - o(\|d_k\|)\right| \le \tfrac{M}{2}\|d_k\|^2 + o(\|d_k\|) = \varepsilon_k \|d_k\|,$$
where $\varepsilon_k$ decreases arbitrarily as $\|d_k\|$ decreases.

Lemma 5. *Assume that the sequence $\{x_k\}$ is generated by Algorithm 2. Then the sequence $\{C_k\}$ is nonincreasing and convergent.*

*Proof.* From the corresponding lemma in [22] and the boundedness of $f$ in Assumption 3(i), the sequence $\{C_k\}$ is nonincreasing and bounded below, hence convergent.

The next lemma shows that the inner loop between Steps 3 and 5 cannot cycle infinitely, so the sequence generated by Algorithm 2 is well defined.

Lemma 6. *Suppose that Assumption 3 holds. Assume also that , and that there exists a sufficiently small constant such that . Then holds.*

*Proof.* By Assumption 3 and , there exist and a positive index such that . Combining this with (11), we have that, for , . Combining (12) and (18), we have . By (15), we can choose sufficiently small such that , and furthermore, for sufficiently large , . For the above , we know that . From (6) and (22), for sufficiently large , always takes the form . From (23), for sufficiently large , we have . By Algorithm 2 and the definition of the R-function, we have , where falls below .

We now show the global convergence of Algorithm 2.

Theorem 7. *Suppose that Assumption 3 holds. Let the sequence $\{x_k\}$ be generated by Algorithm 2. Then
$$\liminf_{k \to \infty} \|g_k\| = 0.$$*

*Proof.* For the purpose of deriving a contradiction, suppose that there exists a positive constant $\varepsilon$ such that $\|g_k\| \ge \varepsilon$ for all $k$. For convenience, we define the index set . First, assume that the set has infinitely many elements; that is, holds for any . For any , using Algorithm 2 and (12), we have that . Thus, from (27), . From (5) and (28), we have that . From the corresponding lemma in [22] and Assumption 3(i), we know that the sequence $\{C_k\}$ is nonincreasing and convergent. Then , which contradicts (16). Next, assume that the set has finitely many elements. Then, for sufficiently large , we have . From the definition of the R-function and the update rules of Algorithm 2, the trust region radius decreases as the iteration proceeds. Furthermore, the limit holds, which again contradicts (16). The proof is completed.

#### 4. Numerical Experiments

In this section, we present preliminary numerical results to illustrate the performance of Algorithm 2, denoted by NTRW. In Algorithm 2, we choose , where , and . In the framework of Algorithm 2, we compare NTRW with the following algorithms: the basic trust region method, denoted by BTR; the basic trust region method with the nonmonotone technique of Grippo et al. [1], denoted by NTR1; and the basic trust region method with the nonmonotone technique of Zhang and Hager [2], denoted by NTR2. All tests are implemented in Matlab R2008a on a PC with a 2.40 GHz CPU and 2.00 GB RAM. The test problems for unconstrained minimization in Table 1 are taken from Moré et al. [23] and the CUTEr collection [24, 25].

In all algorithms in this paper, the matrix $B_k$ is updated by the BFGS formula [26, 27]. The trial step is computed by the trust.m file in the Matlab Optimization Toolbox for small-scale problems, and by the CG-Steihaug algorithm in [26] for middle- and large-scale problems. The iteration is terminated when $\|g_k\| \le \epsilon$ for a prescribed small tolerance $\epsilon$. In Tables 1, 2, 3, and 4, we report the dimension (Dim) of each test problem (P), the number of iterations, the number of function evaluations, and the CPU (*cpu*) time for solving each test problem.
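For completeness, here is a compact Python rendition of the CG-Steihaug inner solver mentioned above, following the standard textbook description in [26]; it is a sketch, not the authors' Matlab implementation.

```python
import numpy as np

def _boundary_tau(d, p, delta):
    """Positive root of ||d + tau p|| = delta (quadratic in tau)."""
    a, b, c = p @ p, 2 * (d @ p), d @ d - delta**2
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)

def cg_steihaug(g, B, delta, tol=1e-10, max_iter=None):
    """Approximately solve min g'd + d'Bd/2 s.t. ||d|| <= delta.
    Runs CG on B d = -g, stopping at the trust region boundary or
    at the first direction of nonpositive curvature."""
    n = g.size
    max_iter = max_iter or 2 * n
    d = np.zeros(n)
    r = g.copy()              # residual of B d + g = 0
    p = -r
    if np.linalg.norm(r) < tol:
        return d
    for _ in range(max_iter):
        Bp = B @ p
        pBp = p @ Bp
        if pBp <= 0:          # nonpositive curvature: march to the boundary
            return d + _boundary_tau(d, p, delta) * p
        alpha = (r @ r) / pBp
        if np.linalg.norm(d + alpha * p) >= delta:
            return d + _boundary_tau(d, p, delta) * p
        d = d + alpha * p
        r_new = r + alpha * Bp
        if np.linalg.norm(r_new) < tol:
            return d
        beta = (r_new @ r_new) / (r @ r)
        p = -r_new + beta * p
        r = r_new
    return d
```

When the radius is large enough and $B$ is positive definite, the method returns the interior Newton step; when the boundary intervenes, it returns a boundary point along the last CG direction.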

In Table 2, we compare the four algorithms on 43 small-scale problems; the results are summarized as follows: (i) 19 problems where NTRW was superior to BTR; (ii) 11 problems where BTR was superior to NTRW; (iii) 13 problems where NTR2 was superior to NTR1; (iv) 5 problems where NTR1 was superior to NTR2; (v) 25 problems where NTRW was superior to NTR2; (vi) 10 problems where NTR2 was superior to NTRW. For problems 12, 18, 24, 30, and 36 in particular, the iteration counts of the four algorithms are similar, while the function-evaluation counts of NTRW are much smaller than those of the others. This means that the number of subproblem evaluations of NTRW is much smaller than that of the others; therefore, our self-adaptive technique is efficient. For problems 10, 20, 27, 32, and 38, NTRW is clearly superior to the others, and its CPU time is lower. So the performance of our algorithm is better than that of the others.

In Table 3, we compare the four algorithms on 25 middle-scale problems. On 12 problems NTRW is clearly superior to the others, on 7 problems the four algorithms perform similarly, and on only 4 problems does NTRW perform worse.

In Table 4, we compare the four algorithms on 10 large-scale problems. On 5 problems NTRW is clearly superior to the others, on 4 problems the four algorithms perform similarly, and on only 1 problem does NTRW perform worse. Note that, for problems 33 and 35, the iteration count of our algorithm is similar to that of NTR2, while the CPU time is much higher than that of NTR2. The exponential function contained in the R-function in (32), which is called in the Matlab environment, may consume more time.

Further results are shown in Figures 1 and 2, which characterize performance by means of the performance profiles proposed in [28]. For each solver, the profile plots the probability that its log-scaled performance ratio on a test problem is not greater than a factor $\tau$. More details can be found in [28]. As Figures 1 and 2 show, NTRW is clearly superior to NTR1 and NTR2 in both the number of iterations and the number of function evaluations, and it is superior to all three of the other algorithms in the number of function evaluations.
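The performance profiles of [28] can be computed in a few lines. The following Python sketch takes a matrix of solver costs (iterations, function evaluations, or CPU time) and returns, for each solver, the fraction of problems whose log2 performance ratio is within a factor $\tau$; names and conventions here are ours.

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profiles. T[p, s] is the cost of solver s
    on problem p (use np.inf for a failure). Returns an array
    prof[s, i] = fraction of problems whose log2 performance ratio
    for solver s is at most taus[i]."""
    ratios = T / T.min(axis=1, keepdims=True)   # r_{p,s} >= 1
    log_ratios = np.log2(ratios)
    n_prob, n_solv = T.shape
    return np.array([[np.mean(log_ratios[:, s] <= t) for t in taus]
                     for s in range(n_solv)])
```

The curve that sits highest at $\tau = 0$ wins most often; the curve that reaches 1 first is the most robust.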

#### 5. Conclusion

This paper presents a nonmonotone weighting self-adaptive trust region algorithm for unconstrained nonconvex optimization. The new algorithm is very simple and easily implemented. The convergence properties of the method are established under reasonable assumptions. Numerical experiments show that the new algorithm is quite robust and effective, and its numerical performance is comparable to or better than that of other trust region algorithms in the same framework.

#### Conflict of Interests

The authors declare that they have no conflict of interests.

#### Acknowledgment

This research is partly supported by Chinese NSF under Grant no. 11171003.