#### Abstract

Motivated by the method of Su and Pu (2009), we present an improved nonmonotone filter trust region algorithm for solving nonlinear equality constrained optimization. In our algorithm a modified nonmonotone filter technique is proposed, and no restoration phase is needed. At every iteration, as in the composite-step SQP methods, the step is viewed as the sum of two distinct components, a quasi-normal step and a tangential step. A more relaxed acceptance condition for the trial step is given, and a crucial criterion is weakened. Under suitable conditions, global convergence is established. Finally, numerical results show that our method is effective.

#### 1. Introduction

We consider the problem of minimizing a nonlinear function subject to a set of nonlinear equality constraints:
$$\min_{x \in \mathbb{R}^n} f(x) \quad \text{s.t.} \quad c(x) = 0, \tag{1}$$
where the objective function $f: \mathbb{R}^n \to \mathbb{R}$ and the equality constraints $c: \mathbb{R}^n \to \mathbb{R}^m$ are all twice continuously differentiable.

The algorithm that we discuss belongs to the class of trust region methods and, more specifically, to the class of filter methods first suggested by Fletcher and Leyffer [1], in which the use of a penalty function, a common feature of the large majority of algorithms for constrained optimization, is replaced by the so-called "filter" technique. Subsequently, global convergence of the trust region filter SQP methods was established by Fletcher et al. [2, 3], and Ulbrich [4] obtained superlinear local convergence. In particular, the step framework in [3] is similar in spirit to the composite-step SQP methods pioneered by Vardi [5], Byrd et al. [6], and Omojokun [7]. Consequently, the filter idea has been applied in many optimization techniques, for instance, the pattern search method [8], the SLP method [9], the interior method [10], the bundle approaches [11, 12], systems of nonlinear equations and nonlinear least squares [13], the multidimensional filter method [14], line search filter methods [15, 16], and so on.

In fact, the filter method exhibits a certain degree of nonmonotonicity. The idea of the nonmonotone technique can be traced back to Grippo et al. [17] in 1986, where it was combined with a line search strategy. Due to its excellent numerical performance, over the last decades the nonmonotone technique has been used in trust region methods to deal with unconstrained and constrained optimization problems [18–25]. More recently, nonmonotone trust region methods without penalty functions have also been developed for constrained optimization [26–29]. In particular, in [29] the nonmonotone idea is incorporated into the filter technique, and the restoration phase, a common feature of the large majority of filter methods, is not needed.
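The core of the nonmonotone idea of Grippo et al. [17] can be illustrated with a short sketch (all names and constants here are illustrative assumptions, not taken from the paper or from [17] verbatim): a trial objective value is compared against the maximum of the last few objective values rather than only the most recent one.

```python
# Sketch of a nonmonotone acceptance rule in the spirit of Grippo et al.:
# accept a trial value if it improves on the maximum of the last `memory`
# objective values, not necessarily on the most recent one.

def nonmonotone_reference(f_history, memory):
    """Reference value: the maximum of the last `memory` objective values."""
    return max(f_history[-memory:])

def accept_nonmonotone(f_trial, f_history, memory=5, decrease=1e-4):
    """Accept if f_trial sufficiently improves the nonmonotone reference."""
    return f_trial <= nonmonotone_reference(f_history, memory) - decrease

# A monotone rule would reject a small uphill step; the nonmonotone rule
# may accept it as long as the recent maximum still decreases.
history = [10.0, 4.0, 6.0, 5.0]            # f(x_0), ..., f(x_3)
print(accept_nonmonotone(5.5, history))    # True: 5.5 is below max(history) - 1e-4
print(5.5 <= history[-1])                  # False: a monotone test would reject
```

This occasional acceptance of uphill steps is what lets nonmonotone methods escape the "Maratos effect" style of slow progress near curved constraint surfaces.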

Based on [29] and motivated by the above ideas and methods, in this paper we present an improved filter algorithm that combines the nonmonotone and trust region techniques for solving nonlinear equality constrained optimization. Our method improves previous results, gives a more flexible mechanism, weakens some required conditions, and has the following merits.
(i) An improved nonmonotone filter technique is presented. The filter technique views the problem as a biobjective optimization problem that minimizes the objective function and the constraint violation function. In [29], the same nonmonotone measure is used for both functions at each iteration. In this paper we improve this and define a new measure for the violation function, which allows the nonmonotone properties of the two functions to be discussed and treated more independently.
(ii) The restoration phase is not needed. The restoration procedure has to be considered in most filter methods. We incorporate the nonmonotone idea into the filter technique, so that, as in [29], the restoration phase is not needed.
(iii) A more relaxed acceptance condition for the trial step is considered. Compared to general trust region methods, the condition for accepting the trial step in [29] is already relaxed. In this paper we improve it further, so the acceptance condition for the trial step is even more relaxed.
(iv) A crucial criterion of the algorithm is weakened. By introducing a new parameter, we improve the crucial criterion for the implementation of the method in [29]. Our criterion coincides with that of [29] for a particular value of the parameter, but it can otherwise be easier to satisfy; thus this crucial criterion is weakened by the setting chosen in the initialization of our algorithm.

The presentation is organized as follows. Section 2 introduces some preliminary results and our improvements on the filter technique and some conditions. Section 3 develops a modified nonmonotone filter trust region algorithm, whose global convergence is shown in Section 4. The results of numerical experiments with the proposed method are discussed in the last section.

#### 2. Preliminary and Improvements

In this section, we first recall some definitions and preliminary results concerning composite-step SQP type trust region methods and the fraction of Cauchy decrease condition. We then present our improvements on the filter technique and some related conditions.

##### 2.1. Fraction of Cauchy Decrease Condition

To introduce the corresponding results, we consider the unconstrained optimization problem $\min_{x \in \mathbb{R}^n} f(x)$, where $f$ is continuously differentiable. At the iteration point $x_k$, a trial step $s_k$ is obtained by solving the following quadratic subproblem:
$$\min_{s} \; q_k(s) = \nabla f(x_k)^T s + \frac{1}{2} s^T B_k s \quad \text{s.t.} \quad \|s\| \le \Delta_k,$$
where $B_k$ is a symmetric matrix which is either the Hessian matrix of $f$ at $x_k$ or an approximation to it, and $\Delta_k > 0$ is a trust region radius.

To ensure global convergence, the step $s_k$ is only required to satisfy a fraction of Cauchy decrease condition. That is, $s_k$ must predict, via the quadratic model function $q_k$, at least a fraction of the decrease given by the Cauchy step on $q_k$; in other words, there exists a constant $\tau \in (0, 1]$, fixed across all iterations, such that
$$q_k(0) - q_k(s_k) \ge \tau \left[ q_k(0) - q_k(s_k^C) \right],$$
where $s_k^C$ is the steepest descent step for $q_k$ inside the trust region. We then have the following lemma.

Lemma 1. *If the trial step $s_k$ satisfies a fraction of Cauchy decrease condition, then
$$q_k(0) - q_k(s_k) \ge \frac{\tau}{2}\,\|\nabla f(x_k)\|\,\min\left\{\Delta_k, \frac{\|\nabla f(x_k)\|}{\|B_k\|}\right\}.$$
*

*Proof. * See Powell [30] for the proof.
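To make the fraction of Cauchy decrease condition concrete, the following sketch computes the Cauchy step of a quadratic model and its predicted reduction (a minimal illustration under the Euclidean norm; the function names and the test data are assumptions, not the paper's notation).

```python
import math

# Cauchy step for the model q(s) = g.s + 0.5 s.B s inside a trust region
# of radius delta: minimize q along the steepest descent direction -g.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(B, v):
    return [dot(row, v) for row in B]

def cauchy_step(g, B, delta):
    """Minimizer of the quadratic model along -g within the trust region."""
    gnorm = math.sqrt(dot(g, g))
    gBg = dot(g, matvec(B, g))
    t_max = delta / gnorm                    # step length to the boundary
    if gBg <= 0:                             # nonpositive curvature: go to boundary
        t = t_max
    else:
        t = min(t_max, gnorm ** 2 / gBg)     # unconstrained minimizer, clipped
    return [-t * gi for gi in g]

def model_decrease(g, B, s):
    """Predicted reduction q(0) - q(s) = -(g.s + 0.5 s.B s)."""
    return -(dot(g, s) + 0.5 * dot(s, matvec(B, s)))

g = [1.0, -2.0]
B = [[2.0, 0.0], [0.0, 2.0]]
sc = cauchy_step(g, B, delta=10.0)
# Any step s with model_decrease(g, B, s) >= tau * model_decrease(g, B, sc),
# for a fixed tau in (0, 1], satisfies a fraction of Cauchy decrease condition.
print(model_decrease(g, B, sc))              # 1.25
```

Since the trust region is large here, the Cauchy step is the unconstrained minimizer along `-g`; shrinking `delta` below that length clips the step to the boundary.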

##### 2.2. The Subproblems of Composite-Step SQP Type Trust Region Method

In the composite-step SQP type trust region methods, at the current iterate we obtain the trial step by computing a quasi-normal step and a tangential step. The purpose of the quasi-normal step is to improve feasibility, while the tangential step, which lies in the tangential space of the linearized constraints, provides sufficient decrease for a quadratic model of the objective function so as to improve optimality.

For problem (1), the quasi-normal step $s_k^n$ is the solution to the subproblem
$$\min_{s} \; \|c_k + A_k^T s\|^2 \quad \text{s.t.} \quad \|s\| \le \Delta_k, \tag{5}$$
where $\Delta_k$ is a trust region radius, $c_k = c(x_k)$, and $A_k = \nabla c(x_k)$. In order to improve the value of the objective function, the tangential step $s_k^t$ can be obtained from the subproblem
$$\min_{s^t} \; (g_k + B_k s_k^n)^T s^t + \frac{1}{2} (s^t)^T B_k s^t \quad \text{s.t.} \quad A_k^T s^t = 0, \quad \|s^t\| \le \Delta_k, \tag{7}$$
where $g_k = \nabla f(x_k)$ and $B_k$ is a symmetric approximation of the Hessian of the Lagrangian. Then the current trial step is $s_k = s_k^n + s_k^t$.

In the usual way of imposing a trust region in step-decomposition methods, the quasi-normal step and the tangential step are required to satisfy $\|s_k^n\| \le \Delta_k$ and $\|s_k^n + s_k^t\| \le \Delta_k$. Here, to simplify the proof, we only impose a trust region on $s_k^n$ and $s_k^t$ separately, which is natural.

##### 2.3. The Improved Acceptance Condition for the Trial Step

Borrowing the usual trust region idea, we also need to define the following predicted reduction for the violation function,
$$\operatorname{Pred}_k^h = \|c_k\|^2 - \|c_k + A_k^T s_k\|^2,$$
and the corresponding actual reduction,
$$\operatorname{Ared}_k^h = \|c(x_k)\|^2 - \|c(x_k + s_k)\|^2.$$

To evaluate the descent properties of the step for the objective function, we use the predicted reduction of $f$,
$$\operatorname{Pred}_k^f = q_k(0) - q_k(s_k),$$
and the actual reduction of $f$,
$$\operatorname{Ared}_k^f = f(x_k) - f(x_k + s_k).$$

In a general trust region method, the step $s_k$ will be accepted if
$$\operatorname{Ared}_k^f \ge \eta \operatorname{Pred}_k^f, \tag{13}$$
where $\eta \in (0, 1)$ is a fixed constant.

In [29], considering the nonmonotone technique, condition (13) is replaced by
$$\operatorname{Rared}_k^f \ge \eta \operatorname{Pred}_k^f, \tag{14}$$
where $\operatorname{Rared}_k^f$ is the relaxed actual reduction of $f$, that is, the actual reduction measured from a nonmonotone reference value of the objective instead of from $f(x_k)$.

In this paper, however, we consider a more relaxed acceptance condition by simultaneously enlarging the relaxed actual reduction and reducing the required fraction of the predicted reduction; condition (14) is then replaced by a weaker inequality involving a small positive parameter.
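The difference between the monotone test (13) and a relaxed nonmonotone test of the kind described above can be sketched as follows (the constants `eta`, `epsilon`, the memory length, and the use of a recent maximum as the reference value are illustrative assumptions; the exact formulas of [29] and of our condition are not reproduced here).

```python
# Monotone test: the actual reduction f(x_k) - f(x_k + s_k) must be at
# least a fixed fraction of the predicted reduction.
def accept_monotone(f_old, f_trial, pred, eta=0.25):
    ared = f_old - f_trial
    return ared >= eta * pred

# Relaxed test: measure the actual reduction from a nonmonotone reference
# value (here, the maximum of recent objective values), and slightly
# reduce the required fraction of the predicted reduction.
def accept_relaxed(f_history, f_trial, pred, memory=5, eta=0.25, epsilon=1e-6):
    rared = max(f_history[-memory:]) - f_trial
    return rared >= eta * pred - epsilon

history = [3.0, 2.0, 2.4]
pred = 0.5
print(accept_monotone(history[-1], 2.35, pred))  # False: ared = 0.05 < 0.125
print(accept_relaxed(history, 2.35, pred))       # True: rared = 0.65 >= 0.125 - 1e-6
```

The relaxed test accepts trial steps that the monotone test rejects, which enlarges the set of acceptable steps and reduces the number of subproblem resolves.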

##### 2.4. The Improved Nonmonotone Filter Technique

In order to obtain the next iterate, a rule is needed to decide which trial steps are accepted; the procedure that makes this decision is the "filter method." For an optimization problem with equality constraints, a promising trial step should either reduce the constraint violation $h(x) = \|c(x)\|$ or the objective function value $f(x)$. Since $h(x) \ge 0$, it is easy to see that $h(x) = 0$ if and only if $x$ is a feasible point. So in the traditional filter method, a point $x$ is called acceptable to the filter if and only if
$$h(x) \le \beta h_j \quad \text{or} \quad f(x) \le f_j - \gamma h_j \quad \text{for all } (h_j, f_j) \in \mathcal{F},$$
where $\beta, \gamma \in (0, 1)$ and $\mathcal{F}$ denotes the filter set.

Different from the above filter criteria, in [29] the nonmonotone technique is incorporated: a point is called acceptable to the filter if and only if the corresponding inequalities hold with the filter entries replaced by nonmonotone reference values taken over a fixed number of previous iterations, where the memory length is a given positive integer and the associated weights are bounded below by a positive constant.

Observe that the same nonmonotone measure is used in both conditions (19), while in this paper we wish to weaken the coupling between the nonmonotone properties of $h$ and $f$. We consider the nonmonotone properties of $h$ and $f$ separately, and call a trial point acceptable to the filter if and only if the analogous inequalities hold with $h$ compared against its own nonmonotone reference value, taken along a subsequence of the iterates, and $f$ against another.

Similar to the traditional filter methods, if (20) is satisfied, the iteration is called an $h$-type iteration, and the filter set is updated at each successful $h$-type iteration.
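A minimal sketch of the traditional filter acceptance and update logic of this subsection (the envelope constants `beta`, `gamma`, the function names, and the sample entries are illustrative assumptions; the nonmonotone reference values of our method are omitted for clarity):

```python
# A filter is a list of pairs (h_j, f_j): constraint violation and
# objective value. A trial point (h, f) is acceptable if, against every
# entry, it sufficiently improves either the violation or the objective.

def acceptable_to_filter(h, f, filter_set, beta=0.99, gamma=0.01):
    return all(h <= beta * h_j or f <= f_j - gamma * h_j
               for (h_j, f_j) in filter_set)

def add_to_filter(h, f, filter_set):
    """Add (h, f) to the filter and discard entries it dominates."""
    kept = [(h_j, f_j) for (h_j, f_j) in filter_set
            if not (h <= h_j and f <= f_j)]
    kept.append((h, f))
    return kept

F = [(1.0, 5.0), (0.5, 7.0)]
print(acceptable_to_filter(0.4, 6.0, F))   # True: improves violation vs both entries
print(acceptable_to_filter(0.9, 7.5, F))   # False: fails against the entry (0.5, 7.0)
```

Keeping only nondominated entries is what makes the filter act as the Pareto front of the biobjective problem of minimizing $h$ and $f$.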

##### 2.5. The Weakened Criterion for Implementation of Algorithm

We replace the crucial criterion in [29] by a weakened version that involves a new parameter. It is obvious that this new criterion coincides with that of [29] for a particular value of the parameter, but it can otherwise be easier to satisfy; thus the criterion is weakened by the setting chosen in the initialization of our algorithm.

#### 3. Description of the Algorithm

It will be convenient to introduce the reduced gradient $Z(x)^T \nabla f(x)$, where $Z(x)$ denotes a matrix whose columns form a basis of the null space of the Jacobian of the constraints. The first order necessary optimality conditions (Karush-Kuhn-Tucker, or KKT, conditions) at a local solution of (1) can then be written as
$$Z(x)^T \nabla f(x) = 0, \qquad c(x) = 0.$$

For brevity, at the current iterate $x_k$, we attach the subscript $k$ to the corresponding quantities evaluated at $x_k$. We are now in a position to present a formal statement of our algorithm.

*Algorithm 2. *
*Step 0.* Initialization: choose an initial guess $x_0$, a symmetric matrix $B_0$, an initial trust region radius $\Delta_0$, and the remaining algorithmic constants; initialize the filter set, and set $k = 0$.
*Step 1.* Compute the function values, gradients, and related quantities at $x_k$. If the KKT conditions are satisfied to the required tolerance, then stop.
*Step 2.* Solve subproblems (5) and (7) to obtain $s_k^n$ and $s_k^t$, and set $s_k = s_k^n + s_k^t$.
*Step 3.* If $x_k + s_k$ is acceptable to the filter, go to Step 4; otherwise go to Step 5.
*Step 4.* If the relaxed acceptance condition of Section 2.3 is violated, go to Step 5; otherwise go to Step 6.
*Step 5.* Reduce the trust region radius and go to Step 2.
*Step 6.* Set $x_{k+1} = x_k + s_k$, and update $B_k$ to $B_{k+1}$ and the trust region radius. If (20) holds, update the filter set. Set $k = k + 1$ and go to Step 1.

*Remark 3. *At the beginning of each iteration, we always reset the trust region radius, which avoids the trust region radius becoming too small.

*Remark 4. *If an $h$-type iteration occurs, then a new filter entry is generated; thus, if the number of $h$-type iterations is infinite, we obtain an infinite sequence of such entries along a subsequence of the iterates. In particular, (20) always holds on the whole obtained sequence.

*Remark 5. *In the above algorithm, let the nonmonotone memory length be a positive integer. For each iteration, a reference value is chosen satisfying
the first relation, and we also find a value satisfying the second,
so the nonmonotonicity is shown.
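The overall control flow of Algorithm 2 can be sketched as follows (structure only: every callback stands in for one of the paper's formulas, all names are illustrative assumptions, and the constants are placeholders, so this is not the paper's implementation).

```python
# Skeleton of a composite-step filter trust region loop. The callbacks
# solve_normal / solve_tangential compute the quasi-normal and tangential
# steps, kkt_satisfied tests optimality, filter_ok tests filter
# acceptability, step_ok tests the relaxed reduction condition, and
# update accepts the step and maintains the filter and the radius.

def filter_trust_region(x, solve_normal, solve_tangential,
                        kkt_satisfied, filter_ok, step_ok, update,
                        delta=1.0, delta_min=1e-12, max_iter=200):
    filter_set = []
    for _ in range(max_iter):
        if kkt_satisfied(x):                      # Step 1: stop at a KKT point
            return x
        while delta > delta_min:
            sn = solve_normal(x, delta)           # Step 2: quasi-normal step
            st = solve_tangential(x, sn, delta)   # Step 2: tangential step
            s = [a + b for a, b in zip(sn, st)]
            trial = [a + b for a, b in zip(x, s)]
            # Steps 3-4: filter acceptability and the relaxed reduction test
            if filter_ok(trial, filter_set) and step_ok(x, trial, s):
                break
            delta *= 0.5                          # Step 5: shrink the radius
        else:
            return x                              # radius exhausted: give up
        # Step 6: accept the step, possibly augment the filter, update data
        x, delta, filter_set = update(x, trial, delta, filter_set)
    return x
```

With trivial stub callbacks (no constraints, a steepest descent tangential step clipped to the radius, and a simple decrease test) the skeleton minimizes a one-dimensional quadratic in a couple of iterations, which is only meant to show that the control flow is coherent.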

#### 4. Global Convergence of Algorithm

In this section, we will prove the global convergence properties of Algorithm 2 under the following assumptions. The objective function and the constraint functions are all twice continuously differentiable on an open set containing the region of interest. All points sampled by the algorithm lie in a nonempty closed and bounded set. The matrix sequence is uniformly bounded, and the gradients and related quantities are uniformly bounded on this set.

From the above assumptions, it follows that there exist positive constants bounding these quantities uniformly; this makes the following results convenient to prove.

Lemma 6. *At the current iterate, let the quasi-normal component of the trial step actually be normal to the tangential space. Under the above assumptions, there exists a constant independent of the iterates such that
*

*Proof. * Since is actually normal to the tangential space, it follows that
together with the fact that , we have

Lemma 7. *Suppose that the assumptions hold. Then there exist positive constants, independent of the iterates, such that
*

*Proof. * The statement is established by applying Lemma 1 to the two subproblems (5) and (7).

Lemma 8. *Under problem assumptions, Algorithm 2 is well defined. *

* Proof. *We will show that there exists a threshold such that the step is accepted whenever the trust region radius falls below it. In fact,
where , denotes some point on the line segment from to , , , and . We consider two cases in the following. *Case 1*. To prove the implementation of Algorithm 2, we need only show that if , it follows . Without loss of generality, we can assume that . Then we start with such that the closed -ball about lies in . It is obvious from that . And from (28), we have . Observe that
Hence we have
which implies , then for some . *Case 2*. If , then is a KKT-point of (1) by Algorithm 2. Thus, we assume that there exists a constant such that . Then
Next, if we reduce such that , then for all , we have . Therefore
And it holds from that , then, combining with (29), we find that
Hence,
which also implies for some . Thus, the trial step is accepted for all .

Lemma 8 has provided the implementation of Algorithm 2. By the mechanism of Algorithm 2, it is obvious that there exists a constant , such that for sufficiently large .

In the remainder of this paper, we denote the set of indices of those iterations at which the filter has been augmented; that is to say, if an iteration is an $h$-type iteration, its index belongs to this set. By this definition, if (20) holds for infinitely many iterations, then this set is infinite; otherwise, it is finite.

Lemma 9. *Let be an infinite sequence generated by Algorithm 2. If , then . *

*Proof. *By the mechanism of Algorithm 2, we can assume that for all , it follows that
Then we first show that for all , it holds
Next we prove (38) by induction.

If , we have .

Assume that (38) holds for some index; we then show that (38) holds for the next index by considering the following two cases. *Case 1.* If , then
*Case 2.* If , let , then
By the fact that , and , we have
Then for all , (38) holds.

Moreover, since the sequence is bounded below, passing to the limit we obtain
which implies . Hence the result follows.

Lemma 10. *Suppose that the assumptions hold. If Algorithm 2 does not terminate finitely, let be an infinite sequence generated by Algorithm 2; if , then . *

* Proof. * Suppose to the contrary that there exist constants and such that for all . Then, similar to the proof of Lemma 8, we have
for all . Since , then ; there exists such that
From , we also have , then . Hence

As in the proof of Lemma 8, there exists such that . That is,
In common with the proof of Lemma 9, we have
which implies . It contradicts (45). Hence the result follows.

Lemma 11. *Let be an infinite sequence generated by Algorithm 2. If , then . *

* Proof. * For convenience, denote
where . From the algorithm, we know and it holds

Since , we have
This implies that converges. Moreover, by , we obtain
And since , eventually .

Thus
holds by Algorithm 2, then, with the facts and , we have .

Lemma 12. *Suppose that the assumptions hold. If Algorithm 2 does not terminate finitely, let be an infinite sequence generated by Algorithm 2; if , then . *

*Proof. *Suppose to the contrary that there exist constants and such that for all and . Then, similar to the proof of Lemma 8, we have
for all and . Since , that is, for , then there exists such that
From , we also have for , then for . Thus, for

Similar to the proof of Lemma 10, we obtain
which implies for all . But for is also true, which contradicts (55). Hence the result follows.

Theorem 13. *Suppose that an infinite sequence is generated by Algorithm 2. Then one has
**
Namely, there exists at least one cluster point of the sequence that is a KKT point of problem (1).*

*Proof. *The conclusion follows immediately from Lemmas 9, 10, 11, and 12. This completes the proof.

#### 5. Numerical Experiments

In this section, we carry out numerical experiments with the algorithm on a set of problems from [31, 32]. The program is written in MATLAB, and we test the performance of Algorithm 2 on these problems. For each test example, the initial matrix and the algorithmic constants are chosen as in the initialization of Algorithm 2, and the stopping criterion of Step 1 is used.

During the numerical experiments, the matrix is updated by a safeguarded quasi-Newton formula.
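The paper's exact update formula is not reproduced above; as an assumption, the following sketch shows Powell's damped BFGS update, a common safeguarded quasi-Newton choice in SQP-type methods for keeping the approximation positive definite.

```python
# Powell's damped BFGS update (an illustrative stand-in, not necessarily
# the update used in the paper): when the curvature s'y is too small, the
# gradient change y is blended with Bs so that B stays positive definite.

def damped_bfgs_update(B, s, y, theta_bound=0.2):
    """Return the damped BFGS update of B for step s and gradient change y."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    n = len(s)
    Bs = [sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]
    sBs = dot(s, Bs)
    sy = dot(s, y)
    # Damping: if s'y >= theta_bound * s'Bs, use the plain BFGS update.
    if sy >= theta_bound * sBs:
        theta = 1.0
    else:
        theta = (1.0 - theta_bound) * sBs / (sBs - sy)
    r = [theta * yi + (1.0 - theta) * bsi for yi, bsi in zip(y, Bs)]
    sr = dot(s, r)
    # B+ = B - (Bs)(Bs)'/(s'Bs) + r r'/(s'r)
    return [[B[i][j] - Bs[i] * Bs[j] / sBs + r[i] * r[j] / sr
             for j in range(n)] for i in range(n)]
```

When the curvature condition holds (`theta = 1`), this reduces to the ordinary BFGS update; otherwise the damping guarantees `s'r > 0`, so positive definiteness is preserved.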

In the following tables, the notations are as follows:
(i) SC: the problems from Schittkowski [32];
(ii) HS: the problems from Hock and Schittkowski [31];
(iii) *n*: the number of variables;
(iv) *m*: the number of equality constraints;
(v) NIT: the number of iterations;
(vi) NF: the number of evaluations of the objective function;
(vii) NG: the number of evaluations of scalar constraint functions;
(viii) Algorithm 1: our method in this paper;
(ix) Algorithm 2: the method proposed in [29];
(x) Algorithm 3: the method proposed in [33].

The detailed numerical results are summarized in Tables 1 and 2. We now give a brief analysis of the numerical results. From Table 1, we can see that our algorithm performs well on these typical problems taken from [31, 32]. From the computational efficiency reported in Table 2, obtained by solving the same problems from [31, 32], we can see that our algorithm is competitive with some existing nonmonotone filter-type methods for equality constrained optimization, for example, those of [29, 33]. Moreover, our method employs an improved nonmonotone filter technique, which allows the nonmonotone properties of the objective and the constraint violation to be treated more independently. Furthermore, we consider a more relaxed acceptance condition for the trial step, and a weakened crucial criterion is presented. These new criteria are easier to meet, so our method is more flexible. All the results show that our algorithm is promising and numerically effective.

#### Acknowledgments

The author would like to thank the anonymous referees for the careful reading and helpful comments, which led to an improved version of the original paper. The research is supported by the Science & Technology Program of Shanghai Maritime University (no. 20120060).