Abstract and Applied Analysis
Volume 2013, Article ID 163487, 9 pages
An Improved Nonmonotone Filter Trust Region Method for Equality Constrained Optimization
Department of Mathematics, Shanghai Maritime University, Shanghai 201306, China
Received 22 October 2012; Accepted 11 January 2013
Academic Editor: Nikolaos Papageorgiou
Copyright © 2013 Zhong Jin. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Motivated by the method of Su and Pu (2009), we present an improved nonmonotone filter trust region algorithm for solving nonlinear equality constrained optimization. In our algorithm a modified nonmonotone filter technique is proposed and no restoration phase is needed. At every iteration, in common with composite-step SQP methods, the step is viewed as the sum of two distinct components, a quasi-normal step and a tangential step. A more relaxed acceptance condition for the trial step is given, and a crucial criterion is weakened. Under suitable conditions, global convergence is established. Finally, numerical results show that our method is effective.
We consider the problem of minimizing a nonlinear function subject to a set of nonlinear equality constraints:

minimize f(x) subject to c(x) = 0,

where the objective function f: R^n -> R and the equality constraints c: R^n -> R^m are all twice continuously differentiable.
The algorithm that we discuss belongs to the class of trust region methods and, more specifically, to the filter methods first suggested by Fletcher and Leyffer [1], in which the use of a penalty function, a common feature of the large majority of algorithms for constrained optimization, is replaced by the so-called "filter" technique. Subsequently, global convergence of trust region filter SQP methods was established by Fletcher et al. [2, 3], and Ulbrich [4] obtained superlinear local convergence. The step framework of these methods is similar in spirit to the composite-step SQP methods pioneered by Vardi [5], Byrd et al. [6], and Omojokun [7]. The filter idea has since been applied in many optimization techniques, for instance, the pattern search method [8], the SLP method [9], the interior method [10], the bundle approaches [11, 12], the system of nonlinear equations and nonlinear least squares [13], the multidimensional filter method [14], line search filter methods [15, 16], and so on.
In fact, the filter method exhibits a certain degree of nonmonotonicity. The idea of the nonmonotone technique can be traced back to Grippo et al. [17] in 1986, combined with the line search strategy. Owing to its excellent numerical behavior, over the last decades the nonmonotone technique has been used in trust region methods to deal with unconstrained and constrained optimization problems [18–25]. More recently, nonmonotone trust region methods without a penalty function have also been developed for constrained optimization [26–29]. In particular, in [29] the nonmonotone idea is applied to the filter technique, and the restoration phase, a common feature of the large majority of filter methods, is not needed.
Based on [29] and motivated by the above ideas and methods, in this paper we present an improved filter algorithm that combines the nonmonotone and trust region techniques for solving nonlinear equality constrained optimization. Our method improves previous results, gives a more flexible mechanism, weakens some needed conditions, and has the following merits.
(i) An improved nonmonotone filter technique is presented. The filter technique views the problem as a biobjective optimization problem that minimizes the objective function and the constraint violation. In [29], the same nonmonotone measure is utilized for both at each iteration. In this paper we improve it and define a separate measure for each, which allows the nonmonotone properties of the objective and the violation to be treated more freely.
(ii) The restoration phase is not needed. A restoration procedure has to be considered in most filter methods. We employ the nonmonotone idea in the filter technique, so, as in [29], the restoration phase is not needed.
(iii) A more relaxed acceptance condition for the trial step is considered. Compared to general trust region methods, the condition for accepting the trial step is already relaxed in [29]. In this paper we improve it so that the acceptance condition is even more relaxed.
(iv) A crucial criterion is weakened in the algorithm. By introducing a new parameter, we improve the crucial criterion for the implementation of the method in [29]. Our criterion reduces to that of [29] for a particular parameter value, but the new criterion is easier to satisfy, so this crucial criterion is weakened by the parameter setting in the initialization of our algorithm.
The presentation is organized as follows. Section 2 introduces some preliminary results and improvements on filter technique and some conditions. Section 3 develops a modified nonmonotone filter trust region algorithm, whose global convergence is shown in Section 4. The results of numerical experience with the proposed method are discussed in the last section.
2. Preliminary and Improvements
In this section, we first recall some definitions and preliminary results about the composite-step SQP type trust region method and the fraction of Cauchy decrease condition. Then the improvements on the filter technique and on some conditions are given.
2.1. Fraction of Cauchy Decrease Condition
To introduce the corresponding results, we consider the unconstrained optimization problem min f(x), where f is continuously differentiable. At the iteration point x_k, a trial step d_k is obtained by solving the following quadratic subproblem:

min q_k(d) = g_k^T d + (1/2) d^T B_k d subject to ||d|| <= Δ_k,

where g_k is the gradient of f at x_k, B_k is a symmetric matrix which is either the Hessian matrix of f at x_k or an approximation to it, and Δ_k is a trust region radius.
To ensure global convergence, the step d_k is only required to satisfy a fraction of Cauchy decrease condition. This means that d_k must predict, via the quadratic model q_k, at least a fraction of the decrease given by the Cauchy step, that is, there exists a constant β ∈ (0, 1], fixed across all iterations, such that

q_k(0) − q_k(d_k) >= β (q_k(0) − q_k(d_k^c)),

where the Cauchy step d_k^c is the steepest descent step for q_k inside the trust region. We have the following lemma.
Lemma 1. If the trial step d_k satisfies a fraction of Cauchy decrease condition, then

q_k(0) − q_k(d_k) >= (β/2) ||g_k|| min{Δ_k, ||g_k|| / ||B_k||},

where g_k is the model gradient, B_k the model Hessian, Δ_k the trust region radius, and β the Cauchy-fraction constant.
Proof. See Powell [30] for the proof.
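For concreteness, the Cauchy step construction and the bound of Lemma 1 can be sketched in Python. The data g, B, and the radius below are illustrative (not taken from the paper), and the matrix infinity norm is used in place of the spectral norm of B, which only weakens the right-hand side of the bound:

```python
import math

def cauchy_step(g, B, delta):
    """Cauchy step: minimize the quadratic model along -g inside ||d|| <= delta."""
    n = len(g)
    norm_g = math.sqrt(sum(gi * gi for gi in g))
    # curvature of the model along the steepest-descent direction
    Bg = [sum(B[i][j] * g[j] for j in range(n)) for i in range(n)]
    gBg = sum(g[i] * Bg[i] for i in range(n))
    if gBg <= 0:
        tau = 1.0  # nonpositive curvature: step to the trust region boundary
    else:
        tau = min(norm_g ** 3 / (delta * gBg), 1.0)
    return [-tau * delta / norm_g * gi for gi in g]

def model_decrease(g, B, d):
    """q(0) - q(d) for the quadratic model q(d) = g^T d + 0.5 d^T B d."""
    n = len(d)
    Bd = [sum(B[i][j] * d[j] for j in range(n)) for i in range(n)]
    return -(sum(g[i] * d[i] for i in range(n))
             + 0.5 * sum(d[i] * Bd[i] for i in range(n)))

g, B, delta = [2.0, -1.0], [[2.0, 0.0], [0.0, 1.0]], 1.0
dc = cauchy_step(g, B, delta)
norm_g = math.sqrt(sum(gi * gi for gi in g))
norm_B = max(sum(abs(v) for v in row) for row in B)  # infinity norm >= spectral norm
bound = 0.5 * norm_g * min(delta, norm_g / norm_B)   # Lemma 1 with beta = 1
assert model_decrease(g, B, dc) >= bound
```

Since the Cauchy step itself satisfies the fraction of Cauchy decrease condition with β = 1, its model decrease must meet the Lemma 1 bound, which the final assertion checks numerically.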
2.2. The Subproblems of Composite-Step SQP Type Trust Region Method
In composite-step SQP type trust region methods, at the current iterate x_k we obtain the trial step d_k by computing a quasi-normal step n_k and a tangential step t_k. The purpose of the quasi-normal step n_k is to improve feasibility, while t_k, lying in the tangential space of the linearized constraints, provides sufficient decrease for a quadratic model of the objective function to improve optimality.
For problem (1), the quasi-normal step n_k solves the subproblem

min ||c_k + A_k^T n|| subject to ||n|| <= Δ_k,

where Δ_k is a trust region radius, c_k = c(x_k), and A_k is the matrix of constraint gradients at x_k. In order to improve the value of the objective function, the tangential step t_k can be obtained from the subproblem

min (g_k + B_k n_k)^T t + (1/2) t^T B_k t subject to A_k^T t = 0, ||t|| <= Δ_k,

where g_k is the objective gradient at x_k and B_k is a symmetric approximation of the Hessian of the Lagrangian. Then we get the current trial step d_k = n_k + t_k.
In the usual way of imposing a trust region in step-decomposition methods, the quasi-normal step and the tangential step are required to satisfy a joint bound of the form ||n_k + t_k|| <= Δ_k. Here, to simplify the proof, we only impose the trust region on n_k and t_k separately, which is natural.
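A minimal Python sketch of the step decomposition, using a single linear constraint; the names A, c, Z, and the tangential multiplier alpha are ours for illustration (a real method would obtain the tangential component from the tangential subproblem):

```python
import math

# One equality constraint linearized at the current point: A d = -c,
# with A the 1x2 Jacobian row and c the current constraint value.
A = [1.0, 1.0]   # Jacobian of the illustrative constraint c(x) = x1 + x2 - 1
c = 0.5          # current constraint violation

# Quasi-normal step: minimum-norm solution of A n = -c,
# i.e. n = -A^T c / (A A^T); it restores linearized feasibility.
AAt = sum(a * a for a in A)
n = [-a * c / AAt for a in A]

# Tangential step: any step in the null space of A keeps linearized
# feasibility; Z spans that null space and alpha is an illustrative length.
Z = [1.0 / math.sqrt(2.0), -1.0 / math.sqrt(2.0)]
alpha = 0.3
t = [alpha * z for z in Z]

d = [n[i] + t[i] for i in range(2)]   # trial step d = n + t

# n achieves linearized feasibility, and t preserves it:
assert abs(sum(A[i] * n[i] for i in range(2)) + c) < 1e-12
assert abs(sum(A[i] * t[i] for i in range(2))) < 1e-12
assert abs(sum(A[i] * d[i] for i in range(2)) + c) < 1e-12
```

The assertions verify the defining property of the decomposition: the quasi-normal component carries all of the (linearized) feasibility improvement, while the tangential component moves only within the null space of the constraint Jacobian.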
2.3. The Improved Accepted Condition for
Borrowing the usual trust region idea, we also need to define a predicted reduction and an actual reduction for the violation function: the former measures the decrease promised by the linearized constraints, and the latter the decrease actually achieved at the trial point.
To evaluate the descent properties of the step for the objective function, we use the predicted reduction of the quadratic model and the actual reduction of the objective function itself.
In a general trust region method, the step is accepted if the actual reduction is at least a fixed fraction η ∈ (0, 1) of the predicted reduction.
But in this paper we consider a more relaxed acceptance condition, obtained by increasing the actual reduction and decreasing the required predicted fraction simultaneously; condition (14) is then replaced by a weaker test involving a small positive number.
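The following Python sketch illustrates the idea with one hypothetical form of the relaxation; the constants ETA and EPS and the additive form of the relaxed test are our assumptions for illustration, not the paper's exact condition:

```python
ETA = 0.25   # standard trust-region acceptance fraction (illustrative)
EPS = 1e-2   # small positive relaxation parameter (illustrative)

def standard_accept(ared, pred):
    """Classical test in the spirit of (14): accept if ared >= ETA * pred."""
    return ared >= ETA * pred

def relaxed_accept(ared, pred):
    """Hypothetical relaxed test: the actual reduction may fall slightly
    short of ETA * pred and the step is still accepted."""
    return ared + EPS >= ETA * pred

# A step whose actual reduction is marginally below the classical threshold
# is rejected by the standard test but accepted by the relaxed one.
ared, pred = 0.248, 1.0
assert not standard_accept(ared, pred)
assert relaxed_accept(ared, pred)
assert standard_accept(0.9, 1.0)  # clearly good steps pass both tests
```

The point of such a relaxation is that fewer trial steps are rejected near the acceptance boundary, so the trust region radius is reduced less often.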
2.4. The Improved Nonmonotone Filter Technique
In order to obtain the next iterate, a step must be determined, and the procedure that decides which trial step is accepted is the "filter method." For an optimization problem with equality constraints, a promising trial step should reduce either the constraint violation h(x) = ||c(x)||, where c collects the equality constraints, or the objective function value f(x). Since h(x) = ||c(x)||, it is easy to see that h(x) = 0 if and only if x is a feasible point. So in the traditional filter method, a point x is called acceptable to the filter if and only if

h(x) <= β h_j or f(x) <= f_j − γ h(x) for all (h_j, f_j) ∈ F_k,

where β, γ ∈ (0, 1) are fixed constants and F_k denotes the filter set.
Different from the above criteria of the filter idea, with the nonmonotone technique, in [29] a point is called acceptable to the filter if and only if the nonmonotone conditions (19) hold, in which the acceptance thresholds are formed from weighted maxima over the previous M iterations, where M is a given positive integer and the weights are bounded below by a positive constant.
Observe that the same reference value is used in both conditions (19), while in this paper we wish to weaken the coupling between the nonmonotone properties of f and h. We consider the nonmonotone properties of f and h separately, and call a trial point acceptable to the filter if and only if (20) holds, where the reference value for the violation is taken along a subsequence of previously accepted iterates.
Similar to the traditional filter methods, if (20) is satisfied, the iteration is called an h-type iteration, and the filter set needs to be updated at each successful h-type iteration.
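A sketch of a nonmonotone filter acceptability test with separate reference values for f and h, in the spirit of this subsection; the window length M and the constants beta and gamma are illustrative, not the paper's:

```python
def acceptable(f_new, h_new, f_hist, h_hist, M=3, beta=0.99, gamma=1e-4):
    """Nonmonotone filter test with decoupled references: the trial point is
    acceptable if it sufficiently reduces either the violation h or the
    objective f relative to the worst of the last M accepted values.
    Separate reference maxima are kept for f and for h."""
    h_ref = max(h_hist[-M:])
    f_ref = max(f_hist[-M:])
    return h_new <= beta * h_ref or f_new <= f_ref - gamma * h_new

# Histories of accepted (f, h) values; the last accepted violation is small,
# but the nonmonotone reference allows a temporary increase in h ...
f_hist = [10.0, 9.0, 8.5]
h_hist = [0.8, 0.3, 0.1]
assert acceptable(9.5, 0.25, f_hist, h_hist)       # h rose above 0.1, still ok
# ... while a point that improves neither measure is rejected.
assert not acceptable(11.0, 1.0, f_hist, h_hist)
```

Because the two reference maxima are maintained independently, a step can be judged on its violation history alone or its objective history alone, which is exactly the extra freedom the decoupling is meant to provide.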
2.5. The Weakened Criterion for Implementation of Algorithm
We replace the crucial criterion in [29] by a weakened one containing a new parameter. It is obvious that the new criterion coincides with that of [29] for a particular value of this parameter, but in general it can be easier to satisfy, so the crucial criterion has been weakened by the parameter setting in the initialization of our algorithm.
3. Description of the Algorithm
It will be convenient to introduce the reduced gradient Z(x)^T ∇f(x), where Z(x) denotes a matrix whose columns form a basis of the null space of the constraint Jacobian. The first order necessary optimality conditions (Karush-Kuhn-Tucker, or KKT, conditions) at a local solution of (1) can then be written in terms of the reduced gradient and the constraint values.
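Assuming the standard notation of equality-constrained programming, with A(x) the matrix of constraint gradients and Z(x) a null-space basis for A(x)^T, the reduced gradient and the first order conditions take the following form (a reconstruction of the standard formulas under that notational assumption):

```latex
g_Z(x) = Z(x)^{T} \nabla f(x), \qquad
\nabla f(x^{*}) + A(x^{*})\,\lambda^{*} = 0, \qquad c(x^{*}) = 0,
```

equivalently, Z(x*)^T ∇f(x*) = 0 together with c(x*) = 0, since Z(x*) spans the null space of A(x*)^T.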
For brevity, at the current iterate x_k, we attach the subscript k to all quantities evaluated at x_k (for example f_k, g_k, c_k, A_k, Z_k, B_k). We are now in a position to present a formal statement of our algorithm.
Algorithm 2. Step 0 (initialization). Choose an initial guess x_0, a symmetric matrix B_0, an initial trust region radius, and the constants required by the acceptance and filter tests. Initialize the filter set and set k = 0.
Step 1. Compute the objective value, gradient, constraint values, Jacobian, and reduced gradient at the current iterate. If the KKT stopping test is satisfied, then stop.
Step 2. Solve subproblems (5) and (7) to obtain the quasi-normal step and the tangential step, and set the trial step to their sum.
Step 3. If the trial point is acceptable to the filter, go to Step 4; otherwise go to Step 5.
Step 4. If the trial step fails the acceptance tests of Section 2.3, then go to Step 5; otherwise go to Step 6.
Step 5. Reduce the trust region radius and go to Step 2.
Step 6. Accept the trial point as the new iterate and update the matrix B_k. If (20) holds, update the filter set and the nonmonotone reference values. Set k = k + 1 and go to Step 1.
Remark 3. At the beginning of each outer iteration, we always reset the trust region radius, which avoids a too small trust region radius.
Remark 4. If an h-type iteration occurs, a new reference index is generated; so if the number of h-type iterations is infinite, we obtain an infinite sequence of reference values indexed by a subsequence of the iterations. In particular, (20) always holds along the whole obtained sequence.
Remark 5. In the above algorithm, let M be a positive integer. For each k, the reference value in the nonmonotone test satisfies the stated maximum property over the previous M iterations, and an index attaining that maximum can be found, so the nonmonotonicity of the acceptance test is exhibited.
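To make the control flow of Steps 2-6 concrete, the following Python sketch applies the same accept/shrink loop to a simple unconstrained quadratic. It uses Cauchy steps only, no constraints or filter, and illustrative constants, so it is a simplified analogue of Algorithm 2, not the algorithm itself:

```python
import math

def trust_region_min(f, grad, B, x0, delta0=1.0, eta=0.25, tol=1e-8, max_iter=200):
    """Simplified sketch of the Step 2-6 control flow: compute a Cauchy trial
    step, test the actual-versus-predicted reduction, halve the radius on
    rejection (Step 5), and reset the radius at each new iterate (Remark 3).
    B is a fixed model Hessian given as a list of lists."""
    n = len(x0)
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        ng = math.sqrt(sum(v * v for v in g))
        if ng <= tol:                          # Step 1: stopping test
            break
        delta = delta0                         # Remark 3: reset the radius
        while True:                            # Steps 2-5: inner accept loop
            Bg = [sum(B[i][j] * g[j] for j in range(n)) for i in range(n)]
            gBg = sum(g[i] * Bg[i] for i in range(n))
            tau = 1.0 if gBg <= 0 else min(ng ** 3 / (delta * gBg), 1.0)
            d = [-tau * delta / ng * v for v in g]
            Bd = [sum(B[i][j] * d[j] for j in range(n)) for i in range(n)]
            pred = -(sum(g[i] * d[i] for i in range(n))
                     + 0.5 * sum(d[i] * Bd[i] for i in range(n)))
            x_new = [x[i] + d[i] for i in range(n)]
            ared = f(x) - f(x_new)
            if pred > 0 and ared >= eta * pred:
                x = x_new                      # Step 6: accept the trial point
                break
            delta *= 0.5                       # Step 5: shrink and retry
    return x

def quad(x):
    return x[0] ** 2 + 2.0 * x[1] ** 2

def quad_grad(x):
    return [2.0 * x[0], 4.0 * x[1]]

x_star = trust_region_min(quad, quad_grad, [[2.0, 0.0], [0.0, 4.0]], [3.0, -2.0])
assert abs(x_star[0]) < 1e-6 and abs(x_star[1]) < 1e-6
```

On this quadratic the model is exact, so every trial step passes the ratio test and the loop converges to the minimizer at the origin; the filter and the composite step of the real algorithm would replace the simple ratio test and the Cauchy step here.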
4. Global Convergence of Algorithm
In this section, we will prove the global convergence properties of Algorithm 2 under the following assumptions: the objective function and the constraint functions are all twice continuously differentiable on an open set containing the iterates; all points that are sampled by the algorithm lie in a nonempty closed and bounded set; the matrix sequence {B_k} is uniformly bounded; and the gradients, constraint Jacobians, and null-space bases are uniformly bounded on that set.
From the above assumptions, it is straightforward to suppose that there exist positive constants bounding all of these quantities uniformly; this makes the following results convenient to prove.
Lemma 6. At the current iterate x_k, let the trial step component n_k actually be normal to the tangential space. Under the above assumptions, there exists a constant κ, independent of the iterates, such that ||n_k|| <= κ ||c_k||.
Proof. Since n_k is actually normal to the tangential space, it lies in the range space of the constraint Jacobian; together with the uniform boundedness assumptions, we obtain the stated bound.
Lemma 7. Suppose that the assumptions hold. Then there exist positive constants, independent of the iterates, such that the predicted reductions satisfy the stated lower bounds.
Lemma 8. Under problem assumptions, Algorithm 2 is well defined.
Proof. We will show that there exists a threshold such that the trial step is accepted whenever the trust region radius is below it. In fact,
where the intermediate point denotes some point on the line segment from x_k to x_k + d_k. We consider two cases in the following.
Case 1 . To prove the implementation of Algorithm 2, we need only to show that if , it follows . Without loss of generality, we can assume that . Then we start with such that the closed -ball about lies in . It is obvious from that . And from (28), we have . Observe that Hence we have which implies , then for some .
Case 2 . If , then is a KKT-point of (1) by Algorithm 2. Thus, we assume that there exists a constant such that . Then Next, if we reduce such that , then for all , we have . Therefore And it holds from that , then, combining with (29), we find that Hence, which also implies for some . Thus, the trial step is accepted for all .
In the remainder of this paper we consider the set of indices of those iterations in which the filter has been augmented; that is to say, if iteration k is an h-type iteration, its index belongs to this set. By this definition, if (20) holds for infinitely many iterations, then this set is infinite; otherwise it is finite.
Lemma 9. Let be an infinite sequence generated by Algorithm 2. If , then .
Proof. By the mechanism of Algorithm 2, we can assume that, for all k, it follows that
Then we first show that for all , it holds
Next we prove (38) by induction.
If , we have .
Assume that (38) holds for , then we consider that (38) holds for in the following two cases.
Case 1. If , then
Case 2. If , let , then By the fact that , and , we have Then for all , (38) holds.
Moreover, since is bounded below, let , we can get that which implies . Hence the result follows.
Proof. Suppose to the contrary that the reduced gradient stays bounded away from zero, say by a constant, for all sufficiently large iterations. Then, similar to the proof of Lemma 8, we have
for all . Since , then ; there exists such that
From , we also have , then . Hence
As in the proof of Lemma 8, there exists such that . That is, In common with the proof of Lemma 9, we have which implies . It contradicts (45). Hence the result follows.
Lemma 11. Let be an infinite sequence generated by Algorithm 2. If , then .
Proof. For convenience, denote
where . From the algorithm, we know and it holds
Since , we have This implies that converges. Moreover, by , we obtain And since , eventually .
Thus holds by Algorithm 2, then, with the facts and , we have .
Proof. Suppose to the contrary that the reduced gradient stays bounded away from zero, say by a constant, along the relevant infinite index set. Then, similar to the proof of Lemma 8, we have
for all and . Since , that is, for , then there exists such that
From , we also have for , then for . Thus, for
Similar to the proof of Lemma 10, it is obtained that which implies for all . But for is also true, which contradicts (55). Hence the result follows.
Theorem 13. Suppose {x_k} is an infinite sequence generated by Algorithm 2. Then there exists at least one cluster point of {x_k} that is a KKT point of problem (1).
5. Numerical Experiments
In this section, we carry out some numerical experiments based on the algorithm over a set of problems from [31, 32]. The program is written in MATLAB, and we test and report the performance of Algorithm 2 on these problems. For each test example we choose the same initial matrix, the constants are fixed across all runs as in the initialization of the algorithm, and a small tolerance on the optimality and feasibility measures is used as the stopping criterion.
During the numerical experiments, the matrix B_k is updated at each accepted step by a safeguarded quasi-Newton formula.
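The paper's exact update formula for B_k is not reproduced above; a common safeguarded choice in this literature is Powell's damped BFGS update, sketched here purely as an assumption (s is the step, y the change in Lagrangian gradient):

```python
def damped_bfgs(B, s, y):
    """Powell-damped BFGS update (a standard safeguard, assumed here, that
    keeps the update well defined even when s^T y <= 0).  B is assumed
    symmetric positive definite and s nonzero, so s^T B s > 0."""
    n = len(s)
    Bs = [sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]
    sBs = sum(s[i] * Bs[i] for i in range(n))
    sy = sum(s[i] * y[i] for i in range(n))
    # damping: blend y with Bs so that s^T r >= 0.2 * s^T B s
    theta = 1.0 if sy >= 0.2 * sBs else 0.8 * sBs / (sBs - sy)
    r = [theta * y[i] + (1.0 - theta) * Bs[i] for i in range(n)]
    sr = sum(s[i] * r[i] for i in range(n))
    return [[B[i][j] - Bs[i] * Bs[j] / sBs + r[i] * r[j] / sr
             for j in range(n)] for i in range(n)]

# Negative curvature pair (s^T y < 0): plain BFGS would break down, but the
# damped update stays positive definite on this example.
B_new = damped_bfgs([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.0], [-0.5, 0.0])
assert B_new[0][0] > 0
```

The damping replaces y by a convex combination r of y and Bs whenever the curvature condition fails, which guarantees s^T r > 0 and hence that the updated matrix remains positive definite.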
In the following tables, the notations are as follows:
(i) SC: the problems from Schittkowski [32];
(ii) HS: the problems from Hock and Schittkowski [31];
(iii) n: the number of variables;
(iv) m: the number of constraints;
(v) NIT: the number of iterations;
(vi) NF: the number of evaluations of the objective function;
(vii) NG: the number of evaluations of scalar constraint functions;
(viii) Algorithm 1: our method in this paper;
(ix) Algorithm 2: the method proposed in [29];
(x) Algorithm 3: the method proposed in [33].
The detailed numerical results are summarized in Tables 1 and 2, and we now give a brief analysis. From Table 1 we can see that our algorithm performs well on these typical problems taken from [31, 32]. From the computational efficiency reported in Table 2, obtained by solving the same problems from [31, 32], our algorithm is competitive with some existing nonmonotone filter-type methods for equality constrained optimization, for example, those of [29, 33]. Moreover, our method proposes an improved nonmonotone filter technique, which allows the nonmonotone properties of the objective and the violation to be treated more freely. Furthermore, we consider a more relaxed acceptance condition for the trial step, and a weakened crucial criterion is presented. These new criteria are easier to meet, so our method is much more flexible. All the results show that our algorithm is promising and numerically effective.
The author would like to thank the anonymous referees for the careful reading and helpful comments, which led to an improved version of the original paper. The research is supported by the Science & Technology Program of Shanghai Maritime University (no. 20120060).
- [1] R. Fletcher and S. Leyffer, “Nonlinear programming without a penalty function,” Mathematical Programming, vol. 91, no. 2, pp. 239–269, 2002.
- [2] R. Fletcher, S. Leyffer, and P. L. Toint, “On the global convergence of a filter-SQP algorithm,” SIAM Journal on Optimization, vol. 13, no. 1, pp. 44–59, 2002.
- [3] R. Fletcher, N. I. M. Gould, S. Leyffer, P. L. Toint, and A. Wächter, “Global convergence of a trust-region SQP-filter algorithm for general nonlinear programming,” SIAM Journal on Optimization, vol. 13, no. 3, pp. 635–659, 2002.
- [4] S. Ulbrich, “On the superlinear local convergence of a filter-SQP method,” Mathematical Programming, vol. 100, no. 1, pp. 217–245, 2004.
- [5] A. Vardi, “A trust region algorithm for equality constrained minimization: convergence properties and implementation,” SIAM Journal on Numerical Analysis, vol. 22, no. 3, pp. 575–591, 1985.
- [6] R. H. Byrd, R. B. Schnabel, and G. A. Shultz, “A trust region algorithm for nonlinearly constrained optimization,” SIAM Journal on Numerical Analysis, vol. 24, no. 5, pp. 1152–1170, 1987.
- [7] E. O. Omojokun, Trust region algorithms for optimization with nonlinear equality and inequality constraints [Ph.D. thesis], University of Colorado, Boulder, Colo, USA, 1989.
- [8] C. Audet and J. E. Dennis Jr., “A pattern search filter method for nonlinear programming without derivatives,” SIAM Journal on Optimization, vol. 14, no. 4, pp. 980–1010, 2004.
- [9] C. M. Chin and R. Fletcher, “On the global convergence of an SLP-filter algorithm that takes EQP steps,” Mathematical Programming, vol. 96, no. 1, pp. 161–177, 2003.
- [10] M. Ulbrich, S. Ulbrich, and L. N. Vicente, “A globally convergent primal-dual interior-point filter method for nonlinear programming,” Mathematical Programming, vol. 100, no. 2, pp. 379–410, 2004.
- [11] R. Fletcher and S. Leyffer, “A bundle filter method for nonsmooth nonlinear optimization,” Tech. Rep. NA/195, Department of Mathematics, University of Dundee, Dundee, Scotland, 1999.
- [12] E. Karas, A. Ribeiro, C. Sagastizábal, and M. Solodov, “A bundle-filter method for nonsmooth convex constrained optimization,” Mathematical Programming, vol. 116, no. 1-2, pp. 297–320, 2009.
- [13] N. I. M. Gould, S. Leyffer, and P. L. Toint, “A multidimensional filter algorithm for nonlinear equations and nonlinear least-squares,” SIAM Journal on Optimization, vol. 15, no. 1, pp. 17–38, 2004.
- [14] N. I. M. Gould, C. Sainvitu, and P. L. Toint, “A filter-trust-region method for unconstrained optimization,” SIAM Journal on Optimization, vol. 16, no. 2, pp. 341–357, 2006.
- [15] A. Wächter and L. T. Biegler, “Line search filter methods for nonlinear programming: motivation and global convergence,” SIAM Journal on Optimization, vol. 16, no. 1, pp. 1–31, 2005.
- [16] A. Wächter and L. T. Biegler, “Line search filter methods for nonlinear programming: local convergence,” SIAM Journal on Optimization, vol. 16, no. 1, pp. 32–48, 2005.
- [17] L. Grippo, F. Lampariello, and S. Lucidi, “A nonmonotone line search technique for Newton's method,” SIAM Journal on Numerical Analysis, vol. 23, no. 4, pp. 707–716, 1986.
- [18] Z. W. Chen and X. S. Zhang, “A nonmonotone trust-region algorithm with nonmonotone penalty parameters for constrained optimization,” Journal of Computational and Applied Mathematics, vol. 172, no. 1, pp. 7–39, 2004.
- [19] N. Y. Deng, Y. Xiao, and F. J. Zhou, “Nonmonotonic trust region algorithm,” Journal of Optimization Theory and Applications, vol. 76, no. 2, pp. 259–285, 1993.
- [20] P. L. Toint, “Non-monotone trust-region algorithms for nonlinear optimization subject to convex constraints,” Mathematical Programming, vol. 77, no. 3, pp. 69–94, 1997.
- [21] M. Ulbrich, “Nonmonotone trust-region methods for bound-constrained semismooth equations with applications to nonlinear mixed complementarity problems,” SIAM Journal on Optimization, vol. 11, no. 4, pp. 889–917, 2001.
- [22] D. C. Xu, J. Y. Han, and Z. W. Chen, “Nonmonotone trust-region method for nonlinear programming with general constraints and simple bounds,” Journal of Optimization Theory and Applications, vol. 122, no. 1, pp. 185–206, 2004.
- [23] D. T. Zhu, “A nonmonotonic trust region technique for nonlinear constrained optimization,” Journal of Computational Mathematics, vol. 13, no. 1, pp. 20–31, 1995.
- [24] Z. S. Yu and D. G. Pu, “A new nonmonotone line search technique for unconstrained optimization,” Journal of Computational and Applied Mathematics, vol. 219, no. 1, pp. 134–144, 2008.
- [25] Z. S. Yu, C. X. He, and Y. Tian, “Global and local convergence of a nonmonotone trust region algorithm for equality constrained optimization,” Applied Mathematical Modelling, vol. 34, no. 5, pp. 1194–1202, 2010.
- [26] M. Ulbrich and S. Ulbrich, “Non-monotone trust region methods for nonlinear equality constrained optimization without a penalty function,” Mathematical Programming, vol. 95, no. 1, pp. 103–135, 2003.
- [27] Z. W. Chen, “A penalty-free-type nonmonotone trust-region method for nonlinear constrained optimization,” Applied Mathematics and Computation, vol. 173, no. 2, pp. 1014–1046, 2006.
- [28] N. Gould and P. L. Toint, “Global convergence of a non-monotone trust-region filter algorithm for nonlinear programming,” in Proceedings of Gainesville Conference on Multilevel Optimization, W. Hager, Ed., pp. 129–154, Kluwer, Dordrecht, The Netherlands, 2005.
- [29] K. Su and D. G. Pu, “A nonmonotone filter trust region method for nonlinear constrained optimization,” Journal of Computational and Applied Mathematics, vol. 223, no. 1, pp. 230–239, 2009.
- [30] M. J. D. Powell, “Convergence properties of a class of minimization algorithms,” in Nonlinear Programming 2, O. Mangasarian, R. Meyer, and S. Robinson, Eds., pp. 1–27, Academic Press, New York, NY, USA, 1975.
- [31] W. Hock and K. Schittkowski, Test Examples for Nonlinear Programming Codes, vol. 187 of Lecture Notes in Economics and Mathematical Systems, Springer, Berlin, Germany, 1981.
- [32] K. Schittkowski, More Test Examples for Nonlinear Programming Codes, vol. 282 of Lecture Notes in Economics and Mathematical Systems, Springer, Berlin, Germany, 1987.
- [33] K. Su and H. An, “Global convergence of a nonmonotone filter method for equality constrained optimization,” Applied Mathematics and Computation, vol. 218, no. 18, pp. 9396–9404, 2012.