Mathematical Problems in Engineering
Volume 2012 (2012), Article ID 971468, 15 pages
http://dx.doi.org/10.1155/2012/971468
Research Article

A Nonmonotone Line Search Filter Algorithm for the System of Nonlinear Equations

Zhong Jin and Yuqing Wang

1Department of Mathematics, Shanghai Maritime University, Shanghai 201306, China
2Xianda College of Economics and Humanities, Shanghai International Studies University, Shanghai 200083, China

Received 9 June 2012; Accepted 23 June 2012

Academic Editor: Wei-Chiang Hong

Copyright © 2012 Zhong Jin and Yuqing Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We present a new iterative method, based on the line search filter method with a nonmonotone strategy, for solving the system of nonlinear equations. The equations are divided into two groups: some equations are treated as constraints while the others form the objective function, and the two groups are updated only at the iterations where it is genuinely needed. We apply the nonmonotone idea to the sufficient reduction conditions and to the filter technique, which yields a more flexible acceptance behavior than monotone methods. The new algorithm is shown to be globally convergent, and numerical experiments demonstrate its effectiveness.

1. Introduction

We consider the following system of nonlinear equations:
$$c_i(x)=0,\quad i=1,2,\ldots,m, \qquad (1.1)$$
where each $c_i:\mathbb{R}^n\to\mathbb{R}$ $(i=1,2,\ldots,m)$ is a twice continuously differentiable function. It is one of the most basic problems in mathematics and has many applications in scientific fields such as physics, chemistry, and economics.

In the context of solving nonlinear equations, a well-known method is Newton's method, which exhibits local second-order convergence near a regular solution but whose global behavior is unpredictable. To improve the global properties, some important algorithms [1] for nonlinear equations proceed by minimizing a least-squares problem:
$$\min\ h(x)=c(x)^Tc(x), \qquad (1.2)$$
which can also be handled by Newton's method. However, Powell [2] gives a counterexample showing the unsatisfactory fact that the iterates generated for the above least-squares problem may converge to a nonstationary point of $h(x)$.

However, as is well known, there are several difficulties in utilizing penalty functions as a merit function to test the acceptability of the iterates. Hence the filter, a concept first introduced by Fletcher and Leyffer [3] for constrained nonlinear optimization problems in a sequential quadratic programming (SQP) trust-region algorithm, replaces the merit function, avoiding the estimation of the penalty parameter and the difficulties related to nondifferentiability. Furthermore, Fletcher et al. [4, 5] established the global convergence of the trust-region filter-SQP method, and Ulbrich [6] obtained its superlinear local convergence. Consequently, the filter method has been applied in many optimization techniques, for instance the pattern search method [7], the SLP method [8], the interior method [9], the bundle approaches [10, 11], and so on. Also, combined with the trust-region search technique, Gould et al. extended the filter method to systems of nonlinear equations and nonlinear least squares in [12], and to unconstrained optimization with a multidimensional filter technique in [13]. In addition, Wächter and Biegler [14, 15] presented line search filter methods for nonlinear equality-constrained programming and established their global and local convergence.

In fact, the filter method exhibits a certain degree of nonmonotonicity. The idea of the nonmonotone technique can be traced back to Grippo et al. [16] in 1986, combined with the line search strategy. Owing to its excellent numerical performance, many nonmonotone techniques have been developed in recent years; see, for example, [17, 18]. In particular, [17] presents a nonmonotone line search multidimensional filter-SQP method for general nonlinear programming based on the Wächter-Biegler methods [14, 15].

Recently, some other approaches to problem (1.1) have been given (see [19–23]). These papers share two common features: the filter approach is utilized, and the system of nonlinear equations is transformed into a constrained nonlinear programming problem by dividing the equations into two groups, some treated as constraints and the others forming the objective function. In those methods the two groups are updated at every iteration. For instance, combined with the filter line search technique [14, 15], the system of nonlinear equations in [23] becomes the following optimization problem with equality constraints:
$$\min\ \sum_{i\in S_1}c_i^2(x)\quad\text{s.t.}\quad c_j(x)=0,\ j\in S_2. \qquad (1.3)$$
The two sets $S_1$ and $S_2$ are chosen as follows: for some positive integer $n_0>0$, order the residuals so that $c_{i_1}^2(x_k)\ge c_{i_2}^2(x_k)\ge\cdots\ge c_{i_m}^2(x_k)$; then $S_1=\{i_k:\ k\le n_0\}$ and $S_2=\{i_k:\ k\ge n_0+1\}$.
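For concreteness, this partition amounts to sorting the squared residuals. The following minimal sketch (ours, in Python with NumPy for illustration rather than the paper's MATLAB program; the interface of `c`, which is assumed to return the vector of all $m$ residuals, is our own assumption) shows the rule:

```python
import numpy as np

def partition_equations(c, x, n0):
    """Sort the equations by squared residual at x and split them as in (1.3):
    S1 holds the indices of the n0 largest residuals (objective part),
    S2 the remaining indices (constraint part)."""
    r2 = c(x) ** 2                  # squared residuals c_i(x)^2
    order = np.argsort(-r2)         # indices i_1, ..., i_m with decreasing c_i^2
    return order[:n0], order[n0:]   # S1, S2
```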

In this paper we present an algorithm for the system of nonlinear equations that combines the nonmonotone technique with the line search filter method. We also divide the equations into two groups: one contains the equations that are treated as equality constraints, and the sum of squares of the remaining equations is taken as the objective function. However, unlike the methods in [19–23], we update the two groups only at the iterations where it is genuinely needed, which reduces the computational cost to a certain degree. Another merit of our approach is to apply the nonmonotone idea to the sufficient reduction conditions and to the filter, which yields a more flexible acceptance behavior than monotone methods. Moreover, in our algorithm the two groups of equations cannot be changed after an f-type iteration; thus, in the case $|\mathcal{A}|<\infty$, the two groups are fixed after a finite number of iterations. Since the filter is not updated at an f-type iteration, the global convergence is naturally discussed separately according to whether the number of filter updates is finite or infinite, and the global convergence property is established under reasonable conditions. Finally, numerical experiments show that the proposed method is effective.

The paper is outlined as follows. In Section 2, we describe and analyze the nonmonotone line search filter method. In Section 3 we prove the global convergence of the proposed algorithm. Finally, some numerical tests are given in Section 4.

2. A Nonmonotone Line Search Filter Algorithm

Throughout this paper, we use the notations $m_k(x)=\|c_{S_1}(x)\|_2^2=\sum_{i\in S_1}c_i^2(x)$ and $\theta_k(x)=\|c_{S_2}(x)\|_2^2=\sum_{i\in S_2}c_i^2(x)$. In addition, we denote by $\mathcal{A}$ the set of indices of those iterations at which the filter has been augmented.

The linearization of the KKT conditions of (1.3) at the $k$th iterate $x_k$ is
$$\begin{pmatrix} B_k & A_k^{S_2} \\ (A_k^{S_2})^T & 0 \end{pmatrix}\begin{pmatrix} s_k \\ \lambda_+^k \end{pmatrix}=-\begin{pmatrix} g_k \\ c_k^{S_2} \end{pmatrix}, \qquad (2.1)$$
where $B_k$ is the Hessian or an approximate Hessian matrix of $L(x,\lambda)=m_k(x)+\lambda^Tc_{S_2}(x)$, $A_k^{S_2}=\nabla c_{S_2}(x_k)$, and $g_k=g(x_k)=\nabla m_k(x_k)$. The trial iterate is then $x_k(\alpha_{k,l})=x_k+\alpha_{k,l}s_k$, where $s_k$ is the solution of (2.1) and $\alpha_{k,l}\in(0,1]$ is a step size chosen by the line search.
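As a sketch, the system (2.1) can be assembled and solved directly with dense linear algebra; the helper below (illustrative names, NumPy assumed) returns the search direction and the multipliers.

```python
import numpy as np

def kkt_step(B, A, g, c_S2):
    """Solve the KKT system (2.1).
    B: (n, n) Hessian approximation B_k; A: (n, p) matrix whose columns are
    the gradients of c_i, i in S2; g: (n,) gradient of m_k; c_S2: (p,) values
    of the constraint equations at x_k."""
    n, p = A.shape
    K = np.block([[B, A], [A.T, np.zeros((p, p))]])
    rhs = -np.concatenate([g, c_S2])
    sol = np.linalg.solve(K, rhs)   # a LinAlgError here plays the role of
    return sol[:n], sol[n:]         # "no solution to (2.1)" in Step 3
```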

Now we describe the nonmonotone Armijo rule. Let $M$ be a nonnegative integer. For each $k$, let $m(k)$ satisfy $m(0)=1$ and $0\le m(k)\le\min\{m(k-1)+1,\ M\}$ for $k\ge1$. For fixed constants $\gamma_m,\gamma_\theta\in(0,1)$, we consider a trial point to be acceptable if it leads to sufficient progress toward either goal, that is, if
$$\theta_k\bigl(x_k(\alpha_{k,l})\bigr)\le(1-\gamma_\theta)\max\Bigl\{\theta_k(x_k),\ \sum_{r=0}^{m(k)-1}\lambda_{k_r}\theta_{k-r}(x_{k-r})\Bigr\}$$
or
$$m_k\bigl(x_k(\alpha_{k,l})\bigr)\le\max\Bigl\{m_k(x_k),\ \sum_{r=0}^{m(k)-1}\lambda_{k_r}m_{k-r}(x_{k-r})\Bigr\}-\gamma_m\theta_k(x_k), \qquad (2.2)$$
where $\lambda_{k_r}\in(0,1)$ and $\sum_{r=0}^{m(k)-1}\lambda_{k_r}=1$.
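The test (2.2) compares the trial values against nonmonotone reference values built from the last $m(k)$ iterations. A minimal sketch (our own function names; the uniform weights $\lambda_{k_r}=1/m(k)$ are just one illustrative choice satisfying $\sum_r\lambda_{k_r}=1$):

```python
def reference_value(history, current):
    """Nonmonotone reference value from (2.2): max of the current value and a
    convex combination of the last m(k) stored values (uniform weights)."""
    if not history:
        return current
    lam = 1.0 / len(history)
    return max(current, sum(lam * v for v in history))

def sufficient_progress(theta_trial, m_trial, theta_ref, m_ref, theta_k,
                        gamma_theta=0.1, gamma_m=0.1):
    """Acceptance test (2.2): enough progress in either theta or m."""
    return (theta_trial <= (1.0 - gamma_theta) * theta_ref
            or m_trial <= m_ref - gamma_m * theta_k)
```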

For convenience we set $\tilde m(x_k)=\max\{m_k(x_k),\ \sum_{r=0}^{m(k)-1}\lambda_{k_r}m_{k-r}(x_{k-r})\}$ and $\tilde\theta(x_k)=\max\{\theta_k(x_k),\ \sum_{r=0}^{m(k)-1}\lambda_{k_r}\theta_{k-r}(x_{k-r})\}$. In order to avoid convergence to a feasible but nonoptimal point, we consider the following switching condition:
$$g_k^Ts_k<-\xi s_k^TB_ks_k,\qquad -\alpha_{k,l}\,g_k^Ts_k>\bigl[\theta_k(x_k)\bigr]^{s_\theta}, \qquad (2.3)$$
with $\xi\in(0,1]$ and $s_\theta\in(0,1)$. If the switching condition holds, the trial point $x_k(\alpha_{k,l})$ has to satisfy the nonmonotone Armijo reduction condition
$$m_k\bigl(x_k(\alpha_{k,l})\bigr)\le\tilde m(x_k)+\tau_3\alpha_{k,l}\,g_k^Ts_k, \qquad (2.4)$$
where $\tau_3\in(0,1/2)$ is a fixed constant.
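In code, both tests reduce to a few scalar comparisons once $g_k^Ts_k$, $s_k^TB_ks_k$, and the reference value $\tilde m(x_k)$ are available. A sketch with illustrative names; the defaults take the parameter values of Section 4 where available, and $\tau_3=0.1$ is our own illustrative choice within $(0,1/2)$:

```python
def switching_condition(g_dot_s, sBs, alpha, theta_k, xi=1.0, s_theta=0.9):
    """Switching condition (2.3): descent strong enough to dominate the
    current infeasibility."""
    return g_dot_s < -xi * sBs and -alpha * g_dot_s > theta_k ** s_theta

def armijo_condition(m_trial, m_ref, alpha, g_dot_s, tau3=0.1):
    """Nonmonotone Armijo reduction (2.4); m_ref is the reference value
    m~(x_k), and g_dot_s < 0, so the bound tightens as alpha grows."""
    return m_trial <= m_ref + tau3 * alpha * g_dot_s
```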

To prevent the algorithm from cycling, it maintains a filter, a "taboo region" $\mathcal{F}_k\subseteq[0,\infty)\times[0,\infty)$, for each iteration $k$. The filter contains those combinations of constraint violation value $\theta$ and objective function value $m$ that are prohibited for a successful trial point in iteration $k$. During the line search, a trial point $x_k(\alpha_{k,l})$ is rejected if $(\theta_k(x_k(\alpha_{k,l})),\,m_k(x_k(\alpha_{k,l})))\in\mathcal{F}_k$. We then say that the trial point is not acceptable to the current filter, which is also written $x_k(\alpha_{k,l})\in\mathcal{F}_k$.
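A filter of this kind can be stored as a list of rectangle corners: each augmentation (see (2.6) below) adds one corner, and acceptability is a scan over the list. The class below is a minimal sketch with our own names, not the paper's implementation:

```python
class Filter:
    """Taboo region F_k: a pair (theta, m) is prohibited iff it dominates
    some stored corner componentwise."""

    def __init__(self):
        self.corners = []          # list of (theta_bound, m_bound) pairs

    def acceptable(self, theta, m):
        """A trial pair passes iff it lies outside every taboo rectangle."""
        return all(theta < tb or m < mb for tb, mb in self.corners)

    def augment(self, theta_ref, m_ref, theta_k, gamma_theta=0.1, gamma_m=0.1):
        """Augmentation rule (2.6): prohibit all (theta, m) with
        theta >= (1-gamma_theta)*theta_ref and m >= m_ref - gamma_m*theta_k."""
        self.corners.append(((1.0 - gamma_theta) * theta_ref,
                             m_ref - gamma_m * theta_k))
```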

If a trial point $x_k(\alpha_{k,l})\notin\mathcal{F}_k$ satisfies the switching condition (2.3) and the reduction condition (2.4), it is called an f-type point, and accordingly the iteration is called an f-type iteration. An f-type point is accepted as $x_{k+1}$ without updating the filter, that is,
$$\mathcal{F}_{k+1}=\mathcal{F}_k. \qquad (2.5)$$

If instead a trial point $x_k(\alpha_{k,l})\notin\mathcal{F}_k$ does not satisfy the switching condition (2.3) but does satisfy (2.2), we call it an h-type point, and accordingly the iteration an h-type iteration. An h-type point is accepted as $x_{k+1}$ with an update of the filter, that is,
$$\mathcal{F}_{k+1}=\mathcal{F}_k\cup\Bigl\{(\theta,m)\in\mathbb{R}^2:\ \theta\ge(1-\gamma_\theta)\tilde\theta(x_k),\ m\ge\tilde m(x_k)-\gamma_m\theta_k(x_k)\Bigr\}. \qquad (2.6)$$

In some cases it is not possible to find a trial step size that satisfies the above criteria. We therefore approximate a minimum desired step size using linear models of the involved functions. For this, we define
$$\alpha_k^{\min}=\begin{cases}\min\left\{1-\dfrac{(1-\gamma_\theta)\tilde\theta(x_k)}{\theta_k(x_k)},\ \dfrac{\tilde m(x_k)-m_k(x_k)-\gamma_m\theta_k(x_k)}{g_k^Ts_k},\ \dfrac{[\theta_k(x_k)]^{s_\theta}}{-g_k^Ts_k}\right\}, & \text{if }g_k^Ts_k<-\xi s_k^TB_ks_k,\\[1ex] 1-\dfrac{(1-\gamma_\theta)\tilde\theta(x_k)}{\theta_k(x_k)}, & \text{otherwise.}\end{cases} \qquad (2.7)$$
If the nonmonotone line search encounters a trial step size with $\alpha_{k,l}<\alpha_k^{\min}$, the algorithm reverts to a feasibility restoration phase. Here, we try to find a new iterate that is acceptable to the current filter and for which (2.2) holds, by reducing the constraint violation with some iterative method.
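The threshold (2.7) is a direct scalar computation. The sketch below mirrors the formula as reconstructed above (the exact form of the garbled original is our reconstruction, so treat it as an assumption); it presumes $\theta_k(x_k)>0$, since otherwise the restoration trigger is moot:

```python
def alpha_min(theta_k, theta_ref, m_k, m_ref, g_dot_s, sBs,
              gamma_theta=0.1, gamma_m=0.1, s_theta=0.9, xi=1.0):
    """Minimum trial step size alpha_k^min of (2.7); a line search that falls
    below this value triggers the feasibility restoration phase (Step 8).
    Assumes theta_k > 0."""
    a_theta = 1.0 - (1.0 - gamma_theta) * theta_ref / theta_k
    if g_dot_s < -xi * sBs:        # descent case: all three bounds apply
        return min(a_theta,
                   (m_ref - m_k - gamma_m * theta_k) / g_dot_s,
                   theta_k ** s_theta / (-g_dot_s))
    return a_theta
```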

The corresponding algorithm can be written as follows.

Algorithm 2.1. Step 1. Initialization: choose an initial guess $x_0$, parameters $\rho_1,\rho_2\in(0,1)$ with $\rho_1<\rho_2$, and $\epsilon>0$. Compute $g_0$, $c_i(x_0)$, $S_1^0$, $S_2^0$, and $\nabla c_i(x_0)$ for $i\in S_2^0$. Set $M>0$, $m(0)=1$, $k=0$, and $\mathcal{F}_0=\emptyset$.
Step 2. If $\|c(x_k)\|\le\epsilon$, stop.
Step 3. Solve (2.1) to obtain $s_k$. If (2.1) has no solution, go to Step 8. If $\|s_k\|\le\epsilon$, stop.
Step 4. Nonmonotone line search. Set $l=0$ and $\alpha_{k,l}=1$.
Step 4.1. If $\alpha_{k,l}<\alpha_k^{\min}$, where $\alpha_k^{\min}$ is given by (2.7), go to Step 8. Otherwise set $x_k(\alpha_{k,l})=x_k+\alpha_{k,l}s_k$. If $x_k(\alpha_{k,l})\in\mathcal{F}_k$, go to Step 4.3.
Step 4.2. Check sufficient decrease with respect to the current iterate.
Step 4.2.1. If the switching condition (2.3) and the nonmonotone reduction condition (2.4) hold, set $\mathcal{F}_{k+1}=\mathcal{F}_k$ and go to Step 5. If only the switching condition (2.3) is satisfied, go to Step 4.3.
Step 4.2.2. The switching condition (2.3) is not satisfied. If the nonmonotone filter condition (2.2) holds, set $x_{k+1}=x_k+\alpha_{k,l}s_k$, augment the filter using (2.6), and go to Step 6. Otherwise, go to Step 4.3.
Step 4.3. Choose $\alpha_{k,l+1}\in[\rho_1\alpha_{k,l},\ \rho_2\alpha_{k,l}]$. Set $l=l+1$ and go to Step 4.1.
Step 5. Set $x_{k+1}=x_k+\alpha_{k,l}s_k$, $S_1^{k+1}=S_1^k$, and $S_2^{k+1}=S_2^k$. Go to Step 7.
Step 6. Compute $S_1^{k+1}$ and $S_2^{k+1}$ by (1.3). If $(\theta_{k+1}(x_{k+1}),\,m_{k+1}(x_{k+1}))\in\mathcal{F}_{k+1}$, set $S_1^{k+1}=S_1^k$ and $S_2^{k+1}=S_2^k$.
Step 7. Compute $g_{k+1}$, $B_{k+1}$, $A_{k+1}$, and $m(k+1)=\min\{m(k)+1,\ M\}$. Set $k=k+1$ and go to Step 2.
Step 8 (restoration stage). Find $x_k^r=x_k+\alpha_k^rs_k^r$ such that $x_k^r$ is acceptable to $x_k$ and $(\theta_k(x_k^r),\,m_k(x_k^r))\notin\mathcal{F}_k$. Set $x_{k+1}=x_k^r$ and augment the filter by (2.6). Set $k=k+1$, $m(k)=1$, and go to Step 2.
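To show how these pieces fit together, the following self-contained sketch condenses Algorithm 2.1 under simplifying assumptions that we state explicitly: the partition $S_1/S_2$ is fixed after initialization (Step 6 is omitted), the monotone special case $M=1$ is used so the reference values are just the current $m_k$ and $\theta_k$, $B_k$ is a regularized Gauss-Newton approximation, and the restoration phase (Step 8) merely signals failure. It is an illustration in Python, not the paper's MATLAB program.

```python
import numpy as np

def solve_sne(c, J, x0, n0, eps=1e-5, max_iter=200,
              g_th=0.1, g_m=0.1, s_th=0.9, xi=0.5, tau3=0.1, rho=0.5):
    x = x0.copy()
    order = np.argsort(-c(x) ** 2)             # split by residual size, cf. (1.3)
    S1, S2 = order[:n0], order[n0:]
    m = lambda z: np.sum(c(z)[S1] ** 2)        # objective part m_k
    th = lambda z: np.sum(c(z)[S2] ** 2)       # constraint violation theta_k
    filt = []                                  # filter: (theta, m) corners
    for _ in range(max_iter):
        cv, Jv = c(x), J(x)                    # residuals and Jacobian
        if np.linalg.norm(cv) <= eps:
            return x
        g = 2 * Jv[S1].T @ cv[S1]              # gradient of m_k
        B = 2 * Jv[S1].T @ Jv[S1] + 1e-8 * np.eye(len(x))  # Gauss-Newton B_k
        A = Jv[S2].T                           # columns: gradients of c_i, i in S2
        p = len(S2)
        K = np.block([[B, A], [A.T, np.zeros((p, p))]])
        s = np.linalg.solve(K, -np.concatenate([g, cv[S2]]))[:len(x)]
        if np.linalg.norm(s) <= eps:
            return x
        gts, sBs, thk, mk = g @ s, s @ B @ s, th(x), m(x)
        alpha, accepted = 1.0, False
        while alpha > 1e-12:                   # backtracking (Steps 4.1-4.3)
            xt = x + alpha * s
            tht, mt = th(xt), m(xt)
            if all(tht < a or mt < b for a, b in filt):      # not in filter
                if gts < -xi * sBs and -alpha * gts > thk ** s_th:
                    if mt <= mk + tau3 * alpha * gts:        # f-type: (2.3)+(2.4)
                        accepted = True
                        break
                elif tht <= (1 - g_th) * thk or mt <= mk - g_m * thk:
                    filt.append(((1 - g_th) * thk, mk - g_m * thk))  # (2.6)
                    accepted = True                          # h-type: (2.2)
                    break
            alpha *= rho
        if not accepted:
            raise RuntimeError("restoration phase needed (Step 8)")
        x = xt
    return x
```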

In a restoration algorithm, the infeasibility is to be reduced, so it is desirable to decrease the value of $\theta_k(x)$. The direct way is to apply Newton's method or a similar method to $\theta_k(x+s)=0$. We now give the restoration algorithm.

Restoration Algorithm
Step R1. Set $x_k^0=x_k$, $H_0=E_n$, $\Delta_k^0=\Delta_0$, $g_\theta=\nabla\theta_k(x)$, $j=0$, $\eta_1=0.25$, $\eta_2=0.75$.
Step R2. If $x_k^j$ is acceptable to $x_k$ and $(\theta_k(x_k^j),\,m_k(x_k^j))\notin\mathcal{F}_k$, set $x_k^r=x_k^j$ and stop.
Step R3. Solve
$$\min\ g_\theta^Ts+\frac12 s^TH_js\quad\text{s.t.}\quad\|s\|\le\Delta_k^j \qquad (2.8)$$
to get $s_k^j$, and let $r_k^j=\dfrac{\theta_k(x_k^j)-\theta_k(x_k^j+s_k^j)}{-g_\theta^Ts_k^j-\frac12(s_k^j)^TH_js_k^j}$.
Step R4. If $r_k^j\le\eta_1$, set $\Delta_k^{j+1}=\frac12\Delta_k^j$; if $r_k^j\ge\eta_2$, set $\Delta_k^{j+1}=2\Delta_k^j$; otherwise set $\Delta_k^{j+1}=\Delta_k^j$. Let $x_k^{j+1}=x_k^j+s_k^j$, update $H_j$ to $H_{j+1}$, set $j=j+1$, and go to Step R2.

The above restoration algorithm is an SQP method for solving $\theta_k(x+s)=0$. Of course, other restoration algorithms are possible, such as the Newton method, an interior point restoration algorithm, an SLP restoration algorithm, and so on.
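A minimal sketch of Steps R1-R4 follows, simplified by keeping $H_j=E_n$ throughout (the algorithm allows $H_j$ to be updated, e.g. by a quasi-Newton formula) and by recomputing $g_\theta$ at each inner iterate; with $H_j=I$ the trust-region subproblem (2.8) has the closed-form solution used below. The `acceptable` callback standing in for Step R2 is our own interface assumption.

```python
import numpy as np

def restoration(theta, grad_theta, x_k, acceptable, delta=1.0, max_iter=100):
    """Trust-region restoration stage (Steps R1-R4), sketched with H_j = I."""
    x = x_k.copy()
    for _ in range(max_iter):
        if acceptable(x):                    # Step R2
            return x
        g = grad_theta(x)
        gn = np.linalg.norm(g)
        if gn < 1e-14:                       # stationary point of theta_k
            break
        # Step R3: with H = I, the minimizer of g^T s + 0.5 ||s||^2 subject to
        # ||s|| <= delta is -g, truncated to the trust-region boundary.
        s = -g if gn <= delta else -(delta / gn) * g
        pred = -(g @ s) - 0.5 * (s @ s)      # predicted reduction
        r = (theta(x) - theta(x + s)) / pred
        # Step R4: standard radius update, then take the step.
        if r <= 0.25:
            delta *= 0.5
        elif r >= 0.75:
            delta *= 2.0
        x = x + s
    raise RuntimeError("restoration did not reach an acceptable point")
```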

3. Global Convergence of Algorithm

In this section, we present a proof of global convergence of Algorithm 2.1. We first state the following assumptions in technical terms.

Assumptions. (A1) All points $x_k$ that are sampled by the algorithm lie in a nonempty closed and bounded set $X$.
(A2) The functions $c_i(x)$, $i=1,2,\ldots,m$, are all twice continuously differentiable on an open set containing $X$.
(A3) There exist two constants $b\ge a>0$ such that the matrix sequence $\{B_k\}$ satisfies $a\|s\|^2\le s^TB_ks\le b\|s\|^2$ for all $k$ and all $s\in\mathbb{R}^n$.
(A4) $A_k^{S_2}$ has full column rank, and $\|s_k\|\le\gamma_s$ for all $k$, with a positive constant $\gamma_s$.

In the remainder of this section, we will not consider the case where Algorithm 2.1 terminates successfully in Step 2, since in this situation the global convergence is trivial.

Lemma 3.2. Under Assumptions A1 and A2, the solution of (2.1) with exact (or inexact) line search satisfies the following descent conditions:
$$\bigl|\theta_k(x_k+\alpha s_k)-(1-2\alpha)\theta_k(x_k)\bigr|\le\tau_1\alpha^2\|s_k\|^2, \qquad (3.1)$$
$$\bigl|m_k(x_k+\alpha s_k)-m_k(x_k)-\alpha g_k^Ts_k\bigr|\le\tau_2\alpha^2\|s_k\|^2, \qquad (3.2)$$
where $\alpha\in(0,1)$ and $\tau_1,\tau_2$ are positive constants independent of $k$.

Proof. By the Taylor expansion of $c_i^2(x_k+\alpha s_k)$ for $i\in S_2$, we obtain
$$\begin{aligned}
\bigl|c_i^2(x_k+\alpha s_k)-c_i^2(x_k)-2\alpha c_i(x_k)\nabla c_i(x_k)^Ts_k\bigr|
&=\Bigl|\tfrac12(\alpha s_k)^T\nabla^2\bigl(c_i^2\bigr)(x_k+\zeta\alpha s_k)(\alpha s_k)\Bigr|\\
&=\alpha^2\Bigl|s_k^T\bigl[c_i\nabla^2c_i+\nabla c_i\nabla c_i^T\bigr](x_k+\zeta\alpha s_k)\,s_k\Bigr|\\
&\le\frac1m\,\tau_1\alpha^2\|s_k\|^2, \qquad (3.3)
\end{aligned}$$
where $\zeta\in[0,1]$ and the last inequality follows from Assumptions A1 and A2. Furthermore, from (2.1) we immediately obtain $c_i(x_k)+\nabla c_i(x_k)^Ts_k=0$ for $i\in S_2$, that is, $2\alpha c_i^2(x_k)+2\alpha c_i(x_k)\nabla c_i(x_k)^Ts_k=0$. Since $|S_2|\le m$, we thereby have
$$\begin{aligned}
\bigl|\theta_k(x_k+\alpha s_k)-(1-2\alpha)\theta_k(x_k)\bigr|
&=\Bigl|\sum_{i\in S_2}\bigl[c_i^2(x_k+\alpha s_k)-(1-2\alpha)c_i^2(x_k)\bigr]\Bigr|\\
&\le\sum_{i\in S_2}\bigl|c_i^2(x_k+\alpha s_k)-c_i^2(x_k)-2\alpha c_i(x_k)\nabla c_i(x_k)^Ts_k\bigr|\\
&\le m\cdot\frac1m\,\tau_1\alpha^2\|s_k\|^2=\tau_1\alpha^2\|s_k\|^2, \qquad (3.4)
\end{aligned}$$
so the first inequality holds.
By the Taylor expansion of $m_k(x_k+\alpha s_k)=\sum_{i\in S_1}c_i^2(x_k+\alpha s_k)$, we then have
$$\Bigl|\sum_{i\in S_1}c_i^2(x_k+\alpha s_k)-\sum_{i\in S_1}c_i^2(x_k)-\alpha g_k^Ts_k\Bigr|=\Bigl|\tfrac12\alpha^2s_k^T\nabla^2\Bigl(\sum_{i\in S_1}c_i^2\Bigr)(x_k+\varrho\alpha s_k)\,s_k\Bigr|\le\tau_2\alpha^2\|s_k\|^2, \qquad (3.5)$$
where $\varrho\in[0,1]$ and the last inequality again follows from Assumptions A1 and A2. That is,
$$\bigl|m_k(x_k+\alpha s_k)-m_k(x_k)-\alpha g_k^Ts_k\bigr|\le\tau_2\alpha^2\|s_k\|^2, \qquad (3.6)$$
which is exactly (3.2).

Lemma 3.3. Let $\{x_{k_i}\}$ be a subsequence of iterates for which (2.3) holds and which share the same $S_1$ and $S_2$. Then there exists some $\bar\alpha\in(0,1]$ such that, for all $\alpha\in(0,\bar\alpha]$,
$$m_{k_i}(x_{k_i}+\alpha s_{k_i})\le m_{k_i}(x_{k_i})+\alpha\tau_3g_{k_i}^Ts_{k_i}. \qquad (3.7)$$

Proof. Because the $\{x_{k_i}\}$ share the same $S_1$ and $S_2$, the function $m_{k_i}(x)$ is fixed along the subsequence, and by (2.3) $s_{k_i}$ is a descent direction. Hence there exists some $\bar\alpha\in(0,1]$ satisfying (3.7).

Theorem 3.4. Suppose that $\{x_k\}$ is an infinite sequence generated by Algorithm 2.1 and $|\mathcal{A}|<\infty$. Then
$$\lim_{k\to\infty}\Bigl(\bigl\|c_k^{S_2^k}\bigr\|+\|s_k\|\Bigr)=0, \qquad (3.8)$$
namely, every limit point is an $\epsilon$-solution of (1.1) or a local infeasible point. If the gradients of $c_i(x_k)$ are linearly independent for all $k$ and $i=1,2,\ldots,m$, then a solution of (1.1) is obtained.

Proof. From $|\mathcal{A}|<\infty$ we know the filter is updated only finitely many times, so there exists $K$ such that for $k>K$ the filter is not updated. Since both h-type iterations and the restoration stage require an update of the filter, for $k>K$ the algorithm performs only f-type iterations. We then have that for all $k>K$ both conditions (2.3) and (2.4) are satisfied with $x_{k+1}=x_k+\alpha_ks_k$ and $m_k(x)=m_K(x)$.
Then by (2.4) we get $m_k(x_{k+1})\le\max\{m_k(x_k),\ \sum_{r=0}^{m(k)-1}\lambda_{k_r}m_{k-r}(x_{k-r})\}+\tau_3\alpha_kg_k^Ts_k$. We first show that for all $k\ge K+1$,
$$m_k(x_k)<\tilde m(x_K)+\lambda\tau_3\sum_{r=K}^{k-2}\alpha_rg_r^Ts_r+\tau_3\alpha_{k-1}g_{k-1}^Ts_{k-1}<\tilde m(x_K)+\lambda\tau_3\sum_{r=K}^{k-1}\alpha_rg_r^Ts_r, \qquad (3.9)$$
where $\tilde m(x_K)=\max\{m_K(x_K),\ \sum_{r=0}^{m(K)-1}\lambda_{K_r}m_{K-r}(x_{K-r})\}$ and $\lambda\in(0,1)$ is a lower bound for the weights $\lambda_{k_r}$. We prove (3.9) by induction.
If $k=K+1$, we have $m_{K+1}(x_{K+1})\le\tilde m(x_K)+\tau_3\alpha_Kg_K^Ts_K<\tilde m(x_K)+\lambda\tau_3\alpha_Kg_K^Ts_K$, since $\lambda<1$ and $g_K^Ts_K<0$. Suppose that the claim is true for $K+1,K+2,\ldots,k$; we then consider two cases.
Case 1. If $\max\{m_k(x_k),\ \sum_{r=0}^{m(k)-1}\lambda_{k_r}m_{k-r}(x_{k-r})\}=m_k(x_k)$, it is clear that
$$m_{k+1}(x_{k+1})\le m_k(x_k)+\tau_3\alpha_kg_k^Ts_k<\tilde m(x_K)+\lambda\tau_3\sum_{r=K}^{k-1}\alpha_rg_r^Ts_r+\tau_3\alpha_kg_k^Ts_k<\tilde m(x_K)+\lambda\tau_3\sum_{r=K}^{k}\alpha_rg_r^Ts_r. \qquad (3.10)$$
Case 2. If $\max\{m_k(x_k),\ \sum_{r=0}^{m(k)-1}\lambda_{k_r}m_{k-r}(x_{k-r})\}=\sum_{r=0}^{m(k)-1}\lambda_{k_r}m_{k-r}(x_{k-r})$, let $u=m(k)-1$. Using $\sum_{t=0}^{u}\lambda_{k_t}=1$ and $\lambda\le\lambda_{k_t}<1$, together with the induction hypothesis applied to each $m_{k-t}(x_{k-t})$, we have
$$\begin{aligned}
m_{k+1}(x_{k+1})&\le\sum_{t=0}^{u}\lambda_{k_t}m_{k-t}(x_{k-t})+\tau_3\alpha_kg_k^Ts_k\\
&<\sum_{t=0}^{u}\lambda_{k_t}\Bigl[\tilde m(x_K)+\lambda\tau_3\sum_{r=K}^{k-t-2}\alpha_rg_r^Ts_r+\tau_3\alpha_{k-t-1}g_{k-t-1}^Ts_{k-t-1}\Bigr]+\tau_3\alpha_kg_k^Ts_k\\
&<\tilde m(x_K)+\lambda\tau_3\sum_{r=K}^{k-u-2}\alpha_rg_r^Ts_r+\lambda\tau_3\sum_{r=k-u-1}^{k-1}\alpha_rg_r^Ts_r+\tau_3\alpha_kg_k^Ts_k\\
&=\tilde m(x_K)+\lambda\tau_3\sum_{r=K}^{k-1}\alpha_rg_r^Ts_r+\tau_3\alpha_kg_k^Ts_k<\tilde m(x_K)+\lambda\tau_3\sum_{r=K}^{k}\alpha_rg_r^Ts_r. \qquad (3.11)
\end{aligned}$$
Moreover, since $m_k(x_k)$ is bounded below as $k\to\infty$, we get $-\sum_{r=K}^{\infty}\alpha_rg_r^Ts_r<\infty$, that is, $\lim_{k\to\infty}\alpha_kg_k^Ts_k=0$. By Lemma 3.3 there exists $\bar\alpha\in(0,1]$ such that $\alpha_k\ge\bar\alpha$, and hence $\lim_{k\to\infty}g_k^Ts_k=0$. Together with $g_k^Ts_k<-\xi s_k^TB_ks_k$ and Assumption A3, this gives $\lim_{k\to\infty}\|s_k\|=0$. From $-\alpha_{k,l}g_k^Ts_k>[\theta_k(x_k)]^{s_\theta}$ it is then easy to obtain $\lim_{k\to\infty}\theta_k(x_k)=0$. This completes the proof.

Lemma 3.5. Under Assumptions A1, A2, and A4, suppose that $g_k^Ts_k\le-\varepsilon_0$ for a positive constant $\varepsilon_0$ independent of $k$ (along a subsequence) and that $(\theta_k(x_k),\,m_k(x_k))\notin\mathcal{F}_k$. Then there exist $\gamma_1,\gamma_2>0$ such that $(\theta_k(x_k+\alpha s_k),\,m_k(x_k+\alpha s_k))\notin\mathcal{F}_k$ for all such $k$ and all $\alpha\le\min\{\gamma_1,\ \gamma_2\theta_k(x_k)\}$.

Proof. Choose $\gamma_1=\varepsilon_0/(\tau_2\gamma_s^2)$; then $\alpha\le\gamma_1$ implies $-\alpha\varepsilon_0+\tau_2\alpha^2\gamma_s^2\le0$. We thus note from (3.2) that
$$m_k(x_k+\alpha s_k)\le m_k(x_k)+\alpha g_k^Ts_k+\tau_2\alpha^2\|s_k\|^2\le m_k(x_k)-\alpha\varepsilon_0+\tau_2\alpha^2\gamma_s^2\le m_k(x_k). \qquad (3.12)$$
Let $\gamma_2=2/(\tau_1\gamma_s^2)$; then $\alpha\le\gamma_2\theta_k(x_k)$ implies $-2\alpha\theta_k(x_k)+\tau_1\alpha^2\gamma_s^2\le0$. So from (3.1) we obtain
$$\theta_k(x_k+\alpha s_k)\le\theta_k(x_k)-2\alpha\theta_k(x_k)+\tau_1\alpha^2\|s_k\|^2\le\theta_k(x_k)-2\alpha\theta_k(x_k)+\tau_1\alpha^2\gamma_s^2\le\theta_k(x_k). \qquad (3.13)$$
We further note a fact that follows from the definition of the filter: if $(\bar\theta,\bar m)\notin\mathcal{F}_k$ and $\theta\le\bar\theta$, $m\le\bar m$, then $(\theta,m)\notin\mathcal{F}_k$. Thus from $(\theta_k(x_k),\,m_k(x_k))\notin\mathcal{F}_k$, $m_k(x_k+\alpha s_k)\le m_k(x_k)$, and $\theta_k(x_k+\alpha s_k)\le\theta_k(x_k)$, we have $(\theta_k(x_k+\alpha s_k),\,m_k(x_k+\alpha s_k))\notin\mathcal{F}_k$.

Lemma 3.6. If $g_k^Ts_k\le-\varepsilon_0$ for a positive constant $\varepsilon_0$ independent of $k$ (along a subsequence), then there exists a constant $\bar\alpha>0$ such that, for all such $k$ and all $\alpha\le\bar\alpha$,
$$m_k(x_k+\alpha s_k)\le\max\Bigl\{m_k(x_k),\ \sum_{r=0}^{m(k)-1}\lambda_{k_r}m_{k-r}(x_{k-r})\Bigr\}+\tau_3\alpha g_k^Ts_k. \qquad (3.14)$$

Proof. Let $\bar\alpha=(1-\tau_3)\varepsilon_0/(\tau_2\gamma_s^2)$. In view of (3.2), $\|s_k\|\le\gamma_s$, and $\alpha\le\bar\alpha$, we know
$$\begin{aligned}
m_k(x_k+\alpha s_k)-\max\Bigl\{m_k(x_k),\ \sum_{r=0}^{m(k)-1}\lambda_{k_r}m_{k-r}(x_{k-r})\Bigr\}-\alpha g_k^Ts_k
&\le m_k(x_k+\alpha s_k)-m_k(x_k)-\alpha g_k^Ts_k\\
&\le\tau_2\alpha^2\|s_k\|^2\le\tau_2\alpha\bar\alpha\gamma_s^2=(1-\tau_3)\alpha\varepsilon_0\\
&\le-(1-\tau_3)\alpha g_k^Ts_k, \qquad (3.15)
\end{aligned}$$
which shows that the assertion of the lemma follows.

Theorem 3.7. Suppose that $\{x_k\}$ is an infinite sequence generated by Algorithm 2.1 and $|\mathcal{A}|=\infty$. Then there exists at least one accumulation point that is an $\epsilon$-solution of (1.1) or a local infeasible point; namely,
$$\liminf_{k\to\infty}\Bigl(\bigl\|c_k^{S_2^k}\bigr\|+\|s_k\|\Bigr)=0. \qquad (3.16)$$
If the gradients of $c_i(x_k)$ are linearly independent for all $k$ and $i=1,2,\ldots,m$, then a solution of (1.1) is obtained.

Proof. We first prove that $\lim_{k\to\infty,\,k\in\mathcal{A}}\theta_k(x_k)=0$.
Suppose by contradiction that there exists an infinite subsequence $\{k_i\}\subseteq\mathcal{A}$ such that $\theta_{k_i}(x_{k_i})\ge\varepsilon$ for some $\varepsilon>0$. At each iteration $k_i$, the pair $(\theta_{k_i}(x_{k_i}),\,m_{k_i}(x_{k_i}))$ is added to the filter, which means that no other pair $(\theta,m)$ can be added to the filter at a later stage within the region
$$\bigl[\theta(x_{k_i})-\gamma_\theta\varepsilon,\ \theta(x_{k_i})\bigr]\times\bigl[m(x_{k_i})-\gamma_m\varepsilon,\ m(x_{k_i})\bigr], \qquad (3.17)$$
and the area of each of these squares is at least $\gamma_\theta\gamma_m\varepsilon^2$.
By Assumption A1 we have $\sum_{i=1}^{m}c_i^2(x_k)\le M_{\max}$ for some constant $M_{\max}$. Since $0\le m_k(x_k)\le m_k(x_k)+\theta_k(x_k)=\sum_{i=1}^{m}c_i^2(x_k)$ and $0\le\theta_k(x_k)\le m_k(x_k)+\theta_k(x_k)=\sum_{i=1}^{m}c_i^2(x_k)$, the pairs $(\theta,m)$ associated with the filter are restricted to
$$\mathcal{H}=\bigl[0,M_{\max}\bigr]\times\bigl[0,M_{\max}\bigr]. \qquad (3.18)$$
Thereby $\mathcal{H}$ is completely covered by at most a finite number of such regions, in contradiction to the infinite subsequence $\{k_i\}$ satisfying $\theta_{k_i}(x_{k_i})\ge\varepsilon$. Therefore $\lim_{k\to\infty,\,k\in\mathcal{A}}\theta_k(x_k)=0$.
By Assumption A1 and $|\mathcal{A}|=\infty$, there exists an accumulation point $\bar x$, that is, $\lim_{i\to\infty}x_{k_i}=\bar x$ with $k_i\in\mathcal{A}$. It follows from $\lim_{k\to\infty,\,k\in\mathcal{A}}\theta_k(x_k)=0$ that
$$\lim_{i\to\infty}\theta_{k_i}(x_{k_i})=0, \qquad (3.19)$$
which implies $\lim_{i\to\infty}\|c_{k_i}^{S_2^{k_i}}\|=0$. If $\lim_{i\to\infty}\|s_{k_i}\|=0$, then (3.16) is true. Otherwise, there exist a subsequence $\{x_{k_{i_j}}\}$ of $\{x_{k_i}\}$ and a constant $\varepsilon_1>0$ such that for all $k_{i_j}$,
$$\|s_{k_{i_j}}\|\ge\varepsilon_1. \qquad (3.20)$$
The choice of $\{x_{k_{i_j}}\}$ implies
$$k_{i_j}\in\mathcal{A}\quad\text{for all }k_{i_j}. \qquad (3.21)$$
According to $\|s_{k_{i_j}}\|\ge\varepsilon_1$, Assumptions A3 and A4 (which yield $s^TB_ks\ge a\|s\|^2$ and a bound $\|\lambda_+^k\|\le c_1$ on the multipliers), and $\xi\in(0,1)$, we have from (2.1)
$$g_{k_{i_j}}^Ts_{k_{i_j}}+\xi s_{k_{i_j}}^TB_{k_{i_j}}s_{k_{i_j}}=(\xi-1)s_{k_{i_j}}^TB_{k_{i_j}}s_{k_{i_j}}+\bigl(\lambda_+^{k_{i_j}}\bigr)^Tc_{k_{i_j}}^{S_2^{k_{i_j}}}\le(\xi-1)a\bigl\|s_{k_{i_j}}\bigr\|^2+c_1\bigl\|c_{k_{i_j}}^{S_2^{k_{i_j}}}\bigr\|\le(\xi-1)a\varepsilon_1^2+c_1\bigl\|c_{k_{i_j}}^{S_2^{k_{i_j}}}\bigr\|. \qquad (3.22)$$
Since $\xi-1<0$ and $\|c_{k_{i_j}}^{S_2^{k_{i_j}}}\|\to0$ as $j\to\infty$, we obtain
$$g_{k_{i_j}}^Ts_{k_{i_j}}\le-\xi s_{k_{i_j}}^TB_{k_{i_j}}s_{k_{i_j}} \qquad (3.23)$$
for sufficiently large $j$. Similarly,
$$-\alpha g_{k_{i_j}}^Ts_{k_{i_j}}-\bigl[\theta_{k_{i_j}}(x_{k_{i_j}})\bigr]^{s_\theta}\ge\alpha\Bigl(s_{k_{i_j}}^TB_{k_{i_j}}s_{k_{i_j}}-c_1\bigl\|c_{k_{i_j}}^{S_2^{k_{i_j}}}\bigr\|\Bigr)-\bigl[\theta_{k_{i_j}}(x_{k_{i_j}})\bigr]^{s_\theta}\ge\alpha\Bigl(a\varepsilon_1^2-c_1\bigl\|c_{k_{i_j}}^{S_2^{k_{i_j}}}\bigr\|\Bigr)-\bigl[\theta_{k_{i_j}}(x_{k_{i_j}})\bigr]^{s_\theta}, \qquad (3.24)$$
and thus, since $\theta_{k_{i_j}}(x_{k_{i_j}})\to0$,
$$-\alpha g_{k_{i_j}}^Ts_{k_{i_j}}>\bigl[\theta_{k_{i_j}}(x_{k_{i_j}})\bigr]^{s_\theta} \qquad (3.25)$$
for sufficiently large $j$. This means the switching condition (2.3) is satisfied for sufficiently large $j$. Therefore, the reason for accepting $x_{k+1}$ must be that it satisfies the nonmonotone Armijo condition (2.4). Indeed, let $\varepsilon_0=\xi a\varepsilon_1^2$; then $g_{k_{i_j}}^Ts_{k_{i_j}}\le-\xi s_{k_{i_j}}^TB_{k_{i_j}}s_{k_{i_j}}\le-\xi a\varepsilon_1^2=-\varepsilon_0$, and by Lemma 3.6 the nonmonotone Armijo condition (2.4) is satisfied. Consequently, the filter is not augmented in iteration $k_{i_j}$, which is a contradiction to (3.21). The whole proof is completed.

4. Numerical Experiments

In this section, we test our algorithm on some typical test problems. The program is coded in MATLAB, and the error tolerance is $\epsilon=10^{-5}$ throughout. The selected parameter values are $\gamma_\theta=0.1$, $\gamma_m=0.1$, $s_\theta=0.9$, $\rho_1=0.25$, $\rho_2=0.75$, and $M=3$. In the following tables, NIT, NOF, and NOG denote the number of iterations, function evaluations, and gradient evaluations, respectively.

Example 4.1. Find a solution of the following system of nonlinear equations:
$$\begin{pmatrix} x+3y^2 \\ (x-1.0)\,y \end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix}. \qquad (4.1)$$
The only solution of Example 4.1 is $(x^*,y^*)=(0,0)$. Define the line $\Gamma=\{(1,y):\ y\in\mathbb{R}\}$. If the starting point $(x_0,y_0)\in\Gamma$, the Newton iterates [24] are confined to $\Gamma$. We choose two starting points belonging to $\Gamma$ in the experiments, and in both cases the solution $(x^*,y^*)$ is obtained. Table 1 shows the results.
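The confinement to $\Gamma$ is easy to verify numerically. The snippet below (our own illustration, using the system as reconstructed in (4.1)) runs a few plain Newton steps from a point on $\Gamma$ and shows that the first component never leaves $1.0$:

```python
import numpy as np

def c(v):
    x, y = v
    return np.array([x + 3 * y**2, (x - 1.0) * y])

def jac(v):
    x, y = v
    return np.array([[1.0, 6 * y], [y, x - 1.0]])

v = np.array([1.0, 2.0])                 # a starting point on Gamma
for _ in range(5):
    v = v - np.linalg.solve(jac(v), c(v))
    print(v)                             # first component stays at 1.0
```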

Table 1: Numerical results of Example 4.1.

Example 4.2. Consider the system of nonlinear equations
$$c_1(x)=x_1^3-x_2^3+x_3^3-1,\qquad c_2(x)=x_1^2+x_2^2-x_3^2-1,\qquad c_3(x)=x_1+x_2+x_3-3. \qquad (4.2)$$
The solution to Example 4.2 is $x^*=(1,1,1)^T$. The numerical results for Example 4.2 are given in Table 2.

Table 2: Numerical results of Example 4.2.

Example 4.3. Find a solution of the nonlinear equations system
$$\begin{pmatrix} x \\ \dfrac{10x}{x+0.1}+2y^2 \end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix}. \qquad (4.3)$$
The unique solution is $(x^*,y^*)=(0,0)$. It has been proved in [2] that, from the initial point $(x_0,y_0)=(3,1)$, the iterates converge to the point $z=(1.8016,0.0000)$, which is not a stationary point. Utilizing our algorithm, a sequence of points converging to $(x^*,y^*)$ is obtained. The detailed numerical results for Example 4.3 are listed in Table 3.

Table 3: Numerical results of Example 4.3.

Example 4.4. Consider the following system of nonlinear equations:
$$c_1(x)=x_1^2+x_1x_2+2x_2^2-x_1-x_2-2,\qquad c_2(x)=2x_1^2+x_1x_2+3x_2^2-x_1-x_2-4. \qquad (4.4)$$
This example has three solutions: $(1,1)^T$, $(-1,1)^T$, and $(1,-1)^T$. The numerical results for Example 4.4 are given in Table 4.

Table 4: Numerical results of Example 4.4.

Example 4.5. Consider the system of nonlinear equations
$$c_i(x)=-(N+1)+2x_i+\sum_{j=1,\,j\ne i}^{N}x_j,\quad i=1,2,\ldots,N-1, \qquad (4.5)$$
$$c_N(x)=-1+\prod_{j=1}^{N}x_j, \qquad (4.6)$$
with the initial point $x_i^{(0)}=0.5$, $i=1,2,\ldots,N$. The solution to Example 4.5 is $x^*=(1,1,\ldots,1)^T$. The numerical results for Example 4.5 are given in Table 5.

Table 5: Numerical results of Example 4.5.

For the above problems, running Algorithm 2.1 with different starting points yields the results in the corresponding tables, which show that the proposed algorithm is practical and effective. In terms of computational efficiency, our algorithm is competitive with the method in [22]. The results in Table 5 show, in fact, that our method also succeeds in cases where more equations are active.

Constrained optimization approaches for attacking systems of nonlinear equations are of considerable interest, and in this paper they have been further developed by means of the nonmonotone line search filter strategy. The local convergence properties of the algorithm are a topic for further study.

Acknowledgment

The research is supported by the National Natural Science Foundation of China (no. 11126060) and Science & Technology Program of Shanghai Maritime University (no. 20120060).

References

  1. J. Nocedal and S. J. Wright, Numerical Optimization, Springer Series in Operations Research, Springer, New York, NY, USA, 1999.
  2. M. J. D. Powell, “A hybrid method for nonlinear equations,” in Numerical Methods for Nonlinear Algebraic Equations, P. Rabinowitz, Ed., pp. 87–114, Gordon and Breach, London, UK, 1970.
  3. R. Fletcher and S. Leyffer, “Nonlinear programming without a penalty function,” Mathematical Programming, vol. 91, no. 2, pp. 239–269, 2002.
  4. R. Fletcher, S. Leyffer, and P. L. Toint, “On the global convergence of a filter-SQP algorithm,” SIAM Journal on Optimization, vol. 13, no. 1, pp. 44–59, 2002.
  5. R. Fletcher, N. I. M. Gould, S. Leyffer, P. L. Toint, and A. Wächter, “Global convergence of a trust-region SQP-filter algorithm for general nonlinear programming,” SIAM Journal on Optimization, vol. 13, no. 3, pp. 635–659, 2002.
  6. S. Ulbrich, “On the superlinear local convergence of a filter-SQP method,” Mathematical Programming, vol. 100, no. 1, pp. 217–245, 2004.
  7. C. Audet and J. E. Dennis Jr., “A pattern search filter method for nonlinear programming without derivatives,” SIAM Journal on Optimization, vol. 14, no. 4, pp. 980–1010, 2004.
  8. C. M. Chin and R. Fletcher, “On the global convergence of an SLP-filter algorithm that takes EQP steps,” Mathematical Programming, vol. 96, no. 1, pp. 161–177, 2003.
  9. M. Ulbrich, S. Ulbrich, and L. N. Vicente, “A globally convergent primal-dual interior filter method for nonconvex nonlinear programming,” Mathematical Programming, vol. 100, no. 2, pp. 379–410, 2003.
  10. R. Fletcher and S. Leyffer, “A bundle filter method for nonsmooth nonlinear optimization,” Tech. Rep. NA/195, Department of Mathematics, University of Dundee, Scotland, UK, 1999.
  11. E. Karas, A. Ribeiro, C. Sagastizábal, and M. Solodov, “A bundle-filter method for nonsmooth convex constrained optimization,” Mathematical Programming, vol. 116, no. 1-2, pp. 297–320, 2009.
  12. N. I. M. Gould, S. Leyffer, and P. L. Toint, “A multidimensional filter algorithm for nonlinear equations and nonlinear least-squares,” SIAM Journal on Optimization, vol. 15, no. 1, pp. 17–38, 2004.
  13. N. I. M. Gould, C. Sainvitu, and P. L. Toint, “A filter-trust-region method for unconstrained optimization,” SIAM Journal on Optimization, vol. 16, no. 2, pp. 341–357, 2005.
  14. A. Wächter and L. T. Biegler, “Line search filter methods for nonlinear programming: motivation and global convergence,” SIAM Journal on Optimization, vol. 16, no. 1, pp. 1–31, 2005.
  15. A. Wächter and L. T. Biegler, “Line search filter methods for nonlinear programming: local convergence,” SIAM Journal on Optimization, vol. 16, no. 1, pp. 32–48, 2005.
  16. L. Grippo, F. Lampariello, and S. Lucidi, “A nonmonotone line search technique for Newton's method,” SIAM Journal on Numerical Analysis, vol. 23, no. 4, pp. 707–716, 1986.
  17. C. Gu and D. T. Zhu, “A non-monotone line search multidimensional filter-SQP method for general nonlinear programming,” Numerical Algorithms, vol. 56, no. 4, pp. 537–559, 2011.
  18. Z. S. Yu and D. G. Pu, “A new nonmonotone line search technique for unconstrained optimization,” Journal of Computational and Applied Mathematics, vol. 219, no. 1, pp. 134–144, 2008.
  19. R. Fletcher and S. Leyffer, “Filter-type algorithms for solving systems of algebraic equations and inequalities,” Dundee Numerical Analysis Report NA/204, 2001.
  20. P.-Y. Nie, “A null space method for solving system of equations,” Applied Mathematics and Computation, vol. 149, no. 1, pp. 215–226, 2004.
  21. P.-Y. Nie, “CDT like approaches for the system of nonlinear equations,” Applied Mathematics and Computation, vol. 172, no. 2, pp. 892–902, 2006.
  22. P.-Y. Nie, “An SQP approach with line search for a system of nonlinear equations,” Mathematical and Computer Modelling, vol. 43, no. 3-4, pp. 368–373, 2006.
  23. P.-Y. Nie, M.-Y. Lai, S.-J. Zhu, and P.-A. Zhang, “A line search filter approach for the system of nonlinear equations,” Computers & Mathematics with Applications, vol. 55, no. 9, pp. 2134–2141, 2008.
  24. R. H. Byrd, M. Marazzi, and J. Nocedal, “On the convergence of Newton iterations to non-stationary points,” Mathematical Programming, vol. 99, no. 1, pp. 127–148, 2004.