#### Abstract

The nonlinear complementarity problem can be reformulated as a nonlinear programming problem whose objective function may be nonsmooth. For this case, we use a decomposition strategy to split the nonsmooth function into a smooth part and a nonsmooth part. Combining this with the filter method, we present an improved filter algorithm with a decomposition strategy for solving the nonlinear complementarity problem, which yields better numerical results than the corresponding method without the filter technique. Under mild conditions, global convergence is established. Finally, a numerical example is reported.

#### 1. Introduction

The nonlinear complementarity problem (NCP) is to find a point $x \in \mathbb{R}^n$ such that
$$x \ge 0, \qquad F(x) \ge 0, \qquad x^T F(x) = 0, \tag{1}$$
where $F : \mathbb{R}^n \to \mathbb{R}^n$ is a continuously differentiable, locally Lipschitzian function and $n$ is the dimension of the variables.

The nonlinear complementarity problem has been utilized as a general framework for quadratic programming, linear complementarity, and other mathematical programming problems. A variety of methods have been proposed for solving it. One of the powerful approaches is to reformulate the nonlinear complementarity problem as an equivalent unconstrained optimization problem [1, 2] or as an equivalent system of nonlinear equations [3, 4]. In this approach, a merit function for the NCP is needed whose global minima coincide with the solutions of the NCP.

To construct a merit function, many kinds of NCP functions have appeared, leading to a system of equations. A function $\phi : \mathbb{R}^2 \to \mathbb{R}$ is called an NCP function if it satisfies
$$\phi(a, b) = 0 \Longleftrightarrow a \ge 0, \quad b \ge 0, \quad ab = 0. \tag{2}$$

Then the system of equations
$$\Phi(x) = 0, \quad \text{with } \Phi_i(x) = \phi(x_i, F_i(x)), \ i = 1, \dots, n, \tag{3}$$
is equivalent to (1) to a certain degree. Many algorithms based on (3) have been proposed, for example, Newton’s method and generalized Newton approaches [5, 6].

In this paper, we use the Fischer-Burmeister function as the NCP function; it is called the F-B NCP function and is given by
$$\phi(a, b) = \sqrt{a^2 + b^2} - a - b. \tag{4}$$

It is easy to check that $\phi$ is an NCP function, that it is locally Lipschitzian, and that it is differentiable everywhere except at $(a, b) = (0, 0)$. Hence we can reformulate problem (1) as the nonlinear system of equations
$$\Phi(x) = 0, \tag{5}$$
where the nonsmooth mapping $\Phi : \mathbb{R}^n \to \mathbb{R}^n$ is defined by
$$\Phi_i(x) = \phi(x_i, F_i(x)), \quad i = 1, \dots, n, \tag{6}$$
where $F_i$ denotes the $i$th component of $F$. For convenience, we write $\Phi(x)$ as $\Phi$ and $F(x)$ as $F$ when no confusion arises. We can now associate this system with its natural merit function, that is,
$$\Psi(x) = \frac{1}{2} \|\Phi(x)\|^2, \tag{7}$$
so that solving (1) is equivalent to finding a global solution of the problem
$$\min_{x \in \mathbb{R}^n} \Psi(x). \tag{8}$$
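For concreteness, the F-B reformulation above is easy to code. The following Python/NumPy sketch (function names are illustrative, not from the paper) builds the residual $\Phi$ and the merit function $\Psi$ for a user-supplied mapping $F$:

```python
import numpy as np

def fb(a, b):
    """Fischer-Burmeister function: fb(a, b) = sqrt(a^2 + b^2) - a - b.
    It vanishes exactly when a >= 0, b >= 0, and a * b = 0."""
    return np.sqrt(a**2 + b**2) - a - b

def Phi(x, F):
    """Componentwise F-B residual of the NCP defined by the mapping F."""
    return fb(x, F(x))

def Psi(x, F):
    """Natural merit function: half the squared norm of the residual."""
    r = Phi(x, F)
    return 0.5 * float(r @ r)
```

A point solves the NCP exactly when `Psi` vanishes there; for example, with `F = lambda x: x - 1.0` in one dimension, `Psi` is zero at `x = [1.0]`.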

We remark that, in order to find a solution of (1), one has to seek global solutions of (8), while the usual unconstrained minimization algorithms, such as Newton’s method or quasi-Newton approaches, compute the derivatives of $\Psi$. But in many cases the derivatives of $\Psi$ are not available because of the nonsmooth mapping $\Phi$. For this case, some so-called derivative-free methods [3, 4] have appeared that avoid computing the derivatives of $\Psi$, but they always demand that $F$ be a monotone function. In this paper, we use a decomposition technique to decompose the function $\Phi$ into a smooth part and a nonsmooth part; moreover, we impose no monotonicity assumption on $F$. Integrated with the trust region filter technique, we present a new algorithm to solve (8) and show that any accumulation point of the sequence generated by the algorithm is a solution of (1).

This paper is organized as follows. In Section 2, we review some definitions and preliminary results that will be used in the later sections. The algorithm is presented in Section 3. In Section 4, the global convergence theory is proved. The numerical results are reported in the last section.

#### 2. Preliminaries

In this section, we recall some definitions and preliminary results about the decomposition of the NCP function and the filter algorithm, which will be used in the subsequent analysis.

##### 2.1. Decomposition of NCP Function

If $\Phi$ is continuous and locally Lipschitzian, then the B-subdifferential of $\Phi$ at $x$ is
$$\partial_B \Phi(x) = \Big\{ V : V = \lim_{x^k \to x,\ x^k \in D_\Phi} \Phi'(x^k) \Big\},$$
where $D_\Phi$ denotes the set of points at which $\Phi$ is differentiable.

The Clarke generalized Jacobian of $\Phi$ at $x$ is defined by
$$\partial \Phi(x) = \operatorname{conv} \partial_B \Phi(x).$$

*Definition 1. * Let $\Phi$ be a locally Lipschitzian function. If for all $d \in \mathbb{R}^n$ the limit
$$\lim_{V \in \partial \Phi(x + t d'),\ d' \to d,\ t \downarrow 0} V d'$$
exists, then the function $\Phi$ is called semismooth at $x$.

Lemma 2. * Suppose that $\Phi$ is semismooth at $x$ and $V \in \partial \Phi(x + d)$. Then, as $d \to 0$, there hold*(i) *$V d - \Phi'(x; d) = o(\|d\|)$;*(ii) *$\Phi(x + d) - \Phi(x) - \Phi'(x; d) = o(\|d\|)$,
**
where $\Phi'(x; d)$ is called the directional derivative of $\Phi$ at $x$ in the direction $d$.*

Lemma 3 (see [7]). * If $F$ is a continuously differentiable mapping, then $\Phi$ defined by (6) is semismooth.*

Lemma 4. * There exist constants $c_2 \ge c_1 > 0$ such that
$$c_1 \|\min(x, F(x))\| \le \|\Phi(x)\| \le c_2 \|\min(x, F(x))\|,$$
where the minimum is taken componentwise.*

*Proof. * This follows immediately from Lemma 3.1 of Tseng [8].
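Assuming Lemma 4 is the classical comparison between the F-B function and the natural residual $\min(a, b)$ from Tseng’s Lemma 3.1, with constants $c_1 = 2 - \sqrt{2}$ and $c_2 = 2 + \sqrt{2}$, the bound can be spot-checked numerically:

```python
import numpy as np

def fb(a, b):
    # Fischer-Burmeister NCP function
    return np.sqrt(a**2 + b**2) - a - b

def bound_holds(a, b, c1=2 - np.sqrt(2), c2=2 + np.sqrt(2)):
    """Check c1 * |min(a, b)| <= |fb(a, b)| <= c2 * |min(a, b)|."""
    m = abs(min(a, b))
    v = abs(fb(a, b))
    return c1 * m - 1e-12 <= v <= c2 * m + 1e-12

# spot-check the bound on a batch of random points
rng = np.random.default_rng(0)
ok = all(bound_holds(a, b) for a, b in rng.uniform(-5.0, 5.0, size=(1000, 2)))
```

Both constants are attained: at $a = b = 1$ we get $|\phi| = 2 - \sqrt{2}$ with $|\min| = 1$, and at $a = b = -1$ we get $|\phi| = 2 + \sqrt{2}$.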

In the smooth case, for solving the nonlinear equation (5), the Levenberg-Marquardt method can be viewed as a method for generating a sequence of iterates $x^{k+1} = x^k + d^k$, where the step $d^k$ between iterates is a solution of the problem
$$\min_d \ \|\Phi(x^k) + \Phi'(x^k) d\| \quad \text{s.t. } \|d\| \le \Delta_k \tag{13}$$
for some bound $\Delta_k > 0$. The norm $\|\cdot\|$ denotes the $l_2$-norm.
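In the smooth case, problem (13) can be solved approximately by the classical damping device: increase the Levenberg-Marquardt parameter until the Gauss-Newton step fits inside the trust region. A minimal Python sketch (the damping schedule and constants are illustrative, not taken from the paper):

```python
import numpy as np

def lm_step(J, r, delta, lam=1e-8, max_tries=60):
    """Approximate solution of min_d ||r + J d|| s.t. ||d|| <= delta:
    solve the damped normal equations (J^T J + lam I) d = -J^T r and
    grow lam until the step lies inside the trust region."""
    n = J.shape[1]
    g = J.T @ r
    for _ in range(max_tries):
        d = np.linalg.solve(J.T @ J + lam * np.eye(n), -g)
        if np.linalg.norm(d) <= delta:
            return d
        lam *= 10.0
    # fallback: scale the last step onto the trust-region boundary
    return d * (delta / np.linalg.norm(d))
```

With a large radius this reduces to a plain Gauss-Newton step; with a tight radius the damping shrinks the step until it fits.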

But in the nonsmooth case, $\Phi'$ may not exist at some special points. However, in many cases one may decompose the nonsmooth function $\Phi$ into $\Phi = g + h$, where $g$ is smooth and $h$ is nonsmooth, while $h$ is relatively small compared to the function $g$. We call such a decomposition a smooth plus nonsmooth (SPN) decomposition. In a certain sense, $h$ can be regarded as a perturbation of the system. We now use
$$\min_d \ \|g(x^k) + g'(x^k) d\| \quad \text{s.t. } \|d\| \le \Delta_k \tag{14}$$
to replace (13).

*Definition 5. * We say that $\Phi = g_\mu + h_\mu$ is a regular SPN decomposition of $\Phi$ if and only if $g_\mu$ is smooth and, for any $\epsilon > 0$, it holds that
$$\|h_\mu(x)\| \le \epsilon \quad \text{for all } x,$$
as long as the decomposition parameter $\mu > 0$ is sufficiently small.

*Remark 6. * In fact, for some given $\mu > 0$, define the function $\phi_\mu : \mathbb{R}^2 \to \mathbb{R}$ by
$$\phi_\mu(a, b) = \sqrt{a^2 + b^2 + 2\mu^2} - a - b.$$
Let
$$(g_\mu(x))_i = \phi_\mu(x_i, F_i(x)), \quad i = 1, \dots, n, \qquad h_\mu(x) = \Phi(x) - g_\mu(x).$$
So we can see that $\Phi = g_\mu + h_\mu$. Then it is easy to see that $g_\mu$ obtained by the previous decomposition is continuously differentiable, while $h_\mu$ is nondifferentiable, and it holds that
$$\|h_\mu(x)\| \le \sqrt{2n}\, \mu.$$
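One concrete decomposition of this type is the Kanzow-style smoothing of the F-B function, a common choice (an assumption here, not necessarily the paper's exact construction). Componentwise it can be sketched as:

```python
import numpy as np

def fb(a, b):
    # nonsmooth Fischer-Burmeister function
    return np.sqrt(a**2 + b**2) - a - b

def fb_smooth(a, b, mu):
    # smoothed F-B function: continuously differentiable for mu > 0
    return np.sqrt(a**2 + b**2 + 2.0 * mu**2) - a - b

def spn_split(a, b, mu):
    """SPN decomposition phi = g + h with smooth g and a remainder h
    satisfying |h| <= sqrt(2) * mu, so h vanishes as mu -> 0."""
    g = fb_smooth(a, b, mu)
    h = fb(a, b) - g
    return g, h
```

Since $|\sqrt{a^2 + b^2 + 2\mu^2} - \sqrt{a^2 + b^2}| \le \sqrt{2}\,\mu$, the remainder can be driven below any tolerance by shrinking $\mu$, which is exactly the regularity required of an SPN decomposition.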

##### 2.2. Filter Algorithm

Filter algorithms are efficient algorithms for nonlinear programming that avoid the use of a penalty function [9, 10]. Recently, the filter technique has been extended to solve nonlinear equations and nonlinear least squares [11]. In this paper, it will be used to find a solution to the nonlinear complementarity problem.

As in the traditional filter technique, we define the objective function $f$ and the constraint violation function $\theta$.

The trial step should either reduce the value of the constraint violation function $\theta$ or the objective value of the function $f$. To ensure a sufficient decrease of at least one of the two criteria, we say that a point $x^{(1)}$ dominates a point $x^{(2)}$ whenever
$$\theta(x^{(1)}) \le \theta(x^{(2)}) \quad \text{and} \quad f(x^{(1)}) \le f(x^{(2)}).$$
We thus aim to accept a new iterate only if it is not dominated by any other iterate in the filter.

A filter set is a set of pairs $(\theta(x), f(x))$ in $\mathbb{R}^2$ such that no pair dominates any other.

In practical computation, a trial point $x^+$ is acceptable to the filter if and only if
$$\theta(x^+) \le (1 - \gamma)\,\theta_j \quad \text{or} \quad f(x^+) \le f_j - \gamma\,\theta_j$$
for all pairs $(\theta_j, f_j)$ in the filter, where $\gamma \in (0, 1)$ is a small positive constant.

As the algorithm progresses, we may want to add the pair $(\theta(x_k), f(x_k))$ to the filter. If an iterate $x_k$ is acceptable for the filter $\mathcal{F}_k$, we do this by adding the pair to the filter and removing from it every pair that it dominates. We also refer to this operation as “adding $x_k$ to the filter.”
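The filter bookkeeping described above (acceptance with a margin, then removal of dominated pairs) can be sketched as a small class; the envelope test below is one common variant, assumed here rather than quoted from the paper:

```python
class Filter:
    """Stores (theta, f) pairs; no stored pair dominates another."""

    def __init__(self, gamma=1e-4):
        self.gamma = gamma      # small margin constant in (0, 1)
        self.entries = []       # list of (theta, f) pairs

    def acceptable(self, theta, f):
        # a trial pair is acceptable iff, against every stored pair,
        # it sufficiently improves theta or sufficiently improves f
        return all(theta <= (1.0 - self.gamma) * th
                   or f <= fv - self.gamma * th
                   for th, fv in self.entries)

    def add(self, theta, f):
        # remove pairs dominated by the newcomer, then store it
        self.entries = [(th, fv) for th, fv in self.entries
                        if not (theta <= th and f <= fv)]
        self.entries.append((theta, f))
```

Here `acceptable` implements the envelope test and `add` implements “adding to the filter,” including the removal of dominated pairs.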

#### 3. An Improved Filter Algorithm for NCPs

In this section, we will present a decomposition filter method for the nonlinear complementarity problem and prove that it is well defined.

*Algorithm A*

*Step 1. *Choose an initial point $x_0$, the initial decomposition parameter $\mu_0 > 0$, an initial trust region radius $\Delta_0 > 0$, and the remaining constants of the method; set $k := 0$.

*Step 2. *Decompose $\Phi$ into $\Phi = g_{\mu_k} + h_{\mu_k}$. If the stopping criterion is satisfied, then stop.

*Step 3. *Solve (14) to get the trial step $d_k$. Compute the trial point $x_k + d_k$.

*Step 4. * If the trial point yields a sufficient decrease of the objective function, go to Step 6. Otherwise, go to Step 5.

*Step 5. * If the trial point is acceptable for the filter $\mathcal{F}_k$, add it to the filter and go to Step 6. Otherwise, shrink the trust region radius, update the decomposition parameter, keep the current point, and go to Step 3 (inner loop).

*Step 6. *Accept the trial point, update the trust region radius and the decomposition parameter, set $k := k + 1$, and go to Step 2 (outer loop).

*Remark 7. * Algorithm A is a trust-region-type filter method combined with a decomposition technique for the nonlinear complementarity problem.
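To make the structure of Algorithm A concrete, here is a heavily simplified end-to-end sketch in Python/NumPy. It uses the smoothed F-B residual as the smooth part, a finite-difference Gauss-Newton step clipped to the trust region, and a plain sufficient-decrease test in place of the full filter; every constant and update rule is illustrative, not the paper's:

```python
import numpy as np

def fb_mu(a, b, mu):
    # smoothed Fischer-Burmeister function (assumed smoothing; mu = 0
    # recovers the nonsmooth F-B function)
    return np.sqrt(a**2 + b**2 + 2.0 * mu**2) - a - b

def solve_ncp(F, x0, mu=0.1, delta=1.0, tol=1e-8, max_iter=500):
    """Simplified sketch of Algorithm A: Gauss-Newton steps on the
    smooth part, a trust-region clip, a filter-like acceptance test,
    and a shrinking decomposition parameter mu."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(max_iter):
        r = fb_mu(x, F(x), mu)                 # smoothed residual g_mu(x)
        theta = np.linalg.norm(r)
        if theta + mu < tol:                   # Step 2: stopping test
            break
        J = np.empty((n, n))                   # forward-difference Jacobian
        h = 1e-7
        for j in range(n):
            e = np.zeros(n)
            e[j] = h
            J[:, j] = (fb_mu(x + e, F(x + e), mu) - r) / h
        d = np.linalg.solve(J.T @ J + 1e-10 * np.eye(n), -J.T @ r)
        nd = np.linalg.norm(d)
        if nd > delta:                         # Step 3: trust-region clip
            d *= delta / nd
        x_new = x + d
        theta_new = np.linalg.norm(fb_mu(x_new, F(x_new), mu))
        # Steps 4-6: accept on sufficient decrease (or once the residual
        # falls below the smoothing level), else shrink the region
        if theta_new <= (1.0 - 1e-4) * theta or theta_new <= mu:
            x = x_new
            mu *= 0.5                          # outer loop: refine decomposition
            delta = min(2.0 * delta, 10.0)
        else:
            delta *= 0.5                       # inner loop
    return x
```

On the toy NCP with `F = lambda x: x - 1.0`, this sketch drives the iterates toward the solution at `1.0` while the smoothing parameter is driven to zero.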

Throughout this paper, we always assume that the following conditions hold.*Assumptions* (A1) $F$ is a continuously differentiable function, and $\Phi$ is locally Lipschitzian. (A2) The iterate sequence and the trial point sequence remain in a closed, bounded convex subset of $\mathbb{R}^n$.

Lemma 8. * The inner loop terminates in finitely many steps.*

* Proof. * Suppose by contradiction that the inner loop cannot terminate finitely; then the trust region radius tends to zero and, consequently, so does the trial step.

By the definitions of $f$ and $\theta$, we have

By the above and Assumption (A2), we have, for $k$ sufficiently large,
Then it holds that
Hence
which implies that there exists a constant such that the acceptance test is satisfied for $k$ sufficiently large. The result follows.

#### 4. Global Convergence Property

In this section, we will give the global property of Algorithm A.

Lemma 9. * Suppose that infinitely many points are entered into the filter. Then any accumulation point of the iterate sequence is a solution to the nonlinear complementarity problem.*

* Proof. * Let index the subsequence of iterations at which a point is added to the filter. Now suppose by contradiction that there exist a constant and a subsequence , such that

From Assumption (A2), we have

By the definition of the filter, each such point is acceptable for the filter, which implies that

Together with (29) and the definition of the filter, we deduce that there exists a constant , such that .

Then by (30), it holds

Let , and it is easy to see that
which is a contradiction. Hence

Consider now any iteration, and let it be preceded by the last iteration at which a point was added to the filter. By the construction of Algorithm A, if a point is not included in the filter, the step must result in a decrease of the objective function $f$. Hence, for all later iterations, it holds that
from which it follows that . Moreover, by Lemma 4, we have
Therefore .

Lemma 10. * Suppose that only finitely many points are entered into the filter. Then any accumulation point of the iterate sequence is a solution of the NCP.*

* Proof. * By Assumption (A2), we know that the iterate sequence has at least one accumulation point. Suppose by contradiction that it is not a solution to the nonlinear complementarity problem; then there exist and such that for . Then by Definition 5, we have for . If only finitely many points are entered into the filter, then by Lemma 8 there must exist a constant , such that for . Moreover, by the construction of Algorithm A, there also exists a constant , such that

Together with (25), we have

So there exists an index beyond which the sequence is monotonically decreasing. On the other hand, it is bounded below by Assumption (A2). Hence the sequence converges. Consequently, it follows that
which contradicts (36). The desired conclusion holds.

Theorem 11. * Suppose that Assumptions (A1)-(A2) hold, and let the iterate sequence be generated by Algorithm A. Then any accumulation point of the sequence is a solution to the nonlinear complementarity problem.*

* Proof. * This follows immediately from Lemmas 9 and 10.

#### 5. A Numerical Example

In this section, we give a numerical example to test Algorithm A. We use the following example.

*Example 12. *One has

This problem has one nondegenerate solution and one degenerate solution .

We choose the initial point , where is a random number. Also, we choose , , , and . With different , Figure 1 shows the evolution of the objective function over the iterations of Algorithm A presented in this paper.

In order to demonstrate the numerical performance of Algorithm A, we compare it with the traditional method without the filter technique (see Table 1).

In Table 1, denotes the iteration number, and CPU denotes the CPU time used in the computation.

From Figure 1 and Table 1, we can see that Algorithm A outperforms the traditional algorithm without the filter technique in both the number of iterations and the CPU time. Since we use the filter technique in Algorithm A, the objective function fluctuates to a certain degree, but beyond some index it decreases monotonically. Thanks to the filter technique, we need fewer iterations and less CPU time than the traditional method without the filter technique. Hence, the decomposition filter method is effective.

#### Acknowledgments

The author would like to thank the anonymous referee, whose constructive comments led to a considerable revision of the original paper. This research is supported by the National Natural Science Foundation of China (no. 11101115), the Natural Science Foundation of Hebei Province (nos. A2010000191, A2011201053).