Abstract

Based on a smoothing function of the penalized Fischer-Burmeister NCP-function, we propose a new smoothing inexact Newton algorithm with nonmonotone line search for solving the generalized nonlinear complementarity problem. We view the smoothing parameter as an independent variable. Under suitable conditions, we show that any accumulation point of the generated sequence is a solution of the generalized nonlinear complementarity problem. We also establish the local superlinear (quadratic) convergence of the proposed algorithm under the BD-regular assumption. Preliminary numerical experiments indicate the feasibility and efficiency of the proposed algorithm.

1. Introduction

Consider the generalized nonlinear complementarity problem (GNCP), which is to find a vector at which one continuously differentiable mapping lies in a given nonempty closed convex cone, a second continuously differentiable mapping lies in the polar cone of that cone, and the values of the two mappings are orthogonal.
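In the notation commonly used for this problem (two mappings F and G and a cone K, adopted here as assumed notation), the GNCP reads
\[
\mathrm{GNCP}(F,G,K):\quad \text{find } x\in\mathbb{R}^{n}\ \text{such that}\quad F(x)\in K,\qquad G(x)\in K^{\circ},\qquad F(x)^{T}G(x)=0,
\]
where $F,G:\mathbb{R}^{n}\to\mathbb{R}^{m}$ are continuously differentiable, $K\subseteq\mathbb{R}^{m}$ is a nonempty closed convex cone, and $K^{\circ}$ denotes its polar cone.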

The GNCP finds important applications in many fields, such as engineering and economics, and constitutes a wide class of problems that contains the classical nonlinear complementarity problem (abbreviated as NCP); see [1-3] and the references therein. To solve it, one usually reformulates it as a minimization problem over a simple set or as an unconstrained optimization problem; see [4] for the case of a general cone and [3, 5] for special choices of the cone. Conditions under which a stationary point of the reformulated optimization problem is a solution of the GNCP have also been provided in the literature.

In this paper, we consider the GNCP for the case in which the underlying cone is a polyhedral cone; that is, the cone is defined by finitely many linear inequalities and equalities, and it is easy to verify that its polar cone then admits an explicit finitely generated representation.
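Concretely, writing the defining data of the polyhedral cone as matrices $A\in\mathbb{R}^{s\times m}$ and $B\in\mathbb{R}^{t\times m}$ (our assumed notation, following the convention of [6, 7]), the cone and its polar cone take the form
\[
K=\{v\in\mathbb{R}^{m}\mid Av\ge 0,\ Bv=0\},\qquad
K^{\circ}=\{u\in\mathbb{R}^{m}\mid u=A^{T}\lambda_{1}+B^{T}\lambda_{2},\ \lambda_{1}\in\mathbb{R}^{s},\ \lambda_{1}\ge 0,\ \lambda_{2}\in\mathbb{R}^{t}\}.
\]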

In the following, the GNCP is specialized to a polyhedral cone, and in the subsequent analysis we abbreviate it simply as GNCP. In [1], Andreani et al. reformulated the problem as a smooth optimization problem with simple constraints and presented sufficient conditions under which a stationary point of the optimization problem is a solution of the original problem. Wang et al. [6] reformulated the problem as a system of nonlinear and nonsmooth equations, proposed a nonsmooth Levenberg-Marquardt (L-M) method to solve it, and proved that the algorithm is both globally and superlinearly convergent under mild assumptions. Zhang et al. [7] recast the GNCP over a polyhedral cone as a smoothing system of equations, developed a smoothing Newton-type method for solving it, and proved that their method has local superlinear (quadratic) convergence under certain conditions. However, deciding whether the linear system in Step 2 of the algorithm presented in [7] is solvable requires considerable computation. The inexact approach is one way to overcome this difficulty. Inexact Newton methods have been proposed for solving the NCP [8-10]. Their main idea is to solve the linear system only approximately. It seems reasonable to ask whether this kind of method can be applied to the GNCP, and this question constitutes the main motivation of this paper. In this paper, we propose a new smoothing inexact Newton algorithm with nonmonotone line search for solving the GNCP by using a new type of smoothing function. We view the smoothing parameter as an independent variable. The forcing parameter of the inexact Newton method links the norm of the residual vector to the norm of the mapping at the current iterate. Under suitable conditions, we show that any accumulation point of the generated sequence is a solution of the GNCP, and we also establish the local superlinear (quadratic) convergence of the proposed algorithm under the BD-regular assumption. Some numerical examples indicate the feasibility and efficiency of the proposed algorithm.
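To make the inexact Newton idea concrete, recall its standard residual condition: if the reformulated system is written as $H(w)=0$ (generic notation), the search direction $d^{k}$ at the iterate $w^{k}$ is only required to satisfy
\[
\|H(w^{k})+H'(w^{k})d^{k}\|\le \eta_{k}\,\|H(w^{k})\|,\qquad \eta_{k}\in[0,1),
\]
so that the linear system need not be solved exactly; the forcing parameter $\eta_{k}$ is precisely what links the residual norm to the norm of the mapping at the current iterate.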

The rest of this paper is organized as follows. In Section 2, we present some preliminaries. Stationary point and nonsingularity conditions for the GNCP are discussed in Section 3. In Section 4, we propose an algorithm for solving the GNCP and establish its convergence properties. Numerical experiments are reported in Section 5, and conclusions are stated in the last section.

At the end of this section, we fix some standard notation used in this paper. For a continuously differentiable function, we denote its Jacobian at a point in the usual way, whereas the transposed Jacobian is denoted by the corresponding transpose. In particular, the gradient of a real-valued function is a column vector. We use the standard notation for the inner product of two vectors and for the ith component of a vector. The null space of a matrix is denoted in the usual way.

2. Preliminaries

In this section, we first review some definitions and basic results.

Definition 2.1. A matrix is said to be a P0-matrix if every principal minor of it is nonnegative.

Definition 2.2. A function is said to be a P0-function if for all with , there exists an index such that
For a vector , we write diag( ) for the diagonal matrix whose diagonal entries are the components of that vector. For a P0-matrix, the following conclusion holds [11].

Lemma 2.3. If is a -matrix, then every matrix of the form is nonsingular for all positive definite diagonal matrices .

Definition 2.4. Suppose that is a locally Lipschitz function. is said to be semismooth at if is directionally differentiable at and exist for any , where denotes the generalized derivative in [12].

Definition 2.5. Suppose that is a locally Lipschitz function. is said to be strongly semismooth at if is semismooth at and for any and , it holds that
The concept of semismoothness was originally introduced by Mifflin for functionals [13]. Qi and Sun extended the definition of semismooth function to vector-valued functions [14]. Convex functions, smooth functions, and piecewise linear functions are examples of semismooth functions. A function is semismooth at if and only if all its component functions are semismooth. The composition of semismooth functions is still a semismooth function.

Lemma 2.6 (see [14]). Suppose that is a locally Lipschitz function and semismooth at . Then (a) for any , (b) for any .

In [6], Wang et al. reformulated the GNCP as a system of nonlinear equations based on the Fischer function [15]. In [7], Zhang et al. established a smoothing reformulation of the GNCP based on a smoothing approximation of the Fischer function that depends on a constant.
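For reference, the Fischer (Fischer-Burmeister) function and a typical smoothing approximation of it have the following well-known forms (the specific smoothing and constant used in [7] may differ slightly from this representative choice):
\[
\phi(a,b)=\sqrt{a^{2}+b^{2}}-a-b,\qquad \phi(a,b)=0\ \Longleftrightarrow\ a\ge 0,\ b\ge 0,\ ab=0,
\]
\[
\phi_{\mu}(a,b)=\sqrt{a^{2}+b^{2}+c\,\mu^{2}}-a-b,\qquad \mu>0,\ c>0\ \text{a constant},
\]
and $\phi_{\mu}(a,b)\to\phi(a,b)$ as $\mu\to 0$.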

In this paper, we use a smoothing approximation of the penalized Fischer-Burmeister NCP-function. This NCP-function has turned out to have stronger theoretical properties than the widely used Fischer-Burmeister function and other NCP-functions suggested previously (see [16]). Its penalty term penalizes violations of the complementarity condition and plays a significant role from both a theoretical and a practical point of view.
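A common form of the penalized Fischer-Burmeister function, together with a natural smoothing of it, is the following; we state it as a representative, assumed form (the precise expression used in this paper is the one whose properties are collected in Lemma 2.8). With a parameter $\lambda\in(0,1)$ and $a_{+}:=\max\{a,0\}$,
\[
\phi_{\lambda}(a,b)=\lambda\bigl(\sqrt{a^{2}+b^{2}}-a-b\bigr)-(1-\lambda)\,a_{+}b_{+},
\]
which vanishes exactly when $a\ge 0$, $b\ge 0$, and $ab=0$, and, for a smoothing parameter $\mu>0$,
\[
\phi_{\lambda,\mu}(a,b)=\lambda\Bigl(\sqrt{a^{2}+b^{2}+2\mu^{2}}-a-b\Bigr)-(1-\lambda)\,\frac{a+\sqrt{a^{2}+4\mu^{2}}}{2}\cdot\frac{b+\sqrt{b^{2}+4\mu^{2}}}{2},
\]
which is continuously differentiable for $\mu>0$ and reduces to $\phi_{\lambda}(a,b)$ as $\mu\to 0$.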

From [6], it is easy to see that the following lemma holds.

Lemma 2.7. is a solution of the GNCP if and only if there exist and , such that where for .
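Under the polyhedral representation assumed above, a condition of this type can be written out explicitly (in our notation, with the multipliers attached to the generators of the polar cone): $x^{*}$ solves the GNCP if and only if there exist $y\in\mathbb{R}^{s}$ and $z\in\mathbb{R}^{t}$ such that
\[
G(x^{*})=A^{T}y+B^{T}z,\qquad BF(x^{*})=0,\qquad AF(x^{*})\ge 0,\qquad y\ge 0,\qquad y^{T}AF(x^{*})=0 .
\]
Since each pair $\bigl((AF(x^{*}))_{i},\,y_{i}\bigr)$ is a scalar complementarity pair, the last three conditions can be imposed componentwise through an NCP-function, which is how equation reformulations of this kind are typically built.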

Based on the relation between the NCP-function and its smoothing approximation, we can establish the following smoothing reformulation of the GNCP.

Denote

For simplicity, we let and and denote

From Lemma 2.7, we can see that a point is a solution of the GNCP if and only if there exist multipliers such that the corresponding augmented vector is a global minimizer, with zero objective function value, of the unconstrained optimization problem:
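To fix ideas, one plausible concrete realization of this construction, consistent with the smoothing-Newton framework of [7] and with the notation assumed above (stated here only as a sketch, not as the paper's exact definition), collects the variables as $w=(\mu,x,y,z)$ and sets
\[
H(w)=\begin{pmatrix}\mu\\ G(x)-A^{T}y-B^{T}z\\ BF(x)\\ \Phi_{\mu}\bigl(AF(x),y\bigr)\end{pmatrix},\qquad
\Psi(w)=\tfrac12\,\|H(w)\|^{2},
\]
where $\Phi_{\mu}$ applies the smoothed penalized FB function componentwise to the pairs $\bigl((AF(x))_{i},y_{i}\bigr)$. Then $\Psi$ attains the value zero at a point of the form $w^{*}=(0,x^{*},y^{*},z^{*})$ exactly when $x^{*}$ solves the GNCP, and minimizing $\Psi$ is the unconstrained problem referred to above.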

By a simple calculation, we can see that the following lemma holds.

Lemma 2.8. (1) The function is continuously differentiable for and it holds that
where
(2) is semismooth on and is strongly semismooth on if and are both Lipschitz continuous on .
(3) is continuously differentiable on with for any and is continuously differentiable with for any .
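As a quick numerical illustration of the approximation property in part (3), the following sketch evaluates the representative smoothed penalized FB function written out above (these exact formulas, the constant lambda, and the smoothing terms are assumptions made for illustration, not necessarily the paper's choices) and checks that it approaches the penalized FB function as the smoothing parameter tends to zero.

import numpy as np

LAMBDA = 0.95  # penalty weight lambda in (0, 1); value chosen only for illustration

def plus(a):
    """Plus function a_+ = max(a, 0)."""
    return np.maximum(a, 0.0)

def smoothed_plus(a, mu):
    """A standard smoothing of the plus function; tends to a_+ as mu -> 0."""
    return 0.5 * (np.sqrt(a**2 + 4.0 * mu**2) + a)

def penalized_fb(a, b, lam=LAMBDA):
    """Penalized Fischer-Burmeister NCP-function (one common sign convention)."""
    return lam * (np.sqrt(a**2 + b**2) - a - b) - (1.0 - lam) * plus(a) * plus(b)

def smoothed_penalized_fb(a, b, mu, lam=LAMBDA):
    """An assumed smoothing of the penalized FB function; smooth for mu > 0."""
    fb = np.sqrt(a**2 + b**2 + 2.0 * mu**2) - a - b
    return lam * fb - (1.0 - lam) * smoothed_plus(a, mu) * smoothed_plus(b, mu)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a, b = rng.normal(size=5), rng.normal(size=5)
    exact = penalized_fb(a, b)
    for mu in (1.0, 1e-1, 1e-2, 1e-4):
        err = np.max(np.abs(smoothed_penalized_fb(a, b, mu) - exact))
        print(f"mu = {mu:8.1e}   max |phi_mu - phi| = {err:.3e}")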

3. Stationary Point and Nonsingularity Conditions

Generally, when existing optimization methods are applied to an optimization problem, one can only guarantee convergence to a stationary point. We should therefore study how to guarantee that every stationary point of (2.12) is a solution of the GNCP. In the following, we discuss such conditions.

Theorem 3.1. Let be a stationary point of (2.12). If and is positive definite in , then is a solution of the GNCP.

Proof. Define Since is a stationary point of (2.12), one has From Lemma 2.8, we have that is,
From the fourth equation of (3.4), we have . Premultiplying the second equation of (3.4) by , one has
Combining (3.5) with the third and fourth equations of (3.4), we obtain
From Lemma 2.8, we have that is positive definite. Since from the fourth equation of (3.4), together with the positive definiteness of in , we have
Substituting into the first equation of (3.4), we have
Combining the second equation of (3.4) with (3.8) and (3.7), we have
Since is nonsingular, premultiplying (3.9) by , one has Hence, . This completes the proof.

Theorem 3.2. Let be a stationary point of (2.12) such that , has full row rank, and is positive definite. Then is nonsingular for any .

Proof. By Lemma 2.8, we know that any element can be written as where , and are defined in Lemma 2.8.
In order to complete the proof, we only need to prove that is nonsingular.
To this end, it suffices to show that the corresponding homogeneous linear system has only the zero solution.
Suppose that which means that
According to the first equation of (3.15), we have Combining (3.16) with the third equation of (3.15), one has Premultiplying by and recalling the second equation of (3.15), we get By using the assumption that is positive definite, we obtain , which, combined with (3.16), gives By using the assumption that has full row rank and the third equation of (3.15), we obtain , which completes the proof.

4. Algorithm and Convergence Property

In this section, we formally present our smoothing inexact Newton-type algorithm with nonmonotone line search for solving the GNCP by using the smoothing penalized FB function . This nonmonotone line search was used to solve the NCP in [17]. Furthermore, we establish the local superlinear (quadratic) convergence properties of the algorithm.

Algorithm 4.1. Step 1. Take constants , and such that . Choose .
Let and be an arbitrary point.
Let ,, and . Let and be two constants such that . Choose . Set and .
Step 2. If , then stop; otherwise, let .
Step 3. Compute by
Step 4. Let , where is the smallest nonnegative integer such that
Step 5. Set , , and
Step 6. Choose ; set
Go to Step 2.
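To illustrate how Steps 3 and 4 fit together, the following generic sketch combines an inexact Newton direction (the linearized system is solved only until a forcing-parameter residual test is met) with a nonmonotone Armijo backtracking on the squared-norm merit function. It is a simplified illustration of this structure under assumed, generic choices (finite-difference Jacobian, Grippo-type memory, a toy test system), not a transcription of Algorithm 4.1; the residual test, line search condition, and smoothing-parameter update of the paper differ in detail.

import numpy as np

def cg_normal_eq(J, rhs, eta, max_inner=None):
    """Approximately solve J d = rhs by CG on the normal equations (CGLS),
    stopping as soon as the inexact Newton forcing condition
    ||J d - rhs|| <= eta * ||rhs|| is satisfied."""
    n = J.shape[1]
    d = np.zeros(n)
    r = rhs.copy()                      # residual rhs - J d (d = 0 initially)
    g = J.T @ r                         # gradient direction for the normal equations
    p = g.copy()
    tol = eta * np.linalg.norm(rhs)
    for _ in range(max_inner or 5 * n):
        if np.linalg.norm(r) <= tol or g @ g == 0.0:
            break
        Jp = J @ p
        alpha = (g @ g) / (Jp @ Jp)
        d += alpha * p
        r -= alpha * Jp
        g_new = J.T @ r
        p = g_new + ((g_new @ g_new) / (g @ g)) * p
        g = g_new
    return d

def inexact_newton_nonmonotone(H, w0, eta=0.1, sigma=1e-4, rho=0.5,
                               memory=5, tol=1e-8, max_iter=200):
    """Generic inexact Newton iteration with a nonmonotone (Grippo-type)
    Armijo backtracking on the merit function psi(w) = 0.5 * ||H(w)||^2.
    The Jacobian is approximated by forward differences for illustration."""
    def psi(w):
        h = H(w)
        return 0.5 * float(h @ h)

    def jac(w, step=1e-7):
        Hw = H(w)
        J = np.empty((Hw.size, w.size))
        for j in range(w.size):
            e = np.zeros_like(w)
            e[j] = step
            J[:, j] = (H(w + e) - Hw) / step
        return J

    w = np.asarray(w0, dtype=float)
    recent = [psi(w)]                        # memory of recent merit values
    for _ in range(max_iter):
        Hw = H(w)
        if np.linalg.norm(Hw) <= tol:
            break
        J = jac(w)
        d = cg_normal_eq(J, -Hw, eta)        # inexact Newton direction (cf. Step 3)
        slope = float((J.T @ Hw) @ d)        # directional derivative of psi at w
        ref = max(recent)                    # nonmonotone reference value
        alpha = 1.0                          # backtracking (cf. Step 4)
        while psi(w + alpha * d) > ref + sigma * alpha * slope and alpha > 1e-12:
            alpha *= rho
        w = w + alpha * d
        recent.append(psi(w))
        recent = recent[-memory:]
    return w

if __name__ == "__main__":
    # Toy smooth system with a solution at (1, 2):
    #   w1^2 + w2 - 3 = 0,   w1 + w2^2 - 5 = 0.
    def H_demo(w):
        return np.array([w[0]**2 + w[1] - 3.0, w[0] + w[1]**2 - 5.0])

    print(inexact_newton_nonmonotone(H_demo, np.array([2.0, 1.0])))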

From Algorithm 4.1 and [17], it is easy to see that the following remark is true.

Remark 4.2. Let the sequences and be generated by Algorithm 4.1. Then
(i) for any ;
(ii) for any ;
(iii) for any ;
(iv) for any ;
(v) and for any .

Theorem 4.3. Suppose that is a sequence generated by Algorithm 4.1, has full row rank, and is positive definite, . Then Algorithm 4.1 is well defined.

Proof. From Theorem 3.2, we know that is nonsingular. Hence, Step 3 is well defined at the kth iteration.
In the following, we show that Step 4 is well defined. For , we let and then
If , then , which, together with (4.3), implies that If , then . Combining with (4.3), we have As a result, for any ,
From (4.6) and (4.9), one has
According to Lemma 2.8, we know that is continuously differentiable. Hence, there exists such that for any , and which completes the proof.

Theorem 4.4. Suppose that is an infinite sequence generated by Algorithm 4.1. Then any accumulation point of the sequence is a stationary point of (2.12).

Proof. (i) Suppose that is an arbitrary accumulation point of . Then there exists an infinite subsequence such that as and . Without loss of generality, we assume that . That is, According to Remark 4.2 (i), (iii), and (v), we obtain that the sequences , and are convergent. Without loss of generality, we assume that
It is easy to see that , and . By Remark 4.2 (i) and (ii), we know that which means that is bounded.
In the following, suppose that . Without loss of generality, we assume that is convergent and denote
Combining (4.3) with the assumption , we have . Furthermore, by using (4.14) and Remark 4.2 (iv), we obtain that and . Hence, we can deduce that
We now break up the proof of (i) into two cases.
(1) Assume that for all , where is a constant. In this case, by (4.4) and (4.2), it follows that for any , which, together with the boundedness of , yields
On the other hand, by and the definition of given in (4.4), we have for any .
As a result, we obtain that , which contradicts .
(2) Assume that . The stepsize does not satisfy the line search condition (4.2) for any sufficiently large , that is, holds for sufficiently large . Since , the aforementioned inequality becomes
Since and is continuously differentiable at , from Lemma 2.8, we have where the first equality follows from (4.4) (in the limit), the first inequality follows from (4.22) (in the limit), the second equality follows from (4.1) (in the limit), and the third inequality follows by using (4.17). Hence it follows from (4.23) and that , which contradicts the fact that and . This completes the proof.

By an argument similar to the proof of Theorem 4.3 in [7], the following theorem holds.

Theorem 4.5. Suppose that is an infinite sequence generated by Algorithm 4.1, is an arbitrary accumulation point of the sequence , and is a BD-regular solution of . Then (1) the point is a solution of the GNCP; (2) the sequence converges to superlinearly. In particular, if and are locally Lipschitz continuous at , then converges to Q-quadratically.
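Here, BD-regularity is meant in the usual sense for a locally Lipschitz system $H(w)=0$ (generic notation): a solution $w^{*}$ is called BD-regular if every element of the B-subdifferential
\[
\partial_{B}H(w^{*}):=\Bigl\{V:\ V=\lim_{j\to\infty}H'(w^{j}),\ w^{j}\to w^{*},\ w^{j}\in D_{H}\Bigr\}
\]
is nonsingular, where $D_{H}$ denotes the set of points at which $H$ is differentiable.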

5. Numerical Experiments

In this section, we implement Algorithm 4.1 for solving some GNCPs in order to observe its behavior. The parameters used in the algorithm are chosen as follows:

The following problems were tested in [5-7].

Example 5.1. Consider the implicit complementarity problem of the following form: find such that where , and and with being twice continuously differentiable. The following choices of the function define our test problems: (1) , (2) .

The numerical results are shown in Tables 1, 2, and 3 with the following four kinds of initial points:

(a) , (b),

(c) , (d).

In Tables 1, 2, and 3, ST denotes the initial point, IT the number of iterations, TV the final value of T when the algorithm terminates, and CPU the computing time, respectively. For the starting point, we choose , and the termination criterion for the algorithm is .

From Tables 1, 2, and 3, we can see that Algorithm 4.1 is efficient in solving this kind of problem.

In the following, we compare Algorithm 4.1 (denoted by Inexact) with the exact Newton algorithm with nonmonotone line search (denoted by Exact). Here, our test problem is the previous problem (1), that is,

The initial point is or . The numerical results are shown in Table 4.

From Table 4, we can see that Algorithm 4.1 is superior to the exact smoothing Newton method when the GNCP is of relatively large scale.

6. Conclusion

In this paper, we combine the smoothing function of the penalized Fischer-Burmeister NCP-function with the nonmonotone line search in [17] to present a new smoothing inexact Newton algorithm for solving the generalized nonlinear complementarity problem. We show that the iteration sequence generated by Algorithm 4.1 converges locally superlinearly (quadratically) to a solution of the generalized nonlinear complementarity problem. Preliminary numerical results show the efficiency of the algorithm.

Acknowledgments

The authors are greatly indebted to the anonymous referees for their valuable suggestions and remarks, which allowed them to essentially improve the original presentation of the paper. This work was supported by the National Natural Science Foundation of China (Grant nos. 10901096 and 10971118), the Shandong Provincial Natural Science Foundation, China (Grant no. ZR2009AL019), and the Project of Shandong Province Higher Educational Science and Technology Program (Grant no. J09LA53).