Journal of Applied Mathematics, Volume 2013, Article ID 671402, 6 pages. http://dx.doi.org/10.1155/2013/671402
Research Article

A Superlinearly Convergent Method for the Generalized Complementarity Problem over a Polyhedral Cone

1School of Management Science, Qufu Normal University, Rizhao, Shandong 276800, China
2Software Center, Northeastern University, Shenyang, Liaoning 110004, China
3College of Information Science and Engineering, Northeastern University, Shenyang, Liaoning 110004, China

Received 24 April 2013; Accepted 22 July 2013

Copyright © 2013 Fengming Ma et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Making use of a smoothing NCP-function, we formulate the generalized complementarity problem (GCP) over a polyhedral cone as an equivalent system of equations. We then present a Newton-type method for the equivalent system to obtain a solution of the GCP. Our method solves only one linear system of equations and performs only one line search at each iteration. Under mild assumptions, we show that our method is both globally and superlinearly convergent. Compared with previous work in the literature, our method achieves stronger convergence results under weaker conditions.

1. Introduction

The generalized complementarity problem, denoted by GCP, is to find a vector $x \in \mathbb{R}^{n}$ such that
$$F(x) \in K, \qquad G(x) \in K^{\circ}, \qquad F(x)^{T}G(x) = 0, \tag{1}$$
where $F$ and $G$ are continuous functions from $\mathbb{R}^{n}$ to $\mathbb{R}^{m}$, $K$ is a nonempty closed convex cone in $\mathbb{R}^{m}$, and $K^{\circ}$ denotes the polar cone of $K$.

This problem has many interesting applications, and its solution by special techniques has been considered extensively in the literature; see [1–3] and the references therein. In particular, if $K = \mathbb{R}^{n}_{+}$ and $G(x) = x$, then the GCP reduces to the classical nonlinear complementarity problem [4]. Furthermore, the GCP is closely related to the variational inequality problem in the sense that $x^{*}$ is a solution of the GCP if and only if $F(x^{*})$ is a solution of a corresponding variational inequality problem, provided $F$ is invertible (see Lemma 6 in [5]).

To solve the GCP, one usually reformulates it as a minimization problem over a simple set or as an unconstrained optimization problem; see [3] for the case that $K$ is a general cone and [1, 2] for the case that $K = \mathbb{R}^{n}_{+}$. The conditions under which a stationary point of the reformulated optimization problem is a solution of the GCP were provided in these works.

In this paper, we consider the case that $F$ and $G$ are both continuously differentiable on $\mathbb{R}^{n}$ and $K$ is a polyhedral cone in $\mathbb{R}^{m}$; that is, there exist matrices $A \in \mathbb{R}^{s \times m}$ and $B \in \mathbb{R}^{t \times m}$ such that
$$K = \{ v \in \mathbb{R}^{m} \mid Av \ge 0,\; Bv = 0 \}, \tag{2}$$
where $s$ and $t$ are both positive integers. It is easy to verify that its polar cone $K^{\circ}$ has the following representation:
$$K^{\circ} = \{ u \in \mathbb{R}^{m} \mid u = A^{T}\lambda_{1} + B^{T}\lambda_{2},\; \lambda_{1} \in \mathbb{R}^{s}_{+},\; \lambda_{2} \in \mathbb{R}^{t} \}. \tag{3}$$
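As a quick numerical illustration of this representation (with small hypothetical matrices $A$ and $B$), any vector of the form $u = A^{T}\lambda_{1} + B^{T}\lambda_{2}$ with $\lambda_{1} \ge 0$ makes a nonnegative inner product with every $v$ satisfying $Av \ge 0$ and $Bv = 0$:

```python
import numpy as np

# Hypothetical small instance: K = {v in R^3 : A v >= 0, B v = 0}
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])   # s = 2 inequality rows
B = np.array([[0.0, 0.0, 1.0]])   # t = 1 equality row

v = np.array([2.0, 3.0, 0.0])     # v lies in K: A v = (2, 3) >= 0 and B v = 0
assert np.all(A @ v >= 0) and np.allclose(B @ v, 0.0)

lam1 = np.array([1.0, 2.0])       # lam1 >= 0 (multipliers for the inequality rows)
lam2 = np.array([-5.0])           # lam2 is free (multiplier for the equality row)
u = A.T @ lam1 + B.T @ lam2       # u belongs to the polar cone of K

# u^T v = lam1^T (A v) + lam2^T (B v) = lam1^T (A v) >= 0
print(u @ v)  # 8.0
```

This nonnegative pairing between members of $K$ and members of its polar cone is what makes the condition $F(x)^{T}G(x) = 0$ a genuine complementarity requirement.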

Obviously, if $A$ is an identity matrix and $B = 0$, then this version of the GCP reduces to the case considered in [2].

From now on, the GCP always refers to this specialization over a polyhedral cone.

For the GCP, Wang et al. [6] and Zhang et al. [7], respectively, established constrained or unconstrained optimization reformulations for the case where $F$ and $G$ are both linear functions and gave Newton-type methods for the problem. Later, Ma et al. [8] established a potential reduction method for the same case.

In this paper, we give a new smoothing Newton method for solving the GCP based on a smoothing NCP-function. This method is proved to be convergent globally and superlinearly under suitable assumptions. Furthermore, the method needs only to solve one linear system of equations and perform one line search per iteration.

The rest of this paper is organized as follows. In Section 2, we give some preliminaries and results of a smoothing NCP-function based on the Fischer-Burmeister function. In Section 3, we present a one-step smoothing Newton method for the GCP and state some preliminary results. In Section 4, we establish the global and superlinear convergence of the proposed method. Conclusions are given in Section 5.

To end this section, we introduce some notation used in this paper. All vectors are column vectors. The inner product of two vectors $x$ and $y$ is denoted by $x^{T}y$. For convenience, the vector $(u^{T}, v^{T})^{T}$ is written as $(u, v)$, where $u$ and $v$ are both column vectors. Let $\|\cdot\|$ denote the Euclidean norm of a vector or a matrix. For a vector $x$, $\operatorname{diag}(x)$ denotes the diagonal matrix with the $i$th diagonal element being $x_{i}$. The vector $e$ denotes the vector of all ones, whose dimension is defined by the context of its use.

2. Preliminaries

To establish our method for the solution of the GCP, we now formulate the GCP as a system of equations via the following smoothing NCP-function based on the Fischer-Burmeister function [9]:
$$\phi(\mu, a, b) = a + b - \sqrt{a^{2} + b^{2} + 2\mu^{2}}, \tag{4}$$
where $\mu \ge 0$ is a smoothing parameter. By simple calculation, we have, for $\mu \neq 0$,
$$\frac{\partial\phi}{\partial a} = 1 - \frac{a}{\sqrt{a^{2} + b^{2} + 2\mu^{2}}}, \qquad \frac{\partial\phi}{\partial b} = 1 - \frac{b}{\sqrt{a^{2} + b^{2} + 2\mu^{2}}}, \qquad \frac{\partial\phi}{\partial \mu} = \frac{-2\mu}{\sqrt{a^{2} + b^{2} + 2\mu^{2}}}. \tag{5}$$
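For intuition, here is a minimal sketch of a smoothed Fischer-Burmeister function in the commonly used form $a + b - \sqrt{a^{2} + b^{2} + 2\mu^{2}}$; this exact form is an assumption of the sketch, and the paper's variant may differ in detail:

```python
import math

def phi(mu, a, b):
    # Smoothed Fischer-Burmeister function (assumed form):
    # phi(mu, a, b) = a + b - sqrt(a^2 + b^2 + 2*mu^2)
    return a + b - math.sqrt(a * a + b * b + 2.0 * mu * mu)

# At mu = 0 it reduces to the Fischer-Burmeister function, whose zeros
# encode complementarity: phi(0, a, b) = 0 iff a >= 0, b >= 0, a*b = 0.
print(phi(0.0, 2.0, 0.0))   # 0.0   (a complementary pair)
print(phi(0.0, 1.0, 1.0))   # 2 - sqrt(2), nonzero (not complementary)

# For mu > 0 the kink of the Fischer-Burmeister function at the origin
# is smoothed out, so Newton's method can be applied directly.
print(phi(0.1, 0.0, 0.0))   # -sqrt(0.02), about -0.1414
```

The smoothing term $2\mu^{2}$ keeps the square root strictly positive whenever $\mu \neq 0$, which is the source of the differentiability properties collected in Lemma 1.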

The smoothing NCP-function possesses a few nice properties [10].

Lemma 1. The NCP-function $\phi$ defined by (4) has the following properties: (i) for any $(\mu, a, b)$ with $\mu \neq 0$, $\phi$ is continuously differentiable at $(\mu, a, b)$; (ii) for any $(a, b) \in \mathbb{R}^{2}$, one has $\phi(0, a, b) = 0$ if and only if $a \ge 0$, $b \ge 0$, $ab = 0$; (iii) for any $\mu_{1}, \mu_{2} \in \mathbb{R}$ and any $(a, b) \in \mathbb{R}^{2}$, one has $|\phi(\mu_{1}, a, b) - \phi(\mu_{2}, a, b)| \le \sqrt{2}\,|\mu_{1} - \mu_{2}|$; (iv) let $\{a_{k}\}$ and $\{b_{k}\}$ be any two sequences such that $a_{k} \to -\infty$ or $b_{k} \to -\infty$ or ($a_{k} \to +\infty$ and $b_{k} \to +\infty$). Then, for any $\mu \in \mathbb{R}$, one has $|\phi(\mu, a_{k}, b_{k})| \to +\infty$.

From Lemma 1, we see that the function $\phi(\mu, a, b)$ is indeed a smoothing approximation of the Fischer-Burmeister function $\phi_{FB}(a, b) = a + b - \sqrt{a^{2} + b^{2}}$.

For the GCP, we have the following equivalent statements: $x^{*}$ is a solution of the GCP $\Leftrightarrow$ $AF(x^{*}) \ge 0$, $BF(x^{*}) = 0$, $G(x^{*}) \in K^{\circ}$, and $F(x^{*})^{T}G(x^{*}) = 0$ $\Leftrightarrow$ there exist $\lambda_{1}^{*} \in \mathbb{R}^{s}$ and $\lambda_{2}^{*} \in \mathbb{R}^{t}$ such that $AF(x^{*}) \ge 0$, $\lambda_{1}^{*} \ge 0$, $(AF(x^{*}))^{T}\lambda_{1}^{*} = 0$, $BF(x^{*}) = 0$, and $G(x^{*}) = A^{T}\lambda_{1}^{*} + B^{T}\lambda_{2}^{*}$.

Let $z = (\mu, x, \lambda_{1}, \lambda_{2}) \in \mathbb{R} \times \mathbb{R}^{n} \times \mathbb{R}^{s} \times \mathbb{R}^{t}$ and define a vector-valued function $H$ as follows:
$$H(z) = \begin{pmatrix} \mu \\ \Phi(\mu, AF(x), \lambda_{1}) \\ BF(x) \\ G(x) - A^{T}\lambda_{1} - B^{T}\lambda_{2} \end{pmatrix},$$
where
$$\Phi(\mu, u, v) = \big(\phi(\mu, u_{1}, v_{1}), \ldots, \phi(\mu, u_{s}, v_{s})\big)^{T}.$$

Then, by Lemma 1, the following result is straightforward.

Theorem 2. $x^{*}$ is a solution of the GCP if and only if there exist $\lambda_{1}^{*} \in \mathbb{R}^{s}$ and $\lambda_{2}^{*} \in \mathbb{R}^{t}$ such that
$$H(0, x^{*}, \lambda_{1}^{*}, \lambda_{2}^{*}) = 0. \tag{9}$$

Theorem 2 means that the GCP is equivalent to an equation system in the sense that their solution sets coincide.
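To make this equivalence tangible, the sketch below assembles the residual of an equation reformulation of the commonly used shape $H = \big(\mu;\ \Phi(\mu, AF(x), \lambda_{1});\ G(x) - A^{T}\lambda_{1}\big)$ (an assumed form that may differ from the paper's; the equality block $B$ is omitted for brevity) and checks that it vanishes at a known solution of a tiny hypothetical instance:

```python
import numpy as np

def phi(mu, a, b):
    # smoothed Fischer-Burmeister function (assumed form), applied elementwise
    return a + b - np.sqrt(a * a + b * b + 2.0 * mu * mu)

def H(mu, x, lam1, A, F, G):
    """Residual of an equation reformulation of the assumed shape
    (no equality block for brevity): H = 0 at mu = 0 iff x solves the GCP."""
    AFx = A @ F(x)
    return np.concatenate((
        [mu],                    # the smoothing parameter, driven to zero
        phi(mu, AFx, lam1),      # complementarity between A F(x) and lam1
        G(x) - A.T @ lam1,       # G(x) must be representable via the multiplier lam1
    ))

# Hypothetical instance: K = R^2_+ (A = I), F(x) = x, G(x) = x - (1, -1).
A = np.eye(2)
F = lambda x: x
G = lambda x: x - np.array([1.0, -1.0])

x_star = np.array([1.0, 0.0])      # x* lies in K and G(x*) = (0, 1) lies in K's polar cone
lam1_star = np.array([0.0, 1.0])   # the multiplier equals G(x*) here since A = I
print(H(0.0, x_star, lam1_star, A, F, G))  # [0. 0. 0. 0. 0.]
```

Every block of the residual vanishes at $(0, x^{*}, \lambda_{1}^{*})$, matching the "solution sets coincide" statement of Theorem 2 for this toy instance.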

By (5)–(7), it is not difficult to see that the Jacobian of $H$ has the following form:
$$H'(z) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ \Phi'_{\mu} & D_{1}\,A F'(x) & D_{2} & 0 \\ 0 & B F'(x) & 0 & 0 \\ 0 & G'(x) & -A^{T} & -B^{T} \end{pmatrix}, \tag{10}$$
where $D_{1}$ and $D_{2}$ are the diagonal matrices whose $i$th diagonal entries are $\partial\phi/\partial a$ and $\partial\phi/\partial b$, respectively, evaluated at $(\mu, (AF(x))_{i}, (\lambda_{1})_{i})$, and $\Phi'_{\mu}$ is the vector of partial derivatives of $\Phi$ with respect to $\mu$.

The functions $\Phi$ and $H$ have the following nice properties.

Lemma 3. Let and be defined by (7) and (8), respectively. Then(i) is continuously differentiable at any ,(ii) is continuously differentiable at any with its Jacobian being defined by (10). If matrix has full row rank and is positive definite, then is nonsingular on .

Proof. From Lemma 1, it is not difficult to see that is continuously differentiable at any .
Next we prove (ii). It follows from the definition of and (i) that is continuously differentiable on . And by simple calculation, we obtain the Jacobian defined by (10). Now we prove the nonsingularity of .
Since , it follows that . And thus we know that to prove the nonsingularity of , it is sufficient to prove the nonsingularity of .
Since is nonsingular, we make the following transformation to without changing its rank: Since is nonsingular, to prove the nonsingularity of , it suffices to show the nonsingularity of the following submatrix of it: Let where ,. Then we obtain That is, Since is positive definite and is positive semidefinite, it holds that is also positive definite. Premultiplying (16) by yields Noting that is positive definite and has full row rank, we obtain , and thus . Hence, we complete the proof.

3. Algorithm and Preliminaries

In this section, we propose our smoothing Newton method for the GCP.

Let $\bar{z} := (\mu_{0}, 0, \ldots, 0) \in \mathbb{R}^{1+n+s+t}$. Define the following real-valued functions:
$$\Psi(z) := \|H(z)\|^{2}, \qquad \beta(z) := \gamma \min\{1, \Psi(z)\}. \tag{19}$$
And for convenience, let $\beta_{k} := \beta(z^{k})$ in what follows.

Algorithm 4. Consider the following.

Step 1. Choose parameters $\delta \in (0, 1)$, $\sigma \in (0, 1/2)$, and $\mu_{0} > 0$, and choose $\gamma \in (0, 1)$ such that $\gamma\mu_{0} < 1$. Let $z^{0} := (\mu_{0}, x^{0}, \lambda_{1}^{0}, \lambda_{2}^{0})$, where $(x^{0}, \lambda_{1}^{0}, \lambda_{2}^{0})$ is an arbitrary initial point. Set $k := 0$.

Step 2. If $\|H(z^{k})\| = 0$, stop; otherwise, compute $\beta_{k} = \beta(z^{k})$.

Step 3. Compute $\Delta z^{k} := (\Delta\mu_{k}, \Delta x^{k}, \Delta\lambda_{1}^{k}, \Delta\lambda_{2}^{k})$ by solving the linear system
$$H(z^{k}) + H'(z^{k})\,\Delta z^{k} = \beta_{k}\,\bar{z}.$$

Step 4. Let $\alpha_{k} := \delta^{m_{k}}$, where $m_{k}$ is the smallest nonnegative integer $m$ such that
$$\Psi(z^{k} + \delta^{m}\Delta z^{k}) \le \big[1 - 2\sigma(1 - \gamma\mu_{0})\,\delta^{m}\big]\,\Psi(z^{k}). \tag{20}$$

Step 5. Set $z^{k+1} := z^{k} + \alpha_{k}\Delta z^{k}$ and $k := k + 1$; go to Step 2.
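The flavor of the scheme can be sketched on the classical NCP special case ($K = \mathbb{R}^{n}_{+}$, $G(x) = x$). The code below is a simplified illustration and not Algorithm 4 itself: it treats the smoothing parameter $\mu$ as an externally halved quantity rather than an unknown, and it uses a plain residual-decrease backtracking rule; it does, however, keep the key structural feature of one linear solve and one line search per iteration. All names are illustrative.

```python
import numpy as np

def phi(mu, a, b):
    # smoothed Fischer-Burmeister function (assumed form), applied elementwise
    return a + b - np.sqrt(a * a + b * b + 2.0 * mu * mu)

def smoothing_newton_ncp(F, JF, x0, mu0=1.0, tol=1e-10, max_iter=100):
    """Simplified smoothing-Newton sketch for the classical NCP:
    find x >= 0 with F(x) >= 0 and x^T F(x) = 0."""
    x, mu = np.asarray(x0, dtype=float), mu0
    for _ in range(max_iter):
        Fx = F(x)
        H = phi(mu, x, Fx)
        if np.linalg.norm(H) < tol and mu < tol:
            break
        # Jacobian of the smoothed system: diag(d_a) + diag(d_b) F'(x)
        r = np.sqrt(x * x + Fx * Fx + 2.0 * mu * mu)
        J = np.diag(1.0 - x / r) + np.diag(1.0 - Fx / r) @ JF(x)
        d = np.linalg.solve(J, -H)           # one linear solve per iteration
        # one backtracking line search per iteration on the residual norm
        t, normH = 1.0, np.linalg.norm(H)
        while t > 1e-12 and np.linalg.norm(phi(mu, x + t * d, F(x + t * d))) > (1.0 - 1e-4 * t) * normH:
            t *= 0.5
        x = x + t * d
        mu *= 0.5                            # drive the smoothing parameter to zero
    return x

# Hypothetical 1-D test problem: F(x) = x - 2, whose NCP solution is x* = 2.
x_star = smoothing_newton_ncp(lambda x: x - 2.0, lambda x: np.eye(1), np.array([0.0]))
print(x_star)  # approximately [2.]
```

As $\mu \to 0$ the smoothed residual approaches the Fischer-Burmeister reformulation, so the iterates follow the smoothing path toward the solution; the method of this paper instead updates the smoothing parameter within the Newton system itself, which is what its one-step convergence analysis exploits.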

The following theorem shows that Algorithm 4 is well defined and generates an infinite sequence with some good features. Define the set
$$\Omega := \{ z = (\mu, x, \lambda_{1}, \lambda_{2}) \mid \mu \ge \beta(z)\,\mu_{0} \}.$$

Theorem 5. Let $\{z^{k}\}$ be the sequence generated by Algorithm 4. Then Algorithm 4 is well defined and generates an infinite sequence $\{z^{k}\}$ with $\mu_{k} > 0$ and $z^{k} \in \Omega$ for all $k$.

Proof. We first prove that Algorithm 4 is well defined. Obviously, if $\mu_{k} > 0$ for all $k$, it follows from Lemma 3(ii) that the matrix $H'(z^{k})$ is nonsingular. So, Step 3 of Algorithm 4 is well defined at the $k$th iteration. Now we prove that $\mu_{k} > 0$ by mathematical induction on $k$.
Obviously, $\mu_{0} > 0$. Assume $\mu_{k} > 0$ for some $k$; next we prove $\mu_{k+1} > 0$.
By Step 3, we have where the last inequality follows from for all . Therefore, for any we obtain that which means that, for , . And thus we prove the desired result.
Next we prove that Step 4 is well defined. Let From Lemma 3, we know that is continuously differentiable around . Hence, (24) implies that So, for any , we obtain from (19), (24), and (25) that where the first inequality follows from the fact that , and the second inequality follows from the fact that . This indicates that Step 4 of Algorithm 4 is well defined at the $k$th iteration.
Now we prove the second part of the conclusion; that is, for all . We prove this result also by mathematical induction on $k$.
Obviously, from the definition of and the choice of , it holds that ; that is, . So, . Suppose that ; that is, . Then, by (22), we have By Step 4 and the definition of , we obtain So, if , then , and hence On the other hand, if , then Thus, from (27), we complete the proof.

4. Global and Superlinear Convergence

Theorem 6. Assume that the sequence $\{z^{k}\}$ is generated by Algorithm 4; then any accumulation point of $\{z^{k}\}$ is a solution of (9).

Proof. Without loss of generality, we assume that converges to . It follows from (20) that is monotonically decreasing. We assume that converges to . Then, to prove the result, it is sufficient to prove that , which will be proved by contradiction. Assume that ; then we obtain On the one hand, from Step 4 of Algorithm 4, it holds that That is, Letting , we obtain On the other hand, by (19) and the definitions of , , and , we have that Combining (34) and (35), we deduce that From the assumption that , it follows that which contradicts the fact that ,. Thus, we have ; that is, . This completes the proof.

Assumption 7. The solution set of the GCP is nonempty and bounded.

Theorem 8. Suppose that Assumption 7 is satisfied and $z^{*}$ is an accumulation point of the iteration sequence $\{z^{k}\}$ generated by Algorithm 4. If all elements of $\partial H(z^{*})$ are nonsingular, then (i) $\alpha_{k} = 1$ for all $z^{k}$ sufficiently close to $z^{*}$; (ii) the whole sequence $\{z^{k}\}$ converges to $z^{*}$; that is, $\lim_{k \to \infty} z^{k} = z^{*}$; (iii) $\{z^{k}\}$ converges to $z^{*}$ superlinearly; that is, $\|z^{k+1} - z^{*}\| = o(\|z^{k} - z^{*}\|)$. Moreover, $\mu_{k+1} = o(\mu_{k})$.

Proof. It follows from Theorem 6 that and . Because all are nonsingular, it follows from Proposition  3.1 in [11] that, for all sufficiently close to , we have where is some constant. By Theorem  19 in [12], it is easy to see from Lemma 3 that is semismooth. Hence, for all sufficiently close to , we get On the other hand, the fact that is semismooth implies that is locally Lipschitz continuous near . Therefore, for all sufficiently close to , we have Thus, we obtain from (40) and the definition of that Then, by (38), (39), and (41), we have Similar to the proof of Theorem  3.1 in [13], for all sufficiently close to , we get Then, combining this with (42), since is semismooth at , for all sufficiently close to , we have By Theorem 6, . Hence, (44) implies that when is sufficiently close to , can satisfy (20), which proves (i). Thus, for all sufficiently close to , we have which together with (42) proves (ii) and Next, from (i), (ii), and (41), we obtain for all sufficiently large that where the third equality follows from the fact . Therefore, for all sufficiently close to , we obtain which completes the whole proof.

5. Conclusion

For the generalized complementarity problem over a polyhedral cone, we formulated it as a system of equations via a smoothing NCP-function based on the Fischer-Burmeister function. We then presented a Newton-type method to solve the equivalent system of equations. The proposed method is shown to be globally and superlinearly convergent under mild assumptions. Furthermore, our method solves only one linear system of equations and performs only one line search at each iteration. Compared with previous work in the literature, our method achieves stronger convergence results under weaker conditions.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 11171180, 11271226, and 61100028, the Natural Science Foundation of Shandong Province under Grant ZR2012AL07, and by the Fundamental Research Funds for the Central Universities under Grant N110404017.

References

1. H. Jiang, M. Fukushima, L. Qi, and D. Sun, “A trust region method for solving generalized complementarity problems,” SIAM Journal on Optimization, vol. 8, no. 1, pp. 140–157, 1998.
2. C. Kanzow and M. Fukushima, “Equivalence of the generalized complementarity problem to differentiable unconstrained minimization,” Journal of Optimization Theory and Applications, vol. 90, no. 3, pp. 581–603, 1996.
3. P. Tseng, N. Yamashita, and M. Fukushima, “Equivalence of complementarity problems to differentiable minimization: a unified approach,” SIAM Journal on Optimization, vol. 6, no. 2, pp. 446–460, 1996.
4. F. Facchinei and J. S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, Springer, New York, NY, USA, 2003.
5. R. Andreani, A. Friedlander, and S. A. Santos, “On the resolution of the generalized nonlinear complementarity problem,” SIAM Journal on Optimization, vol. 12, no. 2, pp. 303–321, 2001.
6. Y. Wang, F. Ma, and J. Zhang, “A nonsmooth L-M method for solving the generalized nonlinear complementarity problem over a polyhedral cone,” Applied Mathematics and Optimization, vol. 52, no. 1, pp. 73–92, 2005.
7. X. Zhang, F. Ma, and Y. Wang, “A Newton-type algorithm for generalized linear complementarity problem over a polyhedral cone,” Applied Mathematics and Computation, vol. 169, no. 1, pp. 388–401, 2005.
8. F. Ma, Y. Wang, and H. Zhao, “A potential reduction method for the generalized linear complementarity problem over a polyhedral cone,” Journal of Industrial and Management Optimization, vol. 6, no. 1, pp. 259–267, 2010.
9. A. Fischer, “A special Newton-type optimization method,” Optimization, vol. 24, no. 3-4, pp. 269–284, 1992.
10. C. Ma and X. Chen, “The convergence of a one-step smoothing Newton method for ${P}_{0}$-NCP based on a new smoothing NCP-function,” Journal of Computational and Applied Mathematics, vol. 216, no. 1, pp. 1–13, 2008.
11. L. Q. Qi and J. Sun, “A nonsmooth version of Newton's method,” Mathematical Programming, vol. 58, no. 3, pp. 353–367, 1993.
12. A. Fischer, “Solution of monotone complementarity problems with locally Lipschitzian functions,” Mathematical Programming, vol. 76, no. 3, pp. 513–532, 1997.
13. L. Q. Qi, “Convergence analysis of some algorithms for solving nonsmooth equations,” Mathematics of Operations Research, vol. 18, no. 1, pp. 227–244, 1993.