Nonlinear Analysis: Optimization Methods, Convergence Theory, and Applications
Research Article | Open Access
A Smoothing Inexact Newton Method for Nonlinear Complementarity Problems
Abstract
A smoothing inexact Newton method is presented for solving nonlinear complementarity problems. Unlike existing exact methods, the associated subproblems need not be solved exactly to obtain the search directions. Under suitable assumptions, global convergence and superlinear convergence are established for the developed inexact algorithm; these results extend those of the exact case. Numerical experiments indicate that our algorithm is effective on the benchmark test problems available in the literature, and that a suitable choice of the inexact parameters can improve its numerical performance.
1. Introduction
In the study of equilibrium problems from economics, engineering, and management science, a complementarity problem (CP) often appears as the prominent mathematical model. Thus, it has been of great practical interest over the past decades to develop robust and efficient algorithms for solving CPs (see the recently published book [1] and the references therein). In this paper, we consider the nonlinear complementarity problem (denoted NCP for short): find a vector $x \in \mathbb{R}^n$ such that
$$x \ge 0, \quad F(x) \ge 0, \quad x^T F(x) = 0, \tag{1}$$
where $F : \mathbb{R}^n \to \mathbb{R}^n$ is a continuously differentiable function. Due to its extensive applications, the NCP has attracted great attention from researchers (see, e.g., [2, 3] and the references therein). On the one hand, there have been many theoretical results on the existence of solutions and their structural properties. On the other hand, many attempts have been made to develop implementable algorithms for the solution of the NCP.
A popular way to solve the NCP is to reformulate (1) as a nonsmooth equation via an NCP-function. A function $\phi : \mathbb{R}^2 \to \mathbb{R}$ is said to be an NCP-function if
$$\phi(a, b) = 0 \iff a \ge 0,\ b \ge 0,\ ab = 0.$$
Define $\Phi : \mathbb{R}^n \to \mathbb{R}^n$, given by
$$\Phi(x) = \big(\phi(x_1, F_1(x)), \ldots, \phi(x_n, F_n(x))\big)^T.$$
Then, problem (1) is equivalent to the nonsmooth equation
$$\Phi(x) = 0. \tag{4}$$
Thus, any efficient algorithm for solving (4) can be directly applied to find a solution of problem (1).
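To make this reformulation concrete, here is a minimal Python sketch (not part of the original paper): it uses one classical NCP-function, the Fischer-Burmeister function recalled in the next section, together with a toy map `F` chosen only for illustration, to build the residual whose roots are exactly the NCP solutions.

```python
import numpy as np

def phi_fb(a, b):
    """Fischer-Burmeister NCP-function: phi(a, b) = 0 iff a >= 0, b >= 0, ab = 0."""
    return np.sqrt(a**2 + b**2) - a - b

def ncp_residual(F, x):
    """Componentwise residual Phi(x) = (phi(x_i, F_i(x)))_i; x solves the NCP iff Phi(x) = 0."""
    return phi_fb(x, F(x))

# Toy example (illustrative only): F(x) = x - 1, whose NCP solution is x = (1, ..., 1).
F = lambda x: x - 1.0
x_star = np.ones(3)
print(np.linalg.norm(ncp_residual(F, x_star)))  # 0.0
```

Any root-finding method applied to this residual therefore solves the original complementarity problem, which is the basis of the algorithms discussed below.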
The smoothing method is a fundamental approach for solving the nonsmooth equation (4); in this connection see, for example, [4-16]. The basic idea of this method is to construct a smooth function $\Phi_\mu$ to approximate $\Phi$, where $\mu > 0$ is a given smoothing parameter. We define a continuously differentiable function $\Phi_\mu : \mathbb{R}^n \to \mathbb{R}^n$ such that, for any $x \in \mathbb{R}^n$,
$$\|\Phi_\mu(x) - \Phi(x)\| \to 0 \quad \text{as } \mu \downarrow 0.$$
Then, problem (4) is approximated by the smooth equation
$$\Phi_\mu(x) = 0. \tag{6}$$
Let $\{\mu_k\}$ be a given positive sequence tending to 0. Then, we can obtain an approximate solution of (4) by solving (6) with $\mu = \mu_k$.
Recently, many different smoothing functions have been employed to smooth problem (4). Among them, the Fischer-Burmeister function and the minimum function are two popular ones, defined by
$$\phi_{FB}(a, b) = \sqrt{a^2 + b^2} - a - b, \qquad \phi_{\min}(a, b) = \min\{a, b\},$$
respectively. With the 2-norm in the Fischer-Burmeister function replaced by a more general $p$-norm with $p > 1$, Chen and Pan proposed a family of new NCP-functions in [6]. By combining the Fischer-Burmeister function and the minimum function, Liu and Wu presented a smoothing function, (8), in [11]. In [13], a symmetric perturbed Fischer-Burmeister function is constructed. Very recently, a more general smoothing function with the $p$-norm was presented in [15]. It is shown there that the numerical performance of the nonmonotone smoothing Newton algorithm developed in [14] is greatly improved for suitable values of $p$.
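As a concrete illustration of such smoothings, one common smoothed variant of the Fischer-Burmeister function is sketched below (an assumed example; it is not necessarily the exact function used in [11] or [13]). The approximation error is at most $\sqrt{2}\,\mu$ and vanishes as $\mu \to 0$.

```python
import numpy as np

def phi_fb(a, b):
    """Fischer-Burmeister function."""
    return np.sqrt(a**2 + b**2) - a - b

def phi_fb_mu(a, b, mu):
    """A smoothed Fischer-Burmeister function: smooth for mu > 0, -> phi_fb as mu -> 0."""
    return np.sqrt(a**2 + b**2 + 2.0 * mu**2) - a - b

# The gap phi_fb_mu - phi_fb is positive and at most sqrt(2) * mu,
# so shrinking mu tightens the approximation uniformly.
for mu in (1.0, 1e-2, 1e-4):
    print(mu, abs(phi_fb_mu(0.0, 0.0, mu) - phi_fb(0.0, 0.0)))
```

The worst case occurs at the kink $(a, b) = (0, 0)$, where the gap equals $\sqrt{2}\,\mu$ exactly; away from the kink the two functions agree much more closely.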
In this paper, we first rewrite the smoothing function (8) in a form involving the absolute value function; we then investigate a new smoothing method for the absolute value function and, by virtue of this new method, design a smoothing inexact Newton algorithm to solve the resulting smooth equations. Since an inexact parameter is admitted at each iteration to obtain an inexact Newton search direction, the developed algorithm is more amenable to numerical computation than similar ones available in the literature. By suitably choosing the sequence of inexact parameters in advance, the numerical performance of the algorithm developed in this paper can be improved. On the other hand, we can establish the convergence theory for our algorithm without the assumption of strict complementarity, which is weaker than the assumptions in existing results.
The rest of this paper is organized as follows. In Section 2, we study a smoothing method for the absolute value function. Section 3 is devoted to the development of a smoothing inexact Newton algorithm for solving the nonlinear complementarity problem. In Section 4, the global convergence and the superlinear convergence are established. Numerical results are reported in Section 5. Some final remarks are made in Section 6.
The following notation will be used throughout this paper. For any vector or matrix $A$, $A^T$ denotes the transpose of $A$. $\mathbb{R}^n$ denotes the space of $n$-dimensional real vectors. $\mathbb{R}^n_+$ and $\mathbb{R}^n_{++}$ denote the nonnegative and positive orthants of $\mathbb{R}^n$, respectively. For any vector $x \in \mathbb{R}^n$, $\operatorname{diag}(x)$ denotes the diagonal matrix whose $i$th diagonal element is $x_i$. $I$ represents the identity matrix of suitable dimension. $\|\cdot\|$ stands for the 2-norm. For a positive quantity $\alpha$, $O(\alpha)$ and $o(\alpha)$ represent quantities that are uniformly bounded after division by $\alpha$ and that tend to zero after division by $\alpha$, respectively, as $\alpha \to 0$.
2. Smoothing the Absolute Function
In this section, we study a smoothing method for the absolute value function $|t|$.
We first present a function , given by It is clear that
Note that the generalized derivative of the absolute value function is given by
$$\partial |t| = \begin{cases} \{1\}, & t > 0, \\ [-1, 1], & t = 0, \\ \{-1\}, & t < 0. \end{cases}$$
We can conclude that, except at $t = 0$, the function above is a good approximation to the generalized derivative of $|t|$ when $\mu$ is sufficiently small. Actually, the following result was proved in [17].
Proposition 1. For any given constant , there is a constant independent of and such that
By Proposition 1, to obtain an approximation of , we calculate the integral of : Then, it is natural that we use to approximate . Actually, we have the following result (see [17]).
Proposition 2. (1) For any , it holds that
The above inequality holds strictly for all .
(2) For any , .
(3) , where and denotes the distance from the point to the set .
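For intuition, here is one standard smooth approximation of $|t|$ in Python (an assumed example for illustration; the specific function constructed from [17] above may differ). It satisfies bounds of the same flavor as Proposition 2, namely $0 < s_\mu(t) - |t| \le \mu$, and its derivative approximates the generalized derivative of $|t|$ away from $t = 0$.

```python
import numpy as np

def s(t, mu):
    """Smooth approximation of |t| for mu > 0; satisfies 0 < s(t, mu) - |t| <= mu."""
    return np.sqrt(t**2 + mu**2)

def s_prime(t, mu):
    """Derivative of s in t; approximates sign(t), the generalized derivative of |t|."""
    return t / np.sqrt(t**2 + mu**2)
```

The uniform error bound follows from $\sqrt{t^2 + \mu^2} - |t| \le \mu$, with equality only at $t = 0$, mirroring the fact that the smoothing error concentrates at the kink.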
3. A Smoothing Inexact Newton Algorithm for NCP
In this section, we will develop a smoothing inexact Newton algorithm for solving a smooth equation obtained by reformulating the NCP().
By means of Proposition 2, we construct a smooth approximation, defined by
Define , given by Then, is approximated by a smooth equation:
Remark 3. The above smoothing method has been used to deal with the NCP in [17], where a generalized Newton equation (22) is solved at each iteration to obtain a search direction for the algorithm developed there. Different from the standard Newton method, the coefficient matrix in (22) replaces the usual Jacobian.
Taking into account the advantage of the standard smoothing Newton method (see, e.g., [12, 15, 16, 18]) in adjusting the value of the smoothing parameter automatically, we further transform problem (21) into a smooth optimization problem.
We define a merit function by (23). Then, any solution of (21) yields an optimal solution of the following minimization problem (24). Conversely, if a point is an optimal solution of problem (24) with zero optimal value, then it solves the system (21).
Next, we focus on developing an efficient algorithm to solve problem (24). Before presenting such an algorithm, we further investigate the properties of problem (24). The following definitions are useful for describing these properties.
Definition 4. A matrix $M \in \mathbb{R}^{n \times n}$ is said to be a $P_0$-matrix if all principal minors of $M$ are nonnegative.
Definition 5. A function $F : \mathbb{R}^n \to \mathbb{R}^n$ is said to be a $P_0$-function if, for all $x, y \in \mathbb{R}^n$ with $x \ne y$, there holds
$$\max_{\substack{1 \le i \le n \\ x_i \ne y_i}} (x_i - y_i)\big(F_i(x) - F_i(y)\big) \ge 0.$$
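Definition 4 can be checked directly on small matrices. The following sketch (illustrative only; it enumerates all principal submatrices, so it is only sensible for small $n$) tests the nonnegativity of every principal minor:

```python
import itertools
import numpy as np

def is_p0_matrix(M, tol=1e-12):
    """Return True iff every principal minor of M is nonnegative (Definition 4)."""
    n = M.shape[0]
    for k in range(1, n + 1):
        for idx in itertools.combinations(range(n), k):
            # Principal submatrix indexed by the same rows and columns.
            if np.linalg.det(M[np.ix_(idx, idx)]) < -tol:
                return False
    return True

print(is_p0_matrix(np.array([[2.0, 1.0], [1.0, 2.0]])))   # True: positive definite
print(is_p0_matrix(np.array([[-1.0, 0.0], [0.0, 1.0]])))  # False: principal minor -1 < 0
```

Positive semidefinite and skew-symmetric matrices are $P_0$-matrices, while any matrix with a negative diagonal entry is not, since diagonal entries are 1-by-1 principal minors.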
Definition 6 (see [19, 20]). Suppose that $F$ is a locally Lipschitz continuous function with the generalized Jacobian $\partial F$ in the sense of Clarke [21]. $F$ is said to be semismooth (respectively, strongly semismooth) at a point $x$ if and only if, for any $V \in \partial F(x + h)$, as $h \to 0$,
$$F(x + h) - F(x) - V h = o(\|h\|) \quad (\text{respectively, } O(\|h\|^2)).$$
We now prove the following results.
Lemma 7. Let the function be defined by (23). Then, the following statements hold: (i) the function is continuously differentiable at any point with a positive smoothing parameter, with its Jacobian matrix given in (27), where the blocks are specified in (28); furthermore, if $F$ is a $P_0$-function, then this Jacobian is nonsingular at any such point; (ii) the function is locally Lipschitz continuous and semismooth.
Proof. (i) Since $F$ is continuously differentiable, the function is continuously differentiable at any point with a positive smoothing parameter. At any such point, a straightforward calculation from the definition yields (27).
Note that the two diagonal matrices appearing in the Jacobian are positive at every such point. Since $F$ is a $P_0$-function, $F'(x)$ is a $P_0$-matrix for all $x$; thus its principal minors are nonnegative, and by Definition 4 the resulting matrix is a $P_0$-matrix. From Theorem 3.3 in [7], it follows that this matrix is nonsingular. Then, it is concluded that the Jacobian matrix is nonsingular.
(ii) It is clear that is locally Lipschitz continuous and semismooth on . The proof is completed.
With the properties established in Lemma 7, we first present an algorithm to solve problem (24), similar to the ideas in [18, 22-25].
Algorithm 8 (a smoothing inexact Newton method).
Step 0. Choose constants , , , such that . Given an initial point , choose a sequence such that . Set and .
Step 1. If , then the algorithm stops. Otherwise, compute
Step 2. Compute by
Step 3. Set , where is the smallest nonnegative integer such that
Step 4. Set and . Return to Step 1.
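Since the symbols in Steps 0-4 were lost in extraction, the following Python sketch reconstructs the overall scheme under stated assumptions rather than reproducing Algorithm 8 exactly: it uses a smoothed Fischer-Burmeister residual (an assumed choice; the paper builds its own smoothing from Proposition 2), a finite-difference Jacobian, a Newton system solved to within the inexactness the method permits (here solved exactly, which trivially satisfies any tolerance $\eta_k$), Armijo backtracking on the squared residual, and a geometrically decreasing smoothing parameter. The parameter names `sigma` and `beta` are hypothetical stand-ins for the constants chosen in Step 0.

```python
import numpy as np

def phi_fb_mu(x, Fx, mu):
    # Smoothed Fischer-Burmeister residual (an assumed smoothing, for illustration).
    return np.sqrt(x**2 + Fx**2 + 2.0 * mu**2) - x - Fx

def jacobian_fd(G, x, eps=1e-7):
    # Forward-difference Jacobian of G at x (illustration only).
    g0, n = G(x), x.size
    J = np.zeros((n, n))
    for i in range(n):
        xp = x.copy()
        xp[i] += eps
        J[:, i] = (G(xp) - g0) / eps
    return J

def smoothing_inexact_newton(F, x0, tol=1e-8, max_iter=100, sigma=1e-4, beta=0.5):
    """Hypothetical sketch of a smoothing (inexact) Newton iteration for the NCP."""
    x, mu = x0.astype(float), 1.0
    for _ in range(max_iter):
        G = lambda z: phi_fb_mu(z, F(z), mu)
        g = G(x)
        if np.linalg.norm(g) <= tol:          # Step 1: termination test
            break
        J = jacobian_fd(G, x)
        # Step 2: an exact solve trivially meets any inexactness tolerance
        # eta_k in [0, 1); an iterative solver stopped early would give a
        # genuinely inexact Newton direction.
        d = np.linalg.solve(J, -g)
        # Step 3: Armijo backtracking on the merit function 0.5 * ||G(x)||^2.
        t, f0 = 1.0, 0.5 * g @ g
        while 0.5 * np.sum(G(x + t * d) ** 2) > f0 + sigma * t * ((J.T @ g) @ d) and t > 1e-12:
            t *= beta
        # Step 4: update the iterate and shrink the smoothing parameter.
        x = x + t * d
        mu *= 0.5
    return x

# Linear complementarity example: F(x) = Mx + q with solution x* = (1, 1).
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-3.0, -3.0])
x = smoothing_inexact_newton(lambda z: M @ z + q, np.zeros(2))
print(np.round(x, 6))
```

In this monotone linear example the iterates approach $x^* = (1, 1)$, which satisfies $x^* \ge 0$ and $F(x^*) = 0$; swapping the exact linear solve for a truncated iterative solver is what makes the method inexact in the sense of Step 2.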
Remark 9. Similar to the idea in [26], we develop Algorithm 8 by incorporating an inexact parameter at each iteration to obtain an inexact Newton search direction in (30). Generally, we choose the sequence of inexact parameters in advance so that it tends to zero. A suitable choice of this sequence can improve the numerical performance of Algorithm 8 through the inexact Newton direction generated in Step 2. The difference between Algorithm 8 and the algorithm developed in [26] lies in the distinct smoothing method: in [26], the Fischer-Burmeister function is adopted instead of the smoothing function (19).
On the other hand, without the assumption of strict complementarity, we will establish the global and local superlinear convergence theory for Algorithm 8 in Section 4, under weaker conditions than in existing results.
If the inexact parameters are all taken to be zero, then Algorithm 8 reduces to a smoothing Newton algorithm similar to that developed in [18]. However, the function used in this paper is defined differently from that in [18].
Denote
The following result shows that Algorithm 8 is welldefined.
Theorem 10. Suppose that $F$ is a continuously differentiable function. (1) The system of linear equations (30) in the unknown search direction has a unique solution. (2) The step size in Step 3 of Algorithm 8 is obtained within finitely many backtracking steps so as to satisfy (31). (3) For the sequence generated by Algorithm 8, the stated inclusion holds at every iteration.
Proof. We prove the first result.
Since $F$ is a continuously differentiable function, it follows from Lemma 7 that the Jacobian matrix is nonsingular whenever the smoothing parameter is positive. This implies that the system of linear equations (30) in the unknown search direction has a unique solution. Thus, Step 2 of Algorithm 8 is well-defined.
We now prove the second result.
By 30, we have
From the definitions of and , it follows that, for all ,
Thus, for any
Denote
Since the function is continuously differentiable at the point in question, we conclude from (36) that
It yields
Since , there exists a constant such that, for any and , there holds that
This demonstrates that Step 3 of Algorithm 8 is well-defined at each iteration.
Finally, we prove for all .
It is clear that . In other words, . Suppose that as . Then, . By 31, we get ; then . By 33, we have
The last inequality implies that the desired result holds at the next iteration. By mathematical induction, we conclude that it holds for all $k$.
We have completed the proof of Theorem 10.
Remark 11. By Theorem 10, we know that Algorithm 8 is welldefined, and either it stops in finitely many steps or generates an infinite sequence with and for all . In the subsequent section, we will analyze the convergence of this sequence.
4. Convergence
In this section, we will establish the global convergence and the superlinear convergence for Algorithm 8.
We first prove the following result.
Lemma 12. Let be defined by 20. If is a function, then, for any , is coercive in . That is,
Proof. Consider any unbounded vector sequence. Then, there is a component index such that the corresponding component sequence is unbounded.
Define an index set . Then, is a nonempty set. Without loss of generality, we assume that , for all .
Let the sequence be defined by
Then, it is clear that is bounded. Since is a function, by Definition 5, we have
where is one of the indices at which the max is attained. Since , and can be supposed to be independent of , we know as .
Next, we continue the proof by considering the following six cases.
Case 1 ( and ). Since is bounded by the continuity of and the definition of , we know that from 43. Thus,
It yields
Case 2 ( and ). It is clear that
In virtue of
we obtain
Case 3 ( and ). For the same reason as in Case 1, . Thus,
It yields
Case 4 ( and ). Similar to Case , we can obtain
Case 5 ( and is bounded). On the one hand, it is clear that
is bounded. On the other hand, and is bounded; we know . Thus, + . It yields
Case 6 ( and is bounded). Similar to Case , it is easy to prove that .
The proof is completed.
Remark 13. By Lemma 12, we can remove the assumption that the level set of the merit function is bounded. In addition, unlike [13, 22, 27], the result of Lemma 12 is obtained here for a nonsymmetric smoothing function.
Before statement of main results, we need the following assumption.
Assumption 14. The solution set of NCP (1) is nonempty and bounded.
Remark 15. Assumption 14 is a relatively weak condition for ensuring the convergence of Algorithm 8. For example, in [26], it is assumed that the level sets
are bounded in order to prove the convergence of the algorithm. To the best of our knowledge, for the Fischer-Burmeister smoothing function, (54) is proved to be true under the condition that $F$ in NCP (1) is a uniform $P$-function.
However, with our smoothing method, we can prove that (54) holds. Since the proof only involves the condition that $F$ is a $P_0$-function, Assumption 14 is weaker than that in [26].
With Lemma 12 and Assumption 14, we are now in a position to establish the convergence theory for Algorithm 8.
Theorem 16. Let be the iteration sequence generated by Algorithm 8. Under Assumption 14, the following statements are true. (i) and generated by Algorithm 8 are two monotonically decreasing and bounded sequences, whose limits are 0.(ii)Any accumulation point of is a solution ofâ€‰â€‰24.(iii)Under Assumption 14, has at least one accumulation point with and .
Proof. (i) From Steps 2 and 3 of Algorithm 8, it is clear that the sequences in question are monotonically decreasing and bounded.
(ii) By Lemma 12, we conclude that the sequence is bounded. Then, without loss of generality, we suppose that as , there exists such that
If , then, by the definition of , and . From Lemma 7, it follows that is nonsingular. Thus, there exist a closed neighborhood and a constant , such that, for any and nonnegative integer satisfying , the following inequality holds true:
If is large enough such that and , then,
Therefore, as , it follows from that
It contradicts . We conclude that and .
(iii) By Assumption 14, we know that is nonempty and bounded. Thus, has at least one accumulation point with and .
Theorem 17. Suppose that Assumption 14 is satisfied and is an accumulation point of the sequence generated by Algorithm 8. If all are nonsingular, then, converges to superlinearly; that is, . Moreover, .
Proof. By Theorem 16, we have the limits stated there. Because all the matrices concerned are nonsingular, it follows that (59) holds for all iterates sufficiently close to the accumulation point. From Lemma 7, it follows that the function is semismooth at the accumulation point; hence, for all iterates sufficiently close to it, we have (60). On the other hand, Lemma 7 implies that the function is locally Lipschitz continuous near the accumulation point; therefore, for all iterates sufficiently close to it, we have (61). Consequently, by the relevant definitions, we have (62). Then, in view of (59), (60), and (62), we obtain (63). On the other hand, from (61), it follows that (64). Thus, for iterates sufficiently close to the accumulation point, the full step is eventually accepted; it yields (65). In view of (65), we obtain (66). The proof has been completed.
5. Numerical Experiments
In this section, we test the numerical performance of Algorithm 8 on benchmark NCP test problems.
Algorithm 8 is implemented in MATLAB 2008a on a PC with a 2.00 GHz CPU and 2.00 GB of RAM, running Windows 7. Throughout the experiments, the parameters in Algorithm 8 are chosen as follows: We use the stated residual threshold as the termination criterion.
Numerical results are reported in Tables 1-9, where we use the following notation for conciseness: IT: the number of iterations; ST: the initial point; CT: the CPU time consumed by the algorithm; : a solution of the NCP; : the final value of the merit function; : the zero vector of the given dimension; : the unit vector of the given dimension; F: the algorithm fails to find a solution.




