Nonlinear Analysis: Optimization Methods, Convergence Theory, and Applications
A Smoothing Inexact Newton Method for Nonlinear Complementarity Problems
A smoothing inexact Newton method is presented for solving nonlinear complementarity problems. In contrast to existing exact methods, the associated subproblems need not be solved exactly to obtain the search directions. Under suitable assumptions, global convergence and superlinear convergence are established for the developed inexact algorithm, extending the results for the exact case. On the one hand, results of numerical experiments indicate that our algorithm is effective for the benchmark test problems available in the literature. On the other hand, a suitable choice of inexact parameters can improve the numerical performance of the developed algorithm.
In the study of equilibrium problems from economics, engineering, and management sciences, a complementarity problem (CP) often appears as the prominent mathematical model. Thus, it has been of great practical interest over the past decades to develop robust and efficient algorithms for solving CPs (see the very recently published book and the references therein). In this paper, we consider a nonlinear complementarity problem (denoted by NCP, for short): find a vector $x \in \mathbb{R}^n$ such that $$x \ge 0, \quad F(x) \ge 0, \quad x^{\top} F(x) = 0,$$ where $F: \mathbb{R}^n \to \mathbb{R}^n$ is a continuously differentiable function. Due to its extensive applications, the NCP has attracted great attention from researchers (see, e.g., [2, 3] and the references therein). On the one hand, there have been many theoretical results on the existence of solutions and their structural properties. On the other hand, many attempts have been made to develop implementable algorithms for the solution of the NCP.
A popular way to solve the NCP is to reformulate 1 as a nonsmooth equation via an NCP-function. A function $\phi: \mathbb{R}^2 \to \mathbb{R}$ is said to be an NCP-function if $$\phi(a,b) = 0 \iff a \ge 0,\ b \ge 0,\ ab = 0.$$ Define $\Phi: \mathbb{R}^n \to \mathbb{R}^n$, given by $\Phi_i(x) = \phi(x_i, F_i(x))$, $i = 1, \dots, n$. Then, problem 1 is equivalent to the equation $\Phi(x) = 0$. Thus, any efficient algorithm for solving 4 can be directly applied to find the solution of problem 1.
The smoothing method is a fundamental approach for solving the nonsmooth equation 4; in this connection, see, for example, [4–16]. The basic idea of this method is to construct a smooth function to approximate $\Phi$. Let $\mu > 0$ be a given smoothing parameter. We define a continuously differentiable function $\Phi_\mu$ such that, for any $x$, $\Phi_\mu(x) \to \Phi(x)$ as $\mu \to 0^+$. Then, problem 4 is approximated by a smooth equation: $\Phi_\mu(x) = 0$. Let $\{\mu_k\}$ be a given positive sequence which tends to 0. Then, we can obtain an approximate solution of 4 by solving 6 with $\mu = \mu_k$.
Recently, many different smoothing functions have been employed to smooth problem 4. Among them, the Fischer-Burmeister function and the minimum function are two popular ones, defined by $$\phi_{FB}(a,b) = \sqrt{a^2 + b^2} - a - b \quad \text{and} \quad \phi_{\min}(a,b) = \min\{a, b\},$$ respectively. With the 2-norm of $(a,b)$ in the Fischer-Burmeister function replaced by a more general $p$-norm with $p > 1$, Chen and Pan proposed a family of new NCP-functions in . By combining the Fischer-Burmeister function and the minimum function, Liu and Wu presented a smoothing function in  as follows: In , a symmetric perturbed Fischer-Burmeister function is constructed: Very recently, in , a more general smoothing function with the $p$-norm ($p > 1$) was presented. It is shown that, for the nonmonotone smoothing Newton algorithm developed in , the numerical performance of the algorithm is greatly improved if .
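For concreteness, the defining property of these two NCP-functions can be checked numerically. The following is a minimal sketch (function names are my own; the paper's symbols are not preserved in this extraction):

```python
import math

def phi_fb(a, b):
    """Fischer-Burmeister NCP-function: zero iff a >= 0, b >= 0, a*b = 0."""
    return math.sqrt(a * a + b * b) - a - b

def phi_min(a, b):
    """Minimum NCP-function: zero iff a >= 0, b >= 0, a*b = 0."""
    return min(a, b)

# Complementary pairs (a >= 0, b >= 0, a*b = 0) are zeros of both functions.
for a, b in [(0.0, 3.0), (2.0, 0.0), (0.0, 0.0)]:
    assert abs(phi_fb(a, b)) < 1e-12 and abs(phi_min(a, b)) < 1e-12

# A non-complementary pair gives a nonzero value: phi_fb(1, 1) = sqrt(2) - 2.
assert phi_fb(1.0, 1.0) != 0.0
```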
In this paper, we first write 8 as ; then, we investigate a new smoothing method of , and by virtue of this new method, we design a smoothing inexact Newton algorithm to solve the obtained smooth equations. Since an inexact parameter is admissible at each iteration to obtain an inexact Newton search direction, the developed algorithm is better suited to numerical computation than similar ones available in the literature. By suitably choosing a sequence of inexact parameters in advance, the numerical performance of the algorithm developed in this paper can be improved. On the other hand, without the assumption of strict complementarity, we can establish the convergence theory for our algorithm under conditions weaker than those in the existing results.
The rest of this paper is organized as follows. In Section 2, we study a smoothing method of the absolute function. Section 3 is devoted to development of a smoothing inexact Newton algorithm to solve the nonlinear complementarity problem. In Section 4, the global convergence and the superlinear convergence are established. Numerical results are reported in Section 5. Some final remarks are made in Section 6.
The following notation will be used throughout this paper. For any vector or matrix , denotes the transposition of . denotes the space of -dimensional real vectors. and denote the nonnegative and the positive orthants of , respectively. For any vector , denotes a diagonal matrix whose th diagonal element is , and the vector represents the set . represents the identity matrix of suitable dimension. stands for the 2-norm. For any , and represent that is uniformly bounded and that tends to zero as , respectively.
2. Smoothing the Absolute Function
In this section, we will study a smoothing method of the absolute function.
We first present a function , given by It is clear that
Note that the generalized derivative of the absolute value function $|t|$ is given by $$\partial |t| = \begin{cases} \{1\}, & t > 0, \\ [-1, 1], & t = 0, \\ \{-1\}, & t < 0. \end{cases}$$
We conclude that, except at , is a good approximation to the generalized derivative of for sufficiently small . Actually, the following result was proved in .
Proposition 1. For any given constant , there is a constant independent of and such that
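The paper's concrete smoothing function is not recoverable from this extraction, but the flavor of Proposition 1 can be illustrated with a standard smooth approximation of the absolute value, $\theta(\mu, t) = \sqrt{t^2 + 4\mu^2}$, which satisfies the uniform bound $0 \le \theta(\mu, t) - |t| \le 2\mu$ (an illustrative stand-in, not the paper's function):

```python
import math

def theta(mu, t):
    """A standard smooth approximation of |t| (illustrative only; not the
    paper's specific smoothing function, which is elided in this extraction)."""
    return math.sqrt(t * t + 4.0 * mu * mu)

mu = 1e-3
ts = [i / 100.0 for i in range(-500, 501)]
err = max(theta(mu, t) - abs(t) for t in ts)
# Uniform error bound: 0 <= theta(mu, t) - |t| <= 2*mu for all t
# (the maximum is attained at t = 0, where theta(mu, 0) = 2*mu).
assert 0.0 <= err <= 2.0 * mu + 1e-12
# The derivative t / theta(mu, t) approaches sign(t) for t != 0 as mu -> 0.
assert abs(1.0 / theta(1e-8, 1.0) - 1.0) < 1e-6
```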
Proposition 2. (1) For any , it holds that
The above inequality holds strictly for all .
(2) For any , .
(3) , where and denotes the distance from the point to the set .
3. A Smoothing Inexact Newton Algorithm for NCP
In this section, we will develop a smoothing inexact Newton algorithm for solving a smooth equation obtained by reformulating the NCP().
Based on Proposition 2, we construct an approximation of , defined by
Define , given by Then, is approximated by a smooth equation:
Remark 3. The above smoothing method has been used to deal with NCP() in . A search direction at the th iteration of the algorithm developed in  is then obtained by solving a generalized Newton equation: Different from the standard Newton method, is employed to replace in 22.
Taking into account the advantage of the standard smoothing Newton method (see, e.g., [12, 15, 16, 18]) in adjusting the value of the smoothing parameter automatically, we further transform problem 21 into a smooth optimization problem.
Denote . We define a function by with . Then, corresponding to any solution of 21, is an optimal solution of the following minimization problem: Conversely, if is an optimal solution of problem 24 with , then solves system 21.
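The equivalence above can be illustrated with a merit function of the form $\psi(x) = \|\Phi(x)\|^2$, using the Fischer-Burmeister function as a stand-in for the paper's elided $\Phi$: the merit function is nonnegative everywhere and vanishes exactly at solutions of the NCP. A one-dimensional sketch (the map F below is my own example):

```python
import math

def F(x):
    # A 1-dimensional example: F(x) = x - 1; the NCP solution is x = 1.
    return x - 1.0

def merit(x):
    """psi(x) = phi(x, F(x))^2 with the Fischer-Burmeister NCP-function."""
    a, b = x, F(x)
    phi = math.sqrt(a * a + b * b) - a - b
    return phi * phi

# The merit function attains its global minimum value 0 exactly at the
# NCP solution x = 1 and is strictly positive elsewhere.
assert merit(1.0) == 0.0
assert all(merit(x) > 0.0 for x in [0.0, 0.5, 2.0, -1.0])
```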
Next, we focus on developing an efficient algorithm to solve problem 24. Before presentation of such an algorithm, we further investigate the properties of problem 24. The following definitions are useful to describe the properties of .
Definition 4. A matrix $M \in \mathbb{R}^{n \times n}$ is said to be a $P_0$ matrix if all principal minors of $M$ are nonnegative.
Definition 5. A function $F: \mathbb{R}^n \to \mathbb{R}^n$ is said to be a $P_0$ function if, for all $x, y \in \mathbb{R}^n$ with $x \ne y$, there holds $$\max_{\substack{1 \le i \le n \\ x_i \ne y_i}} (x_i - y_i)\bigl(F_i(x) - F_i(y)\bigr) \ge 0.$$
Definition 6 (see [19, 20]). Suppose that is a locally Lipschitz continuous function with the generalized Jacobian in the sense of Clarke . It is said to be semismooth (or strongly semismooth) at a point if and only if, for any , as ,
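Definition 4 can be verified directly for small matrices by enumerating all principal minors. A minimal sketch (helper names are my own, not the paper's):

```python
from itertools import combinations

def det(m):
    """Determinant by Laplace expansion (adequate for the tiny examples here)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def is_p0_matrix(m):
    """True iff every principal minor of m is nonnegative (Definition 4)."""
    n = len(m)
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            sub = [[m[i][j] for j in idx] for i in idx]
            if det(sub) < 0:
                return False
    return True

# A P0 matrix: its principal minors are 0, 0, and 0.
assert is_p0_matrix([[0, 1], [0, 0]])
# Not P0: the full principal minor is 1*1 - 2*3 = -5 < 0.
assert not is_p0_matrix([[1, 2], [3, 1]])
```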
We now prove the following results.
Lemma 7. Let and be defined by 23. Then, consider the following:(i) is continuously differentiable at any with its Jacobian matrix where Furthermore, if is a -function, then, is nonsingular for any .(ii) is locally Lipschitz continuous and semismooth on .
Proof. (i) Since is continuously differentiable at any , is continuously differentiable. For any , straightforward calculation yields 27 from the definition of .
Note that, for all , . It is clear that and are two positive diagonal matrices. Since is a $P_0$ function, is a $P_0$ matrix for all . Thus, the principal minors of the matrix are nonnegative. By Definition 4, we know that the matrix is a $P_0$ matrix. From Theorem 3.3 in , it follows that the matrix is nonsingular. Then, it is concluded that the matrix is nonsingular.
(ii) It is clear that is locally Lipschitz continuous and semismooth on . The proof is completed.
Algorithm 8 (a smoothing inexact Newton method).
Step 0. Choose constants , , , such that . Given an initial point , choose a sequence such that . Set and .
Step 1. If , then the algorithm stops. Otherwise, compute
Step 2. Compute by
Step 3. Set , where is the smallest nonnegative integer such that
Step 4. Set and . Return to Step 1.
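Since the paper's smoothing function and parameter updates are elided in this extraction, the following is only a hedged sketch of the overall scheme of Algorithm 8 (a smoothing parameter driven to zero, a Newton direction accepted under a residual test, and Armijo backtracking), instantiated with the smoothed Fischer-Burmeister function on a small linear complementarity problem. All names and constants are illustrative choices, not the paper's:

```python
import numpy as np

def solve_ncp_smoothing_newton(M, q, x0, mu0=1.0, sigma=0.1, eta=0.1,
                               rho=0.5, c1=1e-4, tol=1e-10, max_iter=200):
    """Illustrative smoothing (inexact) Newton method for the LCP
    F(x) = M x + q:  x >= 0, F(x) >= 0, x^T F(x) = 0,
    using phi_mu(a, b) = sqrt(a^2 + b^2 + 2 mu^2) - a - b componentwise."""
    x, mu = np.asarray(x0, float).copy(), mu0
    for _ in range(max_iter):
        F = M @ x + q
        r = np.sqrt(x**2 + F**2 + 2.0 * mu**2)
        Phi = r - x - F
        if np.linalg.norm(Phi) < tol and mu < tol:
            break
        # Jacobian of Phi(x) for fixed mu: D_a + D_b M, with the diagonals below.
        Da = np.diag(x / r - 1.0)
        Db = np.diag(F / r - 1.0)
        J = Da + Db @ M
        d = np.linalg.solve(J, -Phi)          # here solved exactly ...
        # ... but any d meeting this residual test would be accepted:
        assert np.linalg.norm(Phi + J @ d) <= eta * np.linalg.norm(Phi)
        psi = 0.5 * np.dot(Phi, Phi)
        grad_dir = Phi @ (J @ d)              # = -||Phi||^2 for the exact d
        t = 1.0
        for _ in range(60):                   # Armijo backtracking
            xt = x + t * d
            Ft = M @ xt + q
            Pt = np.sqrt(xt**2 + Ft**2 + 2.0 * mu**2) - xt - Ft
            if 0.5 * np.dot(Pt, Pt) <= psi + c1 * t * grad_dir:
                break
            t *= rho
        x = x + t * d
        mu *= sigma                           # drive the smoothing parameter to 0
    return x

# LCP with M positive definite; its unique solution is x* = (1, 0).
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-2.0, 0.0])
x = solve_ncp_smoothing_newton(M, q, [0.0, 0.0])
assert np.allclose(x, [1.0, 0.0], atol=1e-6)
```

Here the linear system is solved exactly, so the inexactness test holds trivially; the point of the test in Step 2 is that any direction whose residual falls below the tolerance is acceptable, which is what makes iterative linear solvers attractive on large problems.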
Remark 9. Similar to the idea in , we develop Algorithm 8 by incorporating an inexact parameter at each iteration to obtain an inexact Newton search direction in 30. Generally, we choose a sequence in advance such that . A suitable choice of can improve the numerical performance of Algorithm 8 by generating an inexact Newton direction in Step 2. The difference between Algorithm 8 and the algorithm developed in  lies in the smoothing method: in , the Fischer-Burmeister function is adopted instead of the smoothing function 19.
On the other hand, without the assumption of strict complementarity, we will establish global convergence and local superlinear convergence for Algorithm 8 in Section 4 under conditions weaker than those in the existing results.
If , then Algorithm 8 reduces to a smoothing Newton algorithm similar to that developed in . However, the definition of in this paper is different from that in .
The following result shows that Algorithm 8 is well-defined.
Theorem 10. Suppose that is a continuously differentiable $P_0$ function. (1) The system of linear equations 30 in the unknown variable has a unique solution. (2) The stepsize in Step 3 of Algorithm 8 is obtained in finitely many backtracking steps so as to satisfy 31. (3) Let be the sequence generated by Algorithm 8. Then, for all , .
Proof. We prove the first result.
Since is a continuously differentiable $P_0$ function, it follows from Lemma 7 that the matrix is nonsingular at as . This implies that the system of linear equations 30 in the unknown variable has a unique solution. Thus, Step 2 of Algorithm 8 is well-defined.
We now prove the second result.
By 30, we have From the definitions of and , it follows that, for all , Thus, for any Denote Since is continuously differentiable at , then ; we conclude from 36 that It yields Since , there exists a constant such that, for any and , there holds This demonstrates that Step 3 of Algorithm 8 is well-defined at each iteration.
Finally, we prove for all .
It is clear that ; in other words, . Suppose that as . Then, . By 31, we get ; then . By 33, we have The last inequality implies that the desired result holds for . By mathematical induction, we conclude that for all .
We have completed the proof of Theorem 10.
Remark 11. By Theorem 10, we know that Algorithm 8 is well-defined, and either it stops in finitely many steps or generates an infinite sequence with and for all . In the subsequent section, we will analyze the convergence of this sequence.
4. Convergence Analysis

In this section, we will establish the global convergence and the superlinear convergence of Algorithm 8.
We first prove the following result.
Lemma 12. Let be defined by 20. If is a $P_0$ function, then, for any , is coercive in . That is,
Proof. As , there exists a vector sequence which is unbounded. Then, there is a component such that is unbounded.
Define an index set . Then, is a nonempty set. Without loss of generality, we assume that , for all .
Let the sequence be defined by Then, it is clear that is bounded. Since is a -function, by Definition 5, we have where is one of the indices at which the max is attained. Since , and can be supposed to be independent of , we know as .
Next, we continue the proof by distinguishing the following six cases.
Case 1 ( and ). Since is bounded, by the continuity of and the definition of , we know from 43 that . Thus, It yields
Case 2 ( and ). It is clear that In virtue of we obtain
Case 3 ( and ). For the same reason as in Case 1, . Thus, It yields
Case 4 ( and ). Similar to Case , we can obtain
Case 5 ( and is bounded). On the one hand, it is clear that is bounded. On the other hand, since and is bounded, we know . Thus, + . It yields
Case 6 ( and is bounded). Similar to Case , it is easy to prove that .
The proof is completed.
Remark 13. By Lemma 12, we can remove the assumption that the level set of the merit function is bounded. In addition, different from [13, 22, 27], the result of Lemma 12 is obtained in this paper for the nonsymmetric smoothing function.
Before stating the main results, we need the following assumption.
Assumption 14. The solution set of NCP 1 is nonempty and bounded.
Remark 15. Assumption 14 is a relatively weak condition to ensure the convergence of Algorithm 8. For example, in , it is assumed that the level sets
are bounded in order to prove the convergence of the algorithm. To the best of our knowledge, for the Fischer-Burmeister smoothing function, 54 has been proved to be true under the condition that in NCP 1 is a uniform $P$-function.
However, with our smoothing method, we can prove that 54 holds. Since the proof only involves the condition that is a $P_0$ function, Assumption 14 is weaker than that in .
Theorem 16. Let be the iteration sequence generated by Algorithm 8. Under Assumption 14, the following statements are true. (i) and generated by Algorithm 8 are two monotonically decreasing and bounded sequences whose limits are 0. (ii) Any accumulation point of is a solution of 24. (iii) has at least one accumulation point with and .
Proof. (i) From Steps 2 and 3 of Algorithm 8, it is clear that , , and are three monotonically decreasing and bounded sequences.
(ii) By Lemma 12, we conclude that the sequence is bounded. Then, without loss of generality, we suppose that as , there exists such that If , then, by the definition of , and . From Lemma 7, it follows that is nonsingular. Thus, there exist a closed neighborhood and a constant , such that, for any and nonnegative integer satisfying , the following inequality holds true: If is large enough such that and , then, Therefore, as , it follows from that It contradicts . We conclude that and .
(iii) By Assumption 14, we know that is nonempty and bounded. Thus, has at least one accumulation point with and .
Theorem 17. Suppose that Assumption 14 is satisfied and is an accumulation point of the sequence generated by Algorithm 8. If all are nonsingular, then, converges to superlinearly; that is, . Moreover, .
Proof. By Theorem 16, we have and . Because all are nonsingular, it follows that, for all sufficiently close to , From Lemma 7, it follows that is semismooth at . Hence, for all sufficiently close to , we have On the other hand, Lemma 7 implies that is locally Lipschitz continuous near . Therefore, for all sufficiently close to , we have Since , it is concluded that . Thus, by the definitions of and , we have Then, in view of 59, 60, and 62, it is obtained that On the other hand, from 61, it follows that Thus, for all sufficiently close to , we have . It yields In virtue of 65, we obtain For all sufficiently close to , we know . The proof is completed.
5. Numerical Experiments
In this section, we test the numerical performance of Algorithm 8 for solving benchmark test problems in NCP.
Algorithm 8 is implemented in MATLAB 2008a on a PC with a 2.00 GHz CPU and 2.00 GB RAM running Windows 7. Throughout the experiments, the parameters in Algorithm 8 are chosen as follows: We use as the termination criterion.
Numerical results are reported in Tables 1–9, where we use the following notations for conciseness: IT: the number of iterations; ST: the initial point ; CT: the CPU time consumed by the algorithm; : a solution of the NCP; : the final value of ; : the zero vector of dimension ; : the unit vector of dimension ; F: the algorithm fails to find a solution.
Problem 1. In problem 1, and is given by
This problem has infinitely many solutions , where . The test results are listed in Table 1 by using different initial points.
Problem 2 (modified Mathiesen problem). In problem 1, and is given by
This problem has infinitely many solutions , where . The solutions are degenerate for or and nondegenerate for . With different starting points, we report results in Table 2.
Problem 3 (Kojima-Shindo problem). In problem 1, and is given by
This problem has one degenerate solution and one nondegenerate solution . We use different initial points and the test results are listed in Table 3.
Problem 4. In problem 1, and where
This problem has one solution . We use different starting points; the last initial point is randomly generated with elements in the interval . The test results are listed in Table 4.
Problem 5. In problem 1, and is given by
The test results are listed in Table 5 by using different initial points.
Problem 7. In problem 1, and with
This problem has the unique solution . Starting from the initial point , we solve this problem with different dimensions. The test results are listed in Table 7.
At the end of this section, we test the effect of the inexact parameter on the efficiency of Algorithm 8.
From Tables 8 and 9, it is revealed that, for (which corresponds to the smoothing exact Newton method), Algorithm 8 may fail for some initial points. On the other hand, a suitable value of inexact parameter may greatly improve the efficiency of Algorithm 8.
From the numerical results, we conclude as follows: (1) In Tables 1–7, the choice of initial point has only a weak impact on the CPU time and the iteration number of Algorithm 8. This indicates that the algorithm developed in this paper is robust, even for randomly generated initial points. (2) From the results in Tables 8 and 9, the inexact parameter may play a critical role in improving the numerical performance of Algorithm 8.
6. Final Remarks
In this paper, a smoothing inexact Newton method has been proposed for solving nonlinear complementarity problems based on a new smoothing function. Then, an implementable algorithm was developed. Under a suitable assumption, the global convergence and the superlinear convergence have been established for the algorithm. Results of numerical experiments indicate that our algorithm is effective for the benchmark test problems available in the literature.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments

This research is supported by the Major Program of the National Social Science Foundation of China (14ZDB136), the National Natural Science Foundation of China (Grants nos. 71210003 and 11461015), and the Natural Science Foundation of Hunan Province (13JJ3002).
Z. H. Huang, J. Y. Han, and Z. W. Chen, "Predictor-corrector smoothing Newton method, based on a new smoothing function, for solving the nonlinear complementarity problem with a P0 function," Journal of Optimization Theory and Applications, vol. 117, no. 1, pp. 39–68, 2003.
R. Mifflin, "Semismooth and semiconvex functions in constrained optimization," SIAM Journal on Control and Optimization, vol. 15, no. 6, pp. 959–972, 1977.
F. H. Clarke, Optimization and Nonsmooth Analysis, Wiley, New York, NY, USA, 1983.
L. Qi, "Convergence analysis of some algorithms for solving nonsmooth equations," Mathematics of Operations Research, vol. 18, no. 1, pp. 227–244, 1993.
J. Tang, S. Liu, and C. Ma, "One-step smoothing Newton method for solving the mixed complementarity problem with a P0 function," Applied Mathematics and Computation, vol. 215, pp. 2326–2336, 2009.