Abstract

Two kinds of Levenberg-Marquardt-type methods for the solution of the vertical complementarity problem are introduced. The methods are based on a nonsmooth equation reformulation of the vertical complementarity problem. Local and global convergence results, together with some remarks on the two kinds of Levenberg-Marquardt-type methods, are given. Finally, numerical experiments are reported.

1. Introduction

In this paper, we consider a kind of vertical complementarity problem, stated as (1.1), whose defining functions are assumed to be continuously differentiable; the subscript i denotes the ith component of the corresponding function. The vertical complementarity problem has a concrete background; for instance, it covers the Karush-Kuhn-Tucker systems of nonlinear programs, complementarity problems, and many problems in mechanics and engineering. When only two functions are involved, (1.1) is the generalized complementarity problem, which has been considered in [1]. A point x satisfying

min_{j∈J_i} f_{ij}(x) = 0,  i = 1, …, n,  (1.2)

solves (1.1), where "min" denotes the componentwise minimum operator, the functions f_{ij}, for j ∈ J_i, i = 1, …, n, are also continuously differentiable, and the J_i, for i = 1, …, n, are finite index sets. Throughout this section, we denote

H(x) = (H_1(x), …, H_n(x))^T,  H_i(x) = min_{j∈J_i} f_{ij}(x),  i = 1, …, n.  (1.3)
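To see why such a min-type reformulation captures complementarity, recall the elementary scalar fact that is applied componentwise in (1.2): for real numbers a and b, min{a, b} = 0 if and only if a ≥ 0, b ≥ 0, and ab = 0. A zero of a componentwise minimum is therefore exactly a point at which the corresponding function values are nonnegative and complementary.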

Thus, (1.2) can be briefly rewritten as

H(x) = 0,  (1.4)

which is a nonsmooth equation. Nonsmooth equations are much more difficult than smooth equations. Solving such equations is a classical problem of numerical analysis; see [1–11]. Many existing classical methods for smooth equations cannot be extended to nonsmooth equations directly. These difficulties motivate us to seek higher-quality methods for nonsmooth equations. One of the classical iterative methods for solving (1.4) is the Levenberg-Marquardt-type method, which is based on the generalized Jacobian. The Levenberg-Marquardt method and its variants are of particular importance also because of their locally fast convergence rates. The Levenberg-Marquardt method is a well-known method for nonlinear equations, which can be regarded as a modification of the Newton method [12–18]. In smoothing-type methods, some conditions are needed to ensure that the Jacobian matrix is nonsingular; the Levenberg-Marquardt method does not need such conditions [12–19].

In view of the above analysis, the purpose of this paper is to apply Levenberg-Marquardt-type methods to the solution of (1.1). We reformulate (1.1) as a system of nonsmooth equations and present two kinds of Levenberg-Marquardt-type methods for solving it. The outline of the paper is as follows. In Section 2, we recall some background concepts and propositions which play an important role in the subsequent convergence analysis. In Section 3, we give the two kinds of Levenberg-Marquardt-type methods and present their local and global convergence results, together with some remarks. Finally, numerical experiments and some concluding remarks are presented.

Throughout this paper, R^n denotes the space of n-dimensional real column vectors and the superscript T denotes transposition. For any differentiable function f, ∇f(x) denotes the gradient of f at x.

2. Preliminaries

A locally Lipschitz function F : R^n → R^m is said to be semismooth at x provided that the limit

lim_{V∈∂F(x+th'), h'→h, t↓0} V h'

exists for any h ∈ R^n. The class of semismooth functions is very broad; it includes smooth functions, convex functions, and piecewise smooth functions. Moreover, sums, differences, products, and composites of semismooth functions are also semismooth. The maximum of a finite number of smooth functions is semismooth too.
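As a simple illustration, the absolute-value function |t| = max{t, −t} is the maximum of two smooth functions and is therefore semismooth everywhere, even though it is not differentiable at t = 0; the same reasoning applies componentwise to min-type functions such as H in (1.4), since min{a, b} = −max{−a, −b}.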

Proposition 2.1. The function H in (1.4) is semismooth.

Proposition 2.2. The following statements are equivalent: (i) F is semismooth at x; (ii) for any V ∈ ∂F(x + h), h → 0, one has Vh − F′(x; h) = o(‖h‖); (iii) one has lim_{h→0} (F′(x + h; h) − F′(x; h))/‖h‖ = 0. If, for any V ∈ ∂F(x + h), h → 0, Vh − F′(x; h) = O(‖h‖^{1+p}), where 0 < p ≤ 1, then one says that F is p-order semismooth at x. Obviously, p-order semismoothness implies semismoothness. From [8], one knows that if F is semismooth at x, then, for any h → 0, F(x + h) − F(x) − F′(x; h) = o(‖h‖).

In [6], Gao considered systems of equations of max-type functions and systems of equations of smooth compositions of max-type functions. For solving such systems, Gao took a finite set of matrices, built from the gradients of the active smooth pieces, as a tool in place of the Clarke generalized Jacobian and related generalized differentials. Based on [6], we give the corresponding set ∂H(x) for H in (1.4) in (2.6): its elements are the matrices whose ith row is ∇f_{ij}(x)^T for some index j ∈ J_i at which the minimum defining H_i(x) is attained. Obviously, ∂H(x) is a finite set and can be easily calculated by determining the active indices and the corresponding gradients ∇f_{ij}(x). In what follows, we use (2.6) as a tool in place of the Clarke generalized Jacobian and the B-differential in the Levenberg-Marquardt method (I).
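To illustrate how an element of this set can be formed in practice, the following minimal Python/NumPy sketch assumes that (2.6) collects the matrices whose ith row is the gradient of one function attaining the minimum in H_i(x); the function and variable names are illustrative only (the experiments reported in Section 4 were coded in MATLAB).

```python
import numpy as np

def element_of_partial_H(x, funcs, grads):
    """Return one matrix from the finite substitute set for the Clarke
    generalized Jacobian of H at x (cf. (2.6)), assuming that
    H_i(x) = min_{j in J_i} f_ij(x).

    funcs[i] : list of callables f_ij(x) -> float, for j in J_i
    grads[i] : list of callables returning grad f_ij(x) as a 1-D array
    """
    x = np.asarray(x, dtype=float)
    n = len(funcs)
    V = np.zeros((n, x.size))
    for i in range(n):
        values = np.array([f(x) for f in funcs[i]])
        j = int(np.argmin(values))   # any index attaining the minimum is admissible
        V[i, :] = grads[i][j](x)
    return V
```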

Proposition 2.3. The set ∂H(x) defined in (2.6) is nonempty and compact for any x ∈ R^n, and the point-to-set map x ↦ ∂H(x) is upper semicontinuous.

Proposition 2.4. Suppose that H and ∂H(x) are defined by (1.4) and (2.6), respectively, and that all V ∈ ∂H(x*) are nonsingular. Then there exists a scalar ε > 0 such that ‖V^{−1}‖ ≤ c holds for all V ∈ ∂H(x) and all x ∈ N(x*), where c > 0 is a constant and N(x*) = {x : ‖x − x*‖ ≤ ε} is a neighborhood of x*.

By the continuous differentiability of the functions in (1.1), the above Proposition 2.4 can be easily obtained.

Definition 2.5. One says that H is BD-regular at x if all the elements of ∂H(x) are nonsingular.

Definition 2.6. A matrix M ∈ R^{n×n} is a P_0-matrix if and only if, for every 0 ≠ x ∈ R^n, there exists an index i such that x_i ≠ 0 and x_i (Mx)_i ≥ 0.

3. The Levenberg-Marquardt-type Methods and Their Convergence Results

In this section, we present two kinds of Levenberg-Marquardt-type methods for solving the vertical complementarity problem (1.1). First, we briefly recall some results on Levenberg-Marquardt-type methods for the solution of nonsmooth equations and their convergence; see, for example, [7] and also [9, 10]. We then give the new kinds of Levenberg-Marquardt methods for solving the vertical complementarity problem (1.1) and analyze their convergence. We are now in a position to consider exact and inexact versions of the Levenberg-Marquardt-type method.

Given a starting vector x_0, let

x_{k+1} = x_k + d_k,  k = 0, 1, 2, …,  (3.1)

where d_k is the solution of the system

(V_k^T V_k + μ_k I) d_k = −V_k^T H(x_k),  (3.2)

with V_k an element of the generalized Jacobian of H at x_k and μ_k > 0 the Levenberg-Marquardt parameter. In the inexact versions of this method, d_k can be given by the solution of the system

(V_k^T V_k + μ_k I) d_k = −V_k^T H(x_k) + r_k,  (3.3)

where r_k is the vector of residuals, and we can assume ‖r_k‖ ≤ α_k ‖H(x_k)‖ for some α_k ≥ 0.
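As a minimal sketch (in Python/NumPy, although the experiments in Section 4 were coded in MATLAB), one exact step of the form (3.1)-(3.2) can be written as follows; the helper name lm_step is illustrative, and in the inexact version (3.3) the linear system would only be solved up to the residual r_k.

```python
import numpy as np

def lm_step(H_x, V, mu):
    """One exact Levenberg-Marquardt step for the nonsmooth equation H(x) = 0:
    solve (V^T V + mu I) d = -V^T H(x).  The regularization term mu I keeps the
    coefficient matrix positive definite even when V is singular."""
    n = V.shape[1]
    return np.linalg.solve(V.T @ V + mu * np.eye(n), -V.T @ H_x)
```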

We now give a locally convergent Levenberg-Marquardt-type method (I) for (1.1) as follows.

3.1. The Levenberg-Marquardt Method (I)

Step 1. Choose a starting point x_0 ∈ R^n, a tolerance ε ≥ 0, and the parameters μ_k > 0 and α_k ≥ 0; set k := 0.

Step 2. Solve the system

(V_k^T V_k + μ_k I) d_k = −V_k^T H(x_k) + r_k  (3.4)

to get d_k, where V_k is an element of the set ∂H(x_k) defined in (2.6) and r_k is the vector of residuals.

Step 3. Set x_{k+1} = x_k + d_k; if ‖H(x_{k+1})‖ ≤ ε, terminate. Otherwise, let k := k + 1, and go to Step 2.
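For concreteness, the whole iteration of method (I) can be sketched as follows, with the linear system solved exactly (r_k = 0) and a fixed parameter μ; this is only an illustration of Steps 1-3 under these simplifications, not the implementation used for the experiments in Section 4 (which were coded in MATLAB).

```python
import numpy as np

def lm_method_I(H, V_select, x0, mu=1e-2, eps=1e-8, max_iter=100):
    """Illustrative sketch of the Levenberg-Marquardt method (I).

    H        : callable, H(x) -> 1-D array (the min-function in (1.4))
    V_select : callable returning one element of the set (2.6) at x
    mu       : Levenberg-Marquardt parameter (kept fixed in this sketch)
    eps      : termination tolerance on ||H(x_{k+1})|| (Step 3)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        H_x = H(x)
        V = V_select(x)
        d = np.linalg.solve(V.T @ V + mu * np.eye(x.size), -V.T @ H_x)  # Step 2
        x = x + d                                  # Step 3: x_{k+1} = x_k + d_k
        if np.linalg.norm(H(x)) <= eps:            # terminate when ||H|| is small
            break
    return x
```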

Based upon the above analysis, we give the following local convergence result. Similar results have also been mentioned in [9].

Theorem 3.1. Suppose that {x_k} is a sequence generated by the above method, that there exists a constant c such that ‖V_k‖ ≤ c for all k, and that there exists a constant ᾱ such that α_k ≤ ᾱ for all k. Let x* be a BD-regular solution of H(x) = 0. Then the sequence {x_k} converges Q-linearly to x* for all x_0 sufficiently close to x*.

Theorem 3.2. Suppose that {x_k} is a sequence generated by the above method, that there exist constants c and ᾱ such that ‖V_k‖ ≤ c and α_k ≤ ᾱ for all k, and that there exists a constant c' such that ‖V_k^{−1}‖ ≤ c' for all k. Then the sequence {x_k} converges Q-linearly to a solution x* of H(x) = 0 for all x_0 sufficiently close to x*.

By Propositions 2.2, 2.3, and 2.4, we can get the proof of Theorem 3.2 similarly to [9, Theorem 3.1], so we omit it.

Remark 3.3. Theorems 3.1 and 3.2 also hold in the exact case r_k = 0.

Remark 3.4. In the Levenberg-Marquardt method (I), if d_k is computed by (3.3), then Theorems 3.1 and 3.2 and Remark 3.3 can also be obtained.

When only two functions are involved in (1.1), the vertical complementarity problem reduces to the well-known generalized complementarity problem (GCP) in [1]. The GCP is to find x ∈ R^n satisfying

F(x) ≥ 0,  G(x) ≥ 0,  F(x)^T G(x) = 0,

where F, G : R^n → R^n are continuously differentiable functions. If G(x) ≡ x, then the generalized complementarity problem (GCP) is the nonlinear complementarity problem (NCP). In the following, we give the Levenberg-Marquardt-type method (II) for the generalized complementarity problem (GCP). The merit function used here is different and is based on the well-known Fischer-Burmeister function

φ(a, b) = √(a² + b²) − a − b.

Then Φ_i(x) = φ(F_i(x), G_i(x)), i = 1, …, n, and we denote the corresponding merit function as

Ψ(x) = (1/2) ‖Φ(x)‖² = (1/2) Φ(x)^T Φ(x),

which is a continuously differentiable function. A small illustrative sketch of this construction is given below, after which we state the following Levenberg-Marquardt-type method (II).
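For illustration, a minimal Python/NumPy sketch of this construction (assuming the standard Fischer-Burmeister function φ(a, b) = √(a² + b²) − a − b applied componentwise to F and G; the function names are illustrative):

```python
import numpy as np

def fischer_burmeister(a, b):
    """Fischer-Burmeister function phi(a, b) = sqrt(a^2 + b^2) - a - b;
    phi(a, b) = 0 if and only if a >= 0, b >= 0 and a * b = 0."""
    return np.sqrt(a**2 + b**2) - a - b

def merit_Psi(F, G, x):
    """Merit function Psi(x) = 0.5 * ||Phi(x)||^2 for the GCP, with
    Phi_i(x) = phi(F_i(x), G_i(x))."""
    Phi = fischer_burmeister(F(x), G(x))
    return 0.5 * float(Phi @ Phi)
```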

3.2. The Levenberg-Marquardt Method (II)

Step 1. Given a starting vector x_0 ∈ R^n, choose ρ > 0, p > 2, β ∈ (0, 1), σ ∈ (0, 1/2), and ε ≥ 0; set k := 0.

Step 2. If ‖∇Ψ(x_k)‖ ≤ ε, stop.

Step 3. Select an element V_k ∈ ∂Φ(x_k), and find an approximate solution d_k of the system

(V_k^T V_k + μ_k I) d_k = −V_k^T Φ(x_k),  (3.9)

where μ_k ≥ 0 are the Levenberg-Marquardt parameters. If the condition

∇Ψ(x_k)^T d_k ≤ −ρ ‖d_k‖^p  (3.10)

is not satisfied, set d_k = −∇Ψ(x_k).

Step 4. Find the smallest nonnegative integer i_k such that

Ψ(x_k + β^{i_k} d_k) ≤ Ψ(x_k) + σ β^{i_k} ∇Ψ(x_k)^T d_k.  (3.11)

Set x_{k+1} = x_k + β^{i_k} d_k, let k := k + 1, and go to Step 2.
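As a rough illustration of the overall scheme (under the statement of Steps 1-4 given above, with standard choices for the descent test and the Armijo rule), a Python/NumPy sketch follows; all names and default parameter values are illustrative only, and the experiments in Section 4 were coded in MATLAB.

```python
import numpy as np

def lm_method_II(Phi, grad_Psi, V_select, x0, mu=1e-2, rho=1e-8, p=2.1,
                 beta=0.5, sigma=1e-4, eps=1e-8, max_iter=200):
    """Illustrative sketch of the Levenberg-Marquardt method (II).

    Phi      : callable, the reformulation operator Phi(x) -> 1-D array
    grad_Psi : callable, gradient of Psi(x) = 0.5 * ||Phi(x)||^2
    V_select : callable returning one generalized-Jacobian element of Phi at x
    mu       : Levenberg-Marquardt parameter (kept fixed in this sketch)
    """
    Psi = lambda z: 0.5 * float(Phi(z) @ Phi(z))
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_Psi(x)
        if np.linalg.norm(g) <= eps:                               # Step 2
            break
        V = V_select(x)                                            # Step 3
        d = np.linalg.solve(V.T @ V + mu * np.eye(x.size), -V.T @ Phi(x))
        if g @ d > -rho * np.linalg.norm(d) ** p:
            d = -g                    # safeguard: fall back to steepest descent
        t = 1.0
        for _ in range(60):                                        # Step 4
            if Psi(x + t * d) <= Psi(x) + sigma * t * (g @ d):
                break
            t *= beta
        x = x + t * d
    return x
```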

Notice that, if V_k is nonsingular in (3.9), the choice μ_k = 0 is allowed at each step by the above algorithm; then (3.9) is equivalent to the generalized Newton equation in [4]. In what follows, as is usual in analyzing the behavior of algorithms, we assume that the above Levenberg-Marquardt method (II) produces an infinite sequence of points. Based upon the above analysis, we can obtain the following global convergence result for the vertical complementarity problem (1.1). The main proof of the following theorem is similar to that of [7, Theorem 12], but the system (3.9), which is used to compute d_k, is different from the one used in [7, Theorem 12].

Theorem 3.5. Suppose that there exist constants c₁ and c₂ such that ‖V_k‖ ≤ c₁ and μ_k ≤ c₂ for all k. Then each accumulation point of the sequence {x_k} generated by the above Levenberg-Marquardt method (II) is a stationary point of Ψ.

Proof. Assume that ε = 0. If there are infinitely many k for which d_k = −∇Ψ(x_k), then the assertion follows immediately from [5, Proposition 1.16]. Hence we can assume, without loss of generality, that if {x_k}_K is a convergent subsequence of {x_k}, then d_k is always given by (3.9). We show that, for every convergent subsequence {x_k}_K whose limit point is not a stationary point of Ψ, (3.12) and (3.13) hold. In the following, we assume that {x_k}_K converges to x̄ and suppose that x̄ is not a stationary point of Ψ. By (3.9), we have (3.14), so (3.15) follows. Note that the denominator in the above inequality is nonzero; otherwise, by (3.14), we would have ∇Ψ(x̄) = 0, so that x̄ would be a stationary point and the algorithm would have stopped. By the assumption of the theorem and Proposition 2.4, there exists a constant for which (3.16) holds. From the above inequality and (3.13), we get (3.17). Formula (3.13) now readily follows from the fact that the direction d_k satisfies (3.10), while the gradient ∇Ψ is bounded on the convergent sequence {x_k}_K. If (3.12) is not satisfied, there exists a subsequence along which ‖d_k‖ tends to infinity. This implies, by (3.10), that |∇Ψ(x_k)^T d_k| also tends to infinity along this subsequence; together with (3.17), this yields a contradiction. The sequence {d_k} is therefore uniformly gradient related to {x_k} according to the definition given in [5], and the assertion of the theorem follows from [5, Proposition 1.16]. This completes the proof.

Remark 3.6. In the Levenberg-Marquardt method (II), the line search (3.11) can be replaced by a nonmonotone line search in which the new merit value is compared with the maximum of a fixed number of previous merit values, where the memory length m_k is a nonnegative integer.
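One common form of such a rule (stated here only as an illustration of the idea) accepts the smallest nonnegative integer i with

Ψ(x_k + β^i d_k) ≤ max_{0≤j≤m_k} Ψ(x_{k−j}) + σ β^i ∇Ψ(x_k)^T d_k,

so that the new merit value is compared with the largest of the last m_k + 1 merit values rather than with Ψ(x_k) alone.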

Remark 3.7. Let the assumptions of Theorem 3.5 hold, and let the system (3.9) be replaced by (3.3). Then each accumulation point of the sequence {x_k} generated by the above Levenberg-Marquardt method (II) is a stationary point of Ψ.

Remark 3.8. The assumption that there exists a constant c such that ‖V_k‖ ≤ c for all k in Theorems 3.1 and 3.5 and Remark 3.7 is easily satisfied, owing to the continuous differentiability of the functions in (1.1).

Remark 3.9. Let x be a stationary point of Ψ such that ∇F(x) is nonsingular and ∇G(x)∇F(x)^{−1} is a P_0-matrix. Then x is a solution of GCP(F, G).

Remark 3.10. We can use the functions introduced in [20] to construct the merit function in the Levenberg-Marquardt method (II).

Remark 3.11. We can also use a family of new NCP functions based on the Fischer-Burmeister function to construct the merit function in the Levenberg-Marquardt method (II); this family is defined by

φ_p(a, b) = ‖(a, b)‖_p − (a + b),

where p is any fixed real number in the interval (1, +∞) and ‖(a, b)‖_p denotes the p-norm of (a, b). Numerical results based on this function for the test problems from MCPLIB indicate that the algorithm has better performance in practice [21].
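As an illustration, a minimal Python/NumPy sketch of such a p-norm-based function, under the assumption that the family simply replaces the Euclidean norm in the Fischer-Burmeister function by the p-norm as described above:

```python
import numpy as np

def phi_p(a, b, p=3.0):
    """Generalized Fischer-Burmeister-type function
    phi_p(a, b) = ||(a, b)||_p - (a + b),  with p in (1, +infinity);
    for p = 2 it reduces to the classical Fischer-Burmeister function."""
    return (np.abs(a) ** p + np.abs(b) ** p) ** (1.0 / p) - (a + b)
```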

4. Numerical Tests and Final Remarks

In this section, in order to show the performance of the above Levenberg-Marquardt-type methods, we present some numerical results for the Levenberg-Marquardt method (I) and the Levenberg-Marquardt method (II). The results indicate that the Levenberg-Marquardt algorithms work quite well in practice. We coded the algorithms in MATLAB 7.0. Some remarks are also attached.

Example 4.1. We consider a vertical complementarity problem of the form (1.1) whose two defining functions are both continuously differentiable.

We use the Levenberg-Marquardt method (I) to solve Example 4.1. The results for Example 4.1 with the chosen initial point and parameter values are presented in Table 1.

We use the Levenberg-Marquardt method (I) to solve Example 4.1 again with a different initial point and different parameter values; the results are presented in Table 2.

When d_k is computed by (3.3) in the Levenberg-Marquardt method (I) for Example 4.1, the results with the corresponding initial point and parameter values are presented in Table 3.

Remark 4.2. From the numerical results for the Levenberg-Marquardt method (I) in Tables 1, 2, and 3, we can see that the modification (3.4) in the Levenberg-Marquardt method (I) works better in practice than (3.3), which has been used in [7].

Example 4.3. We consider a further vertical complementarity problem of the form (1.1) whose two defining functions are both continuously differentiable.
Now, we use the Levenberg-Marquardt method (II) to solve Example 4.3. Owing to the special structure of Example 4.3, we can use (2.6) to obtain the Clarke generalized Jacobian in the Levenberg-Marquardt method (II). The results for Example 4.3 with the chosen initial point and parameter values are presented in Table 4.

Then we compute d_k by (3.3) in the Levenberg-Marquardt method (II) and solve Example 4.3 again. The results for Example 4.3 with the chosen initial point and parameter values are presented in Table 5.

Remark 4.4. From the numerical results for the Levenberg-Marquardt method (II) in Tables 4 and 5, we can see that the modification (3.9) in the Levenberg-Marquardt method (II) works about as well as (3.3), which has been used in [7].

Acknowledgments

This work is supported by the National Science Foundation of China (10971118, 11171221, 11101231), a project of the Shandong Province Higher Education Science and Technology Program (J10LA05), the Shanghai Municipal Committee of Science and Technology (10550500800), and the Innovation Program of the Shanghai Municipal Education Commission (10YZ99).