Abstract

We consider a class of absolute-value linear complementarity problems. We propose a new approximate reformulation of the absolute-value linear complementarity problem using a nonlinear penalized equation. Based on this reformulation, a penalized-equation-based generalized Newton method is proposed for solving the absolute-value linear complementarity problem. We show that the proposed method is globally and superlinearly convergent when the matrix of the complementarity problem is positive definite and its singular values exceed 1. Numerical results show that the proposed method is effective and efficient.

1. Introduction

Let F : R^n -> R^n be a given function. The complementarity problem, CP(F) for short, is to find a solution x of the system

x >= 0,  F(x) >= 0,  x^T F(x) = 0.

The CP(F) is called the linear complementarity problem (LCP for short) if F is an affine mapping of the form F(x) = Mx + q, where M is an n × n real matrix and q is a vector in R^n. Otherwise, the CP(F) is called the nonlinear complementarity problem (NCP(F)).

The systematic study of the finite-dimensional CP(F) began in the mid-1960s; in a span of five decades, the subject has developed into a very fruitful discipline in the field of mathematical programming. The developments include a rich mathematical theory, a host of effective solution algorithms, a multitude of interesting connections to numerous disciplines, and a wide range of important applications in engineering and economics (see, e.g., [1–4] and the references therein).

The generalized Newton method (semismooth Newton method) is one of the most efficacious algorithms for solving CP(F). The main idea of the semismooth Newton method is to reformulate the complementarity problem equivalently as a nonsmooth equation and then to solve this nonsmooth equation by a Newton-type method (see, e.g., [5, 6]). Most reformulations of the CP(F) are based on the Fischer-Burmeister function [7] (see, e.g., [8–10] and the references therein). Chen et al. [11] introduced a penalized Fischer-Burmeister function and proposed a new semismooth Newton method based on this new NCP function. Kanzow and Kleinmichel [12] proposed a new one-parametric class of NCP functions based on the Fischer-Burmeister function and gave a semismooth Newton method via these NCP functions. Kanzow [13] studied an inexact semismooth Newton method based on the Fischer-Burmeister function and the penalized Fischer-Burmeister function. Ito and Kunisch [14] studied a semismooth Newton method based on the max-type NCP function.

All of the generalized Newton methods mentioned above assume that the function F in CP(F) is continuously differentiable, and the existing generalized Newton methods are based on equivalent reformulations via NCP functions. To the best of our knowledge, very little literature to date studies complementarity problems in which the involved function is not differentiable. However, in many practical problems F is not differentiable; for instance, F may involve the absolute value |x| of the unknown; see Noor et al. [15]. This is our focus in this paper.

In this paper, we consider the following absolute-value linear complementarity problem (AVLCP(A, b) for short): find x in R^n such that

x >= 0,  Ax - |x| - b >= 0,  x^T (Ax - |x| - b) = 0,

where A is an n × n real matrix and b is a vector in R^n.

The AVLCP(A, b) is the special case of CP(F) in which F is a piecewise-linear function, and it can be viewed as an extension of the LCP. This complementarity problem was first introduced and studied by Noor et al. [15] in 2012. Noor et al. proposed a generalized AOR method by establishing the equivalence between the AVLCP(A, b) and a fixed point problem using the projection operator, but its convergence rate is only linear. Moreover, the study of the AVLCP(A, b) is in its infancy and, to the authors' knowledge, there has been no work except for the above-mentioned results of Noor et al. [15]. These observations motivated us to further improve the theory and numerical methods for solving the AVLCP(A, b).

We use a penalty technique to show that the AVLCP(A, b) is approximately equivalent to a nonlinear penalized equation, which, to our knowledge, is introduced here for the first time for solving the AVLCP(A, b). It is worth mentioning that penalty techniques have been widely used in nonlinear programming, but they appear to have received limited study for complementarity problems (see [16–18]). We show that the solution of this penalized equation converges to that of the AVLCP(A, b) at an exponential rate as the penalty parameter tends to infinity. We then use the generalized Jacobian based on subgradients to analyze a generalized Newton method for solving the nonlinear penalized equation under some mild assumptions. The algorithm is shown to be superlinearly convergent and can start from an arbitrary point. Preliminary numerical experiments are also reported to show the effectiveness and efficiency of the proposed method.

The rest of this paper is organized as follows. In Section 2, we present some notation and well-known results. In Section 3, we provide a penalized equation approximating the AVLCP(A, b) and study its properties. In Section 4, a generalized Newton method is introduced for solving the penalized equation. In Section 5, we report the numerical results of our method.

2. Preliminaries

For convenience, we briefly explain some of the terminology used in the sequel. R^n denotes the n-dimensional Euclidean space, and all vectors in R^n are column vectors. Let A = (a_ij) be an n × n real matrix. The scalar product of two vectors x and y is denoted by x^T y. For x in R^n, the p-norm of x is defined as ||x||_p = (|x_1|^p + ... + |x_n|^p)^(1/p); when p = 2, the p-norm becomes the Euclidean norm ||x||. |x| stands for the vector in R^n of absolute values of the components of x. sign(x) will denote the vector with components equal to 1, 0, or -1 depending on whether the corresponding component of x is positive, zero, or negative. D(x) = diag(sign(x)) will denote the diagonal matrix corresponding to sign(x). The plus function x_+, which replaces the negative components of x by zeros, is the projection operator that projects x onto the nonnegative orthant; namely, x_+ = max(x, 0). For a solvable matrix equation Bx = b, we will use the MATLAB backslash to denote a solution x = B\b. The generalized Jacobians of x_+ and of |x| based on a subgradient [20] of their components are given by the diagonal matrices diag(t) and D(x), respectively, where t is the vector whose entries are equal to 1, 0, or a real number in [0, 1] depending on whether the corresponding component of x is positive, negative, or zero.
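A minimal numerical sketch of these componentwise operators (our own illustration; the helper names plus, sgn, and D_of are not from the paper):

```python
import numpy as np

def plus(x):
    """Plus function x_+: replaces the negative components of x by zeros,
    i.e., the projection of x onto the nonnegative orthant."""
    return np.maximum(x, 0.0)

def sgn(x):
    """Componentwise sign vector with entries 1, 0, or -1."""
    return np.sign(x)

def D_of(x):
    """D(x) = diag(sign(x)), a generalized Jacobian of |x|."""
    return np.diag(np.sign(x))

x = np.array([3.0, 0.0, -2.0])
# |x| can be written as D(x) x for the subgradient choice sign(0) = 0.
assert np.allclose(np.abs(x), D_of(x) @ x)
```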

Definition 1 (see [21]). Let A be an n × n real matrix; the matrix A is called (1) positive definite if there exists a constant α > 0 such that x^T A x >= α ||x||^2 for any x in R^n; (2) bounded if there exists a constant β > 0 such that ||Ax|| <= β ||x|| for any x in R^n.

Lemma 2 (Hölder’s inequality). Let x, y be vectors in R^n. Then |x^T y| <= ||x||_p ||y||_q, where p > 1 and q > 1 are real numbers such that 1/p + 1/q = 1.
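Lemma 2 is easy to check numerically; the sketch below uses the arbitrary conjugate pair p = 3, q = 3/2 (illustrative only):

```python
import numpy as np

# Numerical illustration of Hoelder's inequality |x^T y| <= ||x||_p ||y||_q
# for a conjugate pair of exponents with 1/p + 1/q = 1.
rng = np.random.default_rng(0)
x = rng.standard_normal(6)
y = rng.standard_normal(6)
p, q = 3.0, 1.5  # 1/3 + 2/3 = 1
lhs = abs(float(np.dot(x, y)))
rhs = float(np.linalg.norm(x, p) * np.linalg.norm(y, q))
assert lhs <= rhs + 1e-12
```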

Lemma 3 (see [15]). Let K be a closed and convex set in R^n. A vector x solves the AVLCP(A, b) if and only if x solves the following absolute-value variational inequality:

Lemma 4. Let A be an n × n real matrix and x a vector in R^n. Then ||Ax|| <= ||A||_F ||x||, where ||A||_F denotes the Frobenius norm (sum of a_ij^2 over all i, j)^(1/2).

Lemma 5 (see [22]). The singular values of the matrix A exceed 1 if and only if the minimum eigenvalue of A^T A exceeds 1.
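Since the singular values of A are the square roots of the eigenvalues of A^T A, Lemma 5 can be verified numerically on any test matrix (our own illustration):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
sigma_min = np.linalg.svd(A, compute_uv=False).min()
lam_min = np.linalg.eigvalsh(A.T @ A).min()
# sigma_min(A)^2 = lambda_min(A^T A), so "all singular values exceed 1"
# is equivalent to "the minimum eigenvalue of A^T A exceeds 1".
assert np.isclose(sigma_min ** 2, lam_min)
assert (sigma_min > 1.0) == (lam_min > 1.0)
```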

Lemma 6 (Banach perturbation lemma [21]). Let A and C be n × n real matrices and assume that A is invertible with ||A^(-1)|| <= α. If ||A - C|| <= β and αβ < 1, then C is also invertible and ||C^(-1)|| <= α/(1 - αβ).

Lemma 7 (see [23]). Let x, y be vectors in R^n. Then ||x_+ - y_+|| <= ||x - y||.

Lemma 8. Let x, y be vectors in R^n. Then || |x| - |y| || <= ||x - y||.

3. A Penalized-Equation Approximation Reformulation of AVLCP()

In this section, we construct a nonlinear penalized equation corresponding to absolute-value linear complementarity problem (3).

Find x in R^n satisfying the nonlinear penalized equation (10), where λ is the penalty parameter and k is a given power.

We will prove that the solution of the penalized equation (10) converges to that of the AVLCP(A, b). To this end, we make the following assumptions on the system matrix A:
(A1) A is positive definite, with constant α as in Definition 1;
(A2) the entries of A satisfy a_ij <= 0 for all i, j with i ≠ j.
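For concreteness, a power-penalty residual of the kind underlying (10) can be sketched as follows; the exact penalty term of (10) is not reproduced here, so the form F(x) = Ax - |x| - b - λ[x]_-^(1/k), with [x]_- = max(-x, 0), is our assumption for illustration:

```python
import numpy as np

def penalized_residual(x, A, b, lam, k):
    """Residual of an assumed power-penalty reformulation of the AVLCP:
    F(x) = A x - |x| - b - lam * [x]_-^(1/k), where [x]_- = max(-x, 0).
    The penalty term vanishes for x >= 0 and drives the negative
    components of x to zero as the penalty parameter lam grows."""
    neg = np.maximum(-x, 0.0)
    return A @ x - np.abs(x) - b - lam * neg ** (1.0 / k)
```

For example, with A = 3I and b = (2, -4)^T, the root of this residual for k = 1 is x = (1, -4/(4 + λ))^T, whose negative component vanishes as λ tends to infinity.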

Under assumption (A1), the solution of the AVLCP(A, b) is unique [15]. Our main results in this section are as follows; we begin our discussion with the following lemma.

Lemma 9. Let be the solution to nonlinear penalized equation (10). Then there exists a positive constant , independent of , , and such that where and are parameters used in (10).

Proof. Left-multiplying both sides of (10) by gives Without loss of generality, we assume that , where , , . Other cases can be transformed into this by reordering the system.
When , then ; thus (11) is trivially satisfied. We only consider the case when . In this case can be decomposed into , where , , We now decompose into where , , , and ; then (12) becomes where .
From assumption (A1) and , we have , and holds, where the last inequality follows from . From assumption (A2) and , we also have . Note that ; one has that
It follows from (14), (16), and that we have the following inequality:
By using Lemma 2 in (17), we get where satisfying . Since , we thus have the following inequality:
Since all norms on R^n are equivalent for a fixed positive integer n, it follows that there exists a positive constant such that and thus the left-hand side of (16) can be written as
Combining this with (14) and (16), dropping the first three terms in (22), and using Lemma 2, we have Since , we thus obtain from the above inequality where . This completes the proof of the lemma.

Using Lemma 9, we can establish the relationship between solutions of penalized equation (10) and solutions of the AVLCP().

Theorem 10. Let and be the solution to AVLCP() and nonlinear penalized equation (10), respectively. Then there exists a positive constant , independent of , , , and such that where and are parameters used in (10).

Proof. Let ; then and the vector can be decomposed as where .
Let ; one has that and therefore .
Replacing in (5) by gives (29). Left-multiplying (10) by , we have (30). Adding (29) and (30), we get
Furthermore, we get from (31) that where
One further has that; thus the last inequality is true by the positive definiteness of the matrix A.
Hence
On the other hand, applying the Cauchy-Schwarz inequality, Lemma 4, and (11) to (36), we get this implies that where . From (11), (27), (38), and the triangle inequality, we obtain the desired estimate. This completes the proof of the theorem.

4. A Penalized-Equation-Based Generalized Newton Method and Its Convergence

In this section, we present a generalized Newton method for solving nonlinear penalized equation (10). We begin by defining the vector function specified by the nonlinear penalized equation (10) as follows:

Let ; a generalized Jacobian of is given by where and is a diagonal matrix whose diagonal entries are equal to , 0, or a real number depending on whether the corresponding component of is positive, negative, or zero. The generalized Newton method for finding a solution of the equation consists of the following iteration:

Replacing the function by its definition (40) and using (41) gives Thus, solving for the new iterate gives which is our final generalized Newton iteration for solving the nonlinear penalized equation (10). We can now state the penalized-equation-based generalized Newton method for solving the AVLCP(A, b).
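To illustrate the structure of iteration (45), the sketch below implements the closely related generalized Newton iteration for the absolute value equation Ax - |x| = b, i.e., with the penalty term dropped; this is a simplified stand-in, not the authors' exact scheme:

```python
import numpy as np

def generalized_newton_ave(A, b, x0, tol=1e-10, maxit=50):
    """Generalized Newton iteration x_{k+1} = (A - D(x_k))^{-1} b for the
    absolute value equation A x - |x| = b, with D(x) = diag(sign(x)).
    A - D(x) is nonsingular whenever the singular values of A exceed 1."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        D = np.diag(np.sign(x))
        x_new = np.linalg.solve(A - D, b)
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x

# Small example: for A = 3 I and b = (2, -4)^T the solution is x = (1, -1)^T.
A = 3.0 * np.eye(2)
b = np.array([2.0, -4.0])
x = generalized_newton_ave(A, b, np.ones(2))
assert np.allclose(A @ x - np.abs(x), b)
```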

Algorithm 11 (penalized-equation-based generalized Newton algorithm). We have the following.

Step 1. Given constants , , , , and and a starting point , set .

Step 2. Calculate from the generalized Newton equation starting from associated with .

Step 3. If and , stop; otherwise, set and go to Step 2. Denote the accumulation point of the sequence by .

Step 4. If , then stop; otherwise, let , choose new starting point , set , and go to Step 2.
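The control flow of Steps 1-4 can be sketched as a two-level loop. The sketch assumes a linear (k = 1) penalty term λ[x]_- and our own parameter names, so it illustrates the structure of Algorithm 11 rather than reproducing it exactly:

```python
import numpy as np

def penalized_newton_sketch(A, b, x0, lam0=1.0, sigma=10.0, tol=1e-8,
                            inner_max=50, outer_max=30):
    """Sketch of Algorithm 11: for a fixed penalty parameter lam, run the
    generalized Newton iteration on F(x) = A x - |x| - b - lam*[x]_- until it
    settles (Steps 2-3); if the accumulation point still violates x >= 0,
    enlarge lam and restart from that point (Step 4)."""
    lam, x = lam0, np.asarray(x0, dtype=float)
    for _ in range(outer_max):
        for _ in range(inner_max):
            D2 = np.diag(np.sign(x))             # generalized Jacobian of |x|
            D1 = np.diag((x < 0).astype(float))  # active part of the penalty
            F = A @ x - np.abs(x) - b - lam * np.maximum(-x, 0.0)
            x_new = x - np.linalg.solve(A - D2 + lam * D1, F)
            if np.linalg.norm(x_new - x) <= tol:
                break
            x = x_new
        x = x_new
        if np.linalg.norm(np.maximum(-x, 0.0)) <= tol:  # x is (nearly) feasible
            return x
        lam *= sigma  # Step 4: increase the penalty parameter
    return x
```

On the small instance A = 3I, b = (2, -4)^T, this sketch drives the iterates to the complementarity solution x = (1, 0)^T as λ grows.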

4.1. Existence of Accumulation Point at Each Generalized Newton Iteration

We will show that the sequence generated by generalized Newton iteration (45) converges to an accumulation point associated with . We first give the following sufficient conditions under which the generalized Newton iteration (45) is well defined.

Lemma 12. If the singular values of A exceed 1, then exists for any diagonal matrices and , where is the diagonal matrix with diagonal elements equal to or 0 and is the diagonal matrix with diagonal elements equal to , 0, or a real number .

Proof. If the matrix is singular, then there exists a nonzero vector such that Thus, we have the following contradiction, where the first inequality follows from Lemma 5. Hence, the matrix is nonsingular.

We now establish the boundedness of the generalized Newton iteration (45) and thus the existence of an accumulation point.

Theorem 13. Let the singular values of A exceed 1. Then the iteration of Algorithm 11 is well defined and bounded. Consequently, there exists an accumulation point such that .

Proof. By Lemma 12, exists; hence, the generalized Newton iteration is well defined.
We now prove the boundedness of the iterative sequence . Suppose that it is unbounded; then there exists a subsequence with and nonzero such that , , where and are fixed diagonal matrices extracted from the finite number of possible configurations for in the sequence and in the sequence , respectively, such that the bounded subsequence converges to . Hence
By letting , we obtain since . This contradicts the nonsingularity which follows from Lemma 12. Therefore, the iterative sequence is bounded and there exists an accumulation point such that .

We then establish the finite termination of generalized Newton iteration (45).

Theorem 14. Let the singular values of A exceed 1. If , , and for some for the well-defined iteration (45) in Algorithm 11, then solves the nonlinear penalized equation (10).

Proof. The generalized Newton iteration is well defined by Lemma 12, and if , and , then giving the result.

Furthermore, we have the following results.

Theorem 15. Suppose that the singular values of exceed 1 and is the unique solution of the nonlinear penalized equation (10). Then, for any such that
the generalized Newton iteration (45) reaches in one iteration.

Proof. The theorem can be proved in a similar way to the one in [24, Lemma 2.1]. We omit it here.

Remark 16. Note that the “sign match” property (52) holds if is sufficiently near . Hence, by Theorem 15, global convergence of generalized Newton iteration (45) is then obvious.

Now, we discuss the global linear convergence of the sequence generated by generalized Newton iteration (45).

Theorem 17. If holds for all sufficiently large and , then the generalized Newton iteration (45) converges linearly from any starting point to a solution of the nonlinear penalized equation (10).

Proof. Since according to the assumption and the definition of , then exists for any by Lemma 6. We also have by the same lemma that
Let be a solution of the nonlinear penalized equation (10). To simplify notation, let , , , and . Noting that , , , and , we have where the first inequality follows from and .
Hence, one has that Thus
Applying Lemmas 7 and 8 and when is sufficiently large, we get
Letting and taking limits on both sides of the last inequality above, we have where the last inequality in (58) follows from .
Consequently, the sequence converges linearly to a solution .

In the above proof, the choice of is arbitrary; hence we have the following result.

Corollary 18. Assume that and for any sufficiently large . Then the nonlinear penalized equation (10) has a unique solution for any .

Finally, we give the global superlinear convergence of the sequence generated by generalized Newton iteration (45).

Theorem 19. Assume that and for any sufficiently large . Then the generalized Newton iteration (45) converges superlinearly from any starting point to a solution of the nonlinear penalized equation (10).

Proof. According to Lemma 6, one has that where the last inequality follows from the sequence .
Combining this with (58) in the proof of Theorem 17, we have Letting and taking limits in (61), we can see that the sequence generated by generalized Newton iteration (45) converges superlinearly to a solution .

4.2. Convergence of Penalized-Equation-Based Generalized Newton Method

In this subsection, we will focus on the convergence of Algorithm 11. We first present the global convergence.

Theorem 20. Let the singular values of A exceed 1. Then the sequence generated by Algorithm 11 is bounded. Consequently, there exists an accumulation point that solves the nonlinear penalized equation (10).

Proof. Since is an accumulation point of , it follows from Theorem 13 that the sequence is bounded; we can thus obtain the boundedness of . Hence there exists an accumulation point of such that the nonlinear penalized equation (10) holds, giving the result.

We then establish the linear convergence of Algorithm 11.

Theorem 21. Under the assumption that , , Algorithm 11 converges linearly from any starting point to a solution of the nonlinear penalized equation (10).

Proof. Taking into account the “sign match” property (52), the theorem can be proved in a similar way to Theorem 17.

Finally, we establish the superlinear convergence of Algorithm 11.

Theorem 22. Under the assumption that , , Algorithm 11 converges superlinearly from any starting point to a solution of the nonlinear penalized equation (10).

Proof. Taking into account the “sign match” property (52), the theorem can be proved in a similar way to Theorems 17 and 19.

5. Numerical Results

In this section, we consider several examples to show the efficiency of the proposed method. All experiments were run in MATLAB 7.5 on a PC with an Intel(R) Core(TM)2 2.70 GHz processor and 2.0 GB of RAM. Throughout these computational experiments, the parameters used in the algorithm are set as , , , and . The accumulation point of Algorithm 11 is written as .

Example 1. Let the matrix of AVLCP() be given by and . The solution of AVLCP() is . The computational results are shown in Table 1.

Example 2. Let the matrix of AVLCP() be given by and . The solution of AVLCP() is . The computational results are shown in Table 2.

Example 3. Let the matrix of AVLCP() be given by and . The solution of AVLCP() is . The computational results are shown in Table 3.

Example 4. Let the matrix of the AVLCP(A, b) be given by Let . Then choose the initial point . We compare our algorithm with the existing methods in [15, 19]. The computational results are shown in Table 4.
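Synthetic instances with a known solution are convenient for experiments of this kind. The recipe below is our own construction (not the one behind Tables 1-4): it builds a matrix whose singular values exceed 1 and chooses b so that a prescribed complementary pair (x*, w*) solves the AVLCP(A, b):

```python
import numpy as np

def make_avlcp_instance(n, seed=0):
    """Build a random AVLCP(A, b) test instance with a known solution:
    A is symmetric positive definite with singular values > 1, and
    b = A x* - |x*| - w* for complementary x* >= 0, w* >= 0."""
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((n, n))
    A = B.T @ B + 1.1 * np.eye(n)  # eigenvalues (= singular values) >= 1.1
    x_star = np.where(rng.random(n) < 0.5, rng.random(n), 0.0)
    w_star = np.where(x_star > 0.0, 0.0, rng.random(n))  # complementary slack
    b = A @ x_star - np.abs(x_star) - w_star
    return A, b, x_star, w_star

A, b, xs, ws = make_avlcp_instance(5)
# By construction, xs >= 0, ws = A xs - |xs| - b >= 0, and xs^T ws = 0.
assert np.allclose(A @ xs - np.abs(xs) - b, ws)
```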

From Tables 1 to 4, we can see that our method converges quickly, which coincides with our theoretical results.

6. Conclusion

In this paper, we have proposed a new approximation to the absolute-value linear complementarity problem (3) by using the nonlinear penalized equation (10), based on which a generalized Newton method is proposed for solving this penalized equation. Under suitable assumptions, the algorithm is shown to be both globally and superlinearly convergent. The numerical results presented show that the proposed generalized Newton method is efficient. The results and ideas of this paper may be used to solve absolute-value variational inequalities and related optimization problems.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.