Abstract

The linear complementarity problem has received a great deal of attention and has been studied extensively. Recently, El foutayeni et al. have contributed several works aimed at solving this challenging problem. Many existing results give good approximations of the solutions of linear complementarity problems; however, the major drawback of many existing methods is that, for large systems, they require a large number of operations during each iteration, and they consume large amounts of memory and computation time. This motivates us to construct an algorithm with a finite number of steps that solves this kind of problem with fewer iterations than existing methods. In addition, we consider a new class of matrices called E-matrices.

1. Introduction

In recent decades, the complementarity problem has played a very important role in several domains and has been the focus of many researchers and scientists. As an example, we can cite the works of Cottle [1, 2] published between 1964 and 1966. Note that the problem appears in older works without bearing the name of complementarity problem; in particular, we mention the work of Du Val [3] and Ingleton [4]. Let f be a function from ℝⁿ to ℝⁿ. A complementarity problem (CP) associated with the function f consists in finding a vector z ≥ 0 such that f(z) ≥ 0 and zᵀf(z) = 0. If the function f is affine, it can be written in the form f(z) = Mz + q, where q is a vector of ℝⁿ and M is a square matrix of order n; we then have a linear complementarity problem, denoted LCP(M, q). The origin of the name complementarity comes from the fact that if z is a solution of a linear complementarity problem, then zᵢ = 0 or (Mz + q)ᵢ = 0 for all i = 1, …, n. The linear complementarity problem has been widely studied by researchers from a variety of backgrounds, which has produced a rich and varied literature (see [5–17] and the references therein). In [18], Kadiri and Yassine described a new purification method for solving monotone linear complementarity problems. Their method associates with each iterate of the sequence generated by an interior point method a basis that is not necessarily feasible. The authors proved that, under the strict complementarity and nondegeneracy hypotheses, the sequence of bases converges in a finite number of iterations to an optimal basis which gives the exact solution of the problem. To solve the linear complementarity problem, Alves and Judice [19] proposed a pivoting heuristic based on tabu search and its integration into an enumerative framework. Recently, El foutayeni et al. [20–28] contributed to the resolution of the linear complementarity problem. In particular, in [27], they proved the equivalence between solving a linear complementarity problem and solving a nonlinear equation.
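As a concrete illustration of the definition above, the complementarity conditions are easy to verify numerically. The following sketch (in Python with NumPy; the function name is ours, not from the paper) checks whether a candidate vector z solves LCP(M, q):

```python
import numpy as np

def is_lcp_solution(M, q, z, tol=1e-10):
    """Check the defining conditions of LCP(M, q):
    z >= 0, w = M z + q >= 0, and z^T w = 0 (componentwise,
    z_i = 0 or w_i = 0 -- the 'complementarity' in the name)."""
    w = M @ z + q
    return bool(np.all(z >= -tol) and np.all(w >= -tol) and abs(z @ w) <= tol)
```

For example, with M = [[2, 1], [1, 2]] and q = (−5, −6), the vector z = (4/3, 7/3) gives w = Mz + q = 0, so all three conditions hold.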
They also gave a globally convergent hybrid algorithm based on vector divisions for solving the linear complementarity problem. In the same paper [27], the authors determined conditions under which a linear complementarity problem has a solution, and they calculated the solution when it exists. In [24], they proposed to solve the linear complementarity problem in the case where it has several solutions. The aim of [29] is to propose an interior point iterative method that converges in polynomial time to a solution of the linear complementarity problem; this convergence requires a number of iterations that is polynomial in n and L, where n is the number of variables and L is the length of a binary coding of the input; furthermore, the algorithm performs a polynomial number of arithmetic operations until its convergence. In [24], El foutayeni and Khaladi showed that the linear complementarity problem is completely equivalent to finding the fixed point of a certain map; to approximate this fixed point, they proposed an algorithm that starts from an arbitrary interval vector and generates a sequence of interval vectors that converges to a solution of the linear complementarity problem. More recently, Wang et al. [30] proposed an interior point method to find the solution of the linear complementarity problem in the case where the matrix is a real square hidden Z-matrix. In this context, we can also cite the works [31–39].

It is well known that the existence of a solution of the linear complementarity problem cannot be guaranteed for an arbitrary matrix and vector. This leads us to ask the following questions: under which conditions on the matrix and the vector does this type of problem admit a solution, and if a solution exists, under which conditions is it unique? Once existence and uniqueness are assured, how can we express this solution in terms of the data of the problem? Despite the great importance of linear complementarity problems in several areas, they are not yet completely resolved. Many results exist and give good approximations of the solutions, but the main disadvantage of many existing methods is that, for large systems, they require a large number of operations during each iteration and consume large amounts of memory and computation time. This is what drives us to look for new methods for this kind of problem that lower the number of operations per iteration compared to existing methods.

In the present work, we formulate an algorithm that solves the linear complementarity problem. This algorithm has a finite number of steps and converges to the solution. We also consider a new class of matrices called E-matrices, for which the algorithm has proved surprisingly effective. A numerical implementation of the algorithm is given in this work.

This document is organized as follows. In Section 2, we give preliminary definitions and list some notations that we need throughout the paper. In Section 3, we present the proposed linear complementarity problem under some conditions and formulate an algorithm for solving it on the class of E-matrices. In Section 4, we give numerical examples to confirm the theoretical part of our algorithm, and Section 5 concludes.

2. Preliminary and Notations

In this section, we recall preliminary definitions and general notations used in this paper.

For any positive integer n, let ℝ^{n×n} be the set of all real n×n matrices. We denote by I the identity matrix, by e_i the i-th column of I, and by e the vector whose entries all equal 1. We also write A_{i.} and A_{.j} for the i-th row and the j-th column of a matrix A, respectively. Y_n is the set of all vectors of {−1, 1}ⁿ, and its cardinality is 2ⁿ. For each x ∈ ℝⁿ, we define its sign vector sgn(x) by (sgn(x))_i = 1 if x_i ≥ 0 and (sgn(x))_i = −1 if x_i < 0, for i = 1, …, n. Then |x| = T_x x, where T_x = diag(sgn(x)). For each y ∈ Y_n, we denote by T_y the diagonal matrix diag(y).
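To make the sign-vector notation concrete, here is a small Python sketch (our own naming) of the convention just described, under the assumption that zero entries receive sign +1:

```python
import numpy as np

def sign_vector(x):
    """Sign vector s of x: s_i = +1 if x_i >= 0 and s_i = -1 if x_i < 0."""
    return np.where(np.asarray(x, dtype=float) >= 0.0, 1.0, -1.0)

def abs_via_sign(x):
    """With T_x = diag(sign_vector(x)), we recover the absolute value |x| = T_x x."""
    return np.diag(sign_vector(x)) @ np.asarray(x, dtype=float)
```

For x = (3, −2, 0) the sign vector is (1, −1, 1), and T_x x = (3, 2, 0) = |x|, as claimed.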

Definition 1. Given A_c ∈ ℝ^{n×n} and a nonnegative matrix Δ ∈ ℝ^{n×n}, the set of matrices [A] = [A_c − Δ, A_c + Δ] = {A : A_c − Δ ≤ A ≤ A_c + Δ} is called an interval matrix.

Definition 2. A square interval matrix [A] is called regular if each A ∈ [A] is regular, and singular if there exists a singular A ∈ [A].

Proposition 3. An interval matrix [A] = [A_c − Δ, A_c + Δ] is singular if and only if the inequality |A_c x| ≤ Δ|x| (2) has a nontrivial solution.

Proof. We suppose that [A] contains a singular matrix A; then there exists x ≠ 0 such that Ax = 0, which implies |A_c x| = |(A_c − A)x| ≤ |A_c − A||x| ≤ Δ|x|. Conversely, let (2) hold for some x ≠ 0. Define y and z by y_i = (A_c x)_i / (Δ|x|)_i if (Δ|x|)_i > 0 and y_i = 1 otherwise, and z = sgn(x). If (Δ|x|)_i = 0, then by (2) we have (A_c x)_i = 0, and taking into account that |y_i| ≤ 1 for each i, we obtain A_c x = T_y Δ|x| = T_y Δ T_z x. Hence, the matrix A = A_c − T_y Δ T_z satisfies Ax = 0 for x ≠ 0, and since |A − A_c| = |T_y Δ T_z| ≤ Δ, it follows that A ∈ [A]. Hence, A is singular, and [A] is singular.

We use the previous proposition to show the regularity or the singularity of the matrix in some cases.
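In small cases, regularity can also be tested by a finite (if exponential) enumeration: by a known vertex result of Rohn, an interval matrix [A_c − Δ, A_c + Δ] is regular if and only if the determinants of all vertex matrices A_c − T_y Δ T_z, with y, z ∈ {−1, 1}ⁿ, are nonzero and share the same sign. A hedged Python sketch (function name ours; brute force, so only practical for small n):

```python
import numpy as np
from itertools import product

def interval_matrix_regular(Ac, Delta, tol=1e-12):
    """Brute-force regularity check for the interval matrix [Ac-Delta, Ac+Delta].

    Vertex criterion (Rohn): regular iff det(Ac - diag(y) @ Delta @ diag(z))
    is nonzero and of constant sign over all y, z in {-1, +1}^n.
    Cost is O(4^n) determinants, so this is only a sketch for tiny examples.
    """
    n = Ac.shape[0]
    signs = set()
    for y in product((-1.0, 1.0), repeat=n):
        for z in product((-1.0, 1.0), repeat=n):
            d = np.linalg.det(Ac - np.diag(y) @ Delta @ np.diag(z))
            if abs(d) < tol:        # a vertex matrix is (numerically) singular
                return False
            signs.add(d > 0.0)
    return len(signs) == 1          # all vertex determinants of one sign
```

For instance, [I − 0.1, I + 0.1] (radius 0.1 in every entry, n = 2) is regular, whereas allowing the (1,1) entry of the 2×2 identity to vary by ±1 produces a singular member.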

Proposition 4. Let be regular and let hold for some and .

So, there exists an satisfying and .

Proof. We assume that for each implies so . We shall prove in this case that i.e., the inequality holds for each

Since this is clear fact for If so which together proves (3).

Now, from (3), we have by (), with ; then is singular by Proposition 3, and this is a contradiction.

We use the Sherman-Morrison formula to prove the efficiency of the proposed algorithm.

Let A be a nonsingular n×n matrix, let u, v ∈ ℝⁿ, and let α = 1 + vᵀA⁻¹u.

Then, if α = 0, the matrix A + uvᵀ is singular, and if α ≠ 0, we deduce that (A + uvᵀ)⁻¹ = A⁻¹ − (A⁻¹uvᵀA⁻¹)/α (see [40]).
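The formula can be exercised numerically. A minimal sketch (function name ours), assuming A⁻¹ is already available, which updates the inverse in O(n²) after a rank-one modification instead of the O(n³) cost of refactorizing:

```python
import numpy as np

def sherman_morrison_inverse(A_inv, u, v):
    """Sherman-Morrison rank-one update of a known inverse:

        (A + u v^T)^{-1} = A^{-1} - (A^{-1} u v^T A^{-1}) / (1 + v^T A^{-1} u),

    valid whenever alpha = 1 + v^T A^{-1} u != 0
    (when alpha = 0, the matrix A + u v^T is singular)."""
    w = A_inv @ u                       # A^{-1} u
    alpha = 1.0 + v @ w                 # 1 + v^T A^{-1} u
    if abs(alpha) < 1e-14:
        raise np.linalg.LinAlgError("A + u v^T is singular")
    return A_inv - np.outer(w, v @ A_inv) / alpha
```

Comparing the update against a direct inversion of A + uvᵀ confirms the identity on any example with α ≠ 0.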

3. Main Results

It is a known fact (see El foutayeni and Khaladi [27]) that the linear complementarity problem is completely equivalent to solving the equation , where and . To present the algorithm, we define a new class of matrices that we call the class of E-matrices.

Definition 5. Let . The matrix is called an E-matrix if all the principal minors of are nonzero and if all the eigenvalues of are different from −1.

Notation 6. We denote by the set of E-matrices such that MP is the set of the principal minors of and is the set of eigenvalues of .

Lemma 7. We have the equivalence between the following four properties of a matrix M:
(1) All the principal minors of the matrix M are positive.
(2) For each vector x ≠ 0, there exists an index i such that x_i (Mx)_i > 0.
(3) For each vector x ≠ 0, there exists a nonnegative diagonal matrix D such that xᵀDMx > 0.
(4) The real eigenvalues of M and of every principal submatrix of M are positive.

Proof. We denote by the set of indices . We select an arbitrary vector and we assume that for each with . Let . Clearly, . If is the principal submatrix with rows and columns of , is the vector whose coordinates have indices in and coincide with those of ; hence, for , the coordinates of the vector coincide with . So, there exists a diagonal matrix (over ) such that , i.e., . Therefore, the matrix is singular. Note that the principal minors of are positive, so we have the same result for , since is a positive diagonal matrix. This is a contradiction, which proves the implication. We suppose that is a vector, , and that is the index for which . There exists a positive number such that is positive; to prove this, it suffices to choose as the diagonal matrix where and for . Let and let be a real eigenvalue of with eigenvector . We denote by the vector whose coordinates coincide with those of for . In accordance with (4), there exists a diagonal matrix such that . But evidently, , since . Then, we have . Now, using the facts that the determinant of a matrix is equal to the product of all its eigenvalues and that the product of the nonreal eigenvalues of a real matrix is positive, we can easily complete the proof.

Lemma 8. Any P-matrix is an E-matrix.

Proof. Let M be a P-matrix. Then, all principal minors of M are positive, and in particular nonzero. From the previous lemma, all real eigenvalues of M are positive, since M is a P-matrix. Therefore, M is an E-matrix.

It is easy to check that the identity matrix is an E-matrix, that every symmetric positive definite matrix is an E-matrix, and that any Hilbert matrix is an E-matrix (we recall that a Hilbert matrix is the square matrix with general term 1/(i + j − 1)).
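These claims are easy to spot-check numerically. A short sketch (helper names ours) that builds a Hilbert matrix and enumerates all its principal minors; for a symmetric positive definite matrix such as a Hilbert matrix, every principal minor and every eigenvalue is positive:

```python
import numpy as np
from itertools import combinations

def hilbert(n):
    """Hilbert matrix: general term 1/(i + j - 1) in 1-based indexing,
    i.e. 1/(i + j + 1) with the 0-based indices used here."""
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1.0)

def all_principal_minors(M):
    """Determinants of all principal submatrices M[S, S], S nonempty,
    enumerated over every subset S of row/column indices."""
    n = M.shape[0]
    return [float(np.linalg.det(M[np.ix_(S, S)]))
            for k in range(1, n + 1) for S in combinations(range(n), k)]
```

For H = hilbert(4), all 15 principal minors are positive and the spectrum of H is positive, consistent with H being symmetric positive definite.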

Theorem 9. For any E-matrix and vector , the following algorithm "SolveLCP" has a finite number of steps and converges to the solution of the linear complementarity problem if it exists.

An algorithm for solving the linear complementarity problem
function
while for some
if
display(no solution)
return
end
if log
display(no solution)
return
end
if
;
;
end
end
end
Algorithm 1.
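The listing above lost its mathematical content in conversion, so the following Python sketch shows only a sign-based scheme of this general flavor; it assumes the standard reduction of LCP(M, q) to the absolute value equation (I − M)|x| − (I + M)x = 2q, with z = (|x| + x)/2 and w = (|x| − x)/2. It is not the authors' SolveLCP, and the sign-flipping rule below is a simplification:

```python
import numpy as np

def solve_lcp_sign(M, q, max_iter=100):
    """Sign-based LCP sketch (assumed reduction, not the paper's algorithm).

    For a fixed sign vector s with |x| = diag(s) x, the absolute value
    equation (I + M)x - (I - M)|x| = -2q becomes the linear system
    [(I + M) - (I - M) diag(s)] x = -2q. We solve it, and if the signs
    of x agree with s we are done; otherwise we flip one wrong sign.
    """
    n = len(q)
    I = np.eye(n)
    s = np.ones(n)                              # initial sign guess
    for _ in range(max_iter):
        A = (I + M) - (I - M) @ np.diag(s)
        x = np.linalg.solve(A, -2.0 * q)
        wrong = np.flatnonzero(np.sign(x) * s < 0)
        if wrong.size == 0:                     # signs consistent: recover z, w
            z = (np.abs(x) + x) / 2.0
            w = (np.abs(x) - x) / 2.0
            return z, w
        s[wrong[0]] *= -1.0                     # flip the first wrong sign
    raise RuntimeError("no sign-consistent solution found")
```

With this reduction, w − Mz = q, z ≥ 0, w ≥ 0, and zᵀw = 0 hold automatically for any sign-consistent x, which is the appeal of the sign-vector viewpoint.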

The linear complementarity problem implies that w − Mz = q, with z ≥ 0, w ≥ 0, and zᵀw = 0.

We make the change of variables z = (|x| + x)/2 and w = (|x| − x)/2. Hence, if we substitute these into w − Mz = q, then the equation (4) becomes (I − M)|x| − (I + M)x = 2q.

The problem is that we do not know the values of either z or w, but we know that they must satisfy zᵀw = 0, i.e., z_i = 0 or w_i = 0 for each i.

Step 1. The algorithm begins with the vector , and during each pass of the "while" loop it increases by 1; hence becomes , which means that after a finite number of steps, will become greater than and the algorithm will end.

Step 2. The initial point is ; it is feasible since is a regular matrix. Indeed, we have , so all the principal minors of are nonzero, and for all eigenvalues of , . Then, there exists such that . Therefore, ; hence, is an eigenvalue of .

Step 3. is also feasible, since the matrix is regular. We have where

Consequently, if or if ; is regular because . Therefore, all the column vectors of are linearly independent, so is regular.

Step 4. When and hold, we calculate ; we distinguish two cases.

Case 1. If for all , then implies that , and thus we find . So solves the equation (4), and is the solution of the linear complementarity problem .

Case 2. We assume the existence of such that , and we update the matrix and . So we bring back to , and the matrix will be modified. Then, for all real .

The matrix will change to the matrix defined by .

Now we check the regularity of the matrix . According to the Sherman-Morrison formula, we have

We consider ; then . Therefore, we have two possible cases:
(a) If , we have , and the function defined by satisfies . Then, according to the Intermediate Value Theorem, there exists such that , so and thus ; hence, in this case, the matrix is singular and the solution does not exist.
(b) If , we have the matrix , which is regular for ; then, according to the Sherman-Morrison formula,

We obtain

Therefore, we conclude that the matrix plays an important role in giving an explicit calculation of at each step.

Step 5. Let us show that if , then the matrix is singular and the solution does not exist. We prove this by showing that if is regular, then for each . To this end, we show that every can appear at most times; we have two cases.

Case 1. : we suppose that appears at least twice in the sequence and that and correspond to the two closest occurrences, that is to say, there is no other occurrence of between them. So, and for and , which implies that and . Then, for all . But since , we obtain the relation (11), using the fact that and since ; it follows from Proposition 4 that there is an where and , which implies that , which is a contradiction. So occurs at most once in the sequence.

Case 2. : let and correspond to two occurrences of , so that for and . This gives that for and . Then, as the condition (11) holds because of , and since , Proposition 4 implies the existence of an where and , as well as , so that . Since should have entered the sequence of , there is an occurrence of some in the sequence; this means, by the assumptions, that cannot appear there more than times.

4. Numerical Examples

In this section, we demonstrate the effectiveness of our proposed algorithm in terms of execution time and number of iterations. To do this, we compare our algorithm with other existing methods. In the first example, we give a simple E-matrix of order 4, for which we find the solution in a short time. In the second example, we compare the results obtained by our method with those obtained by the method of El foutayeni et al., the method of Yu, and the method of Chen-Harker-Kanzow-Smale (CHKS). In the third example, we compare the execution time of our method with Lemke's method and the interior point method.

Example 10. Consider the following linear complementarity problem, where we seek to determine a vector in such that and , with

It is easy to prove that the associated matrix is an E-matrix. By applying the proposed algorithm, we obtain , and the elapsed time is 0.000547 seconds.

Example 11. In this example, we compare the results obtained with our method to those obtained with the method of El foutayeni, the method given by Yu, and the method of Chen-Harker-Kanzow-Smale (CHKS). To this end, we use our MATLAB program to compute the optimal solution , the final values , the number of iterations, and the time in seconds. Consider the following linear complementarity problem , where is such that for , for all , and otherwise, and is such that

Tables 1–4 present summaries of the results obtained, where Iter represents the number of iterations when the algorithm ends and Time indicates the total cost in seconds to solve the problem.

From Figures 1 and 2, we can see that our method is competitive with the method of El foutayeni, the method of Yu, and the CHKS method, both in the number of iterations and in CPU computation time in seconds.

Example 12. In this example, we compare three different methods for solving a linear complementarity problem . The first is our method, the second is Lemke's method, and the last is the interior point method [30]. We take the same example, where is such that for all , and zero elsewhere, and is such that . The matrix is positive definite, so we ensure the convergence of Lemke's method. In Table 5, the first column represents the dimension of the linear complementarity problem. The second (respectively, the third and the fourth) provides the computation time in seconds for Lemke's method (respectively, the interior point algorithm and our algorithm).

Based on this table, in the case where , we see that Lemke's method is very slow (it needs 334 seconds to display the results), whereas our method needs only 0.928443 seconds to find the solution of . The same holds for the interior point method. We observe that our algorithm is faster than the other algorithms in execution time. We can therefore conclude that our method is effective.

5. Conclusion

Solving the linear complementarity problem (LCP) has been the goal of much research. In this article, we have proposed an algorithm for solving the linear complementarity problem. This algorithm has a finite number of steps and converges to the solution. In addition, we have considered a new class of matrices, called E-matrices, for which the algorithm is efficient. As a perspective, we seek a simple method to solve linear complementarity problems with any matrix and vector, without case analysis on the matrix, that is fast both in execution time and in the number of iterations. A numerical implementation of the algorithm is given in this work.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.