Abstract

We suggest and analyze a residual iterative method for solving the absolute value equations Ax − |x| = b, where A ∈ R^{n×n} and b ∈ R^n are given and x ∈ R^n is unknown, using a projection technique. We also discuss the convergence of the proposed method. Several examples are given to illustrate the implementation and efficiency of the method, together with comparisons with other methods. The results proved in this paper may stimulate further research in this fascinating field.

1. Introduction

Residual methods were proposed for solving large sparse systems of linear equations

Ax = b, (1.1)

where A ∈ R^{n×n} is a positive definite matrix and x, b ∈ R^n. Paige and Saunders [1] minimized the residual norm over a Krylov subspace and proposed an algorithm for solving indefinite systems. Saad and Schultz [2] used the Arnoldi process and suggested the generalized minimal residual method, which minimizes the norm of the residual at each step. Residual methods have been studied extensively [3–5].

We show that the Petrov-Galerkin process can be extended to solve absolute value equations of the form

Ax − |x| = b, (1.2)

where A ∈ R^{n×n} and b ∈ R^n. Here |x| denotes the vector in R^n whose components are the absolute values of the components of x, and x ∈ R^n is unknown. The absolute value equations (1.2) were investigated extensively in [6]. It was Mangasarian [7, 8] who proved that the absolute value equations (1.2) are equivalent to the linear complementarity problems, and he used this equivalent formulation to solve them [7, 8]. We remark that complementarity problems are in turn equivalent to variational inequalities; thus the absolute value equations are equivalent to the variational inequalities. There are several methods for solving variational inequalities; see Noor [9–11], Noor et al. [12, 13], and the references therein. To the best of our knowledge, this alternative equivalent formulation has not been exploited up to now; it is another direction for future research. We hope that the interplay among these fields may lead to novel and innovative techniques for solving the absolute value equations and related optimization problems. Noor et al. [14, 15] have suggested iterative methods for solving the absolute value equations (1.2) using a minimization technique with a symmetric positive definite matrix. For more details, see [3, 4, 6–12, 14–19].

In this paper, we suggest and analyze a residual iterative method for solving the absolute value equations (1.2) using a projection technique. Our method is easy to implement. We discuss the convergence of the residual method for nonsymmetric positive definite matrices.

We denote by K and L the search subspace and the constraint subspace, respectively; let m be their dimension and x0 ∈ R^n an initial guess. A projection method onto the subspace K and orthogonal to L is a process that finds an approximate solution x ∈ R^n of (1.2) by imposing the Petrov-Galerkin conditions: x belongs to the affine space x0 + K, and the new residual vector is orthogonal to L; that is,

find x ∈ x0 + K such that b − (A − D(x))x ⊥ L, (1.3)

where D(x) is the diagonal matrix corresponding to sign(x). Different choices of the subspace L yield different iterative methods. Here we use the constraint space L = (A − D(x))K. The residual method approximates the solution of (1.2) by the vector x ∈ x0 + K that minimizes the norm of the residual.

The inner product on the n-dimensional Euclidean space R^n is denoted by ⟨·,·⟩. For x ∈ R^n, sign(x) denotes the vector with components equal to 1, 0, −1, depending on whether the corresponding component of x is positive, zero, or negative. The diagonal matrix D(x) corresponding to sign(x) is defined as

D(x) = ∂|x| = diag(sign(x)), (1.4)

where ∂|x| represents the generalized Jacobian of |x| based on a subgradient [20, 21].
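Definition (1.4) can be illustrated with a small Python sketch (Python is used here purely for illustration; the paper's experiments are in Matlab). It checks the identity D(x)x = |x| used throughout:

```python
# Sketch of definition (1.4): D(x) = diag(sign(x)), so D(x) x = |x|
# componentwise.  Python is used only for illustration.

def sign(t):
    # 1, 0, -1 according to whether t is positive, zero, or negative
    return (t > 0) - (t < 0)

x = [3.0, 0.0, -2.5]
D = [[sign(x[i]) if i == j else 0 for j in range(len(x))] for i in range(len(x))]
Dx = [sum(D[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]
print(Dx == [abs(t) for t in x])  # True: D(x) x recovers |x|
```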

We use the following notation:

a = ⟨Cv1, Cv1⟩, c = ⟨Cv1, Cv2⟩, d = ⟨Cv2, Cv2⟩,
p1 = ⟨b − Ax_k + |x_k|, Cv1⟩ = ⟨b − Cx_k, Cv1⟩,
p2 = ⟨b − Ax_k + |x_k|, Cv2⟩ = ⟨b − Cx_k, Cv2⟩, (1.5)

where 0 ≠ v1, v2 ∈ R^n and C = A − D(x_k). We consider A such that C is a positive definite matrix. We remark that D(x_k)x_k = |x_k|.

2. Residual Iterative Method

Consider an iterative scheme of the type

x_{k+1} = x_k + αv1 + βv2, 0 ≠ v1, v2 ∈ R^n, k = 0, 1, 2, …. (2.1)

These vectors can be chosen in different ways. To derive the residual method for solving absolute value equations, in the first step we choose the subspaces

K1 = span{v1}, L1 = span{Cv1}, x0 = x_k. (2.2)

For D(x̃_{k+1}) = D(x_k), we write the residual in the following form:

b − Ax̃_{k+1} + |x̃_{k+1}| = b − (A − D(x̃_{k+1}))x̃_{k+1} = b − (A − D(x_k))x̃_{k+1} = b − Cx̃_{k+1}. (2.3)

From (1.3) and (2.3), we seek

x̃_{k+1} ∈ x_k + K1 such that b − Cx̃_{k+1} ⊥ L1; (2.4)

that is, we find the approximate solution by the iterative scheme

x̃_{k+1} = x_k + αv1. (2.5)

Now, we rewrite (2.4) in terms of the inner product as

⟨b − Cx̃_{k+1}, Cv1⟩ = 0; (2.6)

from the above discussion, we have

⟨b − Cx_k − αCv1, Cv1⟩ = ⟨b − Cx_k, Cv1⟩ − α⟨Cv1, Cv1⟩ = p1 − aα = 0, (2.7)

from which we have

α = p1/a. (2.8)

The next step is to choose the subspaces

K2 = span{v2}, L2 = span{Cv2}, x0 = x̃_{k+1}, (2.9)

and to find the approximate solution x_{k+1} such that

x_{k+1} ∈ x̃_{k+1} + K2 such that b − Cx_{k+1} ⊥ L2, (2.10)

where

x_{k+1} = x̃_{k+1} + βv2, b − Ax_{k+1} + |x_{k+1}| = b − Cx_{k+1}, D(x_{k+1}) = D(x_k). (2.11)

Rewriting (2.10) in terms of the inner product, we have

⟨b − Cx_{k+1}, Cv2⟩ = 0. (2.12)

Thus, we have

⟨b − Cx_{k+1}, Cv2⟩ = ⟨b − Cx_k − αCv1 − βCv2, Cv2⟩ = ⟨b − Cx_k, Cv2⟩ − α⟨Cv1, Cv2⟩ − β⟨Cv2, Cv2⟩ = p2 − cα − dβ = 0. (2.13)

From (2.8) and (2.13), we obtain

β = (ap2 − cp1)/(ad). (2.14)
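The two projection steps can be checked numerically. The following Python sketch (with an arbitrary matrix C, right-hand side b, and directions v1, v2 of our own choosing) verifies the Petrov-Galerkin conditions (2.6) and (2.12) for the α and β given by (2.8) and (2.14):

```python
# Numerical check of the two projection steps: with alpha from (2.8) the
# intermediate residual is orthogonal to C v1 (condition (2.6)), and with
# beta from (2.14) the new residual is orthogonal to C v2 (condition (2.12)).
# C, b, x, v1, v2 are arbitrary illustrative choices.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(M, v):
    return [dot(row, v) for row in M]

C = [[4.0, 1.0], [0.0, 3.0]]
b = [5.0, -11.0]
x = [0.0, 0.0]
r = [b[i] - matvec(C, x)[i] for i in range(2)]     # current residual b - Cx
v1, v2 = r, [1.0, 1.0]
Cv1, Cv2 = matvec(C, v1), matvec(C, v2)
a, c, d = dot(Cv1, Cv1), dot(Cv1, Cv2), dot(Cv2, Cv2)
p1, p2 = dot(r, Cv1), dot(r, Cv2)

alpha = p1 / a                                     # (2.8)
x_t = [x[i] + alpha * v1[i] for i in range(2)]     # (2.5)
res_t = [b[i] - matvec(C, x_t)[i] for i in range(2)]
print(abs(dot(res_t, Cv1)) < 1e-9)                 # True: (2.6) holds

beta = (a * p2 - c * p1) / (a * d)                 # (2.14)
x_new = [x_t[i] + beta * v2[i] for i in range(2)]  # (2.11)
res = [b[i] - matvec(C, x_new)[i] for i in range(2)]
print(abs(dot(res, Cv2)) < 1e-9)                   # True: (2.12) holds
```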

We remark that one can choose v1 = r_k and v2 in different ways. Here we consider the case v2 = s_k (s_k is given in Algorithm 2.1).

Based upon the above discussion, we suggest and analyze the following iterative method for solving the absolute value equations (1.2); this is the main motivation of this paper.

Algorithm 2.1. Choose an initial guess x0 ∈ R^n.
For k = 0, 1, 2, … until convergence, do
 r_k = b − Ax_k + |x_k|
 g_k = (A − D(x_k))^T (Ax_k − |x_k| − b)
 H_k = (A − D(x_k))^{−1} (A − D(x_k))^{−T}
 s_k = H_k g_k
 If r_k = 0, then stop; else
  α_k = p1/a, β_k = (ap2 − cp1)/(ad)
  Set x_{k+1} = x_k + α_k r_k + β_k s_k
  If ‖x_{k+1} − x_k‖ < 10^{−6}, then stop
 End if
End for k.
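A pure-Python sketch of Algorithm 2.1 follows (the paper's experiments are in Matlab; the helper names and the small 2×2 test problem are our own). Rather than forming H_k explicitly, we obtain s_k = H_k g_k by solving the system (A − D(x_k))^T(A − D(x_k)) s_k = g_k, which is equivalent since H_k = (A − D(x_k))^{−1}(A − D(x_k))^{−T} = [(A − D(x_k))^T(A − D(x_k))]^{−1}:

```python
# Sketch of Algorithm 2.1 for Ax - |x| = b in pure Python, with v1 = r_k
# and v2 = s_k as in the text.  The 2x2 test problem at the bottom is ours.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(M, v):
    return [dot(row, v) for row in M]

def solve(M, rhs):
    # Gaussian elimination with partial pivoting (adequate for small systems)
    n = len(M)
    a = [M[i][:] + [rhs[i]] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n + 1):
                a[r][c] -= f * a[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (a[i][n] - sum(a[i][j] * x[j] for j in range(i + 1, n))) / a[i][i]
    return x

def sign(t):
    return (t > 0) - (t < 0)

def residual_method(A, b, x0, tol=1e-6, maxit=500):
    n, x = len(b), x0[:]
    for _ in range(maxit):
        C = [[A[i][j] - (sign(x[i]) if i == j else 0) for j in range(n)]
             for i in range(n)]                    # C = A - D(x_k)
        r = [b[i] - dot(A[i], x) + abs(x[i]) for i in range(n)]  # r_k
        if max(abs(ri) for ri in r) < tol:         # r_k = 0: stop
            return x
        cols = list(zip(*C))                       # columns of C
        g = [-dot(cols[j], r) for j in range(n)]   # g_k = C^T (Cx_k - b)
        CtC = [[dot(cols[i], cols[j]) for j in range(n)] for i in range(n)]
        s = solve(CtC, g)                          # s_k = H_k g_k
        Cv1, Cv2 = matvec(C, r), matvec(C, s)
        a_, c_, d_ = dot(Cv1, Cv1), dot(Cv1, Cv2), dot(Cv2, Cv2)
        p1, p2 = dot(r, Cv1), dot(r, Cv2)
        alpha = p1 / a_                            # (2.8)
        beta = (a_ * p2 - c_ * p1) / (a_ * d_)     # (2.14)
        x_new = [x[i] + alpha * r[i] + beta * s[i] for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x

# Small test: the singular values of A exceed 1, so the AVE has a unique
# solution; b is built from a known x_true.
A = [[4.0, -1.0], [-1.0, 4.0]]
x_true = [1.0, -2.0]
b = [dot(A[i], x_true) - abs(x_true[i]) for i in range(2)]
x = residual_method(A, b, [0.0, 0.0])
print(max(abs(x[i] - x_true[i]) for i in range(2)) < 1e-4)  # True
```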

If β = 0, then Algorithm 2.1 reduces to the minimal residual method; see [2, 5, 21, 22]. For the convergence analysis of Algorithm 2.1, we need the following result.

Theorem 2.2. Let {x_k} and {r_k} be generated by Algorithm 2.1. If D(x_{k+1}) = D(x_k), then

‖r_k‖² − ‖r_{k+1}‖² = p1²/a + (ap2 − cp1)²/(a²d), (2.15)

where r_{k+1} = b − Ax_{k+1} + |x_{k+1}| and D(x_{k+1}) = diag(sign(x_{k+1})).

Proof. Using (2.1), we obtain

r_{k+1} = b − Ax_{k+1} + |x_{k+1}|
 = b − (A − D(x_{k+1}))x_{k+1}
 = b − (A − D(x_k))x_{k+1}
 = b − (A − D(x_k))x_k − α(A − D(x_k))v1 − β(A − D(x_k))v2
 = b − Ax_k + |x_k| − αCv1 − βCv2
 = r_k − αCv1 − βCv2. (2.16)

Now consider

‖r_{k+1}‖² = ⟨r_{k+1}, r_{k+1}⟩
 = ⟨r_k − αCv1 − βCv2, r_k − αCv1 − βCv2⟩
 = ⟨r_k, r_k⟩ − 2α⟨r_k, Cv1⟩ + 2αβ⟨Cv1, Cv2⟩ − 2β⟨r_k, Cv2⟩ + α²⟨Cv1, Cv1⟩ + β²⟨Cv2, Cv2⟩
 = ‖r_k‖² − 2αp1 + 2cαβ − 2βp2 + aα² + dβ². (2.17)

From (2.8), (2.14), and (2.17), we have

‖r_k‖² − ‖r_{k+1}‖² = p1²/a + (ap2 − cp1)²/(a²d), (2.18)

which is the required result (2.15).

Since p1²/a + (ap2 − cp1)²/(a²d) ≥ 0, (2.18) gives

‖r_k‖² − ‖r_{k+1}‖² = p1²/a + (ap2 − cp1)²/(a²d) ≥ 0, (2.19)

and hence ‖r_{k+1}‖² ≤ ‖r_k‖². For arbitrary nonzero vectors v1, v2 ∈ R^n, the parameters α and β defined by (2.8) and (2.14) minimize the norm of the residual.
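The norm identity (2.18) can be verified numerically. The Python sketch below (with an arbitrary C, r, v1, v2 of our own choosing) checks that ‖r_k‖² − ‖r_{k+1}‖² equals p1²/a + (ap2 − cp1)²/(a²d) to rounding error when r_{k+1} = r_k − αCv1 − βCv2, as in (2.16):

```python
# Numerical check of identity (2.18): for any C, r, and non-parallel v1, v2,
# with alpha from (2.8) and beta from (2.14), the updated residual
# r - alpha*C*v1 - beta*C*v2 satisfies the stated norm decrease.
# The concrete matrix and vectors are arbitrary illustrative choices.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(M, v):
    return [dot(row, v) for row in M]

C = [[3.0, -1.0], [-1.0, 5.0]]
r = [-0.5, 1.6]
v1, v2 = r, [0.2, -0.7]
Cv1, Cv2 = matvec(C, v1), matvec(C, v2)
a, c, d = dot(Cv1, Cv1), dot(Cv1, Cv2), dot(Cv2, Cv2)
p1, p2 = dot(r, Cv1), dot(r, Cv2)
alpha, beta = p1 / a, (a * p2 - c * p1) / (a * d)
r_new = [r[i] - alpha * Cv1[i] - beta * Cv2[i] for i in range(2)]
lhs = dot(r, r) - dot(r_new, r_new)
rhs = p1**2 / a + (a * p2 - c * p1)**2 / (a**2 * d)
print(abs(lhs - rhs) < 1e-12)  # True: the identity holds to rounding error
```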

We now consider the convergence criteria of Algorithm 2.1; this is the motivation of our next result.

Theorem 2.3. If 𝐶 is a positive definite matrix, then the approximate solution obtained from Algorithm 2.1 converges to the exact solution of the absolute value equations (1.2).

Proof. From (2.15) and the choice v1 = r_k, we have

‖r_k‖² − ‖r_{k+1}‖² ≥ p1²/a = ⟨r_k, Cr_k⟩²/⟨Cr_k, Cr_k⟩ ≥ λ_min²‖r_k‖⁴/(λ_max²‖r_k‖²) = (λ_min²/λ_max²)‖r_k‖², (2.20)

where λ_min denotes the smallest eigenvalue of the symmetric part of C and λ_max the largest singular value of C. This means that the sequence {‖r_k‖²} is decreasing and bounded below, hence convergent, so the left-hand side of (2.20) tends to zero. By (2.20), ‖r_k‖² tends to zero, and the proof is complete.

3. Numerical Results

To illustrate the implementation and efficiency of the proposed method, we consider the following examples. All experiments are performed on a 2 × 2.1 GHz Intel(R) Core(TM) 2 processor with 1 GB of RAM, and the codes are written in Matlab 7.

Example 3.1. Consider the ordinary differential equation

d²x/dt² − |x| = 1 − t², 0 ≤ t ≤ 1, x(0) = −1, x(1) = 0. (3.1)

We discretized the above equation using the finite difference method to obtain a system of absolute value equations of the type

Ax − |x| = b, (3.2)

where the system matrix A of size n = 10 is given by

a_{i,j} = −242 for j = i; a_{i,j} = 121 for j = i + 1, i = 1, 2, …, n − 1, and for j = i − 1, i = 2, 3, …, n; a_{i,j} = 0 otherwise. (3.3)

The exact solution is

x(t) = 0.1915802528 sin t − 4 cos t + 3 − t², x < 0,
x(t) = 1.462117157 e^{−t} − 0.5378828428 e^{t} + 1 + t², x > 0. (3.4)

In Figure 1, we compare the residual method with the methods of Noor et al. [14, 15]. The residual iterative method, the minimization method [14], and the iterative method [15] solve (3.1) in 51, 142, and 431 iterations, respectively. For the next two examples, we interchange v1 and v2 with each other, as Algorithm 2.1 converges for nonzero vectors v1, v2 ∈ R^n.
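Our reading of the discretization (n = 10 interior points, h = 1/(n+1) = 1/11, so 1/h² = 121) can be sketched in Python. The boundary adjustments to b, which move the known values x(0) = −1 and x(1) = 0 to the right-hand side, are our own reconstruction; the exact b used in the paper may differ:

```python
# Sketch of assembling (3.2)-(3.3): x'' - |x| = 1 - t^2 discretized with
# central differences on n = 10 interior points, h = 1/11.  The boundary
# adjustment of b (moving x(0) = -1, x(1) = 0 to the right-hand side) is
# our own reconstruction, not taken verbatim from the paper.

n = 10
h = 1.0 / (n + 1)
inv_h2 = float((n + 1) ** 2)             # 1/h^2 = 121, computed exactly

A = [[0.0] * n for _ in range(n)]
for i in range(n):
    A[i][i] = -2.0 * inv_h2              # -242 on the diagonal
    if i + 1 < n:
        A[i][i + 1] = inv_h2             # 121 on the superdiagonal
    if i - 1 >= 0:
        A[i][i - 1] = inv_h2             # 121 on the subdiagonal

t = [(i + 1) * h for i in range(n)]      # interior grid points
b = [1.0 - ti**2 for ti in t]
b[0] -= inv_h2 * (-1.0)                  # boundary value x(0) = -1
b[-1] -= inv_h2 * 0.0                    # boundary value x(1) = 0

print(A[0][0], A[0][1])                  # -242.0 121.0
```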

Example 3.2 (see [17]). We first chose a random A from a uniform distribution on [−10, 10], then chose a random x from a uniform distribution on [−1, 1], and finally computed b = Ax − |x|. We ensured that the singular values of each A exceeded 1 by computing the minimum singular value and rescaling A, dividing it by the minimum singular value multiplied by a random number in the interval [0, 1]. The computational results are given in Table 1.

In Table 1, GNM and RIM denote the generalized Newton method [17] and the residual iterative method, respectively. From Table 1 we conclude that the residual method for solving the absolute value equations (1.2) is more effective.

Example 3.3 (see [23]). Consider the random matrix A and vector b generated in Matlab as

n = input('dimension of matrix A = ');
rand('state', 0);
R = rand(n, n);
b = rand(n, 1);
A = R'*R + n*eye(n); (3.5)

with a random initial guess. The comparison between the residual iterative method and the Yong method [23] is presented in Table 2.

In Table 2, TOC denotes the time taken by the CPU. Note that for large problem sizes the residual iterative method converges faster than the Yong method [23].

4. Conclusions

In this paper, we have used the projection technique to suggest an iterative method for solving the absolute value equations. The convergence analysis of the proposed method is also discussed. Some examples are given to illustrate the efficiency and implementation of the new iterative method. The extension of the proposed iterative method for solving the general absolute value equation of the form 𝐴𝑥+𝐵|𝑥|=𝑏 for suitable matrices is an open problem. We have remarked that the variational inequalities are also equivalent to the absolute value equations. This equivalent formulation can be used to suggest and analyze some iterative methods for solving the absolute value equations. It is an interesting and challenging problem to consider the variational inequalities for solving the absolute value equations.

Acknowledgments

This research is supported by the Visiting Professor Program of the King Saud University, Riyadh, Saudi Arabia, and Research Grant no. KSU.VPP.108. The authors are also grateful to Dr. S. M. Junaid Zaidi, Rector, COMSATS Institute of Information Technology, Pakistan, for providing the excellent research facilities.