Abstract and Applied Analysis, Volume 2012 (2012), Article ID 406232, 9 pages. http://dx.doi.org/10.1155/2012/406232
Research Article

## Residual Iterative Method for Solving Absolute Value Equations

1Mathematics Department, COMSATS Institute of Information Technology, Park Road, Islamabad, Pakistan
2Mathematics Department, College of Science, King Saud University, Riyadh, Saudi Arabia

Received 30 November 2011; Accepted 13 December 2011

Academic Editor: Khalida Inayat Noor

Copyright © 2012 Muhammad Aslam Noor et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We suggest and analyze a residual iterative method for solving absolute value equations $Ax - |x| = b$, where $A \in \mathbb{R}^{n \times n}$ and $b \in \mathbb{R}^n$ are given and $x \in \mathbb{R}^n$ is unknown, using the projection technique. We also discuss the convergence of the proposed method. Several examples are given to illustrate the implementation and efficiency of the method. A comparison with other methods is also given. The results proved in this paper may stimulate further research in this fascinating field.

#### 1. Introduction

The residual methods were proposed for solving large sparse systems of linear equations
$$Ax = b, \tag{1.1}$$
where $A \in \mathbb{R}^{n \times n}$ is a positive definite matrix and $b \in \mathbb{R}^n$. Paige and Saunders [1] minimized the residual norm over a Krylov subspace and proposed an algorithm for solving indefinite systems. Saad and Schultz [2] used the Arnoldi process and suggested the generalized minimal residual method (GMRES), which minimizes the norm of the residual at each step. The residual methods have been studied extensively [3–5].

We show that the Petrov-Galerkin process can be extended for solving absolute value equations of the form
$$Ax - |x| = b, \tag{1.2}$$
where $A \in \mathbb{R}^{n \times n}$ and $b \in \mathbb{R}^n$ are given. Here $|x|$ is the vector in $\mathbb{R}^n$ with the absolute values of the components of $x$, and $x \in \mathbb{R}^n$ is unknown. The absolute value equations (1.2) were investigated extensively in [6]. It was Mangasarian [7, 8] who proved that the absolute value equations (1.2) are equivalent to the linear complementarity problems. This equivalent formulation was used by Mangasarian [7, 8] to solve the absolute value equations. We would like to remark that the complementarity problems are also equivalent to the variational inequalities. Thus, we conclude that the absolute value equations are equivalent to the variational inequalities. There are several methods for solving the variational inequalities; see Noor [9–11], Noor et al. [12, 13], and the references therein. To the best of our knowledge, this alternative equivalent formulation has not been exploited up to now; this is another direction for future research. We hope that the interplay among these fields may lead to the discovery of novel and innovative techniques for solving the absolute value equations and related optimization problems. Noor et al. [14, 15] have suggested some iterative methods for solving the absolute value equations (1.2) using a minimization technique with a symmetric positive definite matrix. For more details, see [3, 4, 6–12, 14–19].

In this paper, we suggest and analyze a residual iterative method for solving the absolute value equations (1.2) using the projection technique. Our method is easy to implement. We discuss the convergence of the residual method for nonsymmetric positive definite matrices.

We denote by $K$ and $L$ the search subspace and the constraints subspace, respectively; let $m$ be their dimension and $x_0$ an initial guess. A projection method onto the subspace $K$ and orthogonal to $L$ is a process to find an approximate solution $\tilde{x}$ of (1.2) by imposing the Petrov-Galerkin conditions: $\tilde{x}$ belongs to the affine space $x_0 + K$, and the new residual vector is orthogonal to $L$, that is,
$$b - A\tilde{x} + |\tilde{x}| = b - (A - D(\tilde{x}))\tilde{x} \perp L, \tag{1.3}$$
where $D(\tilde{x})$ is the diagonal matrix corresponding to $\operatorname{sign}(\tilde{x})$. For different choices of the subspace $L$, we obtain different iterative methods. Here we use the constraint space $L = (A - D)K$, so that the residual method approximates the solution of (1.2) by the vector in $x_0 + K$ that minimizes the norm of the residual.

The inner product in the $n$-dimensional Euclidean space $\mathbb{R}^n$ is denoted by $\langle \cdot, \cdot \rangle$. For $x \in \mathbb{R}^n$, $\operatorname{sign}(x)$ will denote a vector with components equal to $+1$, $0$, $-1$, depending on whether the corresponding component of $x$ is positive, zero, or negative. The diagonal matrix $D(x)$ corresponding to $\operatorname{sign}(x)$ is defined as
$$D(x) = \partial |x| = \operatorname{diag}(\operatorname{sign}(x)),$$
where $\partial |x|$ represents the generalized Jacobian of $|x|$ based on a subgradient [20, 21].

In what follows we write $D = D(x)$ and denote the residual of (1.2) by
$$r(x) = b - Ax + |x|, \qquad r_k = r(x_k),$$
where $x_k \in \mathbb{R}^n$ is the $k$th iterate and $x_0$ is the initial guess. We consider matrices $A$ such that $A - D$ is a positive definite matrix. We remark that $|x| = D(x)x$.
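For concreteness, the following MATLAB fragment is a minimal sketch (the variable names and the test matrix are ours, for illustration only) of the quantities just defined: the diagonal matrix $D(x)$, the residual $r(x)$, and the remark $|x| = D(x)x$.

```matlab
% Minimal sketch of the notation: D(x) = diag(sign(x)) and
% r(x) = b - A*x + |x| for the AVE  A*x - |x| = b.
n = 5;
A = rand(n) + n*eye(n);     % illustrative test matrix (diagonally dominant)
b = rand(n, 1);
x = randn(n, 1);

D = diag(sign(x));          % generalized Jacobian of |x|
r = b - A*x + abs(x);       % residual of the absolute value equation

disp(norm(abs(x) - D*x));   % the remark |x| = D(x)x: prints 0
```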

#### 2. Residual Iterative Method

Consider an iterative scheme of the type
$$x_{k+1} = x_k + \alpha_k p_k + \beta_k q_k, \tag{2.1}$$
where $p_k, q_k \in \mathbb{R}^n$ are direction vectors and $\alpha_k, \beta_k$ are parameters. These vectors can be chosen in different ways. To derive the residual method for solving absolute value equations, in the first step we choose the subspaces
$$K_1 = \operatorname{span}\{p_k\}, \qquad L_1 = (A - D)K_1. \tag{2.2}$$
For $x_{k+1} = x_k + \alpha_k p_k$, we write the residual in the following form:
$$r_{k+1} = b - (A - D)x_{k+1} = r_k - \alpha_k (A - D)p_k, \tag{2.3}$$
where $D = D(x_k)$ and we have used $|x| = Dx$. From (1.3) and (2.3), we calculate
$$r_k - \alpha_k (A - D)p_k \perp (A - D)p_k; \tag{2.4}$$
that is, we find the approximate solution by the iterative scheme
$$x_{k+1} = x_k + \alpha_k p_k. \tag{2.5}$$
Now, we rewrite (2.4) in the inner product as
$$\langle r_k - \alpha_k (A - D)p_k, (A - D)p_k \rangle = 0. \tag{2.6}$$
From the above discussion, we have
$$\langle r_k, (A - D)p_k \rangle = \alpha_k \langle (A - D)p_k, (A - D)p_k \rangle, \tag{2.7}$$
from which we have
$$\alpha_k = \frac{\langle r_k, (A - D)p_k \rangle}{\langle (A - D)p_k, (A - D)p_k \rangle}. \tag{2.8}$$
The next step is to choose the subspaces
$$K_2 = \operatorname{span}\{q_k\}, \qquad L_2 = (A - D)K_2, \tag{2.9}$$
and to find the approximate solution $x_{k+1} = x_k + \alpha_k p_k + \beta_k q_k$ such that
$$r_{k+1} \perp (A - D)q_k, \tag{2.10}$$
where
$$r_{k+1} = r_k - \alpha_k (A - D)p_k - \beta_k (A - D)q_k. \tag{2.11}$$
Rewriting (2.10) in terms of the inner product, we have
$$\langle r_k - \alpha_k (A - D)p_k - \beta_k (A - D)q_k, (A - D)q_k \rangle = 0. \tag{2.12}$$
Thus, we have
$$\beta_k = \frac{\langle r_k - \alpha_k (A - D)p_k, (A - D)q_k \rangle}{\langle (A - D)q_k, (A - D)q_k \rangle}. \tag{2.13}$$
From (2.8) and (2.13), we obtain the approximate solution
$$x_{k+1} = x_k + \alpha_k p_k + \beta_k q_k. \tag{2.14}$$

We remark that one can choose $p_k$ and $q_k$ in different ways. However, we consider the case $p_k = r_k$ and $q_k = s_k$, where $s_k = r_k - \alpha_k (A - D)r_k$ is the intermediate residual ($s_k$ is given in Algorithm 2.1).

Based upon the above discussion, we suggest and analyze the following iterative method for solving the absolute value equations (1.2) and this is the main motivation of this paper.

Algorithm 2.1. Choose an initial guess $x_0 \in \mathbb{R}^n$.
For $k = 0, 1, 2, \ldots$ until convergence do
Compute $r_k = b - Ax_k + |x_k|$ and $D = D(x_k)$.
If $\|r_k\| = 0$, then stop; else set
$\alpha_k = \langle r_k, (A - D)r_k \rangle / \langle (A - D)r_k, (A - D)r_k \rangle$,
$s_k = r_k - \alpha_k (A - D)r_k$,
$\beta_k = \langle s_k, (A - D)s_k \rangle / \langle (A - D)s_k, (A - D)s_k \rangle$,
$x_{k+1} = x_k + \alpha_k r_k + \beta_k s_k$.
If $\|r_{k+1}\| \leq \epsilon$, then stop.
End if
End for $k$.
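In MATLAB (the language used in Section 3), Algorithm 2.1 can be sketched as follows. This is a minimal implementation of the reconstruction above; the function name `residual_ave`, the stopping rule, and the guard for a vanishing second direction are our choices, not prescriptions from the paper.

```matlab
function [x, k] = residual_ave(A, b, x, tol, maxit)
% Sketch of Algorithm 2.1 for the AVE  A*x - |x| = b.
% x: initial guess; tol: residual tolerance; maxit: iteration limit.
for k = 0:maxit
    r = b - A*x + abs(x);            % residual r_k
    if norm(r) <= tol, return; end   % convergence test
    G = A - diag(sign(x));           % G = A - D(x_k)
    u = G*r;                         % (A - D)p_k with p_k = r_k
    alpha = (r'*u) / (u'*u);         % parameter (2.8)
    s = r - alpha*u;                 % intermediate residual s_k
    v = G*s;                         % (A - D)q_k with q_k = s_k
    if norm(v) > 0
        beta = (s'*v) / (v'*v);      % parameter (2.13)
    else
        beta = 0;                    % s_k = 0: first step already exact
    end
    x = x + alpha*r + beta*s;        % update (2.14)
end
end
```

A typical call is `[x, k] = residual_ave(A, b, zeros(n,1), 1e-8, 1000);`.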

If $\beta_k = 0$, then Algorithm 2.1 reduces to the minimal residual method, applied with the matrix $A - D$ (for $D = 0$, this is the classical method for the linear system $Ax = b$); see [2, 5, 21, 22]. For the convergence analysis of Algorithm 2.1, we need the following result.
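For comparison, here is one step of the classical minimal residual method for a linear system $Ax = b$, the special case $D = 0$, $\beta_k = 0$ of the step above (a sketch with our variable names):

```matlab
% One classical minimal residual (MR) step for A*x = b: the special
% case of Algorithm 2.1 with D = 0 and beta_k = 0.
r = b - A*x;                % linear residual
u = A*r;
alpha = (r'*u) / (u'*u);    % minimizes ||b - A*(x + alpha*r)|| over alpha
x = x + alpha*r;
```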

Theorem 2.2. Let $x_{k+1}$ and $r_{k+1}$ be generated by Algorithm 2.1. If $r_k \neq 0$, then
$$\|r_{k+1}\|^2 = \|r_k\|^2 - \alpha_k^2 \|(A - D)p_k\|^2 - \beta_k^2 \|(A - D)q_k\|^2, \tag{2.15}$$
where $\alpha_k$ and $\beta_k$ are defined by (2.8) and (2.13), and $p_k = r_k$, $q_k = s_k$.

Proof. Using (2.1), we obtain
$$r_{k+1} = r_k - \alpha_k (A - D)p_k - \beta_k (A - D)q_k. \tag{2.16}$$
Now consider
$$\|r_{k+1}\|^2 = \|r_k\|^2 - 2\alpha_k \langle r_k, (A - D)p_k \rangle + \alpha_k^2 \|(A - D)p_k\|^2 - 2\beta_k \langle r_k - \alpha_k (A - D)p_k, (A - D)q_k \rangle + \beta_k^2 \|(A - D)q_k\|^2. \tag{2.17}$$
From (2.8), (2.13), and (2.17), we have the required result (2.15).

Since $\alpha_k^2 \|(A - D)p_k\|^2 \geq 0$ and $\beta_k^2 \|(A - D)q_k\|^2 \geq 0$, from (2.15) we have
$$\|r_{k+1}\|^2 \leq \|r_k\|^2, \tag{2.18}$$
and hence $\|r_{k+1}\| \leq \|r_k\|$. For arbitrary vectors $p_k$ and $q_k$, the parameters $\alpha_k$ and $\beta_k$ defined by (2.8) and (2.13) minimize the norm of the residual along the directions $p_k$ and $q_k$.
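The identity (2.15) is easy to verify numerically. The fragment below (illustrative; the variable names and test problem are ours) performs one step with the frozen matrix $A - D(x_k)$, as in (2.16), and compares the two sides:

```matlab
% Verify (2.15): ||r_new||^2 = ||r||^2 - alpha^2*||G*p||^2 - beta^2*||G*q||^2
% with p = r and q = s, for the linearized residual (2.16).
n = 6; A = rand(n) + n*eye(n); b = rand(n, 1); x = randn(n, 1);
r = b - A*x + abs(x);
G = A - diag(sign(x));
u = G*r;  alpha = (r'*u) / (u'*u);
s = r - alpha*u;
v = G*s;  beta = (s'*v) / (v'*v);
rnew = r - alpha*u - beta*v;          % linearized residual after one step
lhs = norm(rnew)^2;
rhs = norm(r)^2 - alpha^2*norm(u)^2 - beta^2*norm(v)^2;
disp(abs(lhs - rhs));                 % ~0 up to round-off
```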

We now consider the convergence criteria of Algorithm 2.1; this is the motivation of our next result.

Theorem 2.3. If $A - D$ is a positive definite matrix, then the approximate solution $x_{k+1}$ obtained from Algorithm 2.1 converges to the exact solution of the absolute value equations (1.2).

Proof. With $p_k = r_k$, from (2.15) and (2.8) we have
$$\|r_k\|^2 - \|r_{k+1}\|^2 \geq \alpha_k^2 \|(A - D)r_k\|^2 = \frac{\langle r_k, (A - D)r_k \rangle^2}{\|(A - D)r_k\|^2} \geq \frac{c^2}{\|A - D\|^2} \|r_k\|^2,$$
where $c > 0$ is the smallest eigenvalue of the symmetric part of $A - D$. This means that the sequence $\{\|r_k\|^2\}$ is decreasing and bounded below, and is therefore convergent, which implies that the left-hand side tends to zero. Hence $\|r_k\|$ tends to zero, and the proof is complete.

#### 3. Numerical Results

To illustrate the implementation and efficiency of the proposed method, we consider the following examples. All the experiments are performed on an Intel(R) Core(TM) 2 × 2.1 GHz machine with 1 GB RAM, and the codes are written in MATLAB 7.

Example 3.1. Consider a second-order two-point boundary value problem whose finite difference discretization yields a system of absolute value equations of the type (1.2), with a tridiagonal system matrix $A$ of size $n \times n$ and a known exact solution. In Figure 1, we compare the residual method with the methods of Noor et al. [14, 15]. The residual iterative method, the minimization method [14], and the iterative method [15] solve this problem in 51, 142, and 431 iterations, respectively. For the next two examples, we interchange the direction vectors $p_k$ and $q_k$ with each other, as Algorithm 2.1 converges for nonzero direction vectors.
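The displayed differential equation and system matrix did not survive in our source, so the fragment below is only a representative construction: a central difference discretization of a boundary value problem of the form $-x''(t) - |x(t)| = f(t)$, $x(0) = x_0$, $x(1) = x_1$, which yields an AVE of the type (1.2) with $A - D$ positive definite. The right-hand side and boundary values here are placeholders, not the paper's data.

```matlab
% Sketch: finite differences for -x''(t) - |x(t)| = f(t) on [0,1],
% x(0) = x0, x(1) = x1, giving an AVE  A*x - |x| = b.
% f, x0, x1 are placeholders; the paper's exact problem is not shown.
n  = 100;                                     % interior grid points
h  = 1/(n + 1);                               % mesh size
f  = ones(n, 1);                              % placeholder f(t)
x0 = 0;  x1 = 0;                              % placeholder boundary values

e = ones(n, 1);
A = spdiags([-e 2*e -e], -1:1, n, n) / h^2;   % second-difference matrix
b = f;
b(1)   = b(1)   + x0/h^2;                     % fold boundary values into b
b(end) = b(end) + x1/h^2;

% Solve with the sketch of Algorithm 2.1 given in Section 2:
x = residual_ave(A, b, zeros(n, 1), 1e-8, 10000);
```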

Figure 1: Comparison of the residual iterative method with the methods of [14, 15] for Example 3.1.

Example 3.2 (see [17]). We first chose a random $A$ from a uniform distribution on $[-10, 10]$, then we chose a random $x$ from a uniform distribution on $[-1, 1]$. Finally, we computed $b = Ax - |x|$. We ensured that the singular values of each $A$ exceeded 1 by actually computing the minimum singular value and rescaling $A$, dividing it by the minimum singular value multiplied by a random number in the interval $[0, 1]$. The computational results are given in Table 1.
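The generation procedure of Example 3.2 can be sketched in MATLAB as follows (the function name is ours; the interval $[-10, 10]$ follows [17]). Note that dividing $A$ by $\sigma_{\min}(A)$ times a number in $(0, 1)$ makes the minimum singular value exceed 1, as required above.

```matlab
function [A, b, xtrue] = random_ave(n)
% Sketch: random AVE test problem in the style of Example 3.2 / [17].
A = -10 + 20*rand(n);        % entries uniform on [-10, 10]
smin = min(svd(A));          % minimum singular value of A
A = A / (smin * rand);       % now min(svd(A)) = 1/rand > 1
xtrue = -1 + 2*rand(n, 1);   % solution uniform on [-1, 1]
b = A*xtrue - abs(xtrue);    % right-hand side of  A*x - |x| = b
end
```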

Table 1: Comparison of the generalized Newton method (GNM) [17] and the residual iterative method (RIM).

In Table 1, GNM and RIM denote the generalized Newton method [17] and the residual iterative method, respectively. From Table 1, we conclude that the residual method for solving the absolute value equations (1.2) is more effective.

Example 3.3 (see [23]). Consider a random matrix $A$ and a random vector $b$ generated in MATLAB, with a random initial guess. The comparison between the residual iterative method and the Yong method [23] is presented in Table 2.
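The paper's MATLAB snippet for generating $A$ and $b$ did not survive in our source; the harness below only illustrates how a Table 2-style measurement (CPU time and iteration count) could be taken, reusing the placeholder generator and the solver sketched earlier:

```matlab
% Illustrative Table 2-style measurement: CPU time ("TOC") and iteration
% count of the residual iterative method on a random problem.
n = 1000;
[A, b] = random_ave(n);                          % placeholder generator
x0 = rand(n, 1);                                 % random initial guess
tic;
[x, iters] = residual_ave(A, b, x0, 1e-6, 5000);
toc                                              % CPU time, cf. TOC
fprintf('iterations: %d, residual: %.2e\n', ...
        iters, norm(b - A*x + abs(x)));
```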

Table 2: Comparison of the residual iterative method and the Yong method [23].

In Table 2, TOC denotes the time taken by the CPU. Note that for large problem sizes the residual iterative method converges faster than the Yong method [23].

#### 4. Conclusions

In this paper, we have used the projection technique to suggest an iterative method for solving the absolute value equations. The convergence analysis of the proposed method is also discussed. Some examples are given to illustrate the efficiency and implementation of the new iterative method. The extension of the proposed iterative method to the general absolute value equation of the form $Ax + B\left|x\right| = b$ for suitable matrices $A$ and $B$ is an open problem. We have remarked that the variational inequalities are also equivalent to the absolute value equations. This equivalent formulation can be used to suggest and analyze some iterative methods for solving the absolute value equations. It is an interesting and challenging problem to use the theory of variational inequalities for solving the absolute value equations.

#### Acknowledgments

This research is supported by the Visiting Professor Program of King Saud University, Riyadh, Saudi Arabia, and Research Grant no. KSU.VPP.108. The authors are also grateful to Dr. S. M. Junaid Zaidi, Rector, COMSATS Institute of Information Technology, Pakistan, for providing excellent research facilities.

#### References

1. C. C. Paige and M. A. Saunders, "Solutions of sparse indefinite systems of linear equations," SIAM Journal on Numerical Analysis, vol. 12, no. 4, pp. 617–629, 1975.
2. Y. Saad and M. H. Schultz, "GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems," Tech. Rep. 254, Yale University, 1983.
3. O. Axelsson, "Conjugate gradient type methods for unsymmetric and inconsistent systems of linear equations," Linear Algebra and Its Applications, vol. 29, pp. 1–16, 1980.
4. K. C. Jea and D. M. Young, "Generalized conjugate-gradient acceleration of nonsymmetrizable iterative methods," Linear Algebra and Its Applications, vol. 34, pp. 159–194, 1980.
5. Y. Saad, "Krylov subspace methods for solving large unsymmetric linear systems," Mathematics of Computation, vol. 37, no. 155, pp. 105–126, 1981.
6. O. L. Mangasarian and R. R. Meyer, "Absolute value equations," Linear Algebra and Its Applications, vol. 419, no. 2-3, pp. 359–367, 2006.
7. O. L. Mangasarian, "Absolute value programming," Computational Optimization and Applications, vol. 36, no. 1, pp. 43–53, 2007.
8. O. L. Mangasarian, "Absolute value equation solution via concave minimization," Optimization Letters, vol. 1, no. 1, pp. 3–8, 2007.
9. M. A. Noor, "General variational inequalities," Applied Mathematics Letters, vol. 1, no. 2, pp. 119–122, 1988.
10. M. A. Noor, "Some developments in general variational inequalities," Applied Mathematics and Computation, vol. 152, no. 1, pp. 199–277, 2004.
11. M. A. Noor, "Extended general variational inequalities," Applied Mathematics Letters, vol. 22, no. 2, pp. 182–186, 2009.
12. M. A. Noor, K. I. Noor, and T. M. Rassias, "Some aspects of variational inequalities," Journal of Computational and Applied Mathematics, vol. 47, no. 3, pp. 285–312, 1993.
13. M. A. Noor, K. I. Noor, and E. Al-Said, "Iterative methods for solving nonconvex equilibrium problems," Applied Mathematics & Information Sciences, vol. 6, no. 1, pp. 65–69, 2012.
14. M. A. Noor, J. Iqbal, S. Khattri, and E. Al-Said, "A new iterative method for solving absolute value equations," International Journal of Physical Sciences, vol. 6, pp. 1793–1797, 2011.
15. M. A. Noor, J. Iqbal, K. I. Noor, and E. Al-Said, "On an iterative method for solving absolute value equations," Optimization Letters. In press.
16. Y.-F. Jing and T.-Z. Huang, "On a new iterative method for solving linear systems and comparison results," Journal of Computational and Applied Mathematics, vol. 220, no. 1-2, pp. 74–84, 2008.
17. O. L. Mangasarian, "A generalized Newton method for absolute value equations," Optimization Letters, vol. 3, no. 1, pp. 101–108, 2009.
18. O. L. Mangasarian, "Solution of symmetric linear complementarity problems by iterative methods," Journal of Optimization Theory and Applications, vol. 22, no. 4, pp. 465–485, 1977.
19. O. L. Mangasarian, "The linear complementarity problem as a separable bilinear program," Journal of Global Optimization, vol. 6, no. 2, pp. 153–161, 1995.
20. R. T. Rockafellar, "New applications of duality in convex programming," in Proceedings of the 4th Conference on Probability, Brasov, Romania, 1971.
21. J. Rohn, "A theorem of the alternatives for the equation $Ax+B\left|x\right|=b$," Linear and Multilinear Algebra, vol. 52, no. 6, pp. 421–426, 2004.
22. Y. Saad, Iterative Methods for Sparse Linear Systems, PWS Publishing Company, Boston, Mass, USA, 1996.
23. L. Yong, "Particle swarm optimization for absolute value equations," Journal of Computational Information Systems, vol. 6, no. 7, pp. 2359–2366, 2010.