Journal of Applied Mathematics

Volume 2013 (2013), Article ID 850986, 7 pages

http://dx.doi.org/10.1155/2013/850986

## Modified Preconditioned GAOR Methods for Systems of Linear Equations

School of Mathematics and Statistics, Anyang Normal University, Anyang 455000, China

Received 30 January 2013; Accepted 13 May 2013

Academic Editor: Giuseppe Marino

Copyright © 2013 Xue-Feng Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Three kinds of preconditioners are proposed to accelerate the generalized AOR (GAOR) method for the linear system from the generalized least squares problem. The convergence and comparison results are obtained. The comparison results show that the convergence rate of the preconditioned generalized AOR (PGAOR) methods is better than that of the original GAOR methods. Finally, some numerical results are reported to confirm the validity of the proposed methods.

#### 1. Introduction

Consider the generalized least squares problem where , , , and the variance-covariance matrix is a known symmetric positive-definite matrix. This problem arises in many scientific applications; one such application is parameter estimation in mathematical models [1, 2].

To solve this problem, one solves a linear system of the equivalent form as follows: where with , , and . Without loss of generality, we assume that , where is the identity matrix, and and are the strictly lower and strictly upper triangular matrices obtained from , respectively. It follows that
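As an illustration (not part of the original derivation), the unit-diagonal splitting described above can be sketched in Python; the 3 × 3 matrix below is a hypothetical example chosen only to satisfy the stated assumptions:

```python
import numpy as np

# Hypothetical coefficient matrix with unit diagonal, matching the
# assumption that the diagonal of the coefficient matrix is the identity.
H = np.array([[ 1.0, -0.2, -0.1],
              [-0.3,  1.0, -0.2],
              [-0.1, -0.3,  1.0]])

L = -np.tril(H, -1)  # strictly lower triangular part of the splitting
U = -np.triu(H, 1)   # strictly upper triangular part of the splitting
# By construction, H = I - L - U.
```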

To obtain approximate solutions of the linear system (2), many iterative methods, such as Jacobi, Gauss-Seidel (GS), successive over relaxation (SOR), and accelerated over relaxation (AOR), have been studied [3–8]. These methods give very good results but suffer a serious drawback: they require computing the inverses of and in (3). To avoid this drawback, Darvishi and Hessari [9] studied the convergence of the generalized AOR (GAOR) method when the coefficient matrix is a diagonally dominant matrix. The GAOR method [10, 11] can be defined as follows: where Here, and are real parameters with . The iteration matrix is written briefly as
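Since the displayed formulas are not reproduced here, the following is only a sketch of the classical AOR iteration matrix of Hadjidimos [4], to which the GAOR iteration matrix is structurally analogous; it assumes the unit-diagonal splitting H = I − L − U discussed above, and the function names are ours:

```python
import numpy as np

def gaor_iteration_matrix(H, omega, r):
    """AOR-type iteration matrix (I - rL)^{-1}[(1-w)I + (w-r)L + wU]
    for a coefficient matrix H with unit diagonal, split as H = I - L - U."""
    n = H.shape[0]
    I = np.eye(n)
    L = -np.tril(H, -1)  # strictly lower triangular part
    U = -np.triu(H, 1)   # strictly upper triangular part
    return np.linalg.solve(I - r * L,
                           (1 - omega) * I + (omega - r) * L + omega * U)

def spectral_radius(M):
    """Spectral radius: largest eigenvalue modulus."""
    return max(abs(np.linalg.eigvals(M)))
```

For omega = r = 1 this reduces to a Gauss-Seidel-type matrix, and for L = U = 0 the spectral radius is |1 − omega|, which is a quick sanity check on the implementation.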

To improve the convergence rate of the GAOR iterative method, a preconditioner can be applied. We transform the original linear system (2) into the preconditioned linear system where is the preconditioner. can be expressed as The PGAOR method for solving the preconditioned linear system (8) is then defined by where
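The paper's own three preconditioners appear in Section 3 (and are not reproduced in this extraction). As a generic illustration of how a preconditioner of the form P = I + S can shrink the spectral radius, the sketch below uses the classical first-column elimination preconditioner of Milaszewicz [6] in the Gauss-Seidel special case (omega = r = 1); the 2 × 2 matrix is a hypothetical example:

```python
import numpy as np

def gs_iteration_matrix(A):
    """Gauss-Seidel iteration matrix (D - L)^{-1} U, i.e. GAOR with omega = r = 1."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    return np.linalg.solve(D - L, U)

def milaszewicz_preconditioner(A):
    """P = I + S, where S eliminates the first column below the diagonal [6]."""
    n = A.shape[0]
    S = np.zeros((n, n))
    S[1:, 0] = -A[1:, 0]
    return np.eye(n) + S

H = np.array([[ 1.0, -0.5],
              [-0.5,  1.0]])
rho = max(abs(np.linalg.eigvals(gs_iteration_matrix(H))))
rho_pre = max(abs(np.linalg.eigvals(
    gs_iteration_matrix(milaszewicz_preconditioner(H) @ H))))
```

On this example the spectral radius drops from 0.25 to 0, mirroring the kind of comparison result the paper establishes for its own preconditioners.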

In this paper, we propose three new types of preconditioners and study the convergence rates of the resulting preconditioned GAOR methods for solving the linear system (2). The paper is organized as follows. In Section 2, some notation, definitions, and preliminary results are presented. In Section 3, the three new preconditioners are proposed, and the convergence rates of the preconditioned methods are compared with those of the original GAOR methods. In Section 4, numerical examples are provided to confirm the theoretical results.

#### 2. Preliminaries

For a vector , denotes that all components of are nonnegative (positive). For two vectors , means that . These definitions carry over immediately to matrices. A matrix is said to be irreducible if the directed graph of is strongly connected. denotes the spectral radius of . Some useful results are collected below.
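Irreducibility can be checked computationally via the strong-connectivity characterization above; a minimal pure-NumPy sketch (the function name is ours) uses the fact that a directed graph is strongly connected iff node 0 reaches every node in the graph and in its reverse:

```python
import numpy as np

def is_irreducible(A):
    """A square matrix is irreducible iff the directed graph of its
    nonzero off-diagonal pattern is strongly connected."""
    n = A.shape[0]

    def reachable_from_0(M):
        # Depth-first search from node 0 over nonzero off-diagonal entries.
        seen = {0}
        stack = [0]
        while stack:
            i = stack.pop()
            for j in range(n):
                if j not in seen and i != j and M[i, j] != 0:
                    seen.add(j)
                    stack.append(j)
        return len(seen) == n

    # Strongly connected iff 0 reaches all nodes in G(A) and in G(A^T).
    return reachable_from_0(A) and reachable_from_0(A.T)
```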

Lemma 1 (see [7]). *Let be an irreducible matrix. Then: (a) has a positive eigenvalue equal to its spectral radius; (b) has an eigenvector corresponding to ; (c) is a simple eigenvalue of .*

Lemma 2 (see [12]). *Let be a nonnegative matrix. Then: (i) if for some nonnegative vector , , then ; (ii) if for some positive vector , then . Moreover, if is irreducible and if for some nonnegative vector , then , and is a positive vector.*
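As a small numerical illustration of the Lemma 2 type of bound (a hypothetical example, not from the paper): if a nonnegative matrix satisfies A x ≤ α x componentwise for some positive vector x, then ρ(A) ≤ α, and here the bound is attained:

```python
import numpy as np

# Nonnegative irreducible matrix A and positive vector x with A @ x <= alpha * x.
A = np.array([[0.0, 0.5],
              [0.5, 0.0]])
x = np.array([1.0, 1.0])
alpha = 0.5
assert np.all(A @ x <= alpha * x + 1e-15)

# The lemma then guarantees rho(A) <= alpha; for this example equality holds.
rho = max(abs(np.linalg.eigvals(A)))
```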

#### 3. Preconditioned GAOR Methods

To solve the linear system (2) with the coefficient matrix in (3), we consider the following preconditioners: where The preconditioned coefficient matrix can be expressed as where Based on the discussion above, can be split as Similarly, The preconditioned GAOR methods for solving are defined by where with where For , we have

Next, we study the convergence conditions of the PGAOR methods. For simplicity, and without loss of generality, we assume that Then, we have the following theorem.

Theorem 3. *Let and be the iteration matrices of the GAOR method and the PGAOR method corresponding to problem (2), which are defined by (7) and (22), respectively. If the matrix in (3) is irreducible, then it holds that .*

*Proof. *By some simple calculations on (7), one can get
Since is irreducible, it follows from the above assumptions that is nonnegative and irreducible. Similarly, one can prove that is nonnegative and irreducible. By Lemma 1, there exists a positive vector such that
where .

One can easily verify that
That is,
With the same vector , it holds that
Using (22), (26), and (28), we can obtain
Meanwhile, we have
So far, we have obtained
In view of the above assumptions, we have that
If , then
From Lemma 2, we get
Similarly, if , then
So we have
If , then we would have but , which contradicts the nonsingularity of the matrix guaranteed by the assumptions. This completes the proof of the theorem.

Theorem 4. *Let and be the iteration matrices of the GAOR method and the PGAOR method corresponding to problem (2), which are defined by (7) and (22), respectively. If the matrix in (3) is an irreducible matrix satisfying then it holds that .*

*Proof. *This theorem can be proved by arguments similar to those in the proof of Theorem 3.

Similarly, we have the following theorem.

Theorem 5. *Let and be the iteration matrices of the GAOR method and the PGAOR method corresponding to problem (2), which are defined by (7) and (22), respectively. If the matrix in (3) is an irreducible matrix satisfying then it holds that .*

#### 4. Numerical Examples

In this section, we give numerical examples to illustrate the conclusions drawn above. The numerical experiments were carried out in MATLAB 7.0.

*Example 1. *Consider the following Laplace equation:
Discretizing on a uniform square domain with the five-point finite difference method and a uniform mesh size, we obtain the following linear system:
where The coefficient matrix is split as
where
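The five-point discretization of the Laplace operator can be assembled with Kronecker products; the sketch below (our own helper, assuming an m × m interior grid with mesh size h) produces the standard matrix with 4 on the diagonal and −1 for each grid neighbor:

```python
import numpy as np

def laplacian_2d(m, h=1.0):
    """Five-point finite-difference matrix for the Laplace operator
    on an m x m interior grid with uniform mesh size h."""
    I = np.eye(m)
    # 1-D second-difference matrix: 2 on the diagonal, -1 off-diagonal.
    T = 2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    # 2-D operator via Kronecker sum of the two 1-D operators.
    return (np.kron(I, T) + np.kron(T, I)) / h**2
```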

Table 1 reports the spectral radii of the GAOR methods and the PGAOR methods. It shows that the spectral radii of the PGAOR methods are smaller than those of the GAOR methods, so the three proposed preconditioners accelerate the convergence rate of the GAOR method for the linear system (2). The results in Table 1 are in accordance with Theorems 3–5.

*Example 2. *The coefficient matrix in (3) is given by
where , , and , , with

Obviously, is irreducible. Table 2 shows the spectral radii of the corresponding iteration matrices with and .

Similarly, the results in Table 2 are consistent with Theorems 3–5.

#### Acknowledgments

The authors would like to thank the anonymous referees for their helpful suggestions, which greatly improved the paper. This research was supported by the NSFC Tianyuan Mathematics Youth Fund (no. 11026040), the Science and Technology Development Plan of Henan Province (no. 122300410316), and the Natural Science Foundation of Henan Province (no. 13A110022).

#### References

1. J. Y. Yuan, "Numerical methods for generalized least squares problem," *Journal of Computational and Applied Mathematics*, vol. 66, pp. 571–584, 1996.
2. J.-Y. Yuan and X.-Q. Jin, "Convergence of the generalized AOR method," *Applied Mathematics and Computation*, vol. 99, no. 1, pp. 35–46, 1999.
3. A. D. Gunawardena, S. K. Jain, and L. Snyder, "Modified iterative methods for consistent linear systems," *Linear Algebra and Its Applications*, vol. 154/156, pp. 123–143, 1991.
4. A. Hadjidimos, "Accelerated overrelaxation method," *Mathematics of Computation*, vol. 32, no. 141, pp. 149–157, 1978.
5. Y. T. Li, C. X. Li, and S. L. Wu, "Improvement of preconditioned AOR iterative methods for L-matrices," *Journal of Computational and Applied Mathematics*, vol. 206, pp. 656–665, 2007.
6. J. P. Milaszewicz, "Improving Jacobi and Gauss-Seidel iterations," *Linear Algebra and Its Applications*, vol. 93, pp. 161–170, 1987.
7. R. S. Varga, *Matrix Iterative Analysis*, vol. 27, Springer, Berlin, Germany, 2000.
8. D. M. Young, *Iterative Solution of Large Linear Systems*, Academic Press, New York, NY, USA, 1971.
9. M. T. Darvishi and P. Hessari, "On convergence of the generalized AOR method for linear systems with diagonally dominant coefficient matrices," *Applied Mathematics and Computation*, vol. 176, no. 1, pp. 128–133, 2006.
10. X. Zhou, Y. Song, L. Wang, and Q. Liu, "Preconditioned GAOR methods for solving weighted linear least squares problems," *Journal of Computational and Applied Mathematics*, vol. 224, no. 1, pp. 242–249, 2009.
11. M. T. Darvishi, P. Hessari, and B.-C. Shin, "Preconditioned modified AOR method for systems of linear equations," *International Journal for Numerical Methods in Biomedical Engineering*, vol. 27, no. 5, pp. 758–769, 2011.
12. A. Berman and R. J. Plemmons, *Nonnegative Matrices in the Mathematical Sciences*, vol. 9, SIAM, Philadelphia, Pa, USA, 1994.