Mathematical Problems in Engineering
Volume 2013, Article ID 956143, 9 pages
http://dx.doi.org/10.1155/2013/956143
Research Article

Convergence of the GAOR Method for One Subclass of H-Matrix

Guangbin Wang and Ting Wang

Department of Mathematics, Qingdao University of Science and Technology, Qingdao 266061, China

Received 5 January 2013; Revised 12 March 2013; Accepted 14 March 2013

Academic Editor: Gerhard-Wilhelm Weber

Copyright © 2013 Guangbin Wang and Ting Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We discuss the convergence of the GAOR method for solving the linear system that arises from the weighted linear least squares problem. We present a convergence theorem for the GAOR method when the coefficient matrix is a strictly doubly α-diagonally dominant matrix, which is a nonsingular H-matrix. Finally, we use four numerical examples to show that our results improve on previous ones.

1. Introduction

Consider the weighted linear least squares problem

min_x (Ax − b)^T W^(−1) (Ax − b), (1)

where W is the variance-covariance matrix. The problem has many scientific applications; a typical source is parameter estimation in mathematical modeling.

In order to solve it, we have to solve the following linear system:

Hy = f, (2)

where H is invertible; for the statistical background of the variance-covariance matrix, see [1]. A generalized SOR (GSOR) method to solve linear system (2) was proposed in [2]; afterwards, a generalized AOR (GAOR) method to solve linear system (2) was established in [3].
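To make the scheme concrete, the following sketch implements a GAOR-type iteration. Since the paper's displayed formulas did not survive extraction, the 2×2 block coefficient matrix H = [[I, B], [C, I]], the parameter names r and ω, and the splitting used below are assumptions reconstructed from the cited literature [3–6], not the paper's own display.

```python
import numpy as np

def gaor_iteration_matrix(B, C, r, omega):
    """Iteration matrix of a GAOR-type method for the block system
    H = [[I, B], [C, I]] (this block form is an assumption; the paper's
    displayed formula did not survive extraction).

    For the splitting H = I - L - U with L = [[0, 0], [-C, 0]] and
    U = [[0, -B], [0, 0]], the AOR-type iteration matrix is
        T = (I - r L)^{-1} [(1 - omega) I + (omega - r) L + omega U],
    which multiplies out to the explicit blocks assembled below.
    """
    m, n = B.shape
    return np.block([
        [(1 - omega) * np.eye(m), -omega * B],
        [omega * (r - 1) * C, (1 - omega) * np.eye(n) + r * omega * C @ B],
    ])

def gaor_solve(B, C, g, r, omega, tol=1e-10, maxit=1000):
    """Run the stationary iteration y <- T y + omega (I - r L)^{-1} g,
    whose fixed point solves H y = g."""
    m, n = B.shape
    L = np.block([[np.zeros((m, m)), np.zeros((m, n))],
                  [-C, np.zeros((n, n))]])
    T = gaor_iteration_matrix(B, C, r, omega)
    c = omega * np.linalg.solve(np.eye(m + n) - r * L, g)
    y = np.zeros(m + n)
    for k in range(1, maxit + 1):
        y_new = T @ y + c
        if np.linalg.norm(y_new - y, np.inf) < tol:
            return y_new, k
        y = y_new
    return y, maxit
```

As in the classical AOR family, taking r = ω reduces the scheme to a generalized SOR step, and r = ω = 1 to block Gauss–Seidel.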

In [4–6], the authors studied the convergence of the GAOR method for solving the linear system. In [4, 5], they studied the convergence of the GAOR method for diagonally dominant coefficient matrices and gave regions of convergence. In [6], they studied the convergence of the GAOR method for strictly doubly diagonally dominant coefficient matrices and gave regions of convergence. In [7], the authors studied the preconditioned generalized AOR method for solving linear systems; they proposed two kinds of preconditioning, each containing three preconditioners, and showed that the convergence rate of the preconditioned generalized AOR methods is better than that of the original method whenever the original method is convergent. In [8], the authors presented three kinds of preconditioners for the preconditioned modified AOR method for solving systems of linear equations and showed that its convergence rate is better than that of the original method whenever the original method is convergent.

Sometimes the coefficient matrices of linear systems are neither strictly diagonally dominant nor strictly doubly diagonally dominant. In this paper, we discuss the convergence of the GAOR method when the coefficient matrices are strictly doubly α-diagonally dominant.

Throughout this paper, we denote the i-th row sums of the moduli of the entries of and by and , and the i-th column sums of the moduli of the entries of and by and , respectively. We denote the spectral radius of the iteration matrix by ρ:

Definition 1 (see [9]). Let A = (a_ij) be a complex n×n matrix, and let R_i(A) and C_i(A) denote the sums of the moduli of the off-diagonal entries in row i and column i of A, respectively.
We call A a strictly diagonally dominant matrix if |a_ii| > R_i(A) for all i.
We call A a strictly doubly diagonally dominant matrix if |a_ii||a_jj| > R_i(A)R_j(A) for all i ≠ j.
We call A a strictly doubly α-diagonally dominant matrix if |a_ii||a_jj| > αR_i(A)R_j(A) + (1 − α)C_i(A)C_j(A) for all i ≠ j, where α ∈ [0, 1].

Obviously, a strictly doubly diagonally dominant matrix is a strictly doubly α-diagonally dominant matrix (take α = 1), but not vice versa.

For example, a matrix with one large row sum but small column sums can be strictly doubly α-diagonally dominant for a suitable α without being strictly doubly diagonally dominant.

Lemma 2 (see [9]). If A is a strictly doubly α-diagonally dominant matrix, then A is a nonsingular H-matrix.
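The dominance classes of Definition 1 and the H-matrix property asserted in Lemma 2 can be tested numerically. The inequalities encoded below (strict: |a_ii| > R_i(A); doubly: |a_ii||a_jj| > R_i(A)R_j(A); doubly α: |a_ii||a_jj| > αR_i(A)R_j(A) + (1 − α)C_i(A)C_j(A)) are assumed forms based on [9], since the paper's displayed inequalities were lost in extraction; the H-matrix test uses the standard characterization via the comparison matrix.

```python
import numpy as np

def off_row_sums(A):
    """R_i(A): sums of the moduli of the off-diagonal entries, by row."""
    return np.abs(A).sum(axis=1) - np.abs(np.diag(A))

def off_col_sums(A):
    """C_i(A): sums of the moduli of the off-diagonal entries, by column."""
    return np.abs(A).sum(axis=0) - np.abs(np.diag(A))

def is_strictly_dd(A):
    """Strict diagonal dominance: |a_ii| > R_i(A) for every i."""
    return bool(np.all(np.abs(np.diag(A)) > off_row_sums(A)))

def is_strictly_doubly_dd(A):
    """Strict double diagonal dominance: |a_ii||a_jj| > R_i R_j for i != j."""
    d, R = np.abs(np.diag(A)), off_row_sums(A)
    n = len(d)
    return all(d[i] * d[j] > R[i] * R[j]
               for i in range(n) for j in range(n) if i != j)

def is_strictly_doubly_alpha_dd(A, alpha):
    """Assumed form of strict double alpha-dominance (based on [9]):
    |a_ii||a_jj| > alpha R_i R_j + (1 - alpha) C_i C_j for i != j."""
    d, R, C = np.abs(np.diag(A)), off_row_sums(A), off_col_sums(A)
    n = len(d)
    return all(d[i] * d[j] > alpha * R[i] * R[j] + (1 - alpha) * C[i] * C[j]
               for i in range(n) for j in range(n) if i != j)

def is_nonsingular_h_matrix(A):
    """A is a nonsingular H-matrix iff its comparison matrix (|a_ii| on the
    diagonal, -|a_ij| off it) is a nonsingular M-matrix, i.e. invertible
    with entrywise nonnegative inverse."""
    M = -np.abs(A).astype(float)
    np.fill_diagonal(M, np.abs(np.diag(A)))
    try:
        Minv = np.linalg.inv(M)
    except np.linalg.LinAlgError:
        return False
    return bool(np.all(Minv >= -1e-12))
```

For instance, the (hypothetical) matrix A = [[2, 3, 0], [0.1, 2, 0.1], [0, 3, 2]] fails the doubly-dominant test (rows 1 and 3 have large row sums) yet passes the α-dominant test for α = 0.4, and its comparison matrix is a nonsingular M-matrix, consistent with Lemma 2.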

In this paper, we study the convergence of the GAOR method for solving the linear system with a strictly doubly α-diagonally dominant coefficient matrix. Firstly, we obtain an upper bound for the spectral radius of the iteration matrix of the GAOR method. Moreover, we present a convergence theorem for the GAOR method. Finally, we present four numerical examples.

2. Upper Bound of the Spectral Radius of the Iteration Matrix

In this section, we obtain an upper bound for the spectral radius of the iteration matrix of the GAOR method.

Theorem 3. Let ; then the spectral radius of the GAOR iteration matrix satisfies the following inequality:

Proof. Let λ be an arbitrary eigenvalue of the iteration matrix; then , that is, . If , then . From Lemma 2, we know that the matrix is nonsingular, that is, ; then λ is not an eigenvalue of the iteration matrix.

However, λ is an eigenvalue of the iteration matrix, so there exists at least one pair of indices such that . That is, . It is easy to see that the discriminant of the quadratic satisfies ; then the solution of (18) satisfies . So the spectral radius satisfies the following inequality:

3. Convergence of the GAOR Method

In this section, we investigate the convergence of the GAOR method for solving linear system (2). We assume that the coefficient matrix is strictly doubly α-diagonally dominant and obtain regions of convergence of the GAOR method.

Theorem 4. If , then the GAOR method converges when the iteration parameters satisfy either (I) , or (II) , where

Proof. Because , the GAOR method converges if . That is, .

Firstly, when , we have . That is, . Then we have the following cases.
(1) When , then , . From , we have .
(2) When , or , , then . From , we have .
(3) When , it is easy to prove that the discriminant of the quadratic (29) satisfies , and then .
So, when , we have .

Secondly, when , we have . That is, . Then we have the following cases.
(1) When , then , . From inequality (37), we have . From , we have .
(2) When , or , , then  or . From inequality (37), we have , so .
(3) When , then , , , . It is easy to see that the discriminant of the quadratic (37) satisfies . Then we have . So, when , we have .
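The regions of convergence addressed by Theorem 4 can also be mapped numerically: for a fixed coefficient matrix, one scans a grid of parameter pairs and records where the spectral radius of the iteration matrix is below 1. The block form H = [[I, B], [C, I]], the parameter names r and ω, and the iteration matrix used below are assumptions reconstructed from the cited literature, so this sketch illustrates how such regions can be obtained, not the theorem's exact bounds.

```python
import numpy as np

def gaor_matrix(B, C, r, omega):
    # Assumed GAOR-type iteration matrix for H = [[I, B], [C, I]]
    # (reconstructed from the cited literature, not the paper's display).
    m, n = B.shape
    return np.block([
        [(1 - omega) * np.eye(m), -omega * B],
        [omega * (r - 1) * C, (1 - omega) * np.eye(n) + r * omega * C @ B],
    ])

def convergence_region(B, C, rs, omegas):
    """Boolean grid over the parameter plane: True where the spectral
    radius of the iteration matrix is below 1 (i.e. the method converges)."""
    grid = np.zeros((len(rs), len(omegas)), dtype=bool)
    for i, r in enumerate(rs):
        for j, w in enumerate(omegas):
            rho = max(abs(np.linalg.eigvals(gaor_matrix(B, C, r, w))))
            grid[i, j] = rho < 1.0
    return grid
```

A contour of such a grid gives the true convergence region of a concrete system, against which the sufficient regions of Theorem 4 and of [4–6] can be compared.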

4. Examples

In this section, we give four numerical examples to show that our results are better than previous ones.

Example 1. Let where

It is easy to verify that and So

By Theorem 4, we obtain the following regions of convergence: and , or and .

In addition, .

By Theorem 2 of paper [6], we obtain the following regions of convergence: and , or and .

From Figure 1, we know that the regions of convergence obtained by Theorem 4 in this paper (bounded by blue lines) are larger than those obtained by Theorem 2 of paper [6] (bounded by green lines). From Example 1 of paper [6], we know that the regions of convergence obtained by Theorem 2 of paper [6] are larger than those of papers [4] and [5]. So, the regions of convergence obtained by Theorem 4 in this paper are larger than those of papers [4–6].

Figure 1

Example 2. Let where

It is easy to verify that and So,

By Theorem 4, we obtain the following regions of convergence: and , or and . But , so we cannot use Theorem 2 of paper [6], and we cannot use the results of papers [4, 5].

Example 3. Let where

Obviously, and

By Theorem 4, we obtain the following regions of convergence: and , or and .

Consider the following linear system: where which has the exact solution

We choose the initial solution . If , we need 28 iterations to achieve four decimal digits of accuracy in the solution components; if , , we need 18 iterations; and if , we need 22 iterations.
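An experiment in the spirit of Example 3 can be reproduced with a small stand-in system. The matrices B and C below are hypothetical (the paper's matrices did not survive extraction), and the block form H = [[I, B], [C, I]] with its GAOR-type iteration matrix is a reconstruction from the cited literature. The point illustrated is the same as in the example: varying the parameters changes the iteration count needed for a given accuracy.

```python
import numpy as np

def gaor_iters(B, C, g, r, omega, tol=1e-4, maxit=500):
    """Number of GAOR-type iterations (assumed block form H = [[I, B], [C, I]])
    needed to reach the given tolerance, starting from the zero vector."""
    m, n = B.shape
    T = np.block([
        [(1 - omega) * np.eye(m), -omega * B],
        [omega * (r - 1) * C, (1 - omega) * np.eye(n) + r * omega * C @ B],
    ])
    L = np.block([[np.zeros((m, m)), np.zeros((m, n))],
                  [-C, np.zeros((n, n))]])
    c = omega * np.linalg.solve(np.eye(m + n) - r * L, g)
    y = np.zeros(m + n)
    for k in range(1, maxit + 1):
        y_new = T @ y + c
        if np.linalg.norm(y_new - y, np.inf) < tol:
            return k
        y = y_new
    return maxit

# Hypothetical stand-in data (the paper's matrices did not survive extraction).
B = np.array([[0.3, 0.1], [0.1, 0.3]])
C = np.array([[0.2, 0.1], [0.1, 0.2]])
g = np.ones(4)
counts = {(r, w): gaor_iters(B, C, g, r, w)
          for r, w in [(0.5, 0.8), (1.0, 1.0), (1.2, 0.9)]}
```

Each parameter pair inside the convergence region yields a finite iteration count, and comparing the counts shows how the choice of parameters affects the speed of convergence.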

Example 4. Consider
where Obviously, , and

By Theorem 4, we obtain the following regions of convergence: and , or and .

Acknowledgments

The authors would like to thank the referees for their valuable comments and suggestions, which greatly improved the original version of this paper. This work was supported by the National Natural Science Foundation of China (No. 11001144) and the Natural Science Foundation of Shandong Province (No. ZR2012AL09).

References

  1. S. R. Searle, G. Casella, and C. E. McCulloch, Variance Components, John Wiley & Sons, New York, NY, USA, 1992.
  2. J. Y. Yuan, “Numerical methods for generalized least squares problems,” Journal of Computational and Applied Mathematics, vol. 66, pp. 571–584, 1996.
  3. J.-Y. Yuan and X.-Q. Jin, “Convergence of the generalized AOR method,” Applied Mathematics and Computation, vol. 99, no. 1, pp. 35–46, 1999.
  4. M. T. Darvishi and P. Hessari, “On convergence of the generalized AOR method for linear systems with diagonally dominant coefficient matrices,” Applied Mathematics and Computation, vol. 176, no. 1, pp. 128–133, 2006.
  5. G.-X. Tian, T.-Z. Huang, and S.-Y. Cui, “Convergence of generalized AOR iterative method for linear systems with strictly diagonally dominant matrices,” Journal of Computational and Applied Mathematics, vol. 213, no. 1, pp. 240–247, 2008.
  6. G. Wang, H. Wen, L. L. Li, and X. Li, “Convergence of GAOR method for doubly diagonally dominant matrices,” Applied Mathematics and Computation, vol. 217, no. 18, pp. 7509–7514, 2011.
  7. X. X. Zhou, Y. Z. Song, L. Wang, and Q. S. Liu, “Preconditioned GAOR methods for solving weighted linear least squares problems,” Journal of Computational and Applied Mathematics, vol. 224, no. 1, pp. 242–249, 2009.
  8. M. T. Darvishi, P. Hessari, and B.-C. Shin, “Preconditioned modified AOR method for systems of linear equations,” International Journal for Numerical Methods in Biomedical Engineering, vol. 27, no. 5, pp. 758–769, 2011.
  9. M. Li and Y. X. Sun, “Discussion on α-diagonally dominant matrices and their applications,” Chinese Journal of Engineering Mathematics, vol. 26, no. 5, pp. 941–945, 2009.