Journal of Applied Mathematics

Special Issue: Numerical Methods of Complex Valued Linear Algebraic System

Research Article | Open Access

Volume 2014 | Article ID 725360 | 6 pages | https://doi.org/10.1155/2014/725360

A New Version of the Accelerated Overrelaxation Iterative Method

Academic Editor: Shuqian Shen
Received: 22 May 2014
Revised: 08 Jul 2014
Accepted: 09 Jul 2014
Published: 26 Aug 2014

Abstract

Hadjidimos (1978) proposed the classical accelerated overrelaxation (AOR) iterative method for solving systems of linear equations and discussed its convergence under the conditions that the coefficient matrices are irreducibly diagonally dominant matrices, L-matrices, and consistently ordered matrices. In this paper, a new version of the AOR method is presented. Some convergence results are derived when the coefficient matrices are irreducibly diagonally dominant matrices, H-matrices, symmetric positive definite matrices, and L-matrices. A relational graph for the new AOR method and the original AOR method is presented. Finally, a numerical example is given to illustrate the efficiency of the proposed method.

1. Introduction

Consider the following linear system:
$$Ax = b, \qquad (1)$$
where $A \in \mathbb{R}^{n \times n}$ and $b \in \mathbb{R}^{n}$ are given and $x \in \mathbb{R}^{n}$ is unknown. Systems of the form (1) appear in many applications such as linear elasticity, fluid dynamics, and constrained quadratic programming [1–4]. When the coefficient matrix of the linear system (1) is large and sparse, iterative methods are recommended over direct methods. In order to solve (1) more effectively by iterative methods, efficient splittings of the coefficient matrix are usually required. For example, the classical Jacobi and Gauss-Seidel iterations are obtained by splitting the matrix into its diagonal and off-diagonal parts.
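
To make the splitting idea concrete, the following Python/NumPy sketch builds the diagonal and off-diagonal parts of a small test matrix and runs the corresponding Jacobi iteration. It is only an illustration: the paper's experiments are carried out in MATLAB 7.0, and the test matrix, right-hand side, tolerance, and the function name jacobi are assumptions introduced here.

import numpy as np

def jacobi(A, b, x0, max_it=500, tol=1e-8):
    """Basic Jacobi iteration from the splitting A = D - (L + U)."""
    D = np.diag(np.diag(A))            # diagonal part of A
    R = D - A                          # off-diagonal part, R = L + U
    Dinv = np.diag(1.0 / np.diag(A))
    x = x0.copy()
    for k in range(max_it):
        x_new = Dinv @ (R @ x + b)     # x_{k+1} = D^{-1}(L + U) x_k + D^{-1} b
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_it

# Illustrative diagonally dominant test problem (not the paper's example).
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([3.0, 2.0, 3.0])
x, its = jacobi(A, b, np.zeros(3))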

For the numerical solution of (1), the accelerated overrelaxation (AOR) method was introduced by Hadjidimos in [5] and is a two-parameter generalization of the successive overrelaxation (SOR) method. In certain cases the AOR method has a better convergence rate than the Jacobi, JOR, Gauss-Seidel, or SOR methods [5, 6]. Sufficient conditions for the convergence of the AOR method have been considered by many authors, including [6–14]. To improve the convergence rate of the AOR method, the preconditioned AOR (PAOR) method has also been considered by many authors, including [15–21]. Although Krylov subspace methods [4, 22] are among the most important and efficient iterative techniques for solving large sparse linear systems, since they are cheap to implement and fully exploit the sparsity of the coefficient matrix, they may converge very slowly or even fail to converge when the coefficient matrix of (1) is extremely ill-conditioned or highly indefinite.

The purpose of this paper is to present a new version of the accelerated overrelaxation (AOR) method for the linear system (1), which is called the quasi accelerated overrelaxation (QAOR) method. We discuss some sufficient conditions for the convergence of the QAOR method when the coefficient matrices are irreducibly diagonally dominant matrices, H-matrices, symmetric positive definite matrices, and L-matrices.

The remainder of the paper is organized as follows. In Section 2 the QAOR method is derived. In Section 3, some convergence results are given for the QAOR method when the coefficient matrices are irreducibly diagonally dominant matrices, H-matrices, symmetric positive definite matrices, and L-matrices. A relational graph for QAOR and AOR is presented in Section 4. Finally, in Section 5 a numerical example is presented to illustrate the efficiency of the proposed method.

2. The QAOR Method

To introduce the QAOR method, we first give a brief review of the classical AOR method.

For any splitting $A = M - N$ with $\det(M) \neq 0$, the basic iterative method for solving (1) is
$$x^{(k+1)} = M^{-1}Nx^{(k)} + M^{-1}b, \quad k = 0, 1, 2, \ldots.$$
Let
$$A = D - L - U,$$
where $D$ is a nonsingular diagonal matrix and $L$ and $U$ are strictly lower and strictly upper triangular matrices, respectively. Then the classical AOR method in [5] is defined by
$$x^{(k+1)} = \mathcal{L}_{r,\omega}x^{(k)} + \omega(D - rL)^{-1}b, \quad k = 0, 1, 2, \ldots,$$
where $r$ is an acceleration parameter and $\omega$ ($\omega \neq 0$) is an overrelaxation parameter. Its iteration matrix is
$$\mathcal{L}_{r,\omega} = (D - rL)^{-1}\left[(1 - \omega)D + (\omega - r)L + \omega U\right].$$
Obviously, the iteration matrix of the Jacobi method is $\mathcal{L}_{0,1} = D^{-1}(L + U)$, the iteration matrix of the Gauss-Seidel method is $\mathcal{L}_{1,1} = (D - L)^{-1}U$, and the iteration matrix of the successive overrelaxation (SOR) method is $\mathcal{L}_{\omega,\omega} = (D - \omega L)^{-1}\left[(1 - \omega)D + \omega U\right]$.
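
As an illustration of the AOR iteration just described (a minimal Python/NumPy sketch, not the authors' MATLAB code; the helper names, test data, and stopping rule are assumptions introduced here), one can form $\mathcal{L}_{r,\omega}$ explicitly and iterate:

import numpy as np

def split_DLU(A):
    """Split A = D - L - U with D diagonal, L/U strictly lower/upper triangular."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    return D, L, U

def aor(A, b, r, omega, x0, max_it=1000, tol=1e-8):
    """Classical AOR iteration: x_{k+1} = L_{r,w} x_k + w (D - r L)^{-1} b."""
    D, L, U = split_DLU(A)
    M = D - r * L
    T = np.linalg.solve(M, (1 - omega) * D + (omega - r) * L + omega * U)  # L_{r,w}
    c = omega * np.linalg.solve(M, b)
    x = x0.copy()
    for k in range(max_it):
        x_new = T @ x + c
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_it

Setting $(r,\omega) = (0,1)$, $(1,1)$, and $(\omega,\omega)$ in this sketch recovers the Jacobi, Gauss-Seidel, and SOR iterations, respectively.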

In fact, if we introduce the matrices
$$M_{r,\omega} = \frac{1}{\omega}(D - rL), \qquad N_{r,\omega} = \frac{1}{\omega}\left[(1 - \omega)D + (\omega - r)L + \omega U\right],$$
then
$$A = M_{r,\omega} - N_{r,\omega} \quad \text{and} \quad \mathcal{L}_{r,\omega} = M_{r,\omega}^{-1}N_{r,\omega}.$$
Therefore, one can readily verify that the AOR method can be induced by the matrix splitting $A = M_{r,\omega} - N_{r,\omega}$.

To establish the QAOR method, we consider the following matrix splitting of the coefficient matrix $A$; that is to say,
$$A = \widetilde{M}_{r,\omega} - \widetilde{N}_{r,\omega}, \qquad \widetilde{M}_{r,\omega} = \frac{1}{\omega}\left[(1 + \omega)D - rL\right], \qquad \widetilde{N}_{r,\omega} = \frac{1}{\omega}\left[D + (\omega - r)L + \omega U\right].$$
Then
$$\widetilde{M}_{r,\omega} - \widetilde{N}_{r,\omega} = \frac{1}{\omega}\left[(1 + \omega)D - rL - D - (\omega - r)L - \omega U\right] = D - L - U = A.$$
Based on the above matrix splitting, the QAOR method is defined as follows:
$$x^{(k+1)} = \mathcal{K}_{r,\omega}x^{(k)} + \omega\left[(1 + \omega)D - rL\right]^{-1}b, \quad k = 0, 1, 2, \ldots,$$
and its iteration matrix is
$$\mathcal{K}_{r,\omega} = \left[(1 + \omega)D - rL\right]^{-1}\left[D + (\omega - r)L + \omega U\right].$$
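
Under the reconstruction of the QAOR iteration given above, a minimal Python/NumPy sketch of the method reads as follows; the function name qaor, the stopping rule, and any test data are assumptions introduced for illustration and are not taken from the paper.

import numpy as np

def qaor(A, b, r, omega, x0, max_it=1000, tol=1e-8):
    """QAOR iteration from the splitting A = (1/w)[(1+w)D - rL] - (1/w)[D + (w-r)L + wU]."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    M = (1 + omega) * D - r * L
    K = np.linalg.solve(M, D + (omega - r) * L + omega * U)   # iteration matrix K_{r,w}
    c = omega * np.linalg.solve(M, b)
    x = x0.copy()
    for k in range(max_it):
        x_new = K @ x + c
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_it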

Comparing the QAOR method with the AOR method, it is easy to see that the iteration matrix of the QAOR method is similar to that of the AOR method. Based on this fact, the QAOR method may retain all the advantages of the AOR method. If $r = \omega$, the QAOR method reduces to the QSOR method. The QSOR method is also known as the KSOR method [23, 24].
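
For concreteness, in the special case $r = \omega$ the QAOR iteration matrix reconstructed above collapses to $[(1+\omega)D - \omega L]^{-1}(D + \omega U)$, the QSOR (KSOR) iteration matrix. The short NumPy sketch below (an illustration, not code from [23, 24]) constructs it directly.

import numpy as np

def ksor_matrix(A, omega):
    """QSOR/KSOR iteration matrix [(1+w)D - wL]^{-1}(D + wU), from A = D - L - U."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    return np.linalg.solve((1 + omega) * D - omega * L, D + omega * U)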

Next, we will discuss some sufficient conditions for the convergence of the QAOR method when the coefficient matrices are irreducibly diagonally dominant matrices, H-matrices, symmetric positive definite matrices, and L-matrices.

3. Main Results

When $A$ is an irreducible matrix with weak diagonal dominance, obviously, both the coefficient matrix $A$ and the corresponding diagonal matrix $D$ are nonsingular. In this case, we have the following theorem for the QAOR method.

Theorem 1. If is an irreducible matrix with weak diagonal dominance, then the QAOR method converges for all and .

Proof. We assume that for the eigenvalue of we have . For this eigenvalue the relationship below holds: By performing a simple series of transformations, we have where The coefficients of and in (14) are less than one in modulus. To prove this it is necessary and sufficient to prove that If where and are real with , then the first inequality in (15) is equivalent to which holds for (in this case, obviously, ). Since , (16) holds for all real if and only if it holds for . Thus, (16) is equivalent to which is true. The second inequality in (15) is equivalent to which, for the same reason, must be satisfied for . Thus, we have which is also true. That is, for all and , is nonsingular, which contradicts . Therefore, .

When $A$ is an $H$-matrix, it follows that with

Theorem 2. If $A$ is an $H$-matrix and , then the QAOR method converges.

Proof. Let . Then Let . Then Obviously, we have which implies if and only if is a monotone matrix. Since is an -matrix, then is a monotone matrix. This completes the proof.

Let When $A$ is symmetric positive definite, obviously, is nonsingular. It is easy to see that In this case, the QAOR method converges if is positive definite [2]. By simple computations, we have That is to say, the QAOR method converges if is positive definite. Let be the eigenvalues of . The left-hand side of (29) is positive definite if and only if Since and are similar, they have the same eigenvalues. Let . If the following inequality is satisfied then the QAOR method converges. Therefore, we have the following theorem.

Theorem 3. Assume that is symmetric positive definite. Let be eigenvalues of , , and . If then the QAOR method converges.

When $A$ is an $L$-matrix, the following theorem is derived.

Theorem 4. If $A$ is an $L$-matrix and , then the QAOR method converges for .

Proof. Assume that . Based on our assumptions, we easily get that Thus, for the iteration matrix we have That is, is a nonnegative matrix. If is the corresponding eigenvector, we have which is equivalent to From (37), we have Obviously, . Therefore, Combining (38) with (39), we have By simple manipulation, we have If , then which implies so that if then so does the QAOR method.

Further, we have the following theorem.

Theorem 5. Let be an L-matrix and . If , then

Proof. Based on Theorem 4, obviously, it suffices to prove Let . There exists a nonzero vector such that which is equivalent to Let Obviously, . Since , we have which implies . Therefore, we have That is to say, which completes the proof.

Some remarks on (43) are given as follows.
(i) Obviously, . If , then .
(ii) If , then . In fact, we have
(iii) if and only if . In this case, from (43) we have
(iv) if and only if .

4. A Relational Graph for QAOR and AOR

Based on the above discussion, we have
$$\mathcal{K}_{r,\omega} = \left[(1 + \omega)D - rL\right]^{-1}\left[D + (\omega - r)L + \omega U\right] = \left(D - \frac{r}{1 + \omega}L\right)^{-1}\left[\frac{1}{1 + \omega}D + \frac{\omega - r}{1 + \omega}L + \frac{\omega}{1 + \omega}U\right].$$
Let $\omega^{\ast} = \omega/(1 + \omega)$ and $r^{\ast} = r/(1 + \omega)$. Therefore, we have
$$\mathcal{K}_{r,\omega} = \left(D - r^{\ast}L\right)^{-1}\left[(1 - \omega^{\ast})D + (\omega^{\ast} - r^{\ast})L + \omega^{\ast}U\right] = \mathcal{L}_{r^{\ast},\omega^{\ast}}.$$
That is to say, when $\omega^{\ast} = \omega/(1 + \omega)$ and $r^{\ast} = r/(1 + \omega)$, the QAOR method reduces to the AOR method with parameters $r^{\ast}$ and $\omega^{\ast}$. Based on this case, Figure 1 describes the relationship between the QAOR method and the AOR method.
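
This parameter correspondence can be verified numerically. The sketch below (an illustration based on the iteration matrices reconstructed above; the random test matrix and parameter values are assumptions introduced here) builds $\mathcal{K}_{r,\omega}$ and $\mathcal{L}_{r^{\ast},\omega^{\ast}}$ and checks that they coincide.

import numpy as np

rng = np.random.default_rng(0)
n, r, omega = 6, 0.4, 0.9
A = rng.standard_normal((n, n)) + n * np.eye(n)   # illustrative test matrix with strong diagonal

D = np.diag(np.diag(A))
L = -np.tril(A, -1)
U = -np.triu(A, 1)

# QAOR iteration matrix K_{r,w}
K = np.linalg.solve((1 + omega) * D - r * L, D + (omega - r) * L + omega * U)

# AOR iteration matrix L_{r*,w*} with w* = w/(1+w) and r* = r/(1+w)
ws, rs = omega / (1 + omega), r / (1 + omega)
Laor = np.linalg.solve(D - rs * L, (1 - ws) * D + (ws - rs) * L + ws * U)

print(np.allclose(K, Laor))   # expected: True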

5. Numerical Example

Now let us consider the following example to assess the feasibility and effectiveness of the QAOR iteration method. Suppose that and the coefficient matrix of (1) is given by

The initial guess for all tests is the zero vector. The tests are performed in MATLAB 7.0. In Tables 1 and 2, we list the spectral radius of the iteration matrix, the number of iterations (IT), and the CPU time (CPU) for different values of the parameters $\omega$ and $r$ when the QAOR (QSOR) iteration is used to solve the linear system (1).


Table 1: Numerical results of the QAOR method.

n     ω      r     ρ        IT     CPU
8     0.9    0.6   0.575    63     0.015
16    0.95   0.2   0.7188   102    0.056
20    0.9    0.3   0.8068   155    0.084
25    0.8    0.4   0.9198   385    0.156


Table 2: Numerical results of the QSOR method.

n     ω      ρ        IT     CPU
8     0.9    0.5726   62     0.015
16    0.95   0.6977   94     0.035
20    0.9    0.789    141    0.076
25    0.8    0.9123   352    0.098

From Tables 1 and 2, the iteration counts and CPU times of QSOR are smaller than those of QAOR. That is to say, the QAOR iteration is not much better than the QSOR iteration under certain conditions.
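
As an illustration of how statistics of the kind reported in Tables 1 and 2 can be produced, the following Python sketch computes the spectral radius of the QAOR iteration matrix, the iteration count, and the elapsed CPU time for a generic test problem. Since the paper's example matrix is not reproduced here and its experiments were run in MATLAB 7.0, the tridiagonal test matrix, right-hand side, tolerance, and parameter values below are assumptions, so the numbers will not match the tables.

import time
import numpy as np

def qaor_stats(A, b, r, omega, max_it=10000, tol=1e-6):
    """Return (spectral radius of K_{r,w}, iteration count, elapsed CPU time)."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    M = (1 + omega) * D - r * L
    K = np.linalg.solve(M, D + (omega - r) * L + omega * U)
    rho = max(abs(np.linalg.eigvals(K)))
    c = omega * np.linalg.solve(M, b)
    x = np.zeros_like(b)
    t0 = time.process_time()
    for k in range(1, max_it + 1):
        x_new = K @ x + c
        if np.linalg.norm(x_new - x, np.inf) < tol:
            break
        x = x_new
    return rho, k, time.process_time() - t0

# Illustrative tridiagonal test problem (NOT the paper's example matrix).
n = 16
A = np.diag(4.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
b = A @ np.ones(n)
print(qaor_stats(A, b, r=0.2, omega=0.95))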

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This research was supported by NSFC (no. 11301009), by Science & Technology Development Plan of Henan Province (no. 122300410316), and in part by the Natural Science Foundations of Henan Province (no. 13A110022).

References

  1. R. S. Varga, Matrix Iterative Analysis, Springer, Berlin, Germany, 2000.
  2. D. M. Young, Iterative Solution of Large Linear Systems, Academic Press, New York, NY, USA, 1971.
  3. A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, SIAM, Philadelphia, Pa, USA, 1994.
  4. Y. Saad, Iterative Methods for Sparse Linear Systems, PWS Publishing, Boston, Mass, USA, 1996.
  5. A. Hadjidimos, “Accelerated overrelaxation method,” Mathematics of Computation, vol. 32, no. 141, pp. 149–157, 1978.
  6. G. Avdelas and A. Hadjidimos, “Some theoretical and computational results concerning the accelerated overrelaxation (AOR) method,” L'Analyse Numérique et la Théorie de L'approximation, vol. 9, no. 1, pp. 5–10, 1980.
  7. A. K. Yeyios, “A necessary condition for the convergence of the accelerated overrelaxation (AOR) method,” Journal of Computational and Applied Mathematics, vol. 26, no. 3, pp. 371–373, 1989.
  8. Z. Bai, “The monotone convergence rate of the parallel nonlinear AOR method,” Computers & Mathematics with Applications, vol. 31, no. 7, pp. 1–8, 1996.
  9. Z.-Z. Bai, “Parallel nonlinear AOR method and its convergence,” Computers & Mathematics with Applications, vol. 31, no. 2, pp. 21–31, 1996.
  10. Z. Bai, “Asynchronous multisplitting AOR methods for a class of systems of weakly nonlinear equations,” Applied Mathematics and Computation, vol. 98, no. 1, pp. 49–59, 1999.
  11. L. Cvetković and V. Kostić, “A note on the convergence of the AOR method,” Applied Mathematics and Computation, vol. 194, no. 2, pp. 394–399, 2007.
  12. Z. X. Gao and T. Z. Huang, “Convergence of AOR method,” Applied Mathematics and Computation, vol. 176, no. 1, pp. 134–140, 2006.
  13. J.-Y. Yuan and X.-Q. Jin, “Convergence of the generalized AOR method,” Applied Mathematics and Computation, vol. 99, no. 1, pp. 35–46, 1999.
  14. W. Li and W.-W. Sun, “Comparison results for parallel multisplitting methods with applications to AOR methods,” Linear Algebra and Its Applications, vol. 331, no. 1–3, pp. 131–144, 2001.
  15. J. H. Yun, “Comparison results of the preconditioned AOR methods for L-matrices,” Applied Mathematics and Computation, vol. 218, no. 7, pp. 3399–3413, 2011.
  16. H.-J. Wang and Y.-T. Li, “A new preconditioned AOR iterative method for L-matrices,” Journal of Computational and Applied Mathematics, vol. 229, no. 1, pp. 47–53, 2009.
  17. S. Wu and T. Huang, “A modified AOR-type iterative method for L-matrix linear systems,” The ANZIAM Journal, vol. 49, no. 2, pp. 281–292, 2007.
  18. Y.-T. Li, C.-X. Li, and S.-L. Wu, “Improvements of preconditioned AOR iterative method for L-matrices,” Journal of Computational and Applied Mathematics, vol. 206, no. 2, pp. 656–665, 2007.
  19. Y.-T. Li, C.-X. Li, and S.-L. Wu, “Improving AOR method for consistent linear systems,” Applied Mathematics and Computation, vol. 186, no. 1, pp. 379–388, 2007.
  20. L. Wang and Y.-Z. Song, “Preconditioned AOR iterative methods for M-matrices,” Journal of Computational and Applied Mathematics, vol. 226, no. 1, pp. 114–124, 2009.
  21. M. Wu, L. Wang, and Y. Song, “Preconditioned AOR iterative method for linear systems,” Applied Numerical Mathematics, vol. 57, no. 5–7, pp. 672–685, 2007.
  22. Z. Bai, “Sharp error bounds of some Krylov subspace methods for non-Hermitian linear systems,” Applied Mathematics and Computation, vol. 109, no. 2-3, pp. 273–285, 2000.
  23. I. K. Youssef, “On the successive overrelaxation method,” Journal of Mathematics and Statistics, vol. 8, no. 2, pp. 176–184, 2012.
  24. I. K. Youssef and A. A. Taha, “On the modified successive overrelaxation method,” Applied Mathematics and Computation, vol. 219, no. 9, pp. 4601–4613, 2013.

Copyright © 2014 Shi-Liang Wu and Yu-Jun Liu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
