Special Issue: Numerical Methods of Complex Valued Linear Algebraic System
A New Version of the Accelerated Overrelaxation Iterative Method
Hadjidimos (1978) proposed the classical accelerated overrelaxation (AOR) iterative method for solving systems of linear equations and discussed its convergence under the conditions that the coefficient matrices are irreducibly diagonally dominant matrices, L-matrices, and consistently ordered matrices. In this paper, a new version of the AOR method is presented. Some convergence results are derived when the coefficient matrices are irreducibly diagonally dominant matrices, H-matrices, symmetric positive definite matrices, and L-matrices. A relational graph for the new AOR method and the original AOR method is presented. Finally, a numerical example is given to illustrate the efficiency of the proposed method.
Consider the following linear system:
$$Ax = b, \tag{1}$$
where $A \in \mathbb{R}^{n \times n}$ and $b \in \mathbb{R}^{n}$ are given and $x \in \mathbb{R}^{n}$ is unknown. Systems of the form (1) appear in many applications such as linear elasticity, fluid dynamics, and constrained quadratic programming [1–4]. When the coefficient matrix of the linear system (1) is large and sparse, iterative methods are recommended over direct methods. To solve (1) effectively with iterative methods, efficient splittings of the coefficient matrix are usually required. For example, the classical Jacobi and Gauss–Seidel iterations are obtained by splitting the matrix $A$ into its diagonal and off-diagonal parts.
For the numerical solution of (1), the accelerated overrelaxation (AOR) method was introduced by Hadjidimos (1978) and is a two-parameter generalization of the successive overrelaxation (SOR) method. In certain cases the AOR method has a better convergence rate than the Jacobi, JOR, Gauss–Seidel, or SOR method [5, 6]. Sufficient conditions for the convergence of the AOR method have been considered by many authors, including [6–14]. To improve the convergence rate of the AOR method, the preconditioned AOR (PAOR) method has been considered by many authors, including [15–21]. Krylov subspace methods [4, 22] are regarded as an important and efficient class of iterative techniques for solving large sparse linear systems, because they are cheap to implement and fully exploit the sparsity of the coefficient matrix; however, they can be very slow, or can even fail to converge, when the coefficient matrix of (1) is extremely ill-conditioned and highly indefinite.
The purpose of this paper is to present a new version of the accelerated overrelaxation (AOR) method for the linear system (1), which is called the quasi accelerated overrelaxation (QAOR) method. We discuss some sufficient conditions for the convergence of the QAOR method when the coefficient matrices are irreducibly diagonally dominant matrices, H-matrices, symmetric positive definite matrices, and L-matrices.
The remainder of the paper is organized as follows. In Section 2 the QAOR method is derived. In Section 3, some convergence results are given for the QAOR method when the coefficient matrices are irreducibly diagonally dominant matrices, H-matrices, symmetric positive definite matrices, and L-matrices. A relational graph for QAOR and AOR is presented in Section 4. Finally, in Section 5 a numerical example is presented to illustrate the efficiency of the proposed method.
2. The QAOR Method
To introduce the QAOR method, we first give a brief review of the classical AOR method.
For any splitting $A = M - N$ with $M$ nonsingular, the basic iterative method for solving (1) is
$$x^{(k+1)} = M^{-1}N x^{(k)} + M^{-1}b, \quad k = 0, 1, 2, \ldots. \tag{2}$$
Let
$$A = D - L - U, \tag{3}$$
where $D$ is a nonsingular diagonal matrix and $L$ and $U$ are strictly lower and strictly upper triangular matrices, respectively. Then the classical AOR method of Hadjidimos (1978) is defined by
$$(D - rL)x^{(k+1)} = [(1-\omega)D + (\omega - r)L + \omega U]x^{(k)} + \omega b, \tag{4}$$
where $r$ is an acceleration parameter and $\omega \neq 0$ is an overrelaxation parameter. Its iteration matrix is
$$\mathcal{L}_{r,\omega} = (D - rL)^{-1}[(1-\omega)D + (\omega - r)L + \omega U]. \tag{5}$$
Obviously, the iteration matrix of the Jacobi method is $\mathcal{L}_{0,1} = D^{-1}(L+U)$, the iteration matrix of the Gauss–Seidel method is $\mathcal{L}_{1,1} = (D - L)^{-1}U$, and the iteration matrix of the successive overrelaxation (SOR) method is $\mathcal{L}_{\omega,\omega} = (D - \omega L)^{-1}[(1-\omega)D + \omega U]$.
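As a concrete illustration, the classical AOR sweep can be sketched as follows. This is a minimal NumPy sketch built on the splitting $A = D - L - U$ described above; the function name, tolerance, and stopping rule are illustrative choices, not from the paper:

```python
import numpy as np

def aor(A, b, r, omega, tol=1e-10, maxit=1000):
    """Classical AOR iteration:
    (D - r*L) x_{k+1} = [(1-omega)*D + (omega-r)*L + omega*U] x_k + omega*b,
    built on the splitting A = D - L - U."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)   # strictly lower triangular part of the splitting
    U = -np.triu(A, 1)    # strictly upper triangular part of the splitting
    M = D - r * L
    N = (1.0 - omega) * D + (omega - r) * L + omega * U
    x = np.zeros_like(b, dtype=float)
    for k in range(maxit):
        x_new = np.linalg.solve(M, N @ x + omega * b)
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k + 1
        x = x_new
    return x, maxit
```

Setting `r = omega` recovers SOR, `r = omega = 1` recovers Gauss–Seidel, and `r = 0, omega = 1` recovers the Jacobi method, matching the special cases of the iteration matrix listed above.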
In fact, if we introduce the matrices
$$M_{r,\omega} = \frac{1}{\omega}(D - rL), \qquad N_{r,\omega} = \frac{1}{\omega}[(1-\omega)D + (\omega - r)L + \omega U], \tag{6}$$
then
$$A = M_{r,\omega} - N_{r,\omega}, \qquad \mathcal{L}_{r,\omega} = M_{r,\omega}^{-1}N_{r,\omega}. \tag{7}$$
Therefore, one can readily verify that the AOR method is induced by the matrix splitting $A = M_{r,\omega} - N_{r,\omega}$.
To establish the QAOR method, we consider the following matrix splitting of the coefficient matrix $A$; that is to say,
$$A = \widetilde{M}_{r,\omega} - \widetilde{N}_{r,\omega}, \tag{8}$$
where
$$\widetilde{M}_{r,\omega} = \frac{1}{\omega}[(1+\omega)D - rL], \qquad \widetilde{N}_{r,\omega} = \frac{1}{\omega}[D + (\omega - r)L + \omega U]. \tag{9}$$
Based on the above matrix splitting (8), the QAOR method is defined as follows:
$$[(1+\omega)D - rL]x^{(k+1)} = [D + (\omega - r)L + \omega U]x^{(k)} + \omega b, \tag{10}$$
and its iteration matrix is
$$\widetilde{\mathcal{L}}_{r,\omega} = [(1+\omega)D - rL]^{-1}[D + (\omega - r)L + \omega U]. \tag{11}$$
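Under the same notation, a QAOR sweep differs from an AOR sweep only in the matrix applied to the new iterate, so a sketch needs a one-line change. This is again an illustrative NumPy sketch of the splitting reconstructed here, not code from the paper:

```python
import numpy as np

def qaor(A, b, r, omega, tol=1e-10, maxit=1000):
    """QAOR iteration:
    [(1+omega)*D - r*L] x_{k+1} = [D + (omega-r)*L + omega*U] x_k + omega*b.
    Note M - N = omega*A, so a fixed point of the sweep solves A x = b."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    M = (1.0 + omega) * D - r * L   # the only change relative to AOR
    N = D + (omega - r) * L + omega * U
    x = np.zeros_like(b, dtype=float)
    for k in range(maxit):
        x_new = np.linalg.solve(M, N @ x + omega * b)
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k + 1
        x = x_new
    return x, maxit
```

The identity $\widetilde{M}_{r,\omega} - \widetilde{N}_{r,\omega} = \frac{1}{\omega}[\omega D - \omega L - \omega U] = A$ confirms that the sketch iterates on a consistent splitting of $A$.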
Comparing the QAOR method with the AOR method, it is easy to see that the iteration matrix of the QAOR method has a form similar to that of the AOR method. Based on this fact, the QAOR method may preserve all the advantages of the AOR method. If $r = \omega$, the QAOR method reduces to the QSOR method. The QSOR method is also known as the KSOR method [23, 24].
Next, we discuss some sufficient conditions for the convergence of the QAOR method when the coefficient matrices are irreducibly diagonally dominant matrices, H-matrices, symmetric positive definite matrices, and L-matrices.
3. Main Results
When $A$ is an irreducible matrix with weak diagonal dominance, obviously, both the coefficient matrix $A$ and the corresponding diagonal matrix $D$ are nonsingular. In this case, we have the following theorem for the QAOR method.
Theorem 1. If is an irreducible matrix with weak diagonal dominance, then the QAOR method converges for all and .
Proof. We assume that for an eigenvalue $\lambda$ of the QAOR iteration matrix we have $|\lambda| \ge 1$. For this eigenvalue the relationship below holds: By performing a simple series of transformations, we have where The coefficients of and in (14) are less than one in modulus. To prove this, it is necessary and sufficient to prove that If $\lambda = a + b\mathrm{i}$, where $a$ and $b$ are real with $a^2 + b^2 \ge 1$, then the first inequality in (15) is equivalent to which holds for (in this case, obviously, ). Since , (16) holds for all real if and only if it holds for . Thus, (16) is equivalent to which is true. The second inequality in (15) is equivalent to which, for the same reason, must be satisfied for . Thus, we have which is also true. That is, for all and , is nonsingular, which contradicts $|\lambda| \ge 1$. Therefore, the spectral radius of the QAOR iteration matrix is less than one, and the QAOR method converges.
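Theorem 1 can be spot-checked numerically: for an irreducible matrix with weak diagonal dominance, the spectral radius of the QAOR iteration matrix stays below one. In the NumPy sketch below, the test matrix and the sampled parameter pairs are illustrative choices, and the iteration matrix is the one reconstructed in Section 2:

```python
import numpy as np

def qaor_spectral_radius(A, r, omega):
    """Spectral radius of the QAOR iteration matrix
    [(1+omega)*D - r*L]^{-1} [D + (omega-r)*L + omega*U]."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    T = np.linalg.solve((1.0 + omega) * D - r * L,
                        D + (omega - r) * L + omega * U)
    return max(abs(np.linalg.eigvals(T)))

# Irreducible and weakly diagonally dominant: rows 1 and 2 satisfy
# |a_ii| = sum_{j != i} |a_ij|, with strict inequality in row 0.
A = np.array([[ 3.0, -1.0, -1.0],
              [-1.0,  2.0, -1.0],
              [-1.0, -1.0,  2.0]])
for r, omega in [(0.5, 0.5), (0.8, 1.0), (1.0, 1.5)]:
    print(r, omega, qaor_spectral_radius(A, r, omega))
```

Each printed radius is below one for these sampled parameters, consistent with (though of course not a proof of) the theorem.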
When $A$ is an $H$-matrix, it follows that with
Theorem 2. If $A$ is an $H$-matrix and , then the QAOR method converges.
Proof. Let . Then Let . Then Obviously, we have which implies if and only if is a monotone matrix. Since $A$ is an $H$-matrix, is a monotone matrix. This completes the proof.
Let When $A$ is symmetric positive definite, obviously, is nonsingular. It is easy to see that In this case, the QAOR method converges if is positive definite. By simple computations, we have That is to say, the QAOR method converges if is positive definite. Let be the eigenvalues of . The left-hand side of (29) is positive definite if and only if Since and are similar, they have the same eigenvalues. Let . If the following inequality is satisfied, then the QAOR method converges. Therefore, we have the following theorem.
Theorem 3. Assume that $A$ is symmetric positive definite. Let be the eigenvalues of , , and . If then the QAOR method converges.
When $A$ is an $L$-matrix, the following theorem is derived.
Theorem 4. If $A$ is an $L$-matrix and , then the QAOR method converges for .
Proof. Assume that . Based on our assumptions, we easily get that Thus, for the iteration matrix we have That is, is a nonnegative matrix. If $x$ is the corresponding eigenvector, we have which is equivalent to From (37), we have Obviously, . Therefore, Combining (38) with (39), we have By simple manipulation, we have If , then which implies , so that if the AOR method converges, then so does the QAOR method.
Further, we have the following theorem.
Theorem 5. Let $A$ be an L-matrix and . If , then
Proof. Based on Theorem 4, it only remains to prove Let . There exists a nonzero vector such that which is equivalent to Let Obviously, . Since , we have which implies . Therefore, we have That is to say, which completes the proof.
4. A Relational Graph for QAOR and AOR
Based on the above discussion, we have
$$\widetilde{\mathcal{L}}_{r,\omega} = [(1+\omega)D - rL]^{-1}[D + (\omega - r)L + \omega U] = \left[D - \frac{r}{1+\omega}L\right]^{-1}\left[\frac{1}{1+\omega}D + \frac{\omega - r}{1+\omega}L + \frac{\omega}{1+\omega}U\right].$$
Let $\omega' = \omega/(1+\omega)$ and $r' = r/(1+\omega)$, so that $1 - \omega' = 1/(1+\omega)$. Therefore, we have
$$\widetilde{\mathcal{L}}_{r,\omega} = (D - r'L)^{-1}[(1-\omega')D + (\omega' - r')L + \omega' U],$$
which is exactly the AOR iteration matrix with parameters $r'$ and $\omega'$. That is to say, when $\omega' = \omega/(1+\omega)$ and $r' = r/(1+\omega)$, the QAOR method reduces to the AOR method. Based on this fact, Figure 1 describes the relationship between the QAOR method and the AOR method.
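The relationship between QAOR and AOR can be checked numerically: under the splittings reconstructed in this paper, QAOR with parameters $(r, \omega)$ has the same iteration matrix as AOR with parameters $(r/(1+\omega), \omega/(1+\omega))$. The NumPy check below uses an arbitrary illustrative test matrix and parameter values:

```python
import numpy as np

def aor_matrix(A, r, omega):
    """AOR iteration matrix (D - r*L)^{-1} [(1-omega)*D + (omega-r)*L + omega*U]."""
    D = np.diag(np.diag(A)); L = -np.tril(A, -1); U = -np.triu(A, 1)
    return np.linalg.solve(D - r * L,
                           (1.0 - omega) * D + (omega - r) * L + omega * U)

def qaor_matrix(A, r, omega):
    """QAOR iteration matrix [(1+omega)*D - r*L]^{-1} [D + (omega-r)*L + omega*U]."""
    D = np.diag(np.diag(A)); L = -np.tril(A, -1); U = -np.triu(A, 1)
    return np.linalg.solve((1.0 + omega) * D - r * L,
                           D + (omega - r) * L + omega * U)

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
r, omega = 0.8, 1.1
same = np.allclose(qaor_matrix(A, r, omega),
                   aor_matrix(A, r / (1 + omega), omega / (1 + omega)))
print(same)  # True: QAOR(r, omega) equals AOR with the rescaled parameters
```

This follows from multiplying both factors of the QAOR iteration matrix by $1/(1+\omega)$, which leaves the product unchanged.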
5. Numerical Example
Now let us consider the following example to assess the feasibility and effectiveness of the QAOR iteration method. Suppose that and the coefficient matrix of (1) is given by
The initial guess for all tests is zero. The tests are performed in MATLAB 7.0. In Tables 1 and 2, we list the spectral radius of the iteration matrix, the number of iterations (IT), and the CPU time (CPU) for different values of $r$ and $\omega$ when the QAOR (QSOR) iteration is used to solve the linear system (1).
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This research was supported by NSFC (no. 11301009), by the Science & Technology Development Plan of Henan Province (no. 122300410316), and in part by the Natural Science Foundation of Henan Province (no. 13A110022).
References
R. S. Varga, Matrix Iterative Analysis, Springer, Berlin, Germany, 2000.
D. M. Young, Iterative Solution of Large Linear Systems, Academic Press, New York, NY, USA, 1971.
A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, SIAM, Philadelphia, Pa, USA, 1994.
Y. Saad, Iterative Methods for Sparse Linear Systems, PWS Publishing, Boston, Mass, USA, 1996.