Generalized Refinement of Gauss-Seidel Method for Consistently Ordered 2-Cyclic Matrices
This paper presents a generalized refinement of the Gauss-Seidel method for solving systems of linear equations, restricted to consistently ordered 2-cyclic matrices. Such matrices arise when the finite difference method is applied to solve differential equations. Suitable theorems are introduced to verify the convergence of the proposed method, and a few numerical examples are given to illustrate its effectiveness. The study shows that the generalized refinement of the Gauss-Seidel method reaches the solution of a problem in a minimum number of iterations and attains a greater rate of convergence than earlier methods.
Consider the problem of large and sparse linear systems of the form
\[ Ax = b, \tag{1} \]
where $A$ is a nonsingular real matrix of order $n$, $b$ is a given $n$-dimensional real vector, and $x$ is an $n$-dimensional vector to be determined. By splitting $A$ into $A = D - L - U$, where $D$ is the diagonal matrix with $d_{ii} = a_{ii} \neq 0$ and $-L$ and $-U$ are the strictly lower and strictly upper triangular parts of $A$, different iterative methods have been developed. In recent years, research results show that generalization, refinement, and extrapolation (acceleration or relaxation) are used to modify the Gauss-Seidel method. Salkuyeh introduced the generalized Gauss-Seidel (GGS) method and discussed its convergence for strictly diagonally dominant (SDD) matrices and M-matrices. The refinement of the Gauss-Seidel (RGS) method was studied by Vatti and Eneyew, who proved its convergence for SDD matrices. Recently, Enyew et al. developed a second refinement of the Gauss-Seidel method for SDD, symmetric positive definite (SPD), and M-matrices. All of these works aimed to minimize the number of iterations and improve the rate of convergence, and research in the area of iterative methods is still ongoing. In line with this, in this paper we study the generalized refinement of the Gauss-Seidel (GRGS) iterative method, which accelerates the convergence of the basic Gauss-Seidel method. Here, by the GRGS method we mean the $k$-RGS method, where $k = 1, 2, 3, \ldots$
Some of the basic iterative methods are given below.
The Jacobi (J) iterative method is
\[ x^{(n+1)} = D^{-1}(L+U)x^{(n)} + D^{-1}b, \tag{2} \]
and the Gauss-Seidel (GS) iterative method is
\[ x^{(n+1)} = (D-L)^{-1}Ux^{(n)} + (D-L)^{-1}b. \tag{3} \]
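As an illustration (not part of the paper's computations), the two basic splittings above can be sketched in NumPy; the 4-by-4 strictly diagonally dominant test system is an assumed example:

```python
import numpy as np

def jacobi_step(A, b, x):
    """One Jacobi sweep: x_new = D^{-1}((L+U)x + b), with A = D - L - U."""
    D = np.diag(np.diag(A))
    LU = D - A                      # L + U (negated off-diagonal part of A)
    return np.linalg.solve(D, LU @ x + b)

def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel sweep: x_new = (D-L)^{-1}(U x + b)."""
    DL = np.tril(A)                 # D - L (lower triangle of A, incl. diagonal)
    U = DL - A                      # U (negated strictly upper part of A)
    return np.linalg.solve(DL, U @ x + b)

# Illustrative SDD system (an assumption, not taken from the paper)
A = np.array([[4., -1., 0., 0.],
              [-1., 4., -1., 0.],
              [0., -1., 4., -1.],
              [0., 0., -1., 4.]])
b = np.array([1., 2., 3., 4.])
x = np.zeros(4)
for _ in range(50):
    x = gauss_seidel_step(A, b, x)
print(np.allclose(A @ x, b))        # True: Gauss-Seidel has converged
```

Solving with the triangular factor `np.tril(A)` rather than forming $(D-L)^{-1}$ explicitly mirrors how the sweep is implemented in practice.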
Definition 1. The spectrum of a matrix $A$ is defined by $\sigma(A) = \{\lambda : \lambda \text{ is an eigenvalue of } A\}$. That is, the spectrum of a matrix $A$ is the set of all its eigenvalues.
Definition 2. The spectral radius of a matrix $A$ is denoted by $\rho(A)$ and is defined by $\rho(A) = \max\{|\lambda| : \lambda \in \sigma(A)\}$.
Definition 3. The (asymptotic) rate of convergence of an iteration matrix $T$ is denoted by $R(T)$ and is defined by $R(T) = -\ln \rho(T)$.
Definition 4. A matrix $A = D - L - U$ with nonsingular $D$ is said to be consistently ordered two-cyclic if the set of eigenvalues of $J(\alpha) = \alpha D^{-1}L + \alpha^{-1}D^{-1}U$ is independent of $\alpha \neq 0$.
Theorem 5. Let $A$ be consistently ordered and 2-cyclic with nonvanishing diagonal elements, and let $T_J = D^{-1}(L+U)$ and $T_{GS} = (D-L)^{-1}U$ denote the Jacobi and Gauss-Seidel iteration matrices. Then, (a) if $\mu$ is any eigenvalue of $T_J$ of multiplicity $p$, then $-\mu$ is also an eigenvalue of $T_J$ of multiplicity $p$; (b) a nonzero $\lambda$ satisfies $\lambda = \mu^2$ for some eigenvalue $\mu$ of $T_J$ if and only if $\lambda$ is an eigenvalue of $T_{GS}$.
Theorem 6. Let $A$ be consistently ordered and 2-cyclic with nonzero diagonal elements. Then $\rho(T_{GS}) = [\rho(T_J)]^2$.
Theorem 7 (). For consistently ordered 2-cyclic matrices, if the Jacobi method converges, so does the Gauss-Seidel method, and the Gauss-Seidel method converges twice as fast as the Jacobi method.
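The relation in Theorem 6 can be checked numerically. The sketch below (an illustration added here, not from the paper) uses a tridiagonal matrix, which is consistently ordered and 2-cyclic, and compares the two spectral radii:

```python
import numpy as np

# Tridiagonal matrices with nonzero diagonals are consistently ordered and
# 2-cyclic, so Theorem 6 predicts rho(T_GS) = rho(T_J)^2.
n = 6
A = (np.diag(4. * np.ones(n))
     + np.diag(-1. * np.ones(n - 1), 1)
     + np.diag(-1. * np.ones(n - 1), -1))

D = np.diag(np.diag(A))
DL = np.tril(A)                          # D - L
T_J = np.linalg.solve(D, D - A)          # Jacobi iteration matrix D^{-1}(L+U)
T_GS = np.linalg.solve(DL, DL - A)       # Gauss-Seidel matrix (D-L)^{-1} U

rho = lambda T: max(abs(np.linalg.eigvals(T)))
print(np.isclose(rho(T_GS), rho(T_J) ** 2))   # True for this matrix
```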
We used the following algorithms for nonstationary methods, namely the conjugate gradient (CG), biconjugate gradient (BiCG), and minimal residual (MINRES) methods.

(a) Algorithm for the conjugate gradient (CG) method. Given $x_0$, compute $r_0 = b - Ax_0$ and set $p_0 = r_0$. For $j = 0, 1, \ldots$, until convergence, do
\[ \alpha_j = \frac{r_j^T r_j}{p_j^T A p_j}, \quad x_{j+1} = x_j + \alpha_j p_j, \quad r_{j+1} = r_j - \alpha_j A p_j, \quad \beta_j = \frac{r_{j+1}^T r_{j+1}}{r_j^T r_j}, \quad p_{j+1} = r_{j+1} + \beta_j p_j. \]

(b) Algorithm for the biconjugate gradient (BiCG) method. Choose an initial guess $x_0$, compute $r_0 = b - Ax_0$, choose $\tilde{r}_0$ with $\tilde{r}_0^T r_0 \neq 0$, and set $p_0 = r_0$ and $\tilde{p}_0 = \tilde{r}_0$. For $j = 0, 1, \ldots$, do
\[ \alpha_j = \frac{\tilde{r}_j^T r_j}{\tilde{p}_j^T A p_j}, \quad x_{j+1} = x_j + \alpha_j p_j, \quad r_{j+1} = r_j - \alpha_j A p_j, \quad \tilde{r}_{j+1} = \tilde{r}_j - \alpha_j A^T \tilde{p}_j, \quad \beta_j = \frac{\tilde{r}_{j+1}^T r_{j+1}}{\tilde{r}_j^T r_j}, \quad p_{j+1} = r_{j+1} + \beta_j p_j, \quad \tilde{p}_{j+1} = \tilde{r}_{j+1} + \beta_j \tilde{p}_j. \]

(c) Minimal residual (MINRES) method. Choose an initial guess $x_0$, then compute $r_0 = b - Ax_0$. For $j = 0, 1, \ldots$, do
\[ \alpha_j = \frac{(A r_j)^T r_j}{(A r_j)^T (A r_j)}, \quad x_{j+1} = x_j + \alpha_j r_j, \quad r_{j+1} = r_j - \alpha_j A r_j. \]
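As a sketch of the nonstationary methods used for comparison, the CG algorithm above can be written in NumPy as follows (CG is shown as the representative case; BiCG and MINRES follow the same pattern, and the 3-by-3 SPD system is an assumed example):

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=1000):
    """Plain conjugate gradient for an SPD matrix A."""
    x = x0.copy()
    r = b - A @ x                 # initial residual r_0
    p = r.copy()                  # first search direction p_0 = r_0
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)     # step length
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol: # residual small enough: converged
            break
        p = r + (rs_new / rs) * p # new A-conjugate search direction
        rs = rs_new
    return x

# Illustrative SPD system (an assumption, not from the paper)
A = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])
b = np.array([1., 2., 3.])
x = conjugate_gradient(A, b, np.zeros(3))
print(np.allclose(A @ x, b))      # True
```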
3. Main Result
The refinement of Gauss-Seidel (RGS) and second refinement of Gauss-Seidel (SRGS) methods are modifications of the Gauss-Seidel method. Starting from a system of linear equations in the form (1), we can derive the refinement of the Gauss-Seidel method by the following procedure: $Ax = b$. By adding $(D-L)x$ to both sides, we get $(D-L)x = (D-L)x + b - Ax$.
This could be written as $(D-L)x = (D-L)x + r$, where $r = b - Ax$ is the residual.
After rearranging and multiplying both sides by $(D-L)^{-1}$, we get $x = x + (D-L)^{-1}(b - Ax)$. Thus,
\[ x^{(n+1)} = \tilde{x}^{(n+1)} + (D-L)^{-1}\big(b - A\tilde{x}^{(n+1)}\big) \tag{10} \]
or
\[ x^{(n+1)} = (D-L)^{-1}U\tilde{x}^{(n+1)} + (D-L)^{-1}b. \tag{11} \]
Equations (10) and (11) are refiners of the Gauss-Seidel method. Taking either of the above equations (10) or (11) and substituting the Gauss-Seidel scheme (3) in place of $\tilde{x}^{(n+1)}$ gives the refinement of the Gauss-Seidel method. Applying the refinement $k$ times, we can derive the generalized refinement of the Gauss-Seidel method:
\[ x^{(n+1)} = \big[(D-L)^{-1}U\big]^{k+1} x^{(n)} + \Big[I + (D-L)^{-1}U + \cdots + \big((D-L)^{-1}U\big)^{k}\Big](D-L)^{-1}b. \tag{12} \]
If $k = 0$, the scheme reduces to the Gauss-Seidel method.
If $k = 1$, the scheme reduces to the refinement of Gauss-Seidel method.
If $k = 2$, the scheme reduces to the second refinement of Gauss-Seidel method, and so on.
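The reduction cases above can be made concrete: one GRGS sweep is a Gauss-Seidel step followed by $k$ residual-correction steps. A minimal NumPy sketch (the helper name `grgs_step` and the test system are assumptions for illustration):

```python
import numpy as np

def grgs_step(A, b, x, k):
    """One GRGS sweep: a Gauss-Seidel step followed by k refinement
    (residual-correction) steps with the same (D - L) factor.
    k = 0 gives plain GS, k = 1 the RGS scheme, k = 2 the SRGS scheme."""
    DL = np.tril(A)                              # D - L
    x = np.linalg.solve(DL, (DL - A) @ x + b)    # Gauss-Seidel step
    for _ in range(k):
        x = x + np.linalg.solve(DL, b - A @ x)   # refinement step
    return x

# Illustrative SDD system (an assumption, not from the paper)
A = np.array([[5., -1., 0.], [-1., 5., -1.], [0., -1., 5.]])
b = np.array([4., 6., 4.])
x = np.zeros(3)
for _ in range(10):
    x = grgs_step(A, b, x, k=2)
print(np.allclose(A @ x, b))                     # True
```

Note that each residual-correction step $x \mapsto x + (D-L)^{-1}(b - Ax)$ is algebraically another Gauss-Seidel sweep, which is why the GRGS iteration matrix is a power of the Gauss-Seidel one.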
3.1. Algorithm for GRGS Method
(1) Decompose $A$ as $A = D - L - U$, where $D - L$ is nonsingular and $U$ is nonzero; choose an initial approximation $x^{(0)}$, a tolerance TOL, and a maximum number of iterations $N$.
(2) Choose $k \in \{1, 2, 3, \ldots\}$ and set $T = (D-L)^{-1}U$ and $c = (D-L)^{-1}b$.
(3) For $n = 0, 1, \ldots$, until convergence, compute
\[ x^{(n+1)} = T^{k+1} x^{(n)} + \big(I + T + \cdots + T^{k}\big)c. \]
(4) If $\|x^{(n+1)} - x^{(n)}\| < \text{TOL}$ or $n = N$, then stop.
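The algorithm above can be sketched directly in matrix form; the function name, default parameters, and the small test system below are illustrative assumptions:

```python
import numpy as np

def grgs(A, b, k=2, x0=None, tol=1e-10, N=500):
    """GRGS in matrix form: x_{n+1} = T^{k+1} x_n + (I + T + ... + T^k) c,
    with T = (D-L)^{-1} U and c = (D-L)^{-1} b."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    DL = np.tril(A)                     # D - L
    T = np.linalg.solve(DL, DL - A)     # (D-L)^{-1} U
    c = np.linalg.solve(DL, b)
    Tk = np.linalg.matrix_power(T, k + 1)
    S = sum(np.linalg.matrix_power(T, j) for j in range(k + 1)) @ c
    for it in range(1, N + 1):
        x_new = Tk @ x + S
        if np.linalg.norm(x_new - x, np.inf) < tol:   # stopping test (4)
            return x_new, it
        x = x_new
    return x, N

# Illustrative SDD system (an assumption, not from the paper)
A = np.array([[4., -1., 0.], [-1., 4., -1.], [0., -1., 4.]])
b = np.array([2., 4., 10.])
x, iters = grgs(A, b, k=2)
print(np.allclose(A @ x, b))            # True
```

For large sparse systems one would apply the triangular solves sweep by sweep instead of forming $T^{k+1}$ explicitly; the dense form is kept here only to match the algorithm statement.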
3.2. Convergence of Generalized Refinement of Gauss-Seidel Method
Theorem 8. If $A$ is a consistently ordered 2-cyclic matrix, then $\rho(T_{GRGS}) = [\rho(T_{GS})]^{k+1} = [\rho(T_J)]^{2(k+1)}$, where $T_J$, $T_{GS}$, and $T_{GRGS}$ are the respective iteration matrices. The Gauss-Seidel method converges if and only if the generalized refinement of Gauss-Seidel method converges, and the GRGS method converges $(k+1)$ times as fast as the Gauss-Seidel method and $2(k+1)$ times as fast as the Jacobi method.
Proof. Let $A$ be a consistently ordered 2-cyclic matrix with nonvanishing diagonal elements. Then, by Theorem 6, we have $\rho(T_{GS}) = [\rho(T_J)]^2$. This implies that the Gauss-Seidel method converges twice as fast as the Jacobi method. Hence, based on Theorem 7, the Jacobi method is convergent if and only if the Gauss-Seidel method is convergent. Similarly, if GS converges, then the RGS, SRGS, ..., and GRGS methods converge. Again,
\[ \rho(T_{RGS}) = [\rho(T_{GS})]^{2} = [\rho(T_J)]^{4}, \quad \rho(T_{SRGS}) = [\rho(T_{GS})]^{3} = [\rho(T_J)]^{6}, \]
and so on. Thus, the spectral radius of the GRGS method is as follows:
\[ \rho(T_{GRGS}) = [\rho(T_{GS})]^{k+1} = [\rho(T_J)]^{2(k+1)}. \]
Therefore, the GRGS method converges $(k+1)$ times as fast as the Gauss-Seidel method and $2(k+1)$ times as fast as the Jacobi method.
Here, one can deduce that the number of iterations of the GRGS method, denoted by $n_{GRGS}$, satisfies
\[ n_{GRGS} \approx \frac{n_{GS}}{k+1} \approx \frac{n_J}{2(k+1)}. \]
Similarly, the rate of convergence of GRGS, denoted by $R(T_{GRGS})$, satisfies
\[ R(T_{GRGS}) = -\ln \rho(T_{GRGS}) = (k+1)R(T_{GS}) = 2(k+1)R(T_J). \] ☐
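Since the GRGS iteration matrix is $T^{k+1}$ with $T = (D-L)^{-1}U$, the spectral-radius relation of Theorem 8 can be verified numerically; the tridiagonal test matrix below is an assumed example:

```python
import numpy as np

# Numerical check of Theorem 8: the GRGS iteration matrix is T^{k+1}, so
# rho(T^{k+1}) = rho(T)^{k+1}, and the rate -ln(rho) grows by a factor (k+1).
n = 5
A = (np.diag(4. * np.ones(n))
     + np.diag(-1. * np.ones(n - 1), 1)
     + np.diag(-1. * np.ones(n - 1), -1))
DL = np.tril(A)
T = np.linalg.solve(DL, DL - A)          # Gauss-Seidel iteration matrix

rho = lambda M: max(abs(np.linalg.eigvals(M)))
for k in (0, 1, 2, 3):
    Tg = np.linalg.matrix_power(T, k + 1)
    assert np.isclose(rho(Tg), rho(T) ** (k + 1))
print("rho(T_GRGS) = rho(T_GS)^(k+1) verified for k = 0..3")
```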
Theorem 9. The generalized refinement of Gauss-Seidel method converges faster than the Gauss-Seidel method and refinement of Gauss-Seidel method when the Gauss-Seidel method is convergent.
Proof. We can rewrite the Gauss-Seidel scheme (3), the RGS and SRGS schemes, and the GRGS scheme (12), respectively, as $x^{(n+1)} = Tx^{(n)} + c$, $x^{(n+1)} = T^{2}x^{(n)} + (I+T)c$, $x^{(n+1)} = T^{3}x^{(n)} + (I+T+T^{2})c$, and $x^{(n+1)} = T^{k+1}x^{(n)} + (I+T+\cdots+T^{k})c$, where $T = (D-L)^{-1}U$ and $c = (D-L)^{-1}b$, given that $\rho(T) < 1$. Let $x$ be the exact solution of (1). That is, $x = Tx + c = T^{2}x + (I+T)c = T^{3}x + (I+T+T^{2})c = T^{k+1}x + (I+T+\cdots+T^{k})c$. Let $n$ be a nonnegative integer, and work in a norm for which $\|T\| < 1$ (such a norm exists because $\rho(T) < 1$).
If we consider the Gauss-Seidel method, then
\[ \|x^{(n+1)} - x\| = \|T(x^{(n)} - x)\| \le \|T\|\,\|x^{(n)} - x\| \le \cdots \le \|T\|^{n+1}\|x^{(0)} - x\|. \]
Consider the refinement of the Gauss-Seidel method:
\[ \|x^{(n+1)} - x\| = \|T^{2}(x^{(n)} - x)\| \le \|T\|^{2(n+1)}\|x^{(0)} - x\|. \]
Consider the second refinement of the Gauss-Seidel method:
\[ \|x^{(n+1)} - x\| = \|T^{3}(x^{(n)} - x)\| \le \|T\|^{3(n+1)}\|x^{(0)} - x\|. \]
Finally, let us consider the generalized refinement of the Gauss-Seidel method:
\[ \|x^{(n+1)} - x\| = \|T^{k+1}(x^{(n)} - x)\| \le \|T\|^{(k+1)(n+1)}\|x^{(0)} - x\|. \]
Since $\|T\| < 1$, for $k \ge 2$ we have $\|T\|^{(k+1)(n+1)} \le \|T\|^{3(n+1)} \le \|T\|^{2(n+1)} \le \|T\|^{n+1}$. Therefore, the generalized refinement of Gauss-Seidel method converges faster than the Gauss-Seidel, refinement of Gauss-Seidel, and second refinement of Gauss-Seidel methods. ☐
3.3. Numerical Examples
Example 1. Consider an M-matrix $A$ (which is also a consistently ordered 2-cyclic matrix) arising from the discretization of the Poisson equation on the unit square. Solve the resulting system with the methods under comparison.
Solution 1. The computed optimal values are reported in Table 1.
From Table 1, one can see that the GRGS method performs much better than the SOR method. Choosing a sufficiently large $k$ in the $k$-RGS method yields the solution at the first iteration, which is comparable to solving with a direct method. The $k$-RGS method then even outperforms nonstationary methods such as the CG, BiCG, and MINRES methods.
Example 2. Consider the steady-state heat distribution in a thin square metal plate with dimensions 0.5 m by 0.5 m. Two adjacent boundaries are held at 0°C, and the heat on the other two boundaries increases linearly from 0°C at one corner to 100°C where those sides meet.
Solution. Place the sides with the zero boundary conditions along the $x$- and $y$-axes. Then, the problem is expressed as
\[ \frac{\partial^{2} u}{\partial x^{2}}(x,y) + \frac{\partial^{2} u}{\partial y^{2}}(x,y) = 0 \]
for $(x,y)$ in the set $\{(x,y) : 0 < x < 0.5,\ 0 < y < 0.5\}$. The boundary conditions are $u(0,y) = 0$, $u(x,0) = 0$, $u(x,0.5) = 200x$, and $u(0.5,y) = 200y$. With $n = m = 4$, the problem has a grid of mesh size $h = 0.125$, and the difference equation is
\[ 4w_{ij} - w_{i+1,j} - w_{i-1,j} - w_{i,j+1} - w_{i,j-1} = 0 \]
for each $i = 1, 2, 3$ and $j = 1, 2, 3$, where $w_{ij}$ approximates $u(x_i, y_j)$ at the interior mesh points, labeled $w_1, \ldots, w_9$ row by row from the top. The exact solution is $u(x,y) = 400xy$, so that $w_1 = 18.75$, $w_2 = 37.5$, $w_3 = 56.25$, $w_4 = 12.5$, $w_5 = 25$, $w_6 = 37.5$, $w_7 = 6.25$, $w_8 = 12.5$, and $w_9 = 18.75$. The matrix has the form
\[ A = \begin{pmatrix} B & -I & 0 \\ -I & B & -I \\ 0 & -I & B \end{pmatrix}, \qquad B = \begin{pmatrix} 4 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 4 \end{pmatrix}, \]
where $I$ is the $3 \times 3$ identity matrix.
The computed optimal values are reported in Table 2.
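The coefficient matrix of Example 2 is the standard 9-by-9 block-tridiagonal matrix of the 5-point Laplacian on a 3-by-3 interior grid. A short NumPy sketch (the Kronecker-product construction is an implementation choice made here, not taken from the paper) computes the spectral radii that drive the iteration counts:

```python
import numpy as np

# Block-tridiagonal 5-point Laplacian matrix for a 3x3 interior grid
I3 = np.eye(3)
B = np.diag([4., 4., 4.]) + np.diag([-1., -1.], 1) + np.diag([-1., -1.], -1)
off = np.diag(np.ones(2), 1) + np.diag(np.ones(2), -1)
A = np.kron(np.eye(3), B) + np.kron(off, -I3)     # 9x9 matrix of Example 2

DL = np.tril(A)
T = np.linalg.solve(DL, DL - A)                   # Gauss-Seidel matrix
rho = max(abs(np.linalg.eigvals(T)))
print(f"rho(T_GS) = {rho:.4f}")                   # 0.5000 for this grid
for k in (0, 1, 2):
    print(f"k = {k}: rho(T_GRGS) = {rho ** (k + 1):.6f}")
```

The printed values show how quickly $\rho(T_{GRGS}) = \rho(T_{GS})^{k+1}$ shrinks with $k$, consistent with the iteration counts in Table 2.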
As illustrated in Table 2, the $k$-RGS method is better than the other listed methods. This second example again shows that the GRGS method performs much better even than nonstationary methods such as the CG, BiCG, and MINRES methods.
Example 3. Consider the Poisson equation on the square, with the stated boundary condition on the boundary and the given mesh size.
Solution. Using a finite difference formula, we get the following system:
The computed optimal values are reported in Table 3.
Table 3 shows that the $k$-RGS method has the minimum spectral radius and the maximum rate of convergence among the methods listed in the table. We deduce that the GRGS method is preferable to the other iterative methods considered, including the nonstationary ones; for sufficiently large $k$, it behaves almost like a direct method.
Large and sparse linear systems of equations, which arise from the discretization of PDE and ODE problems, are often solved by iterative methods. The iterative method contributed in this study is the GRGS method. Its characteristics compared to other iterative methods are summarized as follows:
(i) The number of iterations of the GRGS method satisfies $n_{GRGS} \approx n_{GS}/(k+1)$. The number of iterations of the $k$-RGS method is approximately equal to $1/(k+1)$ of the number of iterations of the GS method and $2/(k+1)$ of the number of iterations of the RGS method, and so on.
(ii) The spectral radius of the GRGS method satisfies $\rho(T_{GRGS}) = [\rho(T_{GS})]^{k+1} = [\rho(T_J)]^{2(k+1)}$. Thus, for $k \ge 2$, GRGS is faster than the J, GS, RGS, and SRGS methods.
(iii) The rate of convergence of the GRGS method satisfies $R(T_{GRGS}) = (k+1)R(T_{GS}) = 2(k+1)R(T_J)$. Therefore, the GRGS method has a larger rate of convergence than the J, GS, RGS, and SRGS methods.
(iv) The SOR iterative method needs fewer iterations than the J and GS methods but more than the GRGS method. Even though the SOR method is the classical method of choice for consistently ordered matrices, the proposed GRGS method performs much better than the SOR method.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
D. K. Salkuyeh, “Generalized Jacobi and Gauss-Seidel methods for solving linear system of equations,” Numerical Mathematics: A Journal of Chinese Universities (English Series), vol. 16, no. 2, pp. 164–170, 2007.
V. B. K. Vatti and T. K. Eneyew, “A refinement of Gauss-Seidel method for solving of linear system of equations,” International Journal of Contemporary Mathematical Sciences, vol. 6, no. 3, pp. 117–121, 2011.
D. M. Young, Iterative Solution of Large Linear Systems, Academic Press, New York, 1971.
B. N. Datta, Numerical Linear Algebra and Applications, Brooks/Cole, 1995.
R. L. Burden and J. D. Faires, Numerical Analysis, 9th ed., Brooks/Cole, Cengage Learning, 2011.