Research Article | Open Access

Gashaye Dessalew, Tesfaye Kebede, Gurju Awgichew, Assaye Walelign, "Generalized Refinement of Gauss-Seidel Method for Consistently Ordered 2-Cyclic Matrices", Abstract and Applied Analysis, vol. 2021, Article ID 8343207, 7 pages, 2021. https://doi.org/10.1155/2021/8343207

Generalized Refinement of Gauss-Seidel Method for Consistently Ordered 2-Cyclic Matrices

Academic Editor: Simeon Reich
Received: 08 Apr 2021
Accepted: 24 May 2021
Published: 01 Jun 2021

Abstract

This paper presents a generalized refinement of the Gauss-Seidel method for solving systems of linear equations whose coefficient matrices are consistently ordered and 2-cyclic. Consistently ordered 2-cyclic matrices are obtained when the finite difference method is applied to solve differential equations. Suitable theorems are introduced to verify the convergence of the proposed method, and a few numerical examples are given to observe its effectiveness. The study shows that the generalized refinement of Gauss-Seidel method obtains the solution of a problem in fewer iterations and attains a greater rate of convergence than previous methods.

1. Introduction

Consider the problem of large and sparse linear systems of the form

Ax = b, (1)

where A is a nonsingular real matrix of order n, b is a given n-dimensional real vector, and x is an n-dimensional vector to be determined. By splitting A into A = D - L - U, where D is the diagonal matrix with d_ii = a_ii, L is the strictly lower, and U is the strictly upper triangular part of A, different iterative methods were developed. In recent years, research results show that generalization, refinement, and extrapolation (acceleration or relaxation) are used to modify the Gauss-Seidel method. Salkuyeh [1] introduced the generalized Gauss-Seidel (GGS) method and discussed its convergence for strictly diagonally dominant (SDD) and M-matrices. The refinement of Gauss-Seidel (RGS) method was studied by Vatti and Eneyew [2], who proved its convergence for SDD matrices. Recently, Enyew et al. [3] developed a second refinement of the Gauss-Seidel method for SDD, symmetric positive definite (SPD), and M-matrices. All of these authors tried to minimize the number of iterations and improve the rate of convergence, and research in the area of iterative methods continues. In line with this, in this paper, we study the generalized refinement of Gauss-Seidel (GRGS) iterative method, which accelerates the convergence of the basic Gauss-Seidel method. Here, by the GRGS method we mean the mth-RGS method, where m is a nonnegative integer.

2. Preliminaries

Some of the basic iterative methods are given below.

The Jacobi (J) iterative method is

x^(k+1) = D^(-1)(L + U)x^(k) + D^(-1)b. (2)

The Gauss-Seidel (GS) iterative method is

x^(k+1) = (D - L)^(-1)Ux^(k) + (D - L)^(-1)b, (3)

and the successive overrelaxation (SOR) iterative method is

x^(k+1) = (D - ωL)^(-1)[(1 - ω)D + ωU]x^(k) + ω(D - ωL)^(-1)b, (4)

where 0 < ω < 2. Equations (2), (3), and (4) can be denoted by x^(k+1) = B_J x^(k) + c_J, x^(k+1) = B_GS x^(k) + c_GS, and x^(k+1) = B_SOR x^(k) + c_SOR, where B_J = D^(-1)(L + U), c_J = D^(-1)b, B_GS = (D - L)^(-1)U, c_GS = (D - L)^(-1)b, B_SOR = (D - ωL)^(-1)[(1 - ω)D + ωU], and c_SOR = ω(D - ωL)^(-1)b, respectively.
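The three stationary schemes above follow directly from the splitting A = D - L - U. The following is a minimal sketch of ours, not the authors' code; the function names and the small test system are illustrative assumptions:

```python
import numpy as np

def splitting(A):
    """Return D, L, U with A = D - L - U (D diagonal, L strictly lower, U strictly upper)."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    return D, L, U

def jacobi_step(A, b, x):
    # x_{k+1} = D^{-1}[(L + U) x_k + b]
    D, L, U = splitting(A)
    return np.linalg.solve(D, (L + U) @ x + b)

def gauss_seidel_step(A, b, x):
    # x_{k+1} = (D - L)^{-1}[U x_k + b]
    D, L, U = splitting(A)
    return np.linalg.solve(D - L, U @ x + b)

def sor_step(A, b, x, w):
    # x_{k+1} = (D - wL)^{-1}[((1 - w)D + wU) x_k + w b], with 0 < w < 2
    D, L, U = splitting(A)
    return np.linalg.solve(D - w * L, ((1 - w) * D + w * U) @ x + w * b)
```

Repeating any of these steps until the difference between successive iterates falls below a tolerance gives the full iterative method.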

Definition 1. The spectrum of a matrix A is defined by σ(A) = {λ : λ is an eigenvalue of A}. That is, the spectrum of a matrix A is the set of all its eigenvalues.

Definition 2. The spectral radius of a matrix A is denoted by ρ(A) and is defined by ρ(A) = max{|λ| : λ ∈ σ(A)}.

Definition 3. The (asymptotic) rate of convergence of an iteration matrix B is denoted by R(B) and is defined by R(B) = -log₁₀ ρ(B).

Definition 4 ([4]). A matrix A is said to be consistently ordered two-cyclic if the eigenvalues of αD^(-1)L + α^(-1)D^(-1)U, α ≠ 0, are independent of α.

Theorem 5 ([5]). Let A be consistently ordered and 2-cyclic with nonvanishing diagonal elements, and let B_J = D^(-1)(L + U). Then, (a) if μ is any eigenvalue of B_J of multiplicity p, then -μ is also an eigenvalue of B_J of multiplicity p; (b) λ ≠ 0 satisfies λ = μ² for some eigenvalue μ of B_J if and only if μ satisfies μ² = λ for some eigenvalue λ of B_GS.

Theorem 6 ([6]). Let A be consistently ordered and 2-cyclic with nonzero diagonal elements. Then, ρ(B_GS) = [ρ(B_J)]².

Theorem 7 ([6]). For consistently ordered 2-cyclic matrices, if the Jacobi method converges, so does the Gauss-Seidel method, and the Gauss-Seidel method converges twice as fast as the Jacobi method.
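The spectral-radius relation in Theorems 6 and 7 can be checked numerically on a standard consistently ordered 2-cyclic matrix, the 1D Poisson matrix tridiag(-1, 2, -1). This is an illustrative sketch of ours, not part of the paper; the matrix size is an arbitrary choice:

```python
import numpy as np

n = 8
# 1D Poisson matrix tridiag(-1, 2, -1): consistently ordered and 2-cyclic
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
D = np.diag(np.diag(A))
L = -np.tril(A, -1)
U = -np.triu(A, 1)

B_J = np.linalg.solve(D, L + U)       # Jacobi iteration matrix
B_GS = np.linalg.solve(D - L, U)      # Gauss-Seidel iteration matrix

rho = lambda B: max(abs(np.linalg.eigvals(B)))
# Theorem 6: rho(B_GS) = rho(B_J)^2, so GS converges twice as fast as Jacobi
print(rho(B_J) ** 2, rho(B_GS))
```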
We used the following algorithms for nonstationary methods like conjugate gradient (CG), biconjugate gradient (BiCG), and minimal residual (MINRES) methods:

(a) Algorithm for the conjugate gradient (CG) method. Given x^(0), find r^(0) = b - Ax^(0) and set p^(0) = r^(0). For k = 0, 1, 2, …, until convergence, do
α_k = (r^(k), r^(k)) / (p^(k), Ap^(k)),
x^(k+1) = x^(k) + α_k p^(k),
r^(k+1) = r^(k) - α_k Ap^(k),
β_k = (r^(k+1), r^(k+1)) / (r^(k), r^(k)),
p^(k+1) = r^(k+1) + β_k p^(k).

(b) Algorithm for the biconjugate gradient (BiCG) method. Choose initial guess x^(0). Then, find r^(0) = b - Ax^(0), choose r̃^(0) with (r̃^(0), r^(0)) ≠ 0, and set p^(0) = r^(0) and p̃^(0) = r̃^(0). For k = 0, 1, 2, …, do
α_k = (r̃^(k), r^(k)) / (p̃^(k), Ap^(k)),
x^(k+1) = x^(k) + α_k p^(k),
r^(k+1) = r^(k) - α_k Ap^(k),
r̃^(k+1) = r̃^(k) - α_k Aᵀp̃^(k),
β_k = (r̃^(k+1), r^(k+1)) / (r̃^(k), r^(k)),
p^(k+1) = r^(k+1) + β_k p^(k),
p̃^(k+1) = r̃^(k+1) + β_k p̃^(k).

(c) Minimal residual (MINRES) method. Choose initial guess x^(0), then find r^(0) = b - Ax^(0). For k = 0, 1, 2, …, do
α_k = (r^(k), Ar^(k)) / (Ar^(k), Ar^(k)),
x^(k+1) = x^(k) + α_k r^(k),
r^(k+1) = r^(k) - α_k Ar^(k).
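As a concrete instance of one of these nonstationary methods, a textbook conjugate gradient iteration for SPD systems can be written as follows. This is our sketch under the standard CG recurrences, not the authors' implementation:

```python
import numpy as np

def cg(A, b, x0, tol=1e-10, maxit=1000):
    """Conjugate gradient method for a symmetric positive definite A."""
    x = x0.astype(float).copy()
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(maxit):
        Ap = A @ p
        alpha = rs / (p @ Ap)        # step length
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:    # residual small enough: converged
            break
        p = r + (rs_new / rs) * p    # new A-conjugate direction
        rs = rs_new
    return x
```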

3. Main Result

The refinement of Gauss-Seidel (RGS) and second refinement of Gauss-Seidel (SRGS) methods are modifications of the Gauss-Seidel method. Starting from a system of linear equations in the form (1), we can derive the refinement of Gauss-Seidel (RGS) method by the following procedure: (D - L - U)x = b. By adding (L + U)x to both sides, we get Dx = (L + U)x + b.

This could be written as (D - L)x = Ux + b, where D - L is the lower triangular part of A.

After rearranging and multiplying both sides by (D - L)^(-1), we get x = (D - L)^(-1)Ux + (D - L)^(-1)b. Thus,

x̃^(k+1) = (D - L)^(-1)Ux^(k+1) + (D - L)^(-1)b, (10)

or

x̃^(k+1) = Gx^(k+1) + c, where G = (D - L)^(-1)U and c = (D - L)^(-1)b. (11)

Equation (10) or (11) is the refiner of the Gauss-Seidel method. Taking either of the above equations (10) or (11) and substituting the Gauss-Seidel scheme (3) in place of x^(k+1), we get the refinement of Gauss-Seidel method. Repeating this substitution m times, we can derive the generalized refinement of Gauss-Seidel method:

x^(k+1) = G^(m+1)x^(k) + (G^m + G^(m-1) + ⋯ + G + I)c. (12)

Equation (12) is called the generalized refinement of Gauss-Seidel (GRGS) method or the mth-refinement of Gauss-Seidel (mth-RGS) method. Equation (12) can be denoted by x^(k+1) = B_GRGS x^(k) + c_GRGS, where B_GRGS = G^(m+1) and c_GRGS = (G^m + ⋯ + G + I)c.

If m = 0, the scheme is reduced to the Gauss-Seidel method.

If m = 1, the method is reduced to the refinement of Gauss-Seidel method.

If m = 2, the scheme is reduced to the second refinement of Gauss-Seidel method, and so on.

3.1. Algorithm for GRGS Method

(1) Decompose A as A = D - L - U, where D - L is nonsingular and U is a nonzero matrix; choose an initial approximation x^(0), a tolerance TOL, and a maximum number of iterations N.
(2) Choose m ≥ 0.

For k = 0, 1, 2, …, until convergence,

do x^(k+1) = G^(m+1)x^(k) + (G^m + ⋯ + G + I)c.

If ‖x^(k+1) - x^(k)‖ < TOL or k = N, then stop.
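The GRGS algorithm amounts to applying the Gauss-Seidel map x → Gx + c a total of (m + 1) times per outer iteration. Below is a minimal sketch of ours (function name and test data are assumptions), in which m = 0 recovers GS, m = 1 the RGS, and m = 2 the SRGS method:

```python
import numpy as np

def grgs(A, b, x0, m=2, tol=1e-10, maxit=500):
    """Generalized refinement of Gauss-Seidel (m-th RGS):
    x_{k+1} = G^{m+1} x_k + (G^m + ... + G + I)c, realized as m+1 GS sweeps."""
    DL = np.tril(A)        # D - L: lower triangle of A including the diagonal
    U = -np.triu(A, 1)     # strictly upper part, with the sign of the splitting
    x = x0.astype(float)
    for k in range(maxit):
        x_new = x
        for _ in range(m + 1):                        # one GS sweep, m + 1 times
            x_new = np.linalg.solve(DL, U @ x_new + b)
        if np.linalg.norm(x_new - x, np.inf) < tol:   # stopping criterion
            return x_new, k + 1
        x = x_new
    return x, maxit
```

Since m = 0 reproduces the plain Gauss-Seidel iteration, the same routine covers every method in the family.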

3.2. Convergence of Generalized Refinement of Gauss-Seidel Method

Theorem 8. If A is a consistently ordered 2-cyclic matrix, then ρ(B_GRGS) = [ρ(B_GS)]^(m+1) = [ρ(B_J)]^(2(m+1)). The Gauss-Seidel method converges if and only if the generalized refinement of Gauss-Seidel method converges, and the GRGS method converges (m + 1) times as fast as the Gauss-Seidel method and 2(m + 1) times as fast as the Jacobi method.

Proof. Let A be a consistently ordered 2-cyclic matrix with nonvanishing diagonal elements. Then, by Theorem 6, we have ρ(B_GS) = [ρ(B_J)]². This implies that the Gauss-Seidel method converges twice as fast as the Jacobi method. Hence, based on Theorem 7, the Jacobi method is convergent if and only if the Gauss-Seidel method is convergent. Similarly, if GS converges, then the RGS, SRGS, 3-RGS, …, and GRGS methods converge. Again, ρ(B_RGS) = [ρ(B_GS)]² = [ρ(B_J)]⁴, ρ(B_SRGS) = [ρ(B_GS)]³ = [ρ(B_J)]⁶, and so on. Thus, the spectral radius of the GRGS method is as follows:

ρ(B_GRGS) = [ρ(B_GS)]^(m+1) = [ρ(B_J)]^(2(m+1)).

Therefore, the GRGS method converges (m + 1) times as fast as the Gauss-Seidel method and 2(m + 1) times as fast as the Jacobi method.
Here, one can deduce that the number of iterations of the GRGS method, denoted by K_GRGS, is

K_GRGS ≈ K_GS / (m + 1).

Similarly, the rate of convergence of GRGS is denoted by R(B_GRGS) such that

R(B_GRGS) = -log₁₀ ρ(B_GRGS) = (m + 1)R(B_GS) = 2(m + 1)R(B_J).

Theorem 9. The generalized refinement of Gauss-Seidel method converges faster than the Gauss-Seidel method and refinement of Gauss-Seidel method when the Gauss-Seidel method is convergent.

Proof. We can rewrite equation (3), RGS, SRGS, and (12), respectively, as x^(k+1) = Gx^(k) + c, x^(k+1) = G²x^(k) + (G + I)c, x^(k+1) = G³x^(k) + (G² + G + I)c, and x^(k+1) = G^(m+1)x^(k) + (G^m + ⋯ + G + I)c, where G = (D - L)^(-1)U and c = (D - L)^(-1)b, given that ‖G‖ < 1. Let x be the exact solution of (1). That is, x = Gx + c, x = G²x + (G + I)c, x = G³x + (G² + G + I)c, and x = G^(m+1)x + (G^m + ⋯ + G + I)c. Let k and m be nonnegative integers.

If we consider the Gauss-Seidel method,

‖x^(k+1) - x‖ = ‖G(x^(k) - x)‖ ≤ ‖G‖ ‖x^(k) - x‖ ≤ ⋯ ≤ ‖G‖^(k+1) ‖x^(0) - x‖.

Considering the refinement of Gauss-Seidel method,

‖x^(k+1) - x‖ = ‖G²(x^(k) - x)‖ ≤ ‖G‖² ‖x^(k) - x‖ ≤ ⋯ ≤ ‖G‖^(2(k+1)) ‖x^(0) - x‖.

Considering the second refinement of Gauss-Seidel method,

‖x^(k+1) - x‖ = ‖G³(x^(k) - x)‖ ≤ ‖G‖³ ‖x^(k) - x‖ ≤ ⋯ ≤ ‖G‖^(3(k+1)) ‖x^(0) - x‖.

Finally, considering the generalized refinement of Gauss-Seidel method,

‖x^(k+1) - x‖ = ‖G^(m+1)(x^(k) - x)‖ ≤ ‖G‖^(m+1) ‖x^(k) - x‖ ≤ ⋯ ≤ ‖G‖^((m+1)(k+1)) ‖x^(0) - x‖.

Comparing the coefficients of the above inequalities, for m ≥ 2 we have ‖G‖^((m+1)(k+1)) ≤ ‖G‖^(3(k+1)) ≤ ‖G‖^(2(k+1)) ≤ ‖G‖^(k+1), since ‖G‖ < 1.

Therefore, the generalized refinement of Gauss-Seidel method converges faster than the Gauss-Seidel method, refinement of Gauss-Seidel method, and second refinement of Gauss-Seidel method. ☐
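The faster convergence proved above can also be observed empirically: on the 1D Poisson model problem, the outer-iteration count of the m-th refinement drops roughly by the factor (m + 1) predicted by Theorem 8. The experiment below is a sketch of ours; the matrix, right-hand side, and tolerance are illustrative choices, not the paper's test problems:

```python
import numpy as np

def iters_to_tol(A, b, m, tol=1e-8, maxit=10000):
    """Outer iterations the m-th refinement of GS needs to reach tol
    (m = 0: GS, m = 1: RGS, m = 2: SRGS, ...)."""
    DL, U = np.tril(A), -np.triu(A, 1)
    x = np.zeros_like(b)
    for k in range(maxit):
        x_new = x
        for _ in range(m + 1):        # m + 1 Gauss-Seidel sweeps per outer iteration
            x_new = np.linalg.solve(DL, U @ x_new + b)
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return k + 1
        x = x_new
    return maxit

n = 16
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Poisson model problem
b = np.ones(n)
counts = [iters_to_tol(A, b, m) for m in (0, 1, 2, 3)]
print(counts)  # each refinement level needs roughly 1/(m+1) of the GS count
```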

3.3. Numerical Examples

Example 1. Consider an M-matrix A (also a consistently ordered 2-cyclic matrix), which arises from the discretization of the Poisson equation on the unit square as considered in [6], and solve the resulting system of the form (1) with the methods above.

Solution 1. For this system, we obtain the optimal values reported in Table 1.

From Table 1, one can see that the GRGS method is much better than the SOR method. When we choose the 8-RGS method, we get the solution at the first iteration, which amounts to finding the solution as a direct method would. The 8-RGS method is even better than nonstationary methods like the CG, BiCG, and MINRES methods.

Example 2. Consider the steady-state heat distribution in a thin square metal plate with dimensions 0.5 m by 0.5 m. Two adjacent boundaries are held at 0°C, and the heat on the other boundaries increases linearly from 0°C at one corner to 100°C where those sides meet.

Solution. Place the sides with the zero boundary conditions along the x- and y-axes. Then, the problem is expressed as Laplace's equation, ∂²u/∂x² + ∂²u/∂y² = 0, for (x, y) in the interior of the plate, with the boundary conditions described above. With the chosen step size, the grid has nine interior mesh points, and the difference equation is the five-point formula w_{i,j} = (w_{i+1,j} + w_{i-1,j} + w_{i,j+1} + w_{i,j-1})/4, for each i = 1, 2, 3 and j = 1, 2, 3, given in [7]. The exact solution gives the values w₁, w₂, …, w₉ at these interior points. The matrix has the form:


Table 1: Results for Example 1.

Methods   Number of iterations   Spectral radius   Rate of convergence
J         19                     0.6036            0.2193
GS        9                      0.3643            0.4385
SOR       5                      0.1128            0.9477
CG        6                      —                 —
BiCG      6                      —                 —
MINRES    19                     —                 —
RGS       5                      0.1327            0.8771
SRGS      3                      0.0483            1.3161
3-RGS     3                      0.0176            1.7545
4-RGS     2                      0.0064            2.1938
5-RGS     2                      0.0023            2.6383
6-RGS     2                      0.00085           3.07058
7-RGS     2                      0.00031           3.5086
8-RGS     1                      0.00011           3.9586

For this system, we get the optimal values reported in Table 2.

As illustrated in Table 2, the 9-RGS method is better than the other listed methods. This is the second example showing that the GRGS method is much better than even nonstationary methods like the CG, BiCG, and MINRES methods.

Example 3. Consider the Poisson equation on a square region, with the solution prescribed on the boundary and a uniform mesh.

Solution. Using a finite difference formula, we get the following system:


Table 2: Results for Example 2.

Methods   Number of iterations   Spectral radius   Rate of convergence
J         39                     0.7071            0.1505
GS        20                     0.5               0.3010
SOR       11                     0.1716            0.7655
CG        5                      —                 —
BiCG      8                      —                 —
MINRES    33                     —                 —
RGS       10                     0.25              0.6021
SRGS      7                      0.125             0.9031
3-RGS     5                      0.0625            1.2041
4-RGS     4                      0.0313            1.5045
5-RGS     4                      0.0156            1.8069
6-RGS     3                      0.0078            2.1079
7-RGS     3                      0.0039            2.4089
8-RGS     3                      0.002             2.699
9-RGS     2                      0.001             3

For this system, one can get the optimal values reported in Table 3.

Table 3 shows that the 9-RGS method has the minimum spectral radius and the maximum rate of convergence among the methods listed in the table. We deduce that the GRGS method is preferable to the other iterative methods, including the nonstationary methods; in this sense, for large m the method behaves almost like a direct method.


Table 3: Results for Example 3.

Methods   Number of iterations   Spectral radius   Rate of convergence
J         18                     0.7071            0.1505
GS        11                     0.5               0.3010
SOR       8                      0.1716            0.7655
CG        5                      —                 —
BiCG      5                      —                 —
MINRES    19                     —                 —
RGS       6                      0.25              0.6021
SRGS      4                      0.125             0.9031
3-RGS     3                      0.0625            1.2041
4-RGS     2                      0.0313            1.5045
5-RGS     2                      0.0156            1.8069
6-RGS     2                      0.0078            2.1079
7-RGS     2                      0.0039            2.4089
8-RGS     2                      0.002             2.699
9-RGS     2                      0.001             3

4. Conclusion

Large and sparse linear systems of equations which arise from the discretization of PDE and ODE problems are often solved by iterative methods. The iterative method contributed in this study is the GRGS method. The characteristics of the GRGS method compared to other iterative methods are summarized as follows:

(i) The number of iterations of the GRGS method is K_GRGS ≈ K_GS/(m + 1). The number of iterations of the m-RGS method is approximately equal to 1/(m + 1) of the number of iterations of GS and 2/(m + 1) of the number of iterations of the RGS method, and so on.

(ii) The spectral radius of the GRGS method is ρ(B_GRGS) = [ρ(B_GS)]^(m+1) = [ρ(B_J)]^(2(m+1)). Thus, GRGS is faster than the J, GS, RGS, and SRGS methods.

(iii) The rate of convergence of the GRGS method is R(B_GRGS) = (m + 1)R(B_GS) = 2(m + 1)R(B_J). Therefore, the GRGS method has a larger rate of convergence than the J, GS, RGS, and SRGS methods.

(iv) The SOR iterative method has the smallest number of iterations as compared to the J and GS methods, but it needs more iterations than the GRGS method. Even though the SOR method is the best classical method for consistently ordered matrices, our new modified GRGS method is much better than the SOR method.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

  1. D. K. Salkuyeh, "Generalized Jacobi and Gauss-Seidel methods for solving linear system of equations," Numerical Mathematics: A Journal of Chinese Universities (English Series), vol. 16, no. 2, pp. 164–170, 2007.
  2. V. B. K. Vatti and T. K. Eneyew, "A refinement of Gauss-Seidel method for solving of linear system of equations," International Journal of Contemporary Mathematical Sciences, vol. 6, no. 3, pp. 117–121, 2011.
  3. T. K. Enyew, G. Awgichew, E. Haile, and G. D. Abie, "Second-refinement of Gauss-Seidel iterative method for solving linear system of equations," Ethiopian Journal of Science and Technology, vol. 13, no. 1, pp. 1–15, 2020.
  4. A. Hadjidimos, "Successive overrelaxation (SOR) and related methods," Journal of Computational and Applied Mathematics, vol. 123, no. 1-2, pp. 177–199, 2000.
  5. D. M. Young, Iterative Solution of Large Linear Systems, The University of Texas, 1971.
  6. B. N. Datta, Numerical Linear Algebra and Applications, Brooks/Cole, 1995.
  7. R. L. Burden and J. D. Faires, Numerical Analysis, Brooks/Cole, Cengage Learning, 2011.

Copyright © 2021 Gashaye Dessalew et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
