Abstract and Applied Analysis
Volume 2012 (2012), Article ID 867598, 14 pages
http://dx.doi.org/10.1155/2012/867598
Research Article

Convergence Analysis of the Preconditioned Group Splitting Methods in Boundary Value Problems

School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM, Pulau Pinang, Malaysia

Received 17 May 2012; Accepted 12 July 2012

Academic Editor: Ravshan Ashurov

Copyright © 2012 Norhashidah Hj. Mohd Ali and Abdulkafi Mohammed Saeed. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The construction of a specific splitting-type preconditioner in block formulation applied to a class of group relaxation iterative methods derived from the centred and rotated (skewed) finite difference approximations has been shown to improve the convergence rates of these methods. In this paper, we present a theoretical convergence analysis of this preconditioner as applied to the linear systems resulting from these group iterative schemes in solving an elliptic boundary value problem. We establish the relationship between the spectral radii of the iteration matrices of the preconditioned methods, which governs the rate of convergence of these methods. We also show that the spectral radius of the preconditioned matrices is smaller than that of their unpreconditioned counterparts provided the relaxation parameter lies in a certain optimum range. Numerical experiments are presented to confirm the agreement between the theoretical and experimental results.

1. Introduction

Consider the finite difference discretisation schemes for solving the following boundary value problem, the two-dimensional Poisson equation with Dirichlet boundary conditions:
$$u_{xx} + u_{yy} = f(x,y), \quad (x,y)\in\Omega, \qquad u(x,y) = g(x,y), \quad (x,y)\in\partial\Omega.\tag{1.1}$$
Here, Ω is the continuous unit square solution domain with boundary ∂Ω. This equation plays a very important role in the modelling of fluid flow phenomena and heat conduction problems. Let Ω be discretised uniformly in both x and y directions with a mesh size h = 1/n, where n is an integer. The simplest finite difference approximation of the Laplacian is the centred five-point formula
$$\frac{u_{i+1,j}-2u_{ij}+u_{i-1,j}}{h^{2}} + \frac{u_{i,j+1}-2u_{ij}+u_{i,j-1}}{h^{2}} - \frac{h^{2}}{12}\left(\frac{\partial^{4}u}{\partial x^{4}}+\frac{\partial^{4}u}{\partial y^{4}}\right) + O(h^{4}) = f_{ij}.\tag{1.2}$$
Here, u_{ij} = u(x_i, y_j). Another approximation to (1.1) can be derived from the rotated five-point finite-difference approximation to give [1]
$$\frac{u_{i+1,j+1}+u_{i-1,j-1}+u_{i+1,j-1}+u_{i-1,j+1}-4u_{ij}}{2h^{2}} - h^{2}\left(\frac{1}{2}\frac{\partial^{4}u}{\partial x^{2}\,\partial y^{2}} + \frac{1}{12}\left(\frac{\partial^{4}u}{\partial x^{4}}+\frac{\partial^{4}u}{\partial y^{4}}\right)\right) + O(h^{4}) = f_{ij}.\tag{1.3}$$
Based on the latter approximation, improved point and group iterative schemes have been developed over the last few years for solving several types of partial differential equations [1–5]. In particular, the Modified Explicit Decoupled Group (MEDG) method [6, 7] was formulated as the latest addition to this family of four-point explicit group methods for solving the Poisson equation. This method has been shown to be the fastest among the existing explicit group methods owing to its lower computational complexity.
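As an illustrative check (not part of the paper's codes), the two stencils can be applied to u(x, y) = x² + y², whose Laplacian is exactly 4; both five-point formulas reproduce this value with no truncation error, since the error terms in (1.2) and (1.3) involve only fourth derivatives, which vanish for a quadratic.

```python
import numpy as np

def laplacian_centred(u, h):
    """Standard five-point approximation at the interior points."""
    return (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
            - 4.0 * u[1:-1, 1:-1]) / h**2

def laplacian_rotated(u, h):
    """Rotated (skewed) five-point approximation at the interior points."""
    return (u[2:, 2:] + u[:-2, :-2] + u[2:, :-2] + u[:-2, 2:]
            - 4.0 * u[1:-1, 1:-1]) / (2.0 * h**2)

n = 8
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
X, Y = np.meshgrid(x, x, indexing="ij")
u = X**2 + Y**2          # Laplacian of x^2 + y^2 is exactly 4

print(np.allclose(laplacian_centred(u, h), 4.0))   # True
print(np.allclose(laplacian_rotated(u, h), 4.0))   # True
```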

Since it is well known that preconditioners play a vital role in accelerating the convergence rates of iterative methods, several preconditioning strategies have been used to improve the convergence rates of the explicit group methods derived from the standard and skewed (rotated) finite difference operators [8–11]. In particular, Saeed and Ali [12–14] presented an (I + S)-type preconditioning matrix applied to the systems obtained from the four-point Explicit Decoupled Group (EDG) and the Modified Explicit Decoupled Group (MEDG) methods for solving the elliptic partial differential equation, where S is obtained by taking the first upper diagonal groups of the iteration matrix of the original system. The numerical experiments performed on these methods yielded very encouraging results. However, no detailed spectral radius analysis of these preconditioned systems had been carried out to confirm the superiority of this preconditioner.

The focus of this study is to establish the convergence properties of the preconditioned systems based on the splitting-type preconditioner (I + S), with the aim of improving the performance and reliability of this family of explicit group methods derived from the rotated finite-difference formula. We prove that this type of preconditioner, applied to the MEDG SOR method, yields the greatest reduction in the spectral radius of the preconditioned matrix provided the relaxation parameter lies in a certain optimum range. This paper is organised as follows: in Section 2, we present the preconditioner applied to the system resulting from the EDG SOR method. A brief description of the application of the preconditioner in block formulation to the MEDG SOR is given in Section 3. The theoretical convergence analysis of these methods is discussed in Section 4. In Section 5, we give a numerical example to confirm the results obtained in Section 4. Finally, we give a brief conclusion in Section 6.

2. Preconditioned Explicit Decoupled Group SOR (EDG SOR)

For convenience, we will now briefly explain some of the definitions used in this paper.

Definition 2.1 (see [15]). A matrix A of order n has Property A if there exist two disjoint subsets S and T of W = {1, 2, …, n} such that if i ≠ j and either a_{ij} ≠ 0 or a_{ji} ≠ 0, then i ∈ S and j ∈ T or else i ∈ T and j ∈ S.

Definition 2.2 (see [3]). An ordered grouping π of W = {1, 2, …, n} is a subdivision of W into disjoint subsets R_1, R_2, …, R_q such that R_1 ∪ R_2 ∪ ⋯ ∪ R_q = W. Given a matrix A and an ordered grouping π, we define the submatrices A_{m,n} for m, n = 1, 2, …, q as follows: A_{m,n} is formed from A by deleting all rows except those corresponding to R_m and all columns except those corresponding to R_n.

Definition 2.3 (see [3]). Let π be an ordered grouping with q groups. A matrix A has Property A(π) if the q × q matrix Z = (z_{r,s}) defined by z_{r,s} = 0 if A_{r,s} = 0 and z_{r,s} = 1 if A_{r,s} ≠ 0 has Property A.

Definition 2.4 (see [15]). A matrix A of order n is consistently ordered if for some t there exist disjoint subsets S_1, S_2, …, S_t of W = {1, 2, …, n} such that ∪_{k=1}^{t} S_k = W and such that if i and j are associated (that is, a_{ij} ≠ 0 or a_{ji} ≠ 0), then j ∈ S_{k+1} if j > i and j ∈ S_{k−1} if j < i, where S_k is the subset containing i.
Note that a matrix 𝐴 is a 𝜋-consistently ordered matrix if the matrix 𝑍 in Definition 2.3 is consistently ordered.
From the discretisation of the EDG finite-difference formula in solving the Poisson equation, the linear system
$$Au = b\tag{2.1}$$
is obtained with [1]
$$A=\begin{bmatrix}R_{0}&R_{1}&&\\R_{2}&R_{0}&R_{1}&\\&\ddots&\ddots&\ddots\\&&R_{2}&R_{0}\end{bmatrix}_{\frac{(N-1)^{2}}{2}\times\frac{(N-1)^{2}}{2}},\qquad R_{0}=\begin{bmatrix}R_{00}&R_{01}&&\\R_{02}&R_{00}&R_{01}&\\&\ddots&\ddots&\ddots\\&&R_{02}&R_{00}\end{bmatrix}_{(N-1)\times(N-1)},$$
$$R_{00}=\begin{bmatrix}1&-\frac{1}{4}\\-\frac{1}{4}&1\end{bmatrix},\qquad R_{01}=\begin{bmatrix}0&0\\-\frac{1}{4}&0\end{bmatrix},\qquad R_{02}=R_{01}^{T},$$
$$R_{1}=\operatorname{diag}(R_{01},R_{01},\ldots,R_{01})_{(N-1)\times(N-1)},\qquad R_{2}=\operatorname{diag}(R_{02},R_{02},\ldots,R_{02})_{(N-1)\times(N-1)}.\tag{2.2}$$
Let A ∈ M_{n_i,n_i}^{π,p} (p = N − 1) be written as A = D − E − F, where D = diag(A_{11}, A_{22}, …, A_{pp}), and E and F are the strict block lower triangular and strict block upper triangular parts of A, respectively. Here, the diagonal entries A_{ii} are nonsingular, and M_{n_i,n_i}^{π,p} denotes the set of all block matrices of this form relative to the given block partitioning π. The block Jacobi iteration matrix is B_J = D^{-1}(E + F) = L + U, where L = D^{-1}E and U = D^{-1}F; the block Gauss-Seidel iteration matrix is B_GS = (I − L)^{-1}U; and the Block Successive Over-Relaxation (BSOR) iteration matrix is
$$B_{w}=(I-wL)^{-1}\{(1-w)I+wU\}.\tag{2.3}$$
Since the matrix A of (2.1) is π-consistently ordered and possesses Property A(π), the theory of block SOR is valid for this iterative method [1].
The theoretical optimum relaxation factor ω_p for implementing the group SOR iterative scheme can be computed from the formula
$$\omega_{p}=\frac{2}{1+\sqrt{1-\rho^{2}(J)}},\tag{2.4}$$
where ρ(J) is the spectral radius of the group Jacobi iteration matrix. Yousif and Evans [16] gave a good estimate of this spectral radius for the EDG method:
$$\rho(J)=1-\frac{7}{6}\pi^{2}h^{2}.\tag{2.5}$$
In an effort to further accelerate the convergence rate of this method, Saeed and Ali [12] applied a preconditioner P to the linear system (2.1) and transformed it into the equivalent system
$$PAu=Pb\tag{2.6}$$
with P = (I + S), where I is the identity matrix of the same dimension as A, while S is obtained by taking the first upper diagonal groups of R_0 in the original system above, as follows:
$$S=\operatorname{diag}(Z_{1},Z_{1},\ldots,Z_{1})_{\frac{(N-1)^{2}}{2}\times\frac{(N-1)^{2}}{2}},\qquad Z_{1}=\begin{bmatrix}0&R_{01}&&\\&\ddots&\ddots&\\&&0&R_{01}\\&&&0\end{bmatrix}_{(N-1)\times(N-1)}.\tag{2.7}$$
Here, 0 is the 2 × 2 null matrix.
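Under the reading of (2.4) and (2.5) reconstructed above (an assumption, since the original rendering is damaged), the theoretical ω_p is straightforward to evaluate; the mesh value below is an illustrative choice.

```python
import math

def rho_jacobi_edg(h):
    # Yousif-Evans estimate (2.5) for the EDG Jacobi spectral radius,
    # assuming rho(J) = 1 - (7/6) * pi^2 * h^2
    return 1.0 - (7.0 / 6.0) * math.pi**2 * h**2

def omega_opt(rho_j):
    # theoretical optimum relaxation factor (2.4)
    return 2.0 / (1.0 + math.sqrt(1.0 - rho_j**2))

h = 1.0 / 34          # illustrative mesh, matching the smallest N in Section 5
w = omega_opt(rho_jacobi_edg(h))
print(round(w, 4))
```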
The system (2.1) becomes
$$(I+S)Au=(I+S)b.\tag{2.8}$$
Hence, we have the linear system of equations
$$\widetilde{A}u=\widetilde{b}\tag{2.9}$$
with
$$\widetilde{A}=(I+S)A=I-(L+SL)-(U-S+SU),\qquad \widetilde{b}=(I+S)b.\tag{2.10}$$
The SOR iteration matrix of this scheme is called the Modified Block Successive Over-Relaxation (MBSOR) iteration matrix and is given by
$$\widetilde{B}_{w}=\{I-w(L+SL)\}^{-1}\{(1-w)I+w(U-S+SU)\}.\tag{2.11}$$
The matrix Ã of (2.9) is π-consistently ordered and possesses Property A(π) [13].
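The identity (2.10) can be verified mechanically for any splitting A = I − L − U; the sketch below uses a small random system (sizes and entries are illustrative, not the EDG matrix), with S built from the first upper diagonal as in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# a small system normalised so that A = I - L - U (illustrative only)
L = np.tril(0.1 * rng.random((n, n)), k=-1)   # strictly lower part
U = np.triu(0.1 * rng.random((n, n)), k=1)    # strictly upper part
I = np.eye(n)
A = I - L - U

# preconditioner part: first upper diagonal of the system
S = np.diag(np.diag(U, k=1), k=1)
A_tilde = (I + S) @ A

# identity (2.10): (I+S)A = I - (L + S L) - (U - S + S U)
rhs = I - (L + S @ L) - (U - S + S @ U)
print(np.allclose(A_tilde, rhs))   # True
```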

3. Preconditioned-Modified Explicit Decoupled Group SOR (MEDG SOR)

Using the MEDG approximation formula in discretising the Poisson equation, the following system is obtained [6]:
$$A_{m}u=b_{m},\tag{3.1}$$
where
$$A_{m}=\begin{bmatrix}R_{m0}&R_{m1}&&\\R_{m2}&R_{m0}&R_{m1}&\\&\ddots&\ddots&\ddots\\&&R_{m2}&R_{m0}\end{bmatrix}_{\frac{(N-2)^{2}}{2}\times\frac{(N-2)^{2}}{2}},\qquad R_{m0}=\begin{bmatrix}R_{m00}&R_{m01}&&\\R_{m02}&R_{m00}&R_{m01}&\\&\ddots&\ddots&\ddots\\&&R_{m02}&R_{m00}\end{bmatrix}_{(N-2)\times(N-2)},$$
$$R_{m00}=\begin{bmatrix}1&-\frac{1}{4}\\-\frac{1}{4}&1\end{bmatrix},\qquad R_{m01}=\begin{bmatrix}0&0\\-\frac{1}{4}&0\end{bmatrix},\qquad R_{m02}=R_{m01}^{T},$$
$$R_{m1}=\operatorname{diag}(R_{m01},\ldots,R_{m01})_{(N-2)\times(N-2)},\qquad R_{m2}=\operatorname{diag}(R_{m02},\ldots,R_{m02})_{(N-2)\times(N-2)}.\tag{3.2}$$
It is observed that A_m has the following block partitioning:
$$A_{m}=\begin{bmatrix}A_{m11}&A_{m12}&&\\A_{m21}&A_{m22}&A_{m23}&\\&\ddots&\ddots&\ddots\\&&A_{mp(p-1)}&A_{mpp}\end{bmatrix}\tag{3.3}$$
with p = N − 2, where A_{mii} ∈ M_{n_i,n_i}^{π,p}, i = 1, 2, …, p, and $\sum_{i=1}^{p}n_{i}=n$. Let A_m = D_m − E_m − F_m, where D_m = diag(A_{m11}, A_{m22}, …, A_{mpp}) and
$$E_{m}=(E_{mij}),\quad E_{mij}=\begin{cases}-A_{mij}&\text{for }j<i,\\0&\text{for }j\ge i,\end{cases}\qquad F_{m}=(F_{mij}),\quad F_{mij}=\begin{cases}-A_{mij}&\text{for }j>i,\\0&\text{for }j\le i,\end{cases}\tag{3.4}$$
so that D_m, E_m, and F_m are the block diagonal, strict block lower triangular, and strict block upper triangular parts of A_m, respectively. Here, the diagonal entries A_{mii} are nonsingular. The block Jacobi iteration matrix is B_J(A_m) = D_m^{-1}(E_m + F_m) = L_m + U_m, where L_m = D_m^{-1}E_m and U_m = D_m^{-1}F_m, while the block Gauss-Seidel iteration matrix is B_GS(A_m) = (I_m − L_m)^{-1}U_m. The Block Successive Over-Relaxation (BSOR) iteration matrix is, therefore,
$$T_{w}=(I_{m}-wL_{m})^{-1}\{(1-w)I_{m}+wU_{m}\}.\tag{3.5}$$
Since the matrix A_m of (3.3) is π-consistently ordered and possesses Property A(π), the theory of block SOR is also valid for this iterative method, which is therefore convergent [6].

Similarly, the theoretical optimum relaxation factor ω_p for implementing this group SOR iterative scheme can be obtained from (2.4). In view of the fact that the grid spacing satisfies h_MEDG = 2h_EDG, an estimate of the spectral radius of the group Jacobi iteration matrix of the MEDG method may be obtained from (2.5) as
$$\rho(J)=1-\frac{14}{3}\pi^{2}h^{2}.\tag{3.6}$$
Good agreement between the theoretical estimates and experimental values of the optimum relaxation parameters was observed in our numerical experiments. Upon applying the left-sided preconditioner P = (I + S) to system (3.1), the following system is obtained [13]:
$$PA_{m}u=Pb_{m}\tag{3.7}$$
with
$$S=\operatorname{diag}(s_{1},s_{1},\ldots,s_{1})_{\frac{(N-2)^{2}}{2}\times\frac{(N-2)^{2}}{2}},\qquad s_{1}=\begin{bmatrix}0&R_{m01}&&\\&\ddots&\ddots&\\&&0&R_{m01}\\&&&0\end{bmatrix}_{(N-2)\times(N-2)},\tag{3.8}$$
where 0 is the 2 × 2 null matrix.
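Assuming the two estimates as reconstructed in (2.5) and (3.6), the MEDG Jacobi spectral radius is smaller than the EDG one for the same h, which is the source of its faster SOR convergence; a short check:

```python
import math

# assuming rho_EDG(J) = 1 - (7/6) pi^2 h^2 and h_MEDG = 2 h_EDG,
# so that rho_MEDG(J) = 1 - (7/6) pi^2 (2h)^2 = 1 - (14/3) pi^2 h^2
h = 1.0 / 50                                            # illustrative mesh
rho_edg  = 1.0 - (7.0 / 6.0) * math.pi**2 * h**2
rho_medg = 1.0 - (7.0 / 6.0) * math.pi**2 * (2 * h)**2

print(rho_medg < rho_edg < 1.0)   # True
```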

The preconditioner I + S has the following form:
$$I+S=\operatorname{diag}(s_{2},s_{2},\ldots,s_{2})_{\frac{(N-2)^{2}}{2}\times\frac{(N-2)^{2}}{2}},\qquad s_{2}=\begin{bmatrix}I_{0}&R_{m01}&&\\&\ddots&\ddots&\\&&I_{0}&R_{m01}\\&&&I_{0}\end{bmatrix}_{(N-2)\times(N-2)}.\tag{3.9}$$
Here, I_0 is the 2 × 2 identity matrix, and the system (3.7) becomes
$$(I+S)A_{m}u=(I+S)b_{m}.\tag{3.10}$$
Hence,
$$\widetilde{A}_{m}u=\widetilde{b}_{m},\tag{3.11}$$
where
$$\widetilde{A}_{m}=(I+S)A_{m}=I_{m}-(L_{m}+SL_{m})-(U_{m}-S+SU_{m}),\qquad \widetilde{b}_{m}=(I+S)b_{m}.\tag{3.12}$$
The SOR iteration matrix then becomes the Improved Modified Block Successive Over-Relaxation (IMBSOR) iteration matrix, given by
$$\widetilde{T}_{w}=\{I_{m}-w(L_{m}+SL_{m})\}^{-1}\{(1-w)I_{m}+w(U_{m}-S+SU_{m})\}.\tag{3.13}$$

4. Convergence Properties of the Preconditioned Group Methods

In this section, we derive several properties related to the convergence of the preconditioned methods discussed in Sections 2 and 3. We begin by presenting several preliminary theorems and lemmas needed for the proofs of the convergence properties. The spectral radius of a matrix is denoted by ρ(·) and is defined as the largest of the moduli of the eigenvalues of the matrix.

Theorem 4.1 (see [15]). If A = M − N is a regular splitting of the matrix A and A^{-1} ≥ 0, then
$$\rho(M^{-1}N)=\frac{\rho(A^{-1}N)}{1+\rho(A^{-1}N)}<1.\tag{4.1}$$
Thus, the iterative method with iteration matrix M^{-1}N converges for any initial vector x^{(0)}.
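Theorem 4.1 is easy to check numerically. In the sketch below (an arbitrary 2 × 2 example, not from the paper), the Jacobi splitting M = D, N = D − A of a symmetric M-matrix is a regular splitting, and (4.1) holds exactly.

```python
import numpy as np

A = np.array([[2.0, -1.0], [-1.0, 2.0]])   # A^{-1} >= 0
M = np.diag(np.diag(A))                    # Jacobi splitting: M = D
N = M - A                                  # N >= 0, M^{-1} >= 0, so regular

rho = lambda B: max(abs(np.linalg.eigvals(B)))

lhs = rho(np.linalg.inv(M) @ N)            # rho(M^{-1} N)
r = rho(np.linalg.inv(A) @ N)              # rho(A^{-1} N)
print(np.isclose(lhs, r / (1.0 + r)))      # True, as in (4.1)
```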

An accurate analysis of convergence properties of the SOR method is possible if the matrix 𝐴 is consistently ordered in the following sense (see [17]).

Definition 4.2. A matrix A is a generalized (q, r)-consistently ordered matrix (a GCO(q, r)-matrix) if Δ = det(α^{q}E + α^{−r}F − kD) is independent of α for all α ≠ 0 and for all k. Here, D = diag(A), and E and F are strictly lower and strictly upper triangular matrices, respectively, such that A = D − E − F.

Definition 4.3 (see [17]). A matrix A of the form (3.3) is said to be generalized consistently ordered (π, q, r), or simply GCO(π, q, r), where q and r are positive integers, if for the partitioning π of A the diagonal submatrices A_{ii}, i = 1, 2, …, p (p ≥ 2), are nonsingular, and the eigenvalues of
$$B_{J}(\alpha)=\alpha^{r}L+\alpha^{-q}U\tag{4.2}$$
are independent of α for all α ≠ 0, where L and U are the strict block lower and upper triangular parts of A, respectively.
For any matrix C = (c_{ij}) in M_{n_i,n_i}^{π,p}, let |C| denote the block matrix in M_{n_i,n_i}^{π,p} with entries |c_{ij}|. Given the matrix
$$B_{J}=L+U,\tag{4.3}$$
let μ denote the spectral radius of the matrix
$$|B_{J}|=|L|+|U|,\tag{4.4}$$
so that
$$\mu=\rho(|B_{J}|).\tag{4.5}$$

Lemma 4.4 (see [17]). Let |B_J| of (4.4) be a GCO(q, r)-matrix and p = q + r. Then, for any real nonnegative constants α, β, and γ with γ ≠ 0 satisfying α^{r}β^{q}μ^{p} < γ^{p}, the matrix Ā = γI − α|L| − β|U| is such that Ā^{-1} ≥ 0.

Lemma 4.5 (see [14]). Suppose A = I − L − U is a GCO(π, q, r), where L and U are strictly lower and upper triangular matrices, respectively. Let B_w be the block iteration matrix of the SOR method given by (2.3). If 0 < w < 2, then the block SOR method converges, that is, ρ(B_w) < 1.

Theorem 4.6 (see [14]). Suppose A = I − L − U is a GCO(π, q, r), where L and U are strictly lower and upper triangular matrices, respectively. Let B_w and B̃_w be the iteration matrices of the SOR method given by (2.3) and (2.11), respectively. If 0 < w < 2, then
(i) ρ(B̃_w) < ρ(B_w) if ρ(B_w) < 1,
(ii) ρ(B̃_w) = ρ(B_w) if ρ(B_w) = 1,
(iii) ρ(B̃_w) > ρ(B_w) if ρ(B_w) > 1.
Using the results and definitions stated above, we can prove the following lemma and theorems relating the spectral radii of the iteration matrices of the preconditioned group methods to those of their unpreconditioned counterparts.
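As a numerical illustration of the first case of Theorem 4.6 (a sketch on a small point tridiagonal system rather than the EDG block system; the size n, entry a, and parameter w below are arbitrary choices), one can compare the spectral radii of the unpreconditioned and (I + S)-preconditioned SOR iteration matrices:

```python
import numpy as np

rho = lambda B: max(abs(np.linalg.eigvals(B)))

n, a, w = 6, 0.25, 1.0          # illustrative tridiagonal analogue
I = np.eye(n)
L = np.diag(a * np.ones(n - 1), k=-1)   # strictly lower part
U = np.diag(a * np.ones(n - 1), k=1)    # strictly upper part

S = np.diag(np.diag(U, k=1), k=1)       # first upper diagonal (here S = U)
B_w = np.linalg.inv(I - w * L) @ ((1 - w) * I + w * U)            # (2.3)
Bt_w = (np.linalg.inv(I - w * (L + S @ L))
        @ ((1 - w) * I + w * (U - S + S @ U)))                    # (2.11)

print(rho(Bt_w) < rho(B_w) < 1.0)   # True: preconditioning shrinks rho
```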

Lemma 4.7. Suppose A_m = I_m − L_m − U_m is a GCO(π, q, r), where L_m and U_m are strictly lower and upper triangular matrices, respectively. Let T_w be the block iteration matrix of the SOR method given by (3.5). If 1 ≤ w < 2, then the block SOR method converges, that is, ρ(T_w) < 1.

Proof. Let the matrix 𝐴𝑚 with partitioning 𝜋 be given as in (3.3) and let the block SOR iteration matrix 𝑇𝑤 be given as in (3.5).
Set
$$\overline{B}_{w}=(I_{m}-|w||L_{m}|)^{-1}\{|1-w|I_{m}+|w||U_{m}|\}.\tag{4.6}$$
Clearly, |T_w| ≤ B̄_w, and hence we can conclude that ρ(T_w) ≤ ρ(B̄_w).
Consider the matrix Ā ∈ M_{n_i,n_i}^{π,p} defined by
$$\overline{A}=\overline{M}_{m}-\overline{N}_{m},\tag{4.7}$$
where M̄_m = I_m − |w||L_m| and N̄_m = |1 − w|I_m + |w||U_m|. It is easily seen that M̄_m is nonsingular and B̄_w = M̄_m^{-1}N̄_m. Moreover, since M̄_m^{-1} ≥ 0 and N̄_m ≥ 0, Ā = M̄_m − N̄_m is a regular splitting of Ā (cf. [11]). For w satisfying the condition 1 ≤ w < 2, Lemma 4.4 implies that Ā^{-1} ≥ 0. Therefore, recalling Theorem 4.1 above, we have ρ(B̄_w) < 1. Hence, ρ(T_w) < 1, which completes the proof.

The result of Lemma 4.7 enables us to prove the following theorem.

Theorem 4.8. Suppose A_m = I_m − L_m − U_m is a GCO(π, q, r), where L_m and U_m are strictly lower and upper triangular matrices, respectively. Let T_w and T̃_w be the iteration matrices of the SOR method given by (3.5) and (3.13), respectively. If 1 ≤ w < 2, then
(i) ρ(T̃_w) < ρ(T_w) if ρ(T_w) < 1,
(ii) ρ(T̃_w) = ρ(T_w) if ρ(T_w) = 1,
(iii) ρ(T̃_w) > ρ(T_w) if ρ(T_w) > 1.

Proof. From Lemma 4.7, and since the matrix A_m of (3.3) is a GCO(π, q, r) and T_w = (I_m − wL_m)^{-1}{(1 − w)I_m + wU_m}, there exists a positive vector y such that
$$T_{w}y=\lambda y,\tag{4.8}$$
where λ = ρ(T_w), or equivalently
$$\{(1-w)I_{m}+wU_{m}\}y=\lambda(I_{m}-wL_{m})y.\tag{4.9}$$
Also, since
$$\widetilde{T}_{w}=\{I_{m}-w(L_{m}+SL_{m})\}^{-1}\{(1-w)I_{m}+w(U_{m}-S+SU_{m})\},\tag{4.10}$$
we can write
$$\widetilde{T}_{w}y-\lambda y=\{I_{m}-w(L_{m}+SL_{m})\}^{-1}\big[(1-w)I_{m}+w(U_{m}-S+SU_{m})-\lambda\{I_{m}-w(L_{m}+SL_{m})\}\big]y.\tag{4.11}$$
Rearranging (4.11), we get
$$\widetilde{T}_{w}y-\lambda y=\{I_{m}-w(L_{m}+SL_{m})\}^{-1}\big[(1-w)I_{m}+w(U_{m}+\lambda L_{m})-\lambda I_{m}+wS(\lambda L_{m}+U_{m}-I_{m})\big]y.\tag{4.12}$$
But from (4.9), we have
$$w(\lambda L_{m}+U_{m})y=(\lambda-1+w)I_{m}y.\tag{4.13}$$
Therefore, (4.12) can be written as
$$\widetilde{T}_{w}y-\lambda y=\{I_{m}-w(L_{m}+SL_{m})\}^{-1}wS(\lambda L_{m}+U_{m}-I_{m})y.\tag{4.14}$$
Hence, for 1 ≤ w < 2 and from [10], we get:
(i) if λ < 1, then T̃_w y − λy < 0, and from Theorem 4.6 we have ρ(T̃_w) < ρ(T_w);
(ii) if λ = 1, then T̃_w y = λy, and from Theorem 4.6 we have ρ(T̃_w) = ρ(T_w) = 1;
(iii) if λ > 1, then T̃_w y − λy > 0, and from Theorem 4.6 we have ρ(T̃_w) > ρ(T_w).
Thus, the proof is complete.
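Identity (4.14) is purely algebraic and holds for any eigenpair of T_w, not only the Perron pair used in the proof; the following sketch verifies it on a small random splitting (sizes, entries, and w are illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, w = 5, 1.2
I = np.eye(n)
L = np.tril(0.1 * rng.random((n, n)), k=-1)   # strictly lower
U = np.triu(0.1 * rng.random((n, n)), k=1)    # strictly upper
S = np.diag(np.diag(U, k=1), k=1)             # first upper diagonal

T_w = np.linalg.inv(I - w * L) @ ((1 - w) * I + w * U)            # (3.5)
Tt_w = (np.linalg.inv(I - w * (L + S @ L))
        @ ((1 - w) * I + w * (U - S + S @ U)))                    # (3.13)

# take the dominant eigenpair (lam, y) of T_w and test (4.14)
lam_all, Y = np.linalg.eig(T_w)
k = np.argmax(abs(lam_all))
lam, y = lam_all[k], Y[:, k]

lhs = Tt_w @ y - lam * y
rhs = (np.linalg.inv(I - w * (L + S @ L))
       @ (w * S @ (lam * L + U - I)) @ y)
print(np.allclose(lhs, rhs))   # True
```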

Theorem 4.9. Suppose A = I − L − U and A_m = I_m − L_m − U_m are GCO(π, q, r), where L, L_m and U, U_m are the strictly lower and strictly upper triangular matrices of A and A_m, respectively. Let B_w, B̃_w, T_w, and T̃_w be the iteration matrices of the SOR method given by (2.3), (2.11), (3.5), and (3.13), respectively. If 1 ≤ w < 2, then
(i) ρ(T̃_w) < ρ(T_w) < ρ(B̃_w) < ρ(B_w) if ρ(B_w) < 1,
(ii) ρ(T̃_w) = ρ(T_w) = ρ(B̃_w) = ρ(B_w) if ρ(B_w) = 1,
(iii) ρ(T̃_w) > ρ(T_w) > ρ(B̃_w) > ρ(B_w) if ρ(B_w) > 1.

Proof. In the same manner as the proof of Theorem 4.8, and since the matrix Ã of (2.9) is a GCO(π, q, r) (see [13]) and B̃_w = {I − w(L + SL)}^{-1}{(1 − w)I + w(U − S + SU)}, there exists a positive vector v such that
$$\widetilde{B}_{w}v=\lambda v,\tag{4.15}$$
where
$$\lambda=\rho(\widetilde{B}_{w}).\tag{4.16}$$
Equation (4.15) can be written as
$$\{(1-w)I+w(U-S+SU)\}v=\lambda\{I-w(L+SL)\}v.\tag{4.17}$$
Also, since T_w = (I_m − wL_m)^{-1}{(1 − w)I_m + wU_m}, we can write
$$T_{w}v-\lambda v=(I_{m}-wL_{m})^{-1}\big[(1-w)I_{m}+wU_{m}-\lambda(I_{m}-wL_{m})\big]v=(I_{m}-wL_{m})^{-1}\big[(1-w-\lambda)I_{m}+w(\lambda L_{m}+U_{m})\big]v.\tag{4.18}$$
But from (4.17), we have
$$w\big[(\lambda L+U)+S(\lambda L+U-I)\big]v=(\lambda-1+w)Iv.\tag{4.19}$$
Thus, from (4.19), and since A_m of (3.3) is a GCO(π, q, r) matrix, we get
$$w\big[(\lambda L_{m}+U_{m})+S(\lambda L_{m}+U_{m}-I_{m})\big]v=(\lambda-1+w)I_{m}v.\tag{4.20}$$
Equation (4.18) can then be written as
$$T_{w}v-\lambda v=(I_{m}-wL_{m})^{-1}wS(I_{m}-\lambda L_{m}-U_{m})v.\tag{4.21}$$
Hence, we can conclude that, for 1 ≤ w < 2, if
(a) λ < 1, then T_w v − λv < 0, and from Lemma 4.7 we have ρ(T_w) < ρ(B̃_w);
(b) λ = 1, then T_w v = λv, and from Lemma 4.7 we have ρ(T_w) = ρ(B̃_w) = 1;
(c) λ > 1, then T_w v − λv > 0, and from Lemma 4.7 we have ρ(T_w) > ρ(B̃_w).
In consequence of the above, for 1 ≤ w < 2 and from Theorems 4.6 and 4.8, we have
(i) ρ(T̃_w) < ρ(T_w) < ρ(B̃_w) < ρ(B_w) if ρ(B_w) < 1,
(ii) ρ(T̃_w) = ρ(T_w) = ρ(B̃_w) = ρ(B_w) if ρ(B_w) = 1,
(iii) ρ(T̃_w) > ρ(T_w) > ρ(B̃_w) > ρ(B_w) if ρ(B_w) > 1,
and the theorem is proved.

In view of Theorem 4.9, the superiority of the preconditioned MEDG SOR over the unpreconditioned MEDG SOR and EDG SOR methods, and also over the preconditioned EDG SOR, is confirmed for relaxation parameters lying in a certain optimum range.

5. Numerical Experiments and Discussion of Results

To further confirm the results obtained in Theorems 4.8 and 4.9, several experiments were carried out on the following model problem with Dirichlet boundary conditions:
$$\frac{\partial^{2}u}{\partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}}=(x^{2}+y^{2})e^{xy},\qquad u(x,0)=1,\quad u(0,y)=1,\quad u(x,1)=e^{x},\quad u(1,y)=e^{y}.\tag{5.1}$$
This problem has the exact solution u(x, y) = e^{xy} on the unit square solution domain. The values of u were calculated for different mesh sizes N = 34, 86, 118, 186, and 222. The tolerance was set at ε = 5 × 10^{-6}. The experimental optimum relaxation parameter w was obtained by running the programs repeatedly and choosing the value that gave the fastest rate of convergence. The processor used was an Intel(R) Core(TM) 2 Duo with 3 GB of memory, and the software used to implement and generate the results was Dev-C++ Version 4.9.9.2. Tables 1 and 2 display the corresponding numbers of iterations (k), optimum execution times (t), and maximum errors (e) for the unpreconditioned and preconditioned versions of the EDG SOR and MEDG SOR methods, respectively.
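For orientation, a simplified point-SOR solver for the model problem (5.1) on the standard five-point stencil is sketched below. This is a stand-in, not the group EDG/MEDG codes used for Tables 1 and 2; the mesh size, the classical point-SOR relaxation formula, and the error threshold are illustrative assumptions.

```python
import numpy as np

n = 16
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
X, Y = np.meshgrid(x, x, indexing="ij")
exact = np.exp(X * Y)                      # exact solution e^{xy}
f = (X**2 + Y**2) * np.exp(X * Y)          # right-hand side of (5.1)

u = exact.copy()
u[1:-1, 1:-1] = 0.0                        # keep Dirichlet boundary values

w = 2.0 / (1.0 + np.sin(np.pi * h))        # classical optimum for point SOR
tol, k = 5e-6, 0
while True:
    diff = 0.0
    for i in range(1, n):
        for j in range(1, n):
            gs = 0.25 * (u[i+1, j] + u[i-1, j] + u[i, j+1] + u[i, j-1]
                         - h**2 * f[i, j])
            diff = max(diff, abs(gs - u[i, j]))
            u[i, j] = (1 - w) * u[i, j] + w * gs
    k += 1
    if diff < tol:
        break

err = np.abs(u - exact).max()              # O(h^2) discretisation error
print(k, err < 1e-2)
```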

Table 1: Comparison of performances for the original EDG SOR and MEDG SOR.
Table 2: Comparison of performances for the Preconditioned EDG SOR and MEDG SOR.

From the results in Table 1, it is clear that the original MEDG SOR method is superior to the EDG SOR method in terms of number of iterations and computing time. The superiority of the preconditioned MEDG SOR over the preconditioned EDG SOR is likewise evident in Table 2. The preconditioned EDG SOR was also outperformed by the unpreconditioned MEDG SOR, as shown in Figure 1, since the spectral radius of the latter is smaller than that of the former, as proven in Theorem 4.9. From the numerical results, it is also apparent that the preconditioned MEDG SOR scheme requires the least computational effort among the four methods in terms of number of iterations and execution times, owing to its having the smallest spectral radius of the four schemes.

Figure 1: Number of iterations (k) for the four methods for different mesh sizes N.

Figure 1 shows the number of iterations needed for convergence for the unpreconditioned and preconditioned methods; the results are in agreement with the theoretical findings of Theorem 4.9.

6. Conclusion

In this paper, we presented a theoretical convergence analysis of a specific splitting-type preconditioner in block formulation applied to the linear systems resulting from a class of group iterative schemes, specifically the EDG SOR and the MEDG SOR schemes. We have shown that the spectral radius of the iteration matrix of the preconditioned MEDG SOR method is the smallest compared with those of the unpreconditioned MEDG SOR, EDG SOR, and preconditioned EDG SOR methods, provided that the relaxation parameter ω ∈ [1, 2). This work confirms, both theoretically and experimentally, the superiority of the preconditioned MEDG SOR method in terms of convergence rate among this class of group iterative methods.

Acknowledgment

The authors acknowledge the Fundamental Research Grant Scheme (203/PMATHS/6711188) for the completion of this article.

References

  1. A. R. Abdullah, “The four point explicit decoupled group (EDG) method: a fast Poisson solver,” International Journal of Computer Mathematics, vol. 38, pp. 61–70, 1991.
  2. D. J. Evans and W. S. Yousif, “The implementation of the explicit block iterative methods on the balance 8000 parallel computer,” Parallel Computing, vol. 16, no. 1, pp. 81–97, 1990.
  3. M. M. Martins, W. S. Yousif, and D. J. Evans, “Explicit group AOR method for solving elliptic partial differential equations,” Neural, Parallel & Scientific Computations, vol. 10, no. 4, pp. 411–422, 2002.
  4. M. Othman and A. R. Abdullah, “Efficient four points modified explicit group Poisson solver,” International Journal of Computer Mathematics, vol. 76, no. 2, pp. 203–217, 2000.
  5. W. S. Yousif and D. J. Evans, “Explicit group over-relaxation methods for solving elliptic partial differential equations,” Mathematics and Computers in Simulation, vol. 28, no. 6, pp. 453–466, 1986.
  6. N. H. M. Ali and K. F. Ng, “Modified explicit decoupled group method in the solution of 2-D elliptic PDEs,” in Proceedings of the 12th WSEAS International Conference on Applied Mathematics, pp. 162–167, Cairo, Egypt, December 2007.
  7. N. H. M. Ali and K. F. Ng, “A new iterative elliptic PDE solver on a distributed PC cluster,” in Proceedings of the 9th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT'08), pp. 47–53, Dunedin, New Zealand, December 2008.
  8. A. D. Gunawardena, S. K. Jain, and L. Snyder, “Modified iterative methods for consistent linear systems,” Linear Algebra and Its Applications, vol. 154–156, pp. 123–143, 1991.
  9. S. C. Lee, Point and group iterative method accelerated techniques for solving the Poisson problem [M.S. thesis], USM, Penang, Malaysia, 2006.
  10. M. M. Martins, D. J. Evans, and W. Yousif, “Further results on the preconditioned SOR method,” International Journal of Computer Mathematics, vol. 77, no. 4, pp. 603–610, 2001.
  11. M. Usui, T. Kohno, and H. Niki, “On the preconditioned SOR method,” International Journal of Computer Mathematics, vol. 59, no. 1, pp. 123–130, 1995.
  12. A. M. Saeed and N. H. M. Ali, “Preconditioned (I+S̄) group iterative methods on rotated grids,” European Journal of Scientific Research, vol. 37, no. 2, pp. 278–287, 2009.
  13. A. M. Saeed and N. H. M. Ali, “Preconditioned modified explicit decoupled group method in the solution of elliptic PDEs,” Applied Mathematical Sciences, vol. 4, no. 21–24, pp. 1165–1181, 2010.
  14. A. M. Saeed and N. H. M. Ali, “On the convergence of the preconditioned group rotated iterative methods in the solution of elliptic PDEs,” Applied Mathematics & Information Sciences, vol. 5, no. 1, pp. 65–73, 2011.
  15. R. S. Varga, Matrix Iterative Analysis, Prentice-Hall, Englewood Cliffs, NJ, USA, 1962.
  16. W. S. Yousif and D. J. Evans, “Explicit de-coupled group iterative methods and their parallel implementation,” Parallel Algorithms and Applications, vol. 7, no. 1-2, pp. 53–71, 1995.
  17. Y. G. Saridakis, “Generalized consistent orderings and the accelerated overrelaxation method,” BIT Numerical Mathematics, vol. 26, no. 3, pp. 369–376, 1986.