Abstract and Applied Analysis, Volume 2012, Article ID 867598, 14 pages
https://doi.org/10.1155/2012/867598

Research Article | Open Access

Convergence Analysis of the Preconditioned Group Splitting Methods in Boundary Value Problems

Norhashidah Hj. Mohd Ali and Abdulkafi Mohammed Saeed

Academic Editor: Ravshan Ashurov
Received 17 May 2012; Accepted 12 Jul 2012; Published 11 Sep 2012

Abstract

The construction of a specific splitting-type preconditioner in block formulation, applied to a class of group relaxation iterative methods derived from the centred and rotated (skewed) finite difference approximations, has been shown to improve the convergence rates of these methods. In this paper, we present a theoretical convergence analysis of this preconditioner as applied to the linear systems resulting from these group iterative schemes for an elliptic boundary value problem. We establish the relationship between the spectral radii of the iteration matrices of the preconditioned methods, which governs their rates of convergence, and show that the spectral radius of each preconditioned matrix is smaller than that of its unpreconditioned counterpart provided the relaxation parameter lies in a certain optimum range. Numerical experiments are also presented to confirm the agreement between the theoretical and the experimental results.

1. Introduction

Consider the finite difference discretisation schemes for solving the following boundary value problem, the two-dimensional Poisson equation with Dirichlet boundary conditions:
$$u_{xx} + u_{yy} = f(x,y), \quad (x,y) \in \Omega, \qquad u(x,y) = g(x,y), \quad (x,y) \in \partial\Omega. \tag{1.1}$$
Here, $\Omega$ is the continuous unit square solution domain with boundary $\partial\Omega$. This equation plays a very important role in the modelling of fluid flow phenomena and heat conduction problems. Let $\Omega$ be discretised uniformly in both the $x$ and $y$ directions with mesh size $h = 1/n$, where $n$ is an integer. The simplest finite difference approximation to the Laplacian is the centred five-point formula
$$\frac{u_{i+1,j} - 2u_{ij} + u_{i-1,j}}{h^2} + \frac{u_{i,j+1} - 2u_{ij} + u_{i,j-1}}{h^2} - \frac{h^2}{12}\left(\frac{\partial^4 u}{\partial x^4} + \frac{\partial^4 u}{\partial y^4}\right) - O(h^4) = f_{ij}. \tag{1.2}$$
Here, $u_{ij} = u(x_i, y_j)$. Another approximation to (1.1) can be derived from the rotated five-point finite-difference approximation, which gives [1]
$$\frac{u_{i+1,j+1} + u_{i-1,j-1} + u_{i+1,j-1} + u_{i-1,j+1} - 4u_{ij}}{2h^2} - h^2\left[\frac{1}{2}\frac{\partial^4 u}{\partial x^2 \partial y^2} + \frac{1}{12}\left(\frac{\partial^4 u}{\partial x^4} + \frac{\partial^4 u}{\partial y^4}\right)\right] - O(h^4) = f_{ij}. \tag{1.3}$$
Based on the latter approximation, improved point and group iterative schemes have been developed over the last few years for solving several types of partial differential equations [1–5]. In particular, the Modified Explicit Decoupled Group (MEDG) method [6, 7] was formulated as the latest addition to this family of four-point explicit group methods for solving the Poisson equation. It has been shown to be the fastest among the existing explicit group methods owing to its lower computational complexity.
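As a quick sanity check on the two stencils, the following sketch (an illustration, not part of the original paper; the test point and test function are arbitrary choices) applies both approximations to a smooth function and confirms the $O(h^2)$ truncation error predicted by (1.2) and (1.3): halving $h$ reduces the error of each stencil by roughly a factor of four.

```python
import numpy as np

def laplacian_errors(h):
    # Test function u(x, y) = exp(x + y); its true Laplacian is 2*exp(x + y).
    u = lambda x, y: np.exp(x + y)
    x0, y0 = 0.3, 0.4
    exact = 2.0 * u(x0, y0)
    # Centred five-point stencil of (1.2).
    centred = (u(x0 + h, y0) + u(x0 - h, y0) + u(x0, y0 + h)
               + u(x0, y0 - h) - 4.0 * u(x0, y0)) / h**2
    # Rotated five-point stencil of (1.3): diagonal neighbours, spacing sqrt(2)*h.
    rotated = (u(x0 + h, y0 + h) + u(x0 - h, y0 - h) + u(x0 + h, y0 - h)
               + u(x0 - h, y0 + h) - 4.0 * u(x0, y0)) / (2.0 * h**2)
    return abs(centred - exact), abs(rotated - exact)

e1c, e1r = laplacian_errors(0.1)
e2c, e2r = laplacian_errors(0.05)
print(e1c / e2c, e1r / e2r)  # both ratios are close to 4: second-order accuracy
```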

Since it is well known that preconditioners play a vital role in accelerating the convergence of iterative methods, several preconditioning strategies have been used to improve the convergence rate of the explicit group methods derived from the standard and skewed (rotated) finite difference operators [8–11]. In particular, Saeed and Ali [12–14] presented an $(I+S)$-type preconditioning matrix applied to the systems obtained from the four-point Explicit Decoupled Group (EDG) and the Modified Explicit Decoupled Group (MEDG) methods for solving elliptic partial differential equations, where $S$ is obtained by taking the first upper diagonal groups of the iteration matrix of the original system. The numerical experiments performed on these methods yielded very encouraging results. However, no detailed spectral radius analysis of these preconditioned systems had been carried out to confirm the superiority of this preconditioner.

The focus of this study is to establish the convergence properties of the preconditioned systems based on the splitting-type preconditioner $(I+S)$, with a view to improving the performance and reliability of this family of explicit group methods derived from the rotated finite-difference formula. We prove that, applied to MEDG SOR, this preconditioner yields the smallest spectral radius of the preconditioned iteration matrix, provided the relaxation parameter lies in a certain optimum range. This paper is organised as follows. Section 2 presents the preconditioner applied to the system resulting from the EDG SOR method. A brief description of the application of the preconditioner in block formulation to the MEDG SOR is given in Section 3. The theoretical convergence analysis of these methods is discussed in Section 4. In Section 5, we give a numerical example to confirm the results of Section 4. Finally, a brief conclusion is given in Section 6.

2. Preconditioned Explicit Decoupled Group SOR (EDG SOR)

For convenience, we will now briefly explain some of the definitions used in this paper.

Definition 2.1 (see [15]). A matrix $A$ of order $n$ has Property A if there exist two disjoint subsets $S$ and $T$ of $W = \{1,2,\ldots,n\}$ such that if $i \neq j$ and either $a_{ij} \neq 0$ or $a_{ji} \neq 0$, then $i \in S$ and $j \in T$, or else $i \in T$ and $j \in S$.

Definition 2.2 (see [3]). An ordered grouping $\pi$ of $W = \{1,2,\ldots,n\}$ is a subdivision of $W$ into disjoint subsets $R_1, R_2, \ldots, R_q$ such that $R_1 \cup R_2 \cup \cdots \cup R_q = W$. Given a matrix $A$ and an ordered grouping $\pi$, we define the submatrices $A_{m,n}$ for $m,n = 1,2,\ldots,q$ as follows: $A_{m,n}$ is formed from $A$ by deleting all rows except those corresponding to $R_m$ and all columns except those corresponding to $R_n$.

Definition 2.3 (see [3]). Let $\pi$ be an ordered grouping with $q$ groups. A matrix $A$ has Property $A^{(\pi)}$ if the $q \times q$ matrix $Z = (z_{r,s})$, defined by $z_{r,s} = 0$ if $A_{r,s} = 0$ and $z_{r,s} = 1$ if $A_{r,s} \neq 0$, has Property A.

Definition 2.4 (see [15]). A matrix $A$ of order $n$ is consistently ordered if for some $t$ there exist disjoint subsets $S_1, S_2, \ldots, S_t$ of $W = \{1,2,\ldots,n\}$ such that $\bigcup_{k=1}^{t} S_k = W$ and such that if $i$ and $j$ are associated (that is, $a_{ij} \neq 0$ or $a_{ji} \neq 0$), then $j \in S_{k+1}$ if $j > i$ and $j \in S_{k-1}$ if $j < i$, where $S_k$ is the subset containing $i$.
Note that a matrix 𝐴 is a 𝜋-consistently ordered matrix if the matrix 𝑍 in Definition 2.3 is consistently ordered.
From the discretisation of the EDG finite-difference formula applied to the Poisson equation, the linear system
$$Au = b \tag{2.1}$$
is obtained, where [1] $A$ is the block tridiagonal matrix of order $(N-1)^2/2$
$$A = \begin{bmatrix} R_0 & R_1 & & \\ R_2 & R_0 & R_1 & \\ & \ddots & \ddots & \ddots \\ & & R_2 & R_0 \end{bmatrix}, \qquad R_0 = \begin{bmatrix} R_{00} & R_{01} & & \\ R_{02} & R_{00} & \ddots & \\ & \ddots & \ddots & R_{01} \\ & & R_{02} & R_{00} \end{bmatrix}_{(N-1)\times(N-1)}, \tag{2.2}$$
with the $2 \times 2$ group blocks
$$R_{00} = \begin{bmatrix} -4 & 1 \\ 1 & -4 \end{bmatrix}, \qquad R_{02} = R_{01}^{T},$$
where $R_{01}$ carries the couplings between adjacent groups, and $R_1 = \operatorname{diag}(R_{01}, \ldots, R_{01})$ and $R_2 = \operatorname{diag}(R_{02}, \ldots, R_{02})$ are block diagonal of order $N-1$. Let $A \in \mathbb{C}^{n,n}_{\pi,p}$ ($p = N-1$), where $\mathbb{C}^{n,n}_{\pi,p}$ denotes the set of all matrices of the form (2.2) relative to a given block partitioning $\pi$, and write $A = D - E - F$, where $D = \operatorname{diag}(A_{11}, A_{22}, \ldots, A_{p,p})$ and $E$ and $F$ are the strict block lower and strict block upper triangular parts of $A$, respectively. Here, the diagonal blocks $A_{ii}$ are nonsingular. The block Jacobi iteration matrix is $B_J = D^{-1}(E + F) = L + U$, where $L = D^{-1}E$ and $U = D^{-1}F$; the block Gauss-Seidel iteration matrix is $B_{\mathrm{GS}} = (I - L)^{-1}U$; and the Block Successive Over-Relaxation (BSOR) iteration matrix is
$$B_{\ell w} = (I - wL)^{-1}\{(1 - w)I + wU\}. \tag{2.3}$$
Since the matrix $A$ of (2.1) is $\pi$-consistently ordered and possesses Property $A^{(\pi)}$, the theory of block SOR is valid for this iterative method [1].
The theoretical optimum relaxation factor $\omega_p$ for the group SOR iterative scheme can be computed from the formula
$$\omega_p = \frac{2}{1 + \sqrt{1 - \rho^2(J)}}, \tag{2.4}$$
where $\rho(J)$ is the spectral radius of the group Jacobi iteration matrix. Yousif and Evans [16] gave a good estimate of this spectral radius for the EDG method:
$$\rho(J) = 1 - \frac{7}{6}\pi^2 h^2. \tag{2.5}$$
In an effort to further accelerate the convergence of this method, Saeed and Ali [12] applied a preconditioner $P$ to the linear system (2.1), transforming it into the equivalent system
$$PAu = Pb \tag{2.6}$$
with $P = I + S$, where $I$ is the identity matrix of the same dimension as $A$ and $S$ is obtained by taking the first upper diagonal groups of $R_0$ in the original system as follows:
$$S = \operatorname{diag}(Z_1, \ldots, Z_1)_{(N-1)^2/2 \times (N-1)^2/2}, \qquad Z_1 = \begin{bmatrix} 0_\sim & -R_{01} & & \\ & 0_\sim & \ddots & \\ & & \ddots & -R_{01} \\ & & & 0_\sim \end{bmatrix}_{(N-1)\times(N-1)}. \tag{2.7}$$
Here, $0_\sim$ is the $2 \times 2$ null matrix.
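For illustration, the estimate (2.5) combined with (2.4) can be evaluated numerically. This is a sketch (not from the paper) using the coarsest mesh of Section 5; it reproduces the experimentally determined optimum $w \approx 1.753$ reported in Table 1.

```python
import math

def optimal_omega(rho_j):
    # Theoretical optimum relaxation factor (2.4).
    return 2.0 / (1.0 + math.sqrt(1.0 - rho_j**2))

N = 34                                             # coarsest mesh of Section 5
h = 1.0 / N
rho_edg = 1.0 - (7.0 / 6.0) * math.pi**2 * h**2    # EDG estimate (2.5)
w_edg = optimal_omega(rho_edg)
print(round(w_edg, 3))   # close to the experimental value 1.753 in Table 1
```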
The system (2.1) becomes
$$(I + S)Au = (I + S)b. \tag{2.8}$$
Hence, we have the linear system
$$\tilde{A}u = \tilde{b} \tag{2.9}$$
with
$$\tilde{A} = (I + S)A = I - (L + SL) - (U - S + SU), \qquad \tilde{b} = (I + S)b. \tag{2.10}$$
The SOR iteration matrix of this scheme, called the Modified Block Successive Over-Relaxation (MBSOR) iteration matrix, is
$$\tilde{B}_{\ell w} = \{I - w(L + SL)\}^{-1}\{(1 - w)I + w(U - S + SU)\}. \tag{2.11}$$
The matrix $\tilde{A}$ of (2.9) is $\pi$-consistently ordered and possesses Property $A^{(\pi)}$ [13].
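The effect of an $(I+S)$-type splitting on the SOR spectral radius can be seen on a small toy system. The sketch below is an assumed example, not the EDG matrix itself: a scaled 1-D Laplacian with unit diagonal, $S$ taken as the first superdiagonal part of $U$ in the spirit of (2.7), and the preconditioned matrix split about its own diagonal.

```python
import numpy as np

n, w = 8, 1.0    # w = 1 lies in the range 1 <= w < 2 used in Section 4
# Toy consistently ordered system A = I - L - U (a scaled 1-D Laplacian).
A = np.eye(n)
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = -0.5
U = -np.triu(A, 1)                   # strict upper part, so A = I - L - U

def sor_radius(M, w):
    # Spectral radius of the SOR iteration matrix of the splitting M = D - E - F.
    D = np.diag(np.diag(M))
    E, F = -np.tril(M, -1), -np.triu(M, 1)
    T = np.linalg.inv(D - w * E) @ ((1.0 - w) * D + w * F)
    return max(abs(np.linalg.eigvals(T)))

S = np.triu(U, 1) - np.triu(U, 2)    # first superdiagonal of U, as in (2.7)
A_pre = (np.eye(n) + S) @ A          # preconditioned system, as in (2.6)

rho_plain = sor_radius(A, w)
rho_pre = sor_radius(A_pre, w)
print(rho_pre, rho_plain)            # the preconditioned radius is smaller
```

Note that $(I+S)A$ eliminates the first superdiagonal couplings of $A$, which is what drives the reduction of the spectral radius.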

3. Preconditioned-Modified Explicit Decoupled Group SOR (MEDG SOR)

Using the MEDG approximation formula to discretise the Poisson equation, the following system is obtained [6]:
$$A_m u = b_m, \tag{3.1}$$
where $A_m$ is the block tridiagonal matrix of order $(N-2)^2/2$
$$A_m = \begin{bmatrix} R_{m0} & R_{m1} & & \\ R_{m2} & R_{m0} & R_{m1} & \\ & \ddots & \ddots & \ddots \\ & & R_{m2} & R_{m0} \end{bmatrix}, \qquad R_{m0} = \begin{bmatrix} R_{m00} & R_{m01} & & \\ R_{m02} & R_{m00} & \ddots & \\ & \ddots & \ddots & R_{m01} \\ & & R_{m02} & R_{m00} \end{bmatrix}_{(N-2)\times(N-2)}, \tag{3.2}$$
with the $2 \times 2$ group blocks
$$R_{m00} = \begin{bmatrix} -4 & 1 \\ 1 & -4 \end{bmatrix}, \qquad R_{m02} = R_{m01}^{T},$$
where $R_{m01}$ carries the couplings between adjacent groups, and $R_{m1} = \operatorname{diag}(R_{m01}, \ldots, R_{m01})$ and $R_{m2} = \operatorname{diag}(R_{m02}, \ldots, R_{m02})$ are block diagonal of order $N-2$. It is observed that the partitioning of $A_m$ is in the following block form:
$$A_m = \begin{bmatrix} A_{m11} & A_{m12} & & & \\ A_{m21} & A_{m22} & A_{m23} & & \\ & A_{m32} & A_{m33} & \ddots & \\ & & \ddots & \ddots & A_{m(p-1)p} \\ & & & A_{mp(p-1)} & A_{mpp} \end{bmatrix} \tag{3.3}$$
with $p = N-2$, where $A_{mii} \in \mathbb{C}^{n_i,n_i}$, $i = 1,2,\ldots,p$, and $\sum_{i=1}^{p} n_i = n$. Let $A_m = D_m - E_m - F_m$, where $D_m = \operatorname{diag}(A_{m11}, A_{m22}, \ldots, A_{mpp})$ and
$$E_m = (E_{mij}), \quad E_{mij} = \begin{cases} -A_{mij} & \text{for } j < i, \\ 0 & \text{for } j \geq i, \end{cases} \qquad F_m = (F_{mij}), \quad F_{mij} = \begin{cases} -A_{mij} & \text{for } j > i, \\ 0 & \text{for } j \leq i, \end{cases} \tag{3.4}$$
are the block diagonal, strict block lower triangular, and strict block upper triangular parts of $A_m$. Here, the diagonal blocks $A_{mii}$ are nonsingular. The block Jacobi iteration matrix is $B_J(A_m) = D_m^{-1}(E_m + F_m) = L_m + U_m$, where $L_m = D_m^{-1}E_m$ and $U_m = D_m^{-1}F_m$, while the block Gauss-Seidel iteration matrix is $B_{\mathrm{GS}}(A_m) = (I_m - L_m)^{-1}U_m$. The Block Successive Over-Relaxation (BSOR) iteration matrix is, therefore,
$$T_{\ell w} = (I_m - wL_m)^{-1}\{(1 - w)I_m + wU_m\}. \tag{3.5}$$
Since the matrix $A_m$ of (3.3) is $\pi$-consistently ordered and possesses Property $A^{(\pi)}$, the theory of block SOR is also valid for this iterative method, which is therefore convergent [6].

Similarly, the theoretical optimum relaxation factor $\omega_p$ for this group SOR iterative scheme can be obtained from (2.4). In view of the fact that the grid spacing satisfies $h_{\mathrm{MEDG}} = 2h_{\mathrm{EDG}}$, an estimate of the spectral radius of the group Jacobi iteration matrix of the MEDG method may be obtained from (2.5) as
$$\rho(J) = 1 - \frac{14}{3}\pi^2 h^2. \tag{3.6}$$
Good agreement between the theoretical estimates and the experimental values of the optimum relaxation parameters was observed in our numerical experiments. Applying the left-sided preconditioner $P = (I + \tilde{S})$ to system (3.1) yields the following system [13]:
$$PA_m u = Pb_m \tag{3.7}$$
with
$$\tilde{S} = \operatorname{diag}(s_1, \ldots, s_1)_{(N-2)^2/2 \times (N-2)^2/2}, \qquad s_1 = \begin{bmatrix} 0_\sim & -R_{m01} & & \\ & 0_\sim & \ddots & \\ & & \ddots & -R_{m01} \\ & & & 0_\sim \end{bmatrix}_{(N-2)\times(N-2)}, \tag{3.8}$$
where $0_\sim$ is the $2 \times 2$ null matrix.
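The estimate (3.6) is simply (2.5) evaluated at the coarser MEDG spacing: substituting $h \to 2h$ into (2.5) gives

```latex
\rho(J_{\mathrm{MEDG}}) \;=\; 1 - \frac{7}{6}\pi^{2}(2h)^{2} \;=\; 1 - \frac{14}{3}\pi^{2}h^{2}.
```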

The preconditioner $I + \tilde{S}$ has the following form:
$$I + \tilde{S} = \operatorname{diag}(s_2, \ldots, s_2)_{(N-2)^2/2 \times (N-2)^2/2}, \qquad s_2 = \begin{bmatrix} I_0 & -R_{m01} & & \\ & I_0 & \ddots & \\ & & \ddots & -R_{m01} \\ & & & I_0 \end{bmatrix}_{(N-2)\times(N-2)}. \tag{3.9}$$
Here, $I_0$ is the $2 \times 2$ identity matrix, and the system (3.7) becomes
$$(I + \tilde{S})A_m u = (I + \tilde{S})b_m. \tag{3.10}$$
Hence,
$$\tilde{A}_m u = \tilde{b}_m, \tag{3.11}$$
where
$$\tilde{A}_m = (I + \tilde{S})A_m = I_m - (L_m + \tilde{S}L_m) - (U_m - \tilde{S} + \tilde{S}U_m), \qquad \tilde{b}_m = (I + \tilde{S})b_m. \tag{3.12}$$
The resulting SOR iteration matrix, the Improved Modified Block Successive Over-Relaxation (IMBSOR) iteration matrix, is given by
$$\tilde{T}_{\ell w} = \{I_m - w(L_m + \tilde{S}L_m)\}^{-1}\{(1 - w)I_m + w(U_m - \tilde{S} + \tilde{S}U_m)\}. \tag{3.13}$$

4. Convergence Properties of the Preconditioned Group Methods

In this section, we derive several properties related to the convergence of the preconditioned methods discussed in Sections 2 and 3. We begin with several preliminary theorems and lemmas needed for the proofs. The spectral radius of a matrix, denoted by $\rho(\cdot)$, is the largest of the moduli of its eigenvalues.

Theorem 4.1 (see [15]). If $A = M - N$ is a regular splitting of the matrix $A$ and $A^{-1} \geq 0$, then
$$\rho(M^{-1}N) = \frac{\rho(A^{-1}N)}{1 + \rho(A^{-1}N)} < 1. \tag{4.1}$$
Thus, an iterative method with iteration matrix $M^{-1}N$ converges for any initial vector $x^{(0)}$.
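Theorem 4.1 can be checked numerically on a concrete regular splitting. The sketch below (an assumed example, not from the paper) uses the Jacobi splitting $M = D$, $N = E + F$ of the 1-D Laplacian $\operatorname{tridiag}(-1, 2, -1)$, for which $A^{-1} \geq 0$:

```python
import numpy as np

n = 10
# A = tridiag(-1, 2, -1) is an M-matrix, so A^{-1} >= 0 and Theorem 4.1 applies.
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = 2.0 * np.eye(n)        # Jacobi splitting A = M - N: M^{-1} >= 0, N >= 0
N = M - A

rho = lambda X: max(abs(np.linalg.eigvals(X)))
lhs = rho(np.linalg.inv(M) @ N)    # rho(M^{-1} N)
t = rho(np.linalg.inv(A) @ N)      # rho(A^{-1} N)
rhs = t / (1.0 + t)
print(lhs, rhs)                    # the two sides of (4.1) agree, and both are < 1
```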

An accurate analysis of convergence properties of the SOR method is possible if the matrix 𝐴 is consistently ordered in the following sense (see [17]).

Definition 4.2. A matrix $A$ is a generalized $(q,r)$-consistently ordered matrix (a GCO$(q,r)$-matrix) if $\Delta = \det(\alpha^q E + \alpha^{-r} F - kD)$ is independent of $\alpha$ for all $\alpha \neq 0$ and for all $k$. Here, $D = \operatorname{diag} A$, and $E$ and $F$ are strictly lower and strictly upper triangular matrices, respectively, such that $A = D - E - F$.

Definition 4.3 (see [17]). A matrix $A$ of the form (3.3) is said to be generally consistently ordered $(\pi,q,r)$, or simply GCO$(\pi,q,r)$, where $q$ and $r$ are positive integers, if for the partitioning $\pi$ of $A$ the diagonal submatrices $A_{ii}$, $i = 1,2,\ldots,p\ (p \geq 2)$, are nonsingular, and the eigenvalues of
$$B_J(\alpha) = \alpha^r L + \alpha^{-q} U \tag{4.2}$$
are independent of $\alpha$ for all $\alpha \neq 0$, where $L$ and $U$ are the strict block lower and upper triangular parts of $A$, respectively.
For any matrix $C = (c_{ij})$ in $\mathbb{C}^{n_i,n_i}_{\pi,p}$, let $|C|$ denote the block matrix in $\mathbb{C}^{n_i,n_i}_{\pi,p}$ with entries $|c_{i,j}|$. Given the matrix
$$B_J = L + U, \tag{4.3}$$
let $\mu$ denote the spectral radius of the matrix
$$|B_J| = |L| + |U|, \tag{4.4}$$
so that
$$\mu = \rho(|B_J|). \tag{4.5}$$

Lemma 4.4 (see [17]). Let $|B_J|$ of (4.4) be a GCO$(q,r)$-matrix and $p = q + r$. Then, for any real nonnegative constants $\alpha$, $\beta$, and $\gamma$ with $\gamma \neq 0$ satisfying $\alpha^r \beta^q \mu^p < \gamma^p$, the matrix $A'' = \gamma I - \alpha|L| - \beta|U|$ satisfies $(A'')^{-1} \geq 0$.

Lemma 4.5 (see [14]). Suppose $A = I - L - U$ is a GCO$(\pi,q,r)$ matrix, where $-L$ and $-U$ are its strictly lower and upper triangular parts, respectively. Let $B_{\ell w}$ be the block iteration matrix of the SOR method given by (2.3). If $0 < w < 2$, then the block SOR method converges, that is, $\rho(B_{\ell w}) < 1$.

Theorem 4.6 (see [14]). Suppose $A = I - L - U$ is a GCO$(\pi,q,r)$ matrix, where $-L$ and $-U$ are its strictly lower and upper triangular parts, respectively. Let $B_{\ell w}$ and $\tilde{B}_{\ell w}$ be the iteration matrices of the SOR method given by (2.3) and (2.11), respectively. If $0 < w < 2$, then
(i) $\rho(\tilde{B}_{\ell w}) < \rho(B_{\ell w})$ if $\rho(B_{\ell w}) < 1$;
(ii) $\rho(\tilde{B}_{\ell w}) = \rho(B_{\ell w})$ if $\rho(B_{\ell w}) = 1$;
(iii) $\rho(\tilde{B}_{\ell w}) > \rho(B_{\ell w})$ if $\rho(B_{\ell w}) > 1$.
Using the results and definitions stated above, we can prove the following lemma and theorems relating the spectral radii of the iteration matrices of the preconditioned group methods to those of their unpreconditioned counterparts.

Lemma 4.7. Suppose $A_m = I_m - L_m - U_m$ is a GCO$(\pi,q,r)$ matrix, where $-L_m$ and $-U_m$ are its strictly lower and upper triangular parts, respectively. Let $T_{\ell w}$ be the block iteration matrix of the SOR method given by (3.5). If $1 \leq w < 2$, then the block SOR method converges, that is, $\rho(T_{\ell w}) < 1$.

Proof. Let the matrix 𝐴𝑚 with partitioning 𝜋 be given as in (3.3) and let the block SOR iteration matrix 𝑇ℓ𝑤 be given as in (3.5).
Set
$$B'_{\ell w} = \left(I_m - |w||L_m|\right)^{-1}\left\{|1 - w|I_m + |w||U_m|\right\}. \tag{4.6}$$
Clearly $|T_{\ell w}| \leq B'_{\ell w}$, and hence $\rho(T_{\ell w}) \leq \rho(B'_{\ell w})$.
Consider the matrix $A' \in \mathbb{C}^{n_i,n_i}_{\pi,p}$ defined by
$$A' = M_m - N_m, \tag{4.7}$$
where $M_m = I_m - |w||L_m|$ and $N_m = |1 - w|I_m + |w||U_m|$. It is easily seen that $M_m$ is nonsingular and $B'_{\ell w} = M_m^{-1}N_m$. Moreover, since $M_m^{-1} \geq 0$ and $N_m \geq 0$, $A' = M_m - N_m$ is a regular splitting of $A'$ (cf. [11]). For $w$ satisfying $1 \leq w < 2$, Lemma 4.4 implies that $(A')^{-1} \geq 0$. Therefore, recalling Theorem 4.1 above, we have $\rho(B'_{\ell w}) < 1$. Hence $\rho(T_{\ell w}) < 1$, which completes the proof.

The result of Lemma 4.7 enables us to prove the following theorem.

Theorem 4.8. Suppose $A_m = I_m - L_m - U_m$ is a GCO$(\pi,q,r)$ matrix, where $-L_m$ and $-U_m$ are its strictly lower and upper triangular parts, respectively. Let $T_{\ell w}$ and $\tilde{T}_{\ell w}$ be the iteration matrices of the SOR method given by (3.5) and (3.13), respectively. If $1 \leq w < 2$, then
(i) $\rho(\tilde{T}_{\ell w}) < \rho(T_{\ell w})$ if $\rho(T_{\ell w}) < 1$;
(ii) $\rho(\tilde{T}_{\ell w}) = \rho(T_{\ell w})$ if $\rho(T_{\ell w}) = 1$;
(iii) $\rho(\tilde{T}_{\ell w}) > \rho(T_{\ell w})$ if $\rho(T_{\ell w}) > 1$.

Proof. From Lemma 4.7, and since the matrix $A_m$ of (3.3) is a GCO$(\pi,q,r)$ matrix and $T_{\ell w} = (I_m - wL_m)^{-1}\{(1 - w)I_m + wU_m\}$, there exists a positive vector $y$ such that
$$T_{\ell w} y = \lambda y, \tag{4.8}$$
where $\lambda = \rho(T_{\ell w})$, or equivalently
$$\{(1 - w)I_m + wU_m\}y = \lambda(I_m - wL_m)y. \tag{4.9}$$
Also, since
$$\tilde{T}_{\ell w} = \{I_m - w(L_m + \tilde{S}L_m)\}^{-1}\{(1 - w)I_m + w(U_m - \tilde{S} + \tilde{S}U_m)\}, \tag{4.10}$$
we can write
$$\tilde{T}_{\ell w} y - \lambda y = \{I_m - w(L_m + \tilde{S}L_m)\}^{-1}\left[(1 - w)I_m + w(U_m - \tilde{S} + \tilde{S}U_m) - \lambda\{I_m - w(L_m + \tilde{S}L_m)\}\right]y. \tag{4.11}$$
Rearranging (4.11), we get
$$\tilde{T}_{\ell w} y - \lambda y = \{I_m - w(L_m + \tilde{S}L_m)\}^{-1}\left[(1 - w - \lambda)I_m + w(\lambda L_m + U_m) + w\tilde{S}(\lambda L_m + U_m - I_m)\right]y. \tag{4.12}$$
But from (4.9) we have
$$w(\lambda L_m + U_m)y = (\lambda - 1 + w)y. \tag{4.13}$$
Therefore, (4.12) can be written as
$$\tilde{T}_{\ell w} y - \lambda y = \{I_m - w(L_m + \tilde{S}L_m)\}^{-1} w\tilde{S}(\lambda L_m + U_m - I_m)y. \tag{4.14}$$
Hence, for $1 \leq w < 2$ and from [10], if
(i) $\lambda < 1$, then $\tilde{T}_{\ell w} y - \lambda y < 0$, and from Theorem 4.6 we have $\rho(\tilde{T}_{\ell w}) < \rho(T_{\ell w})$;
(ii) $\lambda = 1$, then $\tilde{T}_{\ell w} y = \lambda y$, and from Theorem 4.6 we have $\rho(\tilde{T}_{\ell w}) = \rho(T_{\ell w}) = 1$;
(iii) $\lambda > 1$, then $\tilde{T}_{\ell w} y - \lambda y > 0$, and from Theorem 4.6 we have $\rho(\tilde{T}_{\ell w}) > \rho(T_{\ell w})$.
Thus, the proof is complete.

Theorem 4.9. Suppose $A = I - L - U$ and $A_m = I_m - L_m - U_m$ are GCO$(\pi,q,r)$ matrices, where $-L$, $-L_m$ and $-U$, $-U_m$ are the strictly lower and upper triangular parts of $A$ and $A_m$, respectively. Let $B_{\ell w}$, $\tilde{B}_{\ell w}$, $T_{\ell w}$, and $\tilde{T}_{\ell w}$ be the iteration matrices of the SOR method given by (2.3), (2.11), (3.5), and (3.13), respectively. If $1 \leq w < 2$, then
(i) $\rho(\tilde{T}_{\ell w}) < \rho(T_{\ell w}) < \rho(\tilde{B}_{\ell w}) < \rho(B_{\ell w})$ if $\rho(B_{\ell w}) < 1$;
(ii) $\rho(\tilde{T}_{\ell w}) = \rho(T_{\ell w}) = \rho(\tilde{B}_{\ell w}) = \rho(B_{\ell w})$ if $\rho(B_{\ell w}) = 1$;
(iii) $\rho(\tilde{T}_{\ell w}) > \rho(T_{\ell w}) > \rho(\tilde{B}_{\ell w}) > \rho(B_{\ell w})$ if $\rho(B_{\ell w}) > 1$.

Proof. In the same manner as the proof of Theorem 4.8, since the matrix $\tilde{A}$ of (2.9) is a GCO$(\pi,q,r)$ matrix (see [13]) and $\tilde{B}_{\ell w} = \{I - w(L + SL)\}^{-1}\{(1 - w)I + w(U - S + SU)\}$, there exists a positive vector $v$ such that
$$\tilde{B}_{\ell w} v = \lambda v, \tag{4.15}$$
where
$$\lambda = \rho(\tilde{B}_{\ell w}). \tag{4.16}$$
Equation (4.15) can be written as
$$\{(1 - w)I + w(U - S + SU)\}v = \lambda\{I - w(L + SL)\}v. \tag{4.17}$$
Also, since $T_{\ell w} = (I_m - wL_m)^{-1}\{(1 - w)I_m + wU_m\}$, we can write
$$T_{\ell w} v - \lambda v = (I_m - wL_m)^{-1}\left[(1 - w)I_m + wU_m - \lambda(I_m - wL_m)\right]v = (I_m - wL_m)^{-1}\left[(1 - w - \lambda)I_m + w(\lambda L_m + U_m)\right]v. \tag{4.18}$$
But from (4.17) we have
$$w\{\lambda L + U + S(\lambda L + U - I)\}v = (\lambda - 1 + w)v. \tag{4.19}$$
Thus, from (4.19), and since $A_m$ of (3.3) is a GCO$(\pi,q,r)$ matrix, we get
$$w\{\lambda L_m + U_m + \tilde{S}(\lambda L_m + U_m - I_m)\}v = (\lambda - 1 + w)v. \tag{4.20}$$
Equation (4.18) can then be written as
$$T_{\ell w} v - \lambda v = (I_m - wL_m)^{-1} w\tilde{S}(I_m - \lambda L_m - U_m)v. \tag{4.21}$$
Hence, we can conclude that, for $1 \leq w < 2$, if
(a) $\lambda < 1$, then $T_{\ell w} v - \lambda v < 0$, and from Lemma 4.7 we have $\rho(T_{\ell w}) < \rho(\tilde{B}_{\ell w})$;
(b) $\lambda = 1$, then $T_{\ell w} v = \lambda v$, and from Lemma 4.7 we have $\rho(T_{\ell w}) = \rho(\tilde{B}_{\ell w}) = 1$;
(c) $\lambda > 1$, then $T_{\ell w} v - \lambda v > 0$, and from Lemma 4.7 we have $\rho(T_{\ell w}) > \rho(\tilde{B}_{\ell w})$.
In consequence of the above, for $1 \leq w < 2$ and from Theorems 4.6 and 4.8, we have
(i) $\rho(\tilde{T}_{\ell w}) < \rho(T_{\ell w}) < \rho(\tilde{B}_{\ell w}) < \rho(B_{\ell w})$ if $\rho(B_{\ell w}) < 1$;
(ii) $\rho(\tilde{T}_{\ell w}) = \rho(T_{\ell w}) = \rho(\tilde{B}_{\ell w}) = \rho(B_{\ell w})$ if $\rho(B_{\ell w}) = 1$;
(iii) $\rho(\tilde{T}_{\ell w}) > \rho(T_{\ell w}) > \rho(\tilde{B}_{\ell w}) > \rho(B_{\ell w})$ if $\rho(B_{\ell w}) > 1$,
and the theorem is proved.

In view of Theorem 4.9, the superiority of the preconditioned MEDG SOR method over the unpreconditioned MEDG SOR, the EDG SOR, and the preconditioned EDG SOR methods is confirmed for relaxation parameters lying in a certain optimum range.

5. Numerical Experiments and Discussion of Results

To further confirm the results obtained in Theorems 4.8 and 4.9, several experiments were carried out on the following model problem with Dirichlet boundary conditions:
$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = (x^2 + y^2)e^{xy}, \qquad u(x,0) = 1, \quad u(0,y) = 1, \quad u(x,1) = e^x, \quad u(1,y) = e^y. \tag{5.1}$$
This problem has the exact solution $u(x,y) = e^{xy}$ on the unit square. The values of $u$ were calculated for the mesh sizes $N$ = 34, 86, 118, 186, and 222, with the convergence tolerance set to $\varepsilon = 5 \times 10^{-6}$. The experimental optimum relaxation parameter $w$ was obtained by running the programs repeatedly and choosing the value that gave the fastest convergence. The processor was an Intel(R) Core(TM) 2 Duo with 3 GB of memory, and the methods were implemented in Dev-C++ version 4.9.9.2. Tables 1 and 2 display the corresponding number of iterations ($k$), execution times ($t$), and maximum errors ($e$) for the unpreconditioned and preconditioned EDG SOR and MEDG SOR methods, respectively.
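For readers who wish to reproduce the flavour of these experiments, the following is a minimal solver for the model problem (5.1). It is a sketch only: it uses point SOR on the centred five-point formula (1.2) rather than the group EDG/MEDG schemes of the paper, and the grid size and relaxation parameter are illustrative choices.

```python
import numpy as np

n = 20                 # number of subintervals (illustrative, not from the paper)
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
exact = np.exp(np.outer(x, x))            # exact[i, j] = e^{x_i y_j}
f = np.add.outer(x**2, x**2) * exact      # f(x, y) = (x^2 + y^2) e^{xy}

# Dirichlet data of (5.1) come from the exact solution on the boundary.
u = exact.copy()
u[1:-1, 1:-1] = 1.0    # interior initial guess

w, tol = 1.7, 5e-6
for it in range(10000):
    diff = 0.0
    for i in range(1, n):
        for j in range(1, n):
            # Point SOR sweep on the centred five-point equation.
            gs = 0.25 * (u[i + 1, j] + u[i - 1, j] + u[i, j + 1] + u[i, j - 1]
                         - h * h * f[i, j])
            new = (1.0 - w) * u[i, j] + w * gs
            diff = max(diff, abs(new - u[i, j]))
            u[i, j] = new
    if diff < tol:
        break

err = abs(u - exact).max()
print(it + 1, err)     # converges, with a maximum error at the O(h^2) truncation level
```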



Table 1: Unpreconditioned methods.

| N   | EDG SOR w | k   | t (secs) | e        | MEDG SOR w | k  | t (secs) | e        |
|-----|-----------|-----|----------|----------|------------|----|----------|----------|
| 34  | 1.753     | 49  | 0        | 4.11E−06 | 1.586      | 21 | 0        | 5.00E−06 |
| 86  | 1.894     | 122 | 0.047    | 4.88E−06 | 1.812      | 42 | 0.023    | 4.57E−06 |
| 118 | 1.921     | 166 | 0.078    | 4.89E−06 | 1.862      | 50 | 0.035    | 2.08E−06 |
| 186 | 1.943     | 233 | 0.187    | 4.79E−06 | 1.914      | 81 | 0.064    | 4.36E−06 |
| 222 | 1.948     | 256 | 0.327    | 4.78E−06 | 1.931      | 96 | 0.145    | 1.84E−06 |

Table 2: Preconditioned methods.

| N   | Preconditioned EDG SOR w | k   | t (secs) | e        | Preconditioned MEDG SOR w | k  | t (secs) | e        |
|-----|--------------------------|-----|----------|----------|---------------------------|----|----------|----------|
| 34  | 1.679                    | 41  | 0        | 3.85E−06 | 1.442                     | 16 | 0        | 3.47E−06 |
| 86  | 1.757                    | 88  | 0.034    | 3.87E−06 | 1.5882                    | 30 | 0.008    | 3.09E−06 |
| 118 | 1.774                    | 106 | 0.051    | 4.47E−06 | 1.642                     | 43 | 0.016    | 3.55E−06 |
| 186 | 1.782                    | 151 | 0.133    | 4.68E−06 | 1.684                     | 59 | 0.025    | 4.24E−06 |
| 222 | 1.795                    | 168 | 0.198    | 4.15E−06 | 1.671                     | 72 | 0.078    | 3.99E−06 |
From the results in Table 1, it is clear that the original MEDG SOR method is superior to the EDG SOR method in both the number of iterations and the computing times. The superiority of the preconditioned MEDG SOR over the preconditioned EDG SOR is likewise evident in Table 2. The preconditioned EDG SOR was even outperformed by the unpreconditioned MEDG SOR, as shown in Figure 1, since the spectral radius of the latter is smaller than that of the former, as proven in Theorem 4.9. The numerical results also show that the preconditioned MEDG SOR scheme requires the least computing effort among the four methods in both iteration counts and execution times, owing to its having the smallest spectral radius of the four schemes.

Figure 1 shows the number of iterations needed for convergence for the unpreconditioned and preconditioned methods; the results agree with the theoretical conclusions of Theorem 4.9.

6. Conclusion

In this paper, we presented a theoretical convergence analysis of a specific splitting-type preconditioner in block formulation applied to the linear systems resulting from a class of group iterative schemes, specifically the EDG SOR and MEDG SOR schemes. We have shown that the spectral radius of the iteration matrix of the preconditioned MEDG SOR method is the smallest among the unpreconditioned MEDG SOR, EDG SOR, and preconditioned EDG SOR methods, provided that the relaxation parameter satisfies $\omega \in [1,2)$. This work confirms, both theoretically and experimentally, the superiority of the preconditioned MEDG SOR method in convergence rate within this class of group iterative methods.

Acknowledgment

The authors acknowledge the Fundamental Research Grant Scheme (203/PMATHS/6711188) for the completion of this article.

References

  1. A. R. Abdullah, “The four point explicit decoupled group (EDG) method: a fast Poisson solver,” International Journal of Computer Mathematics, vol. 38, pp. 61–70, 1991.
  2. D. J. Evans and W. S. Yousif, “The implementation of the explicit block iterative methods on the balance 8000 parallel computer,” Parallel Computing, vol. 16, no. 1, pp. 81–97, 1990. View at: Google Scholar
  3. M. M. Martins, W. S. Yousif, and D. J. Evans, “Explicit group AOR method for solving elliptic partial differential equations,” Neural, Parallel & Scientific Computations, vol. 10, no. 4, pp. 411–422, 2002. View at: Google Scholar
  4. M. Othman and A. R. Abdullah, “Efficient four points modified explicit group Poisson solver,” International Journal of Computer Mathematics, vol. 76, no. 2, pp. 203–217, 2000. View at: Publisher Site | Google Scholar
  5. W. S. Yousif and D. J. Evans, “Explicit group over-relaxation methods for solving elliptic partial differential equations,” Mathematics and Computers in Simulation, vol. 28, no. 6, pp. 453–466, 1986. View at: Publisher Site | Google Scholar
  6. N. H. M. Ali and K. F. Ng, “Modified explicit decoupled group method in the solution of 2-D elliptic PDEs,” in Proceedings of the 12th WSEAS International Conference on Applied Mathematics, pp. 162–167, Cairo, Egypt, December 2007.
  7. N. H. M. Ali and K. F. Ng, “A new iterative elliptic PDE solver on a distributed PC cluster,” in Proceedings of the 9th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT'08), pp. 47–53, Dunedin, New Zealand, December 2008. View at: Publisher Site | Google Scholar
  8. A. D. Gunawardena, S. K. Jain, and L. Snyder, “Modified iterative methods for consistent linear systems,” Linear Algebra and Its Applications, vol. 154–156, pp. 123–143, 1991. View at: Publisher Site | Google Scholar
  9. S. C. Lee, Point and group iterative method accelerated techniques for solving the Poisson problem [M.S. thesis], USM, Penang, Malaysia, 2006.
  10. M. M. Martins, D. J. Evans, and W. Yousif, “Further results on the preconditioned SOR method,” International Journal of Computer Mathematics, vol. 77, no. 4, pp. 603–610, 2001. View at: Publisher Site | Google Scholar
  11. M. Usui, T. Kohno, and H. Niki, “On the preconditioned SOR method,” International Journal of Computer Mathematics, vol. 59, no. 1, pp. 123–130, 1995. View at: Publisher Site | Google Scholar
  12. A. M. Saeed and N. H. M. Ali, “Preconditioned (I+S¯) group iterative methods on rotated grids,” European Journal of Scientific Research, vol. 37, no. 2, pp. 278–287, 2009. View at: Google Scholar
  13. A. M. Saeed and N. H. M. Ali, “Preconditioned modified explicit decoupled group method in the solution of elliptic PDEs,” Applied Mathematical Sciences, vol. 4, no. 21–24, pp. 1165–1181, 2010. View at: Google Scholar
  14. A. M. Saeed and N. H. M. Ali, “On the convergence of the preconditioned group rotated iterative methods in the solution of elliptic PDEs,” Applied Mathematics & Information Sciences, vol. 5, no. 1, pp. 65–73, 2011. View at: Google Scholar
  15. R. S. Varga, Matrix Iterative Analysis, Prentice-Hall, Englewood Cliffs, NJ, USA, 1962.
  16. W. S. Yousif and D. J. Evans, “Explicit de-coupled group iterative methods and their parallel implementation,” Parallel Algorithms and Applications, vol. 7, no. 1-2, pp. 53–71, 1995. View at: Publisher Site | Google Scholar
  17. Y. G. Saridakis, “Generalized consistent orderings and the accelerated overrelaxation method,” BIT Numerical Mathematics, vol. 26, no. 3, pp. 369–376, 1986. View at: Publisher Site | Google Scholar

Copyright © 2012 Norhashidah Hj. Mohd Ali and Abdulkafi Mohammed Saeed. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

