Abstract

A relaxed splitting preconditioner based on matrix splitting is introduced in this paper for saddle point linear systems arising from the numerical solution of the incompressible Navier-Stokes equations. A spectral analysis of the preconditioned matrix is presented, and numerical experiments are carried out to illustrate the convergence behavior of the preconditioner for solving both steady and unsteady incompressible flow problems.

1. Introduction

We consider systems of linear equations arising from the finite-element discretization of the incompressible Navier-Stokes equations governing the flow of viscous Newtonian fluids. The primitive-variables formulation of the Navier-Stokes equations is

$$\frac{\partial \mathbf{u}}{\partial t} - \nu\,\Delta\mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u} + \nabla p = \mathbf{f} \quad \text{on } \Omega\times(0,T], \tag{1.1}$$
$$\operatorname{div}\mathbf{u} = 0 \quad \text{on } \Omega\times[0,T], \tag{1.2}$$
$$\mathbf{u} = \mathbf{g} \quad \text{on } \partial\Omega\times[0,T], \tag{1.3}$$
$$\mathbf{u}(\mathbf{x},0) = \mathbf{u}_0(\mathbf{x}) \quad \text{on } \Omega, \tag{1.4}$$

where $\Omega\subset\mathbb{R}^2$ is an open bounded domain with sufficiently smooth boundary $\partial\Omega$, $[0,T]$ is a time interval of interest, $\mathbf{u}(\mathbf{x},t)$ and $p(\mathbf{x},t)$ are the unknown velocity and pressure fields, $\nu$ is the kinematic viscosity, $\Delta$ is the vector Laplacian, $\nabla$ is the gradient, $\operatorname{div}$ is the divergence, and $\mathbf{f}$, $\mathbf{g}$, and $\mathbf{u}_0$ are given functions. The Stokes problem is obtained by dropping the nonlinear term $(\mathbf{u}\cdot\nabla)\mathbf{u}$ from the momentum equation (1.1). Refer to [1] for an introduction to the numerical solution of the Navier-Stokes equations. Implicit time discretization and linearization of the Navier-Stokes equations by Picard or Newton fixed-point iteration result in a sequence of (generalized) Oseen problems. Spatial discretization of the Oseen problems with LBB-stable finite elements (see [1, 2]) leads to a series of large sparse systems of linear equations with saddle point structure:

$$\widetilde{A}\,\mathbf{x} = \mathbf{b}, \tag{1.5}$$

with

$$\widetilde{A} = \begin{bmatrix} A & B^T \\ -B & 0 \end{bmatrix}, \qquad \mathbf{x} = \begin{pmatrix} \mathbf{u} \\ p \end{pmatrix}, \qquad \mathbf{b} = \begin{pmatrix} \mathbf{f} \\ -g \end{pmatrix}, \tag{1.6}$$

where $\mathbf{u}$ and $p$ represent the discrete velocity and pressure, respectively. In the two-dimensional case, $A = \operatorname{diag}(A_1, A_2)$ denotes the discretization of the reaction-convection-diffusion operator, and each diagonal submatrix $A_i$ is a scalar discrete convection-diffusion operator,

$$A_i = \sigma V + \nu L + N_i \quad (i = 1, 2), \tag{1.7}$$

where $V$ denotes the velocity mass matrix, $L$ the discrete (negative) Laplacian, and $N_i$ the convective terms. The matrix $A$ is positive definite in the sense that $A^T + A$ is symmetric positive definite. The matrix $B^T = (B_1^T, B_2^T)$ denotes the discrete gradient, with $B_1^T$ and $B_2^T$ being discretizations of the partial derivatives $\partial/\partial x$ and $\partial/\partial y$, respectively. Finally, $\mathbf{f} = (f_1, f_2)^T$ and $g$ contain the forcing and boundary terms.
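For concreteness, the following sketch (Python/SciPy; not part of the paper's own IFISS-based experiments) shows how a saddle point matrix with the block structure (1.6) can be assembled from given blocks $A_1$, $A_2$, $B_1$, $B_2$. Small random matrices stand in for the finite-element operators.

```python
# Minimal sketch: assemble the saddle point matrix (1.6) from user-supplied blocks.
# The random blocks below are stand-ins for the finite-element matrices.
import numpy as np
import scipy.sparse as sp

def assemble_saddle_point(A1, A2, B1, B2):
    """Return [[A, B^T], [-B, 0]] with A = diag(A1, A2) and B = [B1, B2]."""
    A = sp.block_diag((A1, A2), format="csr")
    B = sp.hstack([B1, B2], format="csr")
    return sp.bmat([[A, B.T],
                    [-B, None]], format="csr")   # None stands for the zero block

rng = np.random.default_rng(0)
n1 = n2 = 8; m = 5                               # toy dimensions
A1 = sp.csr_matrix(rng.standard_normal((n1, n1)) + n1 * np.eye(n1))
A2 = sp.csr_matrix(rng.standard_normal((n2, n2)) + n2 * np.eye(n2))
B1 = sp.csr_matrix(rng.standard_normal((m, n1)))
B2 = sp.csr_matrix(rng.standard_normal((m, n2)))
A_tilde = assemble_saddle_point(A1, A2, B1, B2)
print(A_tilde.shape)                             # (n1 + n2 + m, n1 + n2 + m)
```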

In the past few years, a considerable amount of work has been devoted to developing efficient solvers for systems of linear equations of the form (1.5); see [3] for a comprehensive survey. In this paper we consider preconditioned Krylov subspace methods, in particular preconditioned GMRES [4]. The convergence of this method is mainly determined by the preconditioner employed. An important class of preconditioners is based on the block LU factorization of the coefficient matrix, including a variety of block diagonal and block triangular preconditioners. A crucial ingredient in all these preconditioners is an approximation to the Schur complement $S = BA^{-1}B^T$. This class includes the pressure convection-diffusion (PCD) preconditioner, the least-squares commutator (LSC) preconditioner, and their variants [5–7]. Somewhat related to this class are preconditioners based on the augmented Lagrangian (AL) reformulation of the saddle point problem; see [8–11]. Other types of preconditioners for saddle point problems include those based on the Hermitian and skew-Hermitian splitting (HSS) [12–15] and on the dimensional splitting (DS) [16] of the coefficient matrix $\widetilde{A}$. In [17], a relaxed dimensional factorization (RDF) preconditioner is introduced.

The remainder of the paper is organized as follows. In Section 2, we present a relaxed splitting preconditioner based on matrix splitting and prove that the preconditioned matrix has the eigenvalue 1 with algebraic multiplicity at least $n$, where $n$ is the number of velocity degrees of freedom. In Section 3, we report the results of a series of numerical experiments illustrating the convergence behavior of the relaxed splitting preconditioner. In the final section, we draw our conclusions.

2. A Relaxed Splitting Preconditioner

2.1. A Splitting of the Matrix

In this paper we restrict ourselves to the 2D case. The system matrix $\widetilde{A}$ admits the following splitting:

$$\widetilde{A} = \begin{bmatrix} A_1 & 0 & B_1^T \\ 0 & A_2 & B_2^T \\ -B_1 & -B_2 & 0 \end{bmatrix} = \begin{bmatrix} A_1 & 0 & 0 \\ 0 & 0 & 0 \\ -B_1 & 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 & B_1^T \\ 0 & A_2 & B_2^T \\ 0 & -B_2 & 0 \end{bmatrix} = H + S, \tag{2.1}$$

where $A_1\in\mathbb{R}^{n_1\times n_1}$, $A_2\in\mathbb{R}^{n_2\times n_2}$, $B_1\in\mathbb{R}^{m\times n_1}$, and $B_2\in\mathbb{R}^{m\times n_2}$. Thus $\widetilde{A}\in\mathbb{R}^{(n+m)\times(n+m)}$ with $n = n_1 + n_2$. Let $\alpha>0$ be a parameter and denote by $I$ the identity matrix of order $n_1+n_2+m$. Then $H+\alpha I$ and $S+\alpha I$ are both nonsingular, nonsymmetric, and positive definite. Consider the two splittings of $\widetilde{A}$:

$$\widetilde{A} = (H+\alpha I) - (\alpha I - S), \qquad \widetilde{A} = (S+\alpha I) - (\alpha I - H). \tag{2.2}$$

Associated with these splittings is the alternating iteration, for $k = 0, 1, \ldots$,

$$(H+\alpha I)\,\mathbf{x}^{k+1/2} = (\alpha I - S)\,\mathbf{x}^{k} + \mathbf{b}, \qquad (S+\alpha I)\,\mathbf{x}^{k+1} = (\alpha I - H)\,\mathbf{x}^{k+1/2} + \mathbf{b}. \tag{2.3}$$

Eliminating $\mathbf{x}^{k+1/2}$, we can rewrite (2.3) as the stationary scheme

$$\mathbf{x}^{k+1} = T_\alpha \mathbf{x}^{k} + \mathbf{c}, \qquad k = 0, 1, \ldots, \tag{2.4}$$

where

$$T_\alpha = (S+\alpha I)^{-1}(\alpha I - H)(H+\alpha I)^{-1}(\alpha I - S) \tag{2.5}$$

is the iteration matrix and $\mathbf{c} = 2\alpha(S+\alpha I)^{-1}(H+\alpha I)^{-1}\mathbf{b}$. The iteration matrix $T_\alpha$ can be rewritten as follows:

$$T_\alpha = (S+\alpha I)^{-1}(H+\alpha I)^{-1}(\alpha I - H)(\alpha I - S) = (S+\alpha I)^{-1}(H+\alpha I)^{-1}\bigl[(\alpha I + H)(\alpha I + S) - 2\alpha\widetilde{A}\bigr] = I - \Bigl[\tfrac{1}{2\alpha}(H+\alpha I)(S+\alpha I)\Bigr]^{-1}\widetilde{A} = I - P_\alpha^{-1}\widetilde{A}, \tag{2.6}$$

where $P_\alpha = \frac{1}{2\alpha}(H+\alpha I)(S+\alpha I)$.

Obviously, $P_\alpha$ is nonsingular and $\mathbf{c} = P_\alpha^{-1}\mathbf{b}$. As in [18], one can show that there is a unique splitting $\widetilde{A} = P_\alpha - Q_\alpha$ such that $T_\alpha$ is the iteration matrix induced by that splitting, that is, $T_\alpha = P_\alpha^{-1}Q_\alpha = I - P_\alpha^{-1}\widetilde{A}$. The matrix $Q_\alpha$ is given by $Q_\alpha = \frac{1}{2\alpha}(\alpha I - H)(\alpha I - S)$.
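The identities above are easy to check numerically. The following toy-scale sketch (dense NumPy with random blocks standing in for the discrete operators; not the authors' code) verifies that $T_\alpha$ from (2.5) equals $I - P_\alpha^{-1}\widetilde{A}$ and that $\widetilde{A} = P_\alpha - Q_\alpha$.

```python
# Toy-scale check of (2.5)-(2.6) and of the induced splitting A_tilde = P - Q.
import numpy as np

rng = np.random.default_rng(1)
n1 = n2 = 6; m = 4; alpha = 5.0
A1 = rng.standard_normal((n1, n1)) + n1 * np.eye(n1)
A2 = rng.standard_normal((n2, n2)) + n2 * np.eye(n2)
B1 = rng.standard_normal((m, n1)); B2 = rng.standard_normal((m, n2))
Z = np.zeros

H = np.block([[A1,          Z((n1, n2)), Z((n1, m))],
              [Z((n2, n1)), Z((n2, n2)), Z((n2, m))],
              [-B1,         Z((m, n2)),  Z((m, m))]])
S = np.block([[Z((n1, n1)), Z((n1, n2)), B1.T],
              [Z((n2, n1)), A2,          B2.T],
              [Z((m, n1)),  -B2,         Z((m, m))]])
A_tilde = H + S                                   # this is (2.1)
I = np.eye(n1 + n2 + m)

T = np.linalg.solve(S + alpha * I,
                    (alpha * I - H) @ np.linalg.solve(H + alpha * I, alpha * I - S))
P = (H + alpha * I) @ (S + alpha * I) / (2 * alpha)
Q = (alpha * I - H) @ (alpha * I - S) / (2 * alpha)
print(np.allclose(T, I - np.linalg.solve(P, A_tilde)))   # True: (2.6)
print(np.allclose(A_tilde, P - Q))                       # True: induced splitting
```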

2.2. A Relaxed Splitting Preconditioner

The relaxed splitting preconditioner is defined as follows:

$$M = \begin{bmatrix} A_1 & 0 & \tfrac{1}{\alpha}A_1B_1^T \\ 0 & A_2 & B_2^T \\ -B_1 & -B_2 & \alpha I - \tfrac{1}{\alpha}B_1B_1^T \end{bmatrix}. \tag{2.7}$$

It is important to note that the preconditioner $M$ can be written in factorized form as

$$M = \frac{1}{\alpha}\begin{bmatrix} A_1 & 0 & 0 \\ 0 & \alpha I & 0 \\ -B_1 & 0 & \alpha I \end{bmatrix}\begin{bmatrix} \alpha I & 0 & B_1^T \\ 0 & A_2 & B_2^T \\ 0 & -B_2 & \alpha I \end{bmatrix} = \begin{bmatrix} A_1 & 0 & 0 \\ 0 & I & 0 \\ -B_1 & 0 & I \end{bmatrix}\begin{bmatrix} I & 0 & \tfrac{1}{\alpha}B_1^T \\ 0 & A_2 & B_2^T \\ 0 & -B_2 & \alpha I \end{bmatrix} = \begin{bmatrix} A_1 & 0 & 0 \\ 0 & I & 0 \\ -B_1 & 0 & I \end{bmatrix}\begin{bmatrix} I & 0 & \tfrac{1}{\alpha^2}B_1^T \\ 0 & I & \tfrac{1}{\alpha}B_2^T \\ 0 & 0 & I \end{bmatrix}\begin{bmatrix} I & \tfrac{1}{\alpha^2}B_1^TB_2 & 0 \\ 0 & \widehat{A}_2 & 0 \\ 0 & 0 & \alpha I \end{bmatrix}\begin{bmatrix} I & 0 & 0 \\ 0 & I & 0 \\ 0 & -\tfrac{1}{\alpha}B_2 & I \end{bmatrix}, \tag{2.8}$$

where $\widehat{A}_2 = A_2 + \frac{1}{\alpha}B_2^TB_2$. Note that all factors on the right-hand side are invertible provided that $A_1$ and $\widehat{A}_2$ have positive definite symmetric parts; hence the new preconditioner is nonsingular. This condition is satisfied for both the Stokes and the Oseen problem. We can see from (2.1) and (2.7) that the difference between $M$ and $\widetilde{A}$ is given by

$$N = M - \widetilde{A} = \begin{bmatrix} 0 & 0 & \tfrac{1}{\alpha}A_1B_1^T - B_1^T \\ 0 & 0 & 0 \\ 0 & 0 & \alpha I - \tfrac{1}{\alpha}B_1B_1^T \end{bmatrix}. \tag{2.9}$$

This observation suggests that $M$ could be a good preconditioner, provided that an appropriate value of the parameter $\alpha$ is chosen. Furthermore, the structure of (2.9) facilitates the analysis of the eigenvalue distribution of the preconditioned matrix. In the following, we analyze the spectral properties of the preconditioned matrix $T = \widetilde{A}M^{-1}$.
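As a sanity check, the following toy-scale sketch (random stand-in blocks, not the paper's code) builds $M$ directly from (2.7), rebuilds it from the two-factor form at the start of (2.8), and confirms that $M-\widetilde{A}$ has the structure shown in (2.9).

```python
# Toy-scale check of (2.7)-(2.9) for the relaxed splitting preconditioner M.
import numpy as np

rng = np.random.default_rng(2)
n1 = n2 = 6; m = 4; alpha = 5.0
A1 = rng.standard_normal((n1, n1)) + n1 * np.eye(n1)
A2 = rng.standard_normal((n2, n2)) + n2 * np.eye(n2)
B1 = rng.standard_normal((m, n1)); B2 = rng.standard_normal((m, n2))
Z, I1, I2, Im = np.zeros, np.eye(n1), np.eye(n2), np.eye(m)

A_tilde = np.block([[A1,          Z((n1, n2)), B1.T],
                    [Z((n2, n1)), A2,          B2.T],
                    [-B1,         -B2,         Z((m, m))]])
M = np.block([[A1,          Z((n1, n2)), A1 @ B1.T / alpha],
              [Z((n2, n1)), A2,          B2.T],
              [-B1,         -B2,         alpha * Im - B1 @ B1.T / alpha]])   # (2.7)

F1 = np.block([[A1,          Z((n1, n2)), Z((n1, m))],
               [Z((n2, n1)), alpha * I2,  Z((n2, m))],
               [-B1,         Z((m, n2)),  alpha * Im]])
F2 = np.block([[alpha * I1,  Z((n1, n2)), B1.T],
               [Z((n2, n1)), A2,          B2.T],
               [Z((m, n1)),  -B2,         alpha * Im]])
print(np.allclose(M, F1 @ F2 / alpha))                   # True: first equality in (2.8)

N = M - A_tilde                                          # compare with (2.9)
print(np.allclose(N[:n1, -m:], A1 @ B1.T / alpha - B1.T))
print(np.allclose(N[-m:, -m:], alpha * Im - B1 @ B1.T / alpha))
```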

Theorem 2.1. The preconditioned matrix $T = \widetilde{A}M^{-1}$ has the eigenvalue 1 with multiplicity at least $n$, and the remaining eigenvalues are $\lambda_i$, where the $\lambda_i$ are the eigenvalues of the $m\times m$ matrix $Z_\alpha := \frac{1}{\alpha}(S_1+S_2) - \frac{1}{\alpha^2}S_2S_1$ with $S_1 = B_1A_1^{-1}B_1^T$ and $S_2 = B_2\widehat{A}_2^{-1}B_2^T$.

Proof. First of all, from $\widehat{T} := M^{-1}(\widetilde{A}M^{-1})M = M^{-1}\widetilde{A}$ we see that the right-preconditioned matrix $T$ is similar to the left-preconditioned one $\widehat{T}$, so $T$ and $\widehat{T}$ have the same eigenvalues. Furthermore, using the factorization (2.8) of $M$ and the expression (2.9) for $N$, we have

$$\widehat{T} = I - M^{-1}N = I - \begin{bmatrix} I & 0 & 0 \\ 0 & I & 0 \\ 0 & \tfrac{1}{\alpha}B_2 & I \end{bmatrix}\begin{bmatrix} I & -\tfrac{1}{\alpha^2}B_1^TB_2\widehat{A}_2^{-1} & 0 \\ 0 & \widehat{A}_2^{-1} & 0 \\ 0 & 0 & \tfrac{1}{\alpha}I \end{bmatrix}\begin{bmatrix} I & 0 & -\tfrac{1}{\alpha^2}B_1^T \\ 0 & I & -\tfrac{1}{\alpha}B_2^T \\ 0 & 0 & I \end{bmatrix}\begin{bmatrix} A_1^{-1} & 0 & 0 \\ 0 & I & 0 \\ B_1A_1^{-1} & 0 & I \end{bmatrix}\begin{bmatrix} 0 & 0 & \tfrac{1}{\alpha}A_1B_1^T - B_1^T \\ 0 & 0 & 0 \\ 0 & 0 & \alpha I - \tfrac{1}{\alpha}B_1B_1^T \end{bmatrix} = \begin{bmatrix} I & 0 & \ast \\ 0 & I & \ast \\ 0 & 0 & \tfrac{1}{\alpha}(S_1+S_2) - \tfrac{1}{\alpha^2}S_2S_1 \end{bmatrix}. \tag{2.10}$$

Therefore, from (2.10) we see that the eigenvalues of $T$ are 1 (with multiplicity at least $n = n_1+n_2$) and the $\lambda_i$'s.
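Theorem 2.1 can also be illustrated numerically. The self-contained sketch below (random toy blocks; a loose tolerance is used when counting the multiple eigenvalue 1, since it is only computed approximately) counts the eigenvalues of $\widetilde{A}M^{-1}$ equal to 1 and compares the rest with the spectrum of $Z_\alpha$.

```python
# Toy-scale illustration of Theorem 2.1.
import numpy as np

rng = np.random.default_rng(2)
n1 = n2 = 6; m = 4; alpha = 5.0
A1 = rng.standard_normal((n1, n1)) + n1 * np.eye(n1)
A2 = rng.standard_normal((n2, n2)) + n2 * np.eye(n2)
B1 = rng.standard_normal((m, n1)); B2 = rng.standard_normal((m, n2))
Z = np.zeros; Im = np.eye(m)

A_tilde = np.block([[A1,          Z((n1, n2)), B1.T],
                    [Z((n2, n1)), A2,          B2.T],
                    [-B1,         -B2,         Z((m, m))]])
M = np.block([[A1,          Z((n1, n2)), A1 @ B1.T / alpha],
              [Z((n2, n1)), A2,          B2.T],
              [-B1,         -B2,         alpha * Im - B1 @ B1.T / alpha]])

eig_T = np.linalg.eigvals(A_tilde @ np.linalg.inv(M))    # right-preconditioned matrix
print(np.sum(np.abs(eig_T - 1.0) < 1e-6))                # at least n = n1 + n2 of them

A2_hat = A2 + B2.T @ B2 / alpha
S1 = B1 @ np.linalg.solve(A1, B1.T)
S2 = B2 @ np.linalg.solve(A2_hat, B2.T)
Z_alpha = (S1 + S2) / alpha - S2 @ S1 / alpha**2
print(np.sort_complex(np.linalg.eigvals(Z_alpha)))       # the remaining eigenvalues of T
```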

Lemma 2.2. Let $A_\alpha = \begin{bmatrix} A_1 - \tfrac{1}{\alpha}B_1^TB_1 & -\tfrac{1}{\alpha}B_1^TB_2 \\ 0 & A_2 \end{bmatrix} \in\mathbb{R}^{n\times n}$, let $\alpha > \sigma_{\max}(B_1^TB_1)/\sigma_{\min}(A_1)$, and let $A_1$ and $A_2$ be positive definite. Then $A_\alpha$ is positive definite.

Lemma 2.3. Let $A_\alpha\in\mathbb{R}^{n\times n}$ and $B\in\mathbb{R}^{m\times n}$ ($m\le n$). Let $\alpha\in\mathbb{R}$, and assume that the matrices $A_\alpha$, $A_\alpha + \tfrac{1}{\alpha}B^TB$, $BA_\alpha^{-1}B^T$, and $B(A_\alpha + \tfrac{1}{\alpha}B^TB)^{-1}B^T$ are all invertible. Then

$$\Bigl[B\bigl(A_\alpha + \tfrac{1}{\alpha}B^TB\bigr)^{-1}B^T\Bigr]^{-1} = \bigl(BA_\alpha^{-1}B^T\bigr)^{-1} + \frac{1}{\alpha}I. \tag{2.11}$$

Theorem 2.4. Let $\alpha > \sigma_{\max}(B_1^TB_1)/\sigma_{\min}(A_1)$. The remaining eigenvalues $\lambda_i$ of $Z_\alpha$ are of the form

$$\lambda_i = \frac{\mu_i}{\alpha+\mu_i}, \tag{2.12}$$

where the $\mu_i$'s satisfy the eigenvalue problem $BA_\alpha^{-1}B^T\phi_i = \mu_i\phi_i$.

Proof. We note that

$$Z_\alpha = \frac{1}{\alpha}(S_1+S_2) - \frac{1}{\alpha^2}S_2S_1 = \frac{1}{\alpha}\begin{pmatrix} B_1 & B_2 \end{pmatrix}\begin{pmatrix} A_1^{-1} & 0 \\ -\tfrac{1}{\alpha}\widehat{A}_2^{-1}B_2^TB_1A_1^{-1} & \widehat{A}_2^{-1} \end{pmatrix}\begin{pmatrix} B_1^T \\ B_2^T \end{pmatrix} = \frac{1}{\alpha}\begin{pmatrix} B_1 & B_2 \end{pmatrix}\begin{pmatrix} A_1 & 0 \\ \tfrac{1}{\alpha}B_2^TB_1 & \widehat{A}_2 \end{pmatrix}^{-1}\begin{pmatrix} B_1^T \\ B_2^T \end{pmatrix} = \frac{1}{\alpha}\begin{pmatrix} B_1 & B_2 \end{pmatrix}\left[\begin{pmatrix} A_1 - \tfrac{1}{\alpha}B_1^TB_1 & -\tfrac{1}{\alpha}B_1^TB_2 \\ 0 & A_2 \end{pmatrix} + \begin{pmatrix} \tfrac{1}{\alpha}B_1^TB_1 & \tfrac{1}{\alpha}B_1^TB_2 \\ \tfrac{1}{\alpha}B_2^TB_1 & \tfrac{1}{\alpha}B_2^TB_2 \end{pmatrix}\right]^{-1}\begin{pmatrix} B_1^T \\ B_2^T \end{pmatrix} = \frac{1}{\alpha}B\Bigl(A_\alpha + \frac{1}{\alpha}B^TB\Bigr)^{-1}B^T. \tag{2.13}$$

Thus, the remaining eigenvalues are the solutions of the eigenproblem

$$\frac{1}{\alpha}B\Bigl(A_\alpha + \frac{1}{\alpha}B^TB\Bigr)^{-1}B^T\phi_i = \lambda_i\phi_i. \tag{2.14}$$

By Lemma 2.3, we obtain

$$\frac{1}{\alpha}\phi_i = \lambda_i\Bigl[B\Bigl(A_\alpha + \frac{1}{\alpha}B^TB\Bigr)^{-1}B^T\Bigr]^{-1}\phi_i = \lambda_i\bigl(BA_\alpha^{-1}B^T\bigr)^{-1}\phi_i + \frac{\lambda_i}{\alpha}\phi_i. \tag{2.15}$$

Hence $\lambda_i = \mu_i/(\alpha+\mu_i)$, where the $\mu_i$'s satisfy the eigenvalue problem $BA_\alpha^{-1}B^T\phi_i = \mu_i\phi_i$.

In addition, it follows easily that the remaining eigenvalues satisfy $\lambda_i\to 0$ as $\alpha\to\infty$. Figures 1 and 2 show this behavior: the non-unit eigenvalues of the preconditioned matrix become increasingly clustered at the origin as the parameter becomes larger.
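Both observations are easy to confirm numerically. The following self-contained sketch (random toy blocks standing in for the discrete operators, with $\alpha$ large enough for the assumption of Theorem 2.4; not the authors' code) compares the eigenvalues of $Z_\alpha$ with $\mu_i/(\alpha+\mu_i)$, where the $\mu_i$ are the eigenvalues of $BA_\alpha^{-1}B^T$, and then prints the largest non-unit eigenvalue modulus as $\alpha$ grows.

```python
# Toy-scale check of (2.12) and of the clustering of the non-unit eigenvalues.
import numpy as np

rng = np.random.default_rng(3)
n1 = n2 = 6; m = 4; alpha = 50.0
A1 = rng.standard_normal((n1, n1)) + n1 * np.eye(n1)
A2 = rng.standard_normal((n2, n2)) + n2 * np.eye(n2)
B1 = rng.standard_normal((m, n1)); B2 = rng.standard_normal((m, n2))

S1 = B1 @ np.linalg.solve(A1, B1.T)
S2 = B2 @ np.linalg.solve(A2 + B2.T @ B2 / alpha, B2.T)
Z_alpha = (S1 + S2) / alpha - S2 @ S1 / alpha**2

A_alpha = np.block([[A1 - B1.T @ B1 / alpha, -B1.T @ B2 / alpha],
                    [np.zeros((n2, n1)),     A2]])           # as in Lemma 2.2
B = np.hstack([B1, B2])
mu = np.linalg.eigvals(B @ np.linalg.solve(A_alpha, B.T))
print(np.allclose(np.sort_complex(mu / (alpha + mu)),
                  np.sort_complex(np.linalg.eigvals(Z_alpha))))  # True (up to ordering)

for a in (10.0, 100.0, 1000.0):        # the non-unit eigenvalues shrink as alpha grows
    S2a = B2 @ np.linalg.solve(A2 + B2.T @ B2 / a, B2.T)
    Za = (S1 + S2a) / a - S2a @ S1 / a**2
    print(a, np.max(np.abs(np.linalg.eigvals(Za))))
```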

2.3. Practical Implementation of the Relaxed Splitting Preconditioner

In this subsection, we outline the practical implementation of the relaxed splitting preconditioner within a Krylov subspace method. The main step is applying the preconditioner, that is, solving linear systems with the coefficient matrix $M$. From (2.8), we see that the relaxed splitting preconditioner can be factorized as

$$M = \begin{bmatrix} A_1 & 0 & 0 \\ 0 & I & 0 \\ -B_1 & 0 & I \end{bmatrix}\begin{bmatrix} I & 0 & \tfrac{1}{\alpha^2}B_1^T \\ 0 & I & \tfrac{1}{\alpha}B_2^T \\ 0 & 0 & I \end{bmatrix}\begin{bmatrix} I & \tfrac{1}{\alpha^2}B_1^TB_2 & 0 \\ 0 & \widehat{A}_2 & 0 \\ 0 & 0 & \alpha I \end{bmatrix}\begin{bmatrix} I & 0 & 0 \\ 0 & I & 0 \\ 0 & -\tfrac{1}{\alpha}B_2 & I \end{bmatrix}, \tag{2.16}$$

showing that applying the preconditioner requires solving two linear systems at each step, with coefficient matrices $A_1$ and $\widehat{A}_2 = A_2 + \frac{1}{\alpha}B_2^TB_2$. Several different approaches are available for solving the linear systems involving $A_1$ and $\widehat{A}_2$; we defer the discussion of these to Section 3.
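For illustration, the following sketch (Python/SciPy, not the authors' IFISS/MATLAB code) applies $M^{-1}$ by inverting the four factors of (2.16) in sequence: one sparse LU solve with $A_1$, one with $\widehat{A}_2$, and a few sparse matrix-vector products per application. The blocks are assumed to be given as scipy.sparse matrices, and the result is wrapped as a LinearOperator so that it can be handed to a Krylov solver.

```python
# Sketch of one application of the relaxed splitting preconditioner, z = M^{-1} r,
# obtained by inverting the four factors of (2.16) in sequence.
import numpy as np
import scipy.sparse.linalg as spla

def relaxed_splitting_preconditioner(A1, A2, B1, B2, alpha):
    n1, n2, m = A1.shape[0], A2.shape[0], B1.shape[0]
    A2_hat = (A2 + B2.T @ B2 / alpha).tocsc()
    lu_A1  = spla.splu(A1.tocsc())     # SuperLU factorization (COLAMD ordering by default)
    lu_A2h = spla.splu(A2_hat)

    def apply(r):
        r1, r2, r3 = r[:n1], r[n1:n1 + n2], r[n1 + n2:]
        y1 = lu_A1.solve(r1)                       # first factor: solve with A1
        y3 = r3 + B1 @ y1
        w1 = y1 - B1.T @ y3 / alpha**2             # second factor
        w2 = r2 - B2.T @ y3 / alpha
        v2 = lu_A2h.solve(w2)                      # third factor: solve with A2_hat
        v3 = y3 / alpha
        v1 = w1 - B1.T @ (B2 @ v2) / alpha**2
        z3 = v3 + B2 @ v2 / alpha                  # fourth factor
        return np.concatenate([v1, v2, z3])

    N = n1 + n2 + m
    return spla.LinearOperator((N, N), matvec=apply)
```

Each application therefore costs one forward/backward substitution with the LU factors of $A_1$, one with those of $\widehat{A}_2$, and a handful of sparse matrix-vector products, in agreement with the discussion above.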

We conclude this section with a discussion of diagonal scaling, which we found to be beneficial for the relaxed splitting preconditioner. Unless otherwise specified, we perform a preliminary symmetric scaling of the linear system $\widetilde{A}\mathbf{x}=\mathbf{b}$ of the form $D^{-1/2}\widetilde{A}D^{-1/2}\mathbf{y} = D^{-1/2}\mathbf{b}$ with $\mathbf{y} = D^{1/2}\mathbf{x}$ and $D = \operatorname{diag}(D_1, D_2, I)$, where $\operatorname{diag}(D_1, D_2)$ is the main diagonal of the velocity submatrix $A$. Incidentally, it is worth noting that diagonal scaling is also very beneficial for the HSS preconditioner (see [13]) and the DS preconditioner (see [16, 17]).
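A minimal sketch of this scaling follows, assuming the assembled sparse matrix $\widetilde{A}$, the right-hand side, and the number $n$ of velocity unknowns are given, and that the velocity diagonal is positive (as it is for the finite-element matrices used here).

```python
# Symmetric diagonal scaling D^{-1/2} A_tilde D^{-1/2} with D = diag(D1, D2, I).
import numpy as np
import scipy.sparse as sp

def symmetric_diagonal_scaling(A_tilde, b, n):
    d = A_tilde.diagonal().copy()
    d[n:] = 1.0                               # leave the pressure block unscaled
    D_inv_sqrt = sp.diags(1.0 / np.sqrt(d))   # requires positive velocity diagonal
    A_scaled = (D_inv_sqrt @ A_tilde @ D_inv_sqrt).tocsr()
    b_scaled = D_inv_sqrt @ b
    return A_scaled, b_scaled, D_inv_sqrt

# after solving A_scaled @ y = b_scaled, recover the unknowns via x = D_inv_sqrt @ y
```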

3. Numerical Experiments

In this section, numerical experiments are carried out to verify the performance of our preconditioner when solving the linear systems arising from the finite-element discretization of the two-dimensional linearized Stokes and Oseen models of incompressible flow. The test problem is the leaky lid-driven cavity problem generated by the IFISS software package [19]. We used a zero initial guess and stopped the iteration when $\|\mathbf{r}_k\|_2/\|\mathbf{b}\|_2 \le 10^{-6}$, where $\mathbf{r}_k$ is the residual vector. The relaxed splitting preconditioner is combined with restarted GMRES($m$); we set $m = 30$.
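The outer iteration can be sketched as follows (in Python/SciPy rather than the IFISS/MATLAB setup actually used in the experiments). Note that SciPy applies the supplied preconditioner on the left, which differs slightly from the right-preconditioned analysis of Section 2 but serves the same purpose, and that older SciPy versions name the relative tolerance `tol` instead of `rtol`.

```python
# Sketch of the outer solver: restarted GMRES(30), zero initial guess,
# stopping when the relative residual drops below 1e-6.
import numpy as np
import scipy.sparse.linalg as spla

def solve_with_rsp(A_tilde, b, M_op, restart=30, rtol=1e-6, maxiter=500):
    """A_tilde: assembled saddle point matrix; M_op: LinearOperator applying M^{-1}."""
    its = [0]
    def count(_):                      # called once per inner iteration
        its[0] += 1
    x, info = spla.gmres(A_tilde, b, x0=np.zeros_like(b), M=M_op,
                         restart=restart,
                         rtol=rtol, atol=0.0,   # older SciPy: tol= instead of rtol=
                         maxiter=maxiter, callback=count, callback_type="pr_norm")
    rel_res = np.linalg.norm(b - A_tilde @ x) / np.linalg.norm(b)
    return x, info, its[0], rel_res
```

Here `its[0]` plays the role of the iteration counts ("its") reported in the tables below, and `info == 0` indicates convergence.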

We consider the 2D leaky lid-driven cavity problem discretized by the finite-element method on uniform grids [1]. The subproblems arising from the application of the relaxed splitting preconditioner are solved by direct methods. We use the AMD reordering technique [20, 21] for the degrees of freedom, which makes the Cholesky (for Stokes) or LU (for Oseen) factorization of $A_1$ and $\widehat{A}_2$ relatively fast. For simplicity, we use $\alpha = 100$ in all numerical experiments.

In Table 1, we show iteration counts (referred to as "its") for the relaxed splitting preconditioned GMRES(30) when solving the steady Stokes problem on a sequence of uniform grids. We see that the iteration count is independent of the mesh size for both the Q2-Q1 and the Q2-P1 finite-element schemes, and that the Q2-P1 scheme performs noticeably better than the Q2-Q1 scheme.

In Tables 2 and 3, we show iteration counts for the steady Oseen problem on a sequence of uniform grids and for different values of $\nu$, using Picard and Newton linearizations of the generalized Oseen problems, respectively. We found that the relaxed splitting preconditioner has difficulty with low viscosities; that is, the number of iterations increases as the kinematic viscosity decreases. Here too, it appears that the Q2-P1 finite-element scheme gives faster convergence than the Q2-Q1 scheme.

Next, we report analogous experiments involving the generalized Stokes problem and the generalized Oseen problem. As we can see from Table 4, for the generalized Stokes problem the results are virtually the same as those obtained in the steady case: as in Table 1, the rate of convergence of the relaxed splitting preconditioned GMRES(30) is essentially independent of the mesh size for both the Q2-Q1 and the Q2-P1 finite-element schemes.

In Tables 5 and 6, for the generalized Oseen problems, we compare our preconditioner with the RDF preconditioner of [17]. The RDF preconditioner can be factorized as follows:

$$P = \begin{bmatrix} I & 0 & \tfrac{1}{\alpha}B_1^T \\ 0 & I & 0 \\ 0 & 0 & I \end{bmatrix}\begin{bmatrix} \widehat{A}_1 & 0 & 0 \\ 0 & I & 0 \\ -B_1 & 0 & I \end{bmatrix}\begin{bmatrix} I & 0 & 0 \\ 0 & \widehat{A}_2 & \tfrac{1}{\alpha}B_2^T \\ 0 & 0 & I \end{bmatrix}\begin{bmatrix} I & 0 & 0 \\ 0 & I & 0 \\ 0 & -B_2 & \alpha I \end{bmatrix}, \tag{3.1}$$

where $\widehat{A}_1 = A_1 + \frac{1}{\alpha}B_1^TB_1$ and $\widehat{A}_2 = A_2 + \frac{1}{\alpha}B_2^TB_2$. This shows that the RDF preconditioner also requires solving two linear systems at each step, namely with $\widehat{A}_1$ and $\widehat{A}_2$, whereas the new preconditioner requires solves with $A_1$ and $\widehat{A}_2$; the linear system with $A_1$ is easier to solve than that with $\widehat{A}_1$. From Tables 5 and 6 we see, for the 128 × 128 grid and different viscosities, that the RDF preconditioner leads to slightly lower iteration counts than the new preconditioner, but the new preconditioner is slightly faster in terms of elapsed CPU time.

From Figures 3 and 4, we find that the intervals of values of the parameter $\alpha$ for which the relaxed splitting preconditioner performs well are very wide. This implies that the relaxed splitting preconditioner is not sensitive to the choice of the parameter. Noting that the optimal parameters for the relaxed splitting preconditioner are always larger than 50, we can simply take $\alpha = 100$ to obtain essentially optimal results.

4. Conclusions

In this paper, we have described a relaxed splitting preconditioner for the linear systems arising from discretizations of the Navier-Stokes equations and analyzed the spectral properties of the preconditioned matrix. The numerical experiments show good performance on a wide range of cases. We used direct methods for the solution of the inner linear systems, which is not practical for larger 2D or 3D problems because of memory and time constraints. In that case, the exact inner solves can be replaced with inexact ones; this will be the subject of future research.

Acknowledgments

This research is supported by NSFC (60973015 and 61170311), Chinese Universities Specialized Research Fund for the Doctoral Program (20110185110020), and Sichuan Province Sci. & Tech. Research Project (12ZC1802).