Abstract

Using the Sherman-Morrison-Woodbury formula, we introduce a preconditioner based on a parameterized splitting idea for generalized saddle point problems, which may be singular and nonsymmetric. By analyzing the eigenvalues of the preconditioned matrix, we find that when the parameter α is big enough, the preconditioned matrix has an eigenvalue at 1 with multiplicity at least n, and the remaining eigenvalues are all located in a unit circle centered at 1. In particular, when the preconditioner is used for general saddle point problems, it still guarantees an eigenvalue at 1 with the same multiplicity, and the remaining eigenvalues tend to 1 as the parameter α → +∞. Consequently, good convergence can be expected when Krylov subspace methods such as restarted GMRES are used. Numerical results for Stokes problems and Oseen problems are presented to illustrate the behavior of the preconditioner.

1. Introduction

In some scientific and engineering applications, such as finite element methods for solving partial differential equations [1, 2] and computational fluid dynamics [3, 4], we often consider solutions of generalized saddle point problems of the form

    ( A   B^T ) ( x )     ( f )
    ( B   -C  ) ( y )  =  ( g ),                                   (1)

where A ∈ R^{n×n} and C ∈ R^{m×m} are positive semidefinite, B ∈ R^{m×n}, x, f ∈ R^n, and y, g ∈ R^m. When C = 0, (1) becomes a general saddle point problem, which has also been an object of study for many authors.
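
For concreteness, a toy instance of the coefficient matrix of (1) can be assembled in MATLAB as follows; the dimensions and the random data are purely illustrative and are not taken from the test problems used later in the paper.

    % Toy data only: A, B, C are random stand-ins for the blocks of (1).
    n = 100; m = 40;
    A = 5*speye(n) + sprandn(n, n, 0.05);    % n-by-n block (may be nonsymmetric)
    B = sprandn(m, n, 0.05);                 % m-by-n block
    C = speye(m);                            % m-by-m, symmetric positive semidefinite
    K = [A, B'; B, -C];                      % coefficient matrix of (1)
    rhs = [randn(n, 1); randn(m, 1)];        % right-hand side [f; g]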

It is well known that when the matrices A, B, and C are large and sparse, iterative methods are more efficient and attractive than direct methods, provided that (1) has a good preconditioner. In recent years, many preconditioning techniques have been developed for solving linear systems; for example, Saad [5] and Chen [6] have comprehensively surveyed some classical preconditioning techniques, including the ILU preconditioner, triangular preconditioner, SPAI preconditioner, multilevel recursive Schur complement preconditioner, and sparse wavelet preconditioner. In particular, many preconditioning methods for saddle point problems have been presented recently, such as the dimensional splitting (DS) preconditioner [7], the relaxed dimensional factorization (RDF) preconditioner [8], the splitting preconditioner [9], and the Hermitian and skew-Hermitian splitting preconditioner [10].

Among these results, Cao et al. [9] used the splitting idea to construct a preconditioner for saddle point problems in which the matrix A is symmetric positive definite and B is of full row rank. With their preconditioner, the eigenvalues of the preconditioned matrix tend to 1 as the parameter tends to infinity. Consequently, as the examples in [9] show, the preconditioner guarantees good convergence when iterative methods are used.

In this paper, motivated by [9], we use the splitting idea to present a preconditioner for the system (1), where A may be nonsymmetric and singular (when C ≠ 0). We find that when the parameter α is big enough, the preconditioned matrix has an eigenvalue at 1 with multiplicity at least n, and the remaining eigenvalues are all located in a unit circle centered at 1. In particular, when the preconditioner is applied to general saddle point problems (namely, C = 0), the multiplicity of the eigenvalue at 1 is still at least n, and the remaining eigenvalues tend to 1 as the parameter α → +∞.

The remainder of the paper is organized as follows. In Section 2, we present our preconditioner based on the splitting idea and analyze the eigenvalue bounds of the preconditioned matrix. In Section 3, we use numerical examples to show the behavior of the new preconditioner. Finally, we draw some conclusions and outline future work in Section 4.

2. A Parameterized Splitting Preconditioner

We now use the splitting idea with a variable parameter α to construct a preconditioner for the system (1).

Firstly, it is evident that when α ≠ 0, the system (1) is equivalent to the system (2). Writing the coefficient matrix of (2) as the difference of the two matrices defined in (3) and multiplying both sides of system (2) from the left by the inverse of the first of these matrices, we obtain the preconditioned system (4). In this way we obtain a preconditioned linear system from (1) by the splitting idea, with the first splitting matrix serving as the corresponding preconditioner. We now analyze the eigenvalues of the preconditioned system (4).
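
In generic notation (here M and N merely stand for the two matrices of the splitting defined in (3); their explicit blocks are given there), the underlying argument is the standard splitting one: if the coefficient matrix of (2) is M - N with M nonsingular, then multiplying (2) from the left by M^{-1} gives

    M^{-1}(M - N) u = M^{-1} b,   that is,   (I - M^{-1} N) u = M^{-1} b,

so M acts as the (left) preconditioner and the preconditioned matrix is I - M^{-1} N.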

Theorem 1. The preconditioned matrix has an eigenvalue at 1 with multiplicity at least n, and the remaining eigenvalues satisfy an estimate which, for α big enough, places them in the unit circle centered at 1.

Proof. A direct computation with the preconditioner and the coefficient matrix shows that the preconditioned matrix has an eigenvalue at 1 with multiplicity at least n.
For the remaining eigenvalues, let λ be one of them with an associated eigenvector; substituting this pair into the eigenvalue equation and multiplying the resulting equality from the left by a suitable factor yields the stated bound.
This completes the proof of Theorem 1.

Remark 2. From Theorem 1 we see that when the parameter α is big enough, the moduli of the nonzero eigenvalues are located in the interval (0, 2).
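
This follows from the triangle inequality: if λ is an eigenvalue with |λ - 1| < 1, then

    |λ| ≤ 1 + |λ - 1| < 2   and   |λ| ≥ 1 - |λ - 1| > 0,

so |λ| lies in (0, 2).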

Remark 3. In Theorem 1, if the matrix C = 0, then the nonzero eigenvalues satisfy the bound (13); in particular, they tend to 1 as the parameter α → +∞.

Figures 1, 2, and 3 show the eigenvalue plots of the preconditioned matrices obtained with our preconditioner. As the following numerical experiments show, this clustering is useful for accelerating the convergence of Krylov subspace iterative methods.

Additionally, in order to apply our preconditioner in practice we must efficiently compute the inverse appearing in (14). This can be tackled by the well-known Sherman-Morrison-Woodbury formula

    (G + U W V)^{-1} = G^{-1} - G^{-1} U (W^{-1} + V G^{-1} U)^{-1} V G^{-1},          (15)

where G ∈ R^{p×p} and W ∈ R^{q×q} are invertible matrices, U ∈ R^{p×q} and V ∈ R^{q×p} are any matrices, and p and q are any positive integers.

From (15) we immediately obtain the expression (16). In the following numerical examples we will always use (16) to compute the inverse appearing in (14).
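
As a generic illustration of how such a Sherman-Morrison-Woodbury-based solve is typically organized (the matrices G, U, W, V and all sizes below are placeholders, not the actual blocks appearing in (14) and (16)), one can factor the large matrix once, form the small capacitance matrix, and then apply the inverse to any vector by triangular solves:

    % Generic illustration only: G, U, W, V are placeholders, not the blocks of (14)/(16).
    p = 200; q = 20;
    G = 4*speye(p) + sprandn(p, p, 0.01);      % large sparse invertible matrix
    U = randn(p, q);  V = randn(q, p);  W = eye(q);
    z = randn(p, 1);
    [LG, UG, PG] = lu(G);                      % factor G once
    Gsolve = @(r) UG \ (LG \ (PG * r));        % r -> G \ r via triangular solves
    S = inv(W) + V * Gsolve(U);                % small q-by-q capacitance matrix
    y = Gsolve(z) - Gsolve(U * (S \ (V * Gsolve(z))));   % (G + U*W*V) \ z by (15)
    norm((G + U*W*V) * y - z)                  % check: should be near machine precision

The point is that only one factorization of the large matrix and one small q-by-q system are needed, after which each application of the inverse costs a few triangular solves.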

3. Numerical Examples

In this section, we give numerical experiments to illustrate the behavior of our preconditioner. The numerical experiments are carried out in MATLAB 7.1. The linear systems are obtained by applying finite element methods to Stokes problems and steady Oseen problems, and they correspond, respectively, to the cases of (1) C = 0, which arises from the Q2-Q1 FEM; (2) C ≠ 0, which arises from the Q1-P0 FEM.

Furthermore, we compare our preconditioner with that of [9] in the case of general saddle point problems (namely, C = 0). For the general saddle point problem, [9] presents a preconditioner depending on one parameter and proves that, when A is symmetric positive definite, the preconditioned matrix has an eigenvalue 1 with multiplicity at least n, and that the remaining eigenvalues satisfy the bound (19), where the quantities in (19) are positive singular values of a related matrix.

All these systems can be generated by using the IFISS software package [11] (this is a free package that can be downloaded from the site http://www.maths.manchester.ac.uk/~djs/ifiss/). We use restarted GMRES(20) as the Krylov subspace method, and we always take a zero initial guess. The stopping criterion is that the relative residual ||r_k||/||r_0|| drops below the prescribed tolerance, where r_k is the residual vector at the kth iteration.

In the whole course of computation, we always replace the inverse in (14) with the expression (16) and use the LU factorization of the matrix being inverted to handle the action of its inverse on a vector z arising in the iteration. Concretely, writing this matrix as LU, we compute the corresponding matrix-vector product as U\(L\z) in MATLAB terms. In the following tables, the notation norm(·, fro) denotes the Frobenius norm of a matrix. The total time is the sum of the LU time and the iterative time, where the LU time is the time needed to compute this LU factorization.
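
To make these mechanics concrete, the following MATLAB sketch (reusing the toy K and rhs assembled in the sketch of Section 1) factors a preconditioner once and passes its application to restarted GMRES(20) as a function handle. The matrix P here is only a stand-in, not the preconditioner defined by (14) and (16), and the tolerance and iteration limit are illustrative.

    % Assumes the toy K and rhs from the sketch in Section 1 are in the workspace.
    % P is a placeholder: in the paper the preconditioner is the matrix defined by (14),
    % applied through the Sherman-Morrison-Woodbury expression (16).
    P = K;
    [LP, UP, PP] = lu(P);                    % LU factorization, computed once ("LU time")
    applyPrec = @(z) UP \ (LP \ (PP * z));   % z -> P \ z by two triangular solves
    restart = 20; tol = 1e-6; maxit = 200;   % GMRES(20) as in the paper; tol, maxit illustrative
    [x, flag, relres, iter] = gmres(@(z) K * z, rhs, restart, tol, maxit, applyPrec);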

Case 1 (for our preconditioner). Using the Q2-Q1 FEM in Stokes problems and steady Oseen problems with different viscosity coefficients; the results are given in Tables 1, 2, 3, and 4.

Case 1 (for the preconditioner of [9]). Using the Q2-Q1 FEM in Stokes problems and steady Oseen problems with different viscosity coefficients; the results are given in Tables 5, 6, 7, and 8.

Case 2. Using the Q1-P0 FEM in Stokes problems and steady Oseen problems with different viscosity coefficients; the results are given in Tables 9, 10, 11, and 12.

From Tables 1, 2, 3, 4, 5, 6, 7, and 8 we can see that the results are in agreement with the theoretical analyses (13) and (19), respectively. Additionally, comparing them with the results in Tables 9, 10, 11, and 12, we find that although the iteration counts in Case 1 (both for the preconditioner of [9] and for our preconditioner) are smaller than those in Case 2, the time spent in Case 1 is much greater than that in Case 2. This is because the coefficient matrices generated by the Q2-Q1 FEM are much denser than those generated by the Q1-P0 FEM. This is partly illustrated by Tables 13 and 14; the other cases can be illustrated similarly.

4. Conclusions

In this paper, we have introduced a splitting preconditioner for solving generalized saddle point systems. The theoretical analysis shows that the moduli of the eigenvalues of the preconditioned matrix lie in the interval (0, 2) when the parameter α is big enough. In particular, when the submatrix C = 0, the eigenvalues tend to 1 as the parameter α → +∞. These properties are tested on several examples, and the results agree with the theoretical analysis.

Some questions remain for future work: how should the parameter α be chosen so that the preconditioned matrix has better properties, and how can the submatrices be further preconditioned to improve our preconditioner?

Acknowledgments

The authors are grateful to the referees and the editor, Professor P. N. Shivakumar, for their helpful suggestions for revising this paper. The authors would also like to thank H. C. Elman, A. Ramage, and D. J. Silvester for their free IFISS software package. This research is supported by the Chinese Universities Specialized Research Fund for the Doctoral Program (20110185110020), the Sichuan Province Sci. & Tech. Research Project (2012GZX0080), and the Fundamental Research Funds for the Central Universities.