Abstract

We establish two types of block triangular preconditioners applied to the linear saddle point problems with the singular (1,1) block. These preconditioners are based on the results presented in the paper of Rees and Greif (2007). We study the spectral characteristics of the preconditioners and show that all eigenvalues of the preconditioned matrices are strongly clustered. The choice of the parameter is involved. Furthermore, we give the optimal parameter in practical. Finally, numerical experiments are also reported for illustrating the efficiency of the presented preconditioners.

1. Introduction

Consider the following saddle point linear system:

where is a symmetric and positive semidefinite matrix with nullity (= dim(kernel())) , the matrix has full row rank, vectors and vectors , and vectors are unknown. The assumption that is nonsingular implies that , which we use in the following analysis. Under these assumptions, the system (1.1) has a unique solution. This system is very important and appears in many different applications of scientific computing, such as constrained optimization [1, 2], the finite element method for solving the Navier-Stokes equation [36], fluid dynamics, constrained least problems and generalized least squares problems [710], and the discretized time-harmonic Maxwell equations in mixed form [11].

Recently, T. Rees and C. Greif explored a preconditioning technique applied to the problem of solving linear systems arising from primal-dual interior point algorithms and quadratic programming in [12]. The preconditioner has the attractive property of improved eigenvalue clustering with increasing ill-conditioned block of the symmetric saddle point systems. To solve the saddle point system (1.1), Krylov subspace methods are usually used in modern solution techniques which rely on the ease of sparse matrix-vector products and converges at a rate dependent on the number of distinct eigenvalues of the preconditioned matrix [13, 14].

The rest of this paper, two types of block triangular preconditioners are established for the saddle point systems with an ill-conditioned (1,1) block. Our methodology extends the recent work done by Greif and Schötzau [11, 15], and Rees and Greif [12].

This paper is organized as follows. In Section 2, we will establish new precondtioners and study the spectral analysis of the new preconditioners for the saddle point system. Some numerical examples are given in Section 3. Finally, conclusions are made in Section 4.

2. Preconditioners and Spectrum Analysis

For linear systems, the convergence of an applicable iterative method is determined by the distribution of the eigenvalues of the coefficient matrix. In particular, it is desirable that the number of distinct eigenvalues, or at least the number of clusters, is small, because in this case convergence will be rapid. To be more precise, if there are only a few distinct eigenvalues, then optimal methods like BiCGStab or GMRES will terminate (in exact arithmetic) after a small and precisely defined number of steps.

Rees and Greif [12] established the following preconditioner for the symmetric saddle point system (1.1):

where is a scalar and is an symmetric positive weight matrix. Similar to , we introduce the following precondtioners for solving symmetric saddle point systems:

where is a parameter, and

where .

Theorem 2.1. The matrix has two distinct eigenvalues which are given by with algebraic multiplicity and , respectively. The remaining eigenvalues satisfy the relation where are some generalized eigenvalues of the following generalized eigenvalue problem: Let be a basis of the null space of , let be a basis of the null space of , and be a set of linearly independent vectors that complete to a basis of . Then the vectors , the vectors , and the vectors , are linearly independent eigenvectors associated with , and the vectors are linearly independent eigenvectors associated with .

Proof. Suppose that is an eigenvalue of , whose eigenvector is . So, we have Furthermore, it satisfies the generalized eigenvalue problem The second block row gives , substituting which into the first block row equation gives By inspection it is straightforward to see that any vector satisfies (2.9) with ; thus the latter is an eigenvalue of and is an eigenvector of . We obtain that the eigenvalue has algebraic multiplicity . From the nullity of it follows that there are linearly independent null vectors of . For each such null vector we can obtain each with algebraic multiplicity and is an eigenvalue of .
Let the vectors be a basis of the null space of , and let be a basis of the null space of . Because , the vectors and are linearly independent and together span the subspace . Let the vectors complete to a basis of . It follows that the vectors , the vectors , and the vectors , are linearly independent eigenvectors associated with , and the vectors are linearly independent eigenvectors associated with .
Next, we consider the remaining eigenvalues. Suppose and . From (2.9) we obtain
where , which implies that .

When the parameter , we easily obtain the following corollary from Theorem 2.1.

Corollary 2.2. Let . Then the matrix has one eigenvalue which is given by with algebraic multiplicity . The remaining eigenvalues satisfy the relation where are some generalized eigenvalues of the following generalized eigenvalue problem:

Theorem 2.3. The matrix has two distinct eigenvalues which are given by with algebraic multiplicity and , respectively. The remaining eigenvalues lie in the interval

Proof. According to Theorem 2.1, we know that the matrix has two distinct eigenvalues which are given by with algebraic multiplicity and , respectively.
From (2.9), we can obtain that the remaining eigenvalues satisfy
where , in which is the standard Euclidean inner product, and . Evidently, we have . The expression (2.17) gives an explicit formula in terms of the generalized eigenvalues of (2.17) and can be used to identify the intervals in which the eigenvalues lie. Furthermore, we can obtain that the remaining eigenvalues lie in the interval or .

When the parameter , we easily obtain the following corollary from Theorem 2.3.

Corollary 2.4. Let . Then the matrix has one eigenvalue which is given by with algebraic multiplicity . The remaining eigenvalues lie in the interval .

Theorem 2.5. The matrix has two distinct eigenvalues which are given by with algebraic multiplicity and , respectively. The remaining eigenvalues satisfy the relation where are some generalized eigenvalues of the following generalized eigenvalue problem: Let be a basis of the null space of , and let be a basis of the null space of , and let be a set of linearly independent vectors that complete to a basis of . Then the vectors , the vectors , and the vectors , are linearly independent eigenvectors associated with , and the vectors are linearly independent eigenvectors associated with .

Proof. The proof is similar to the proof of Theorem 2.1.

When the parameter , we easily obtain the following corollary from Theorem 2.5.

Corollary 2.6. Let . Then the matrix has one eigenvalue which is given by with algebraic multiplicity . The remaining eigenvalues satisfy the relation where are some generalized eigenvalues of the following generalized eigenvalue problem:

Theorem 2.7. The matrix has two distinct eigenvalues which are given by with algebraic multiplicity and , respectively. The remaining eigenvalues lie in the interval

Proof. The proof is similar to the proof of Theorem 2.3.

When the parameter , we easily obtain the following corollary from Theorem 2.7.

Corollary 2.8. Let . Then the matrix has only one eigenvalue which is given by with algebraic multiplicity . The remaining eigenvalues lie in the interval .

Remark 2.9. The above theorems and corollaries illustrate the strong spectral clustering when the (1, 1) block of is singular. A well-known difficulty is the increasing ill-conditioned (1, 1) block as the solution is approached. Our claim is that the preconditioners perform robust even as the problem becomes more ill-conditioned; in fact the outer iteration count decreases. On the other hand, solving the augmented (1,1) block may be more computationally difficult and requires effective approaches such as inexact solvers. In Section 3, we indeed consider inexact solvers in numerical experiments.

Remark 2.10. It is clearly seen from Theorems 2.1 and 2.5 and Corollaries 2.2 and 2.6 that our preconditioners are suitable for symmetric saddle point systems, from Theorems 2.3 and 2.7 and Corollaries 2.4 and 2.8 that our preconditioners are most effective than the preconditioner of [12].

Remark 2.11. Similarly, the nonsymmetric saddle point linear systems can also obtain the above results.

3. Numerical Experiments

All the numerical experiments were performed with MATLAB 7.0. The machine we have used is a PC-Intel(R), Core(TM)2 CPU T7200 2.0 GHz, 1024 M of RAM. The stopping criterion is , where is the residual vector after th iteration. The right-hand side vectors and are taken such that the exact solutions and are both vectors with all components being 1. The initial guess is chosen to be zero vector. We will use preconditioned GMRES(10) to solve the saddle point linear systems.

Our numerical experiments are similar to those in [16]. We consider the matrices taken from [17] with notations slightly changed.

We construct the saddle point-type matrix from reforming a matrix of the following form:

where is positive real. The matrix arises from the discretization by the maker and cell finite difference scheme [17] of a leaky two-dimensional lid-driven cavity problem in a square domain . Then the matrix is replaced by a random matrix with the same sparsity as , replaced by , such that is nonsingular. Denote , then we have with and . Obviously, the resulted saddle point-type matrix

satisfies .

From the matrix in (3.2) we construct the following saddle point-type matrix:

where is constructed from by making its first rows and columns with zero entries. Note that is semipositive definite and its nullity is .

In our numerical experiments the matrix in the augmentation block preconditioners is taken as . During implementation of our augmentation block preconditioners, we need the operation for a given vector or, equivalently, need to solve the following equation: for which we use an incomplete LU factorization of with drop tolerance . Here means number of outer (inner) iterations. Time() represents the corresponding computing time (in seconds) when taking the parameter as .

In the following, we summarize the observations from Tables 1, 2, 3, 4, 5, 6, and 7 and Figures 1, 2, and 3.

(i)From Tables 24, we can find that our preconditioners are more efficient than those of [12] both in number of iterations and time of iterations, especially in the case of the optimal parameter.(ii)Number and time of iterations with the preconditioner smaller than those with the preconditioners and . In fact, is a diagonal preconditioner.(iii)Number and time of iterations with the preconditioner are the smallest when .(iv)Number of iterations decreases but the computational cost of incomplete LU factorization increases with decreased . Therefore, we should not use the preconditioners with small in practical.(v)The eigenvalues of are strongly clustered. Furthermore, the eigenvalues of are positive.

4. Conclusion

We have proposed two types of block triangular preconditioners applied to the linear saddle point problems with the singular (1,1) block. The preconditioners have the attractive property of improved eigenvalues clustering with increasing ill-conditioned (1,1) block. The choice of the parameter is involved. Furthermore, according to Corollaries 2.2, 2.4, 2.6, and 2.8, we give the optimal parameter in practice. Numerical experiments are also reported for illustrating the efficiency of the presented preconditioners.

In fact, our methodology can extend the unsymmetrical case; that is, the (1,2) block and the (2,1) block of the saddle point linear system are unsymmetrical.

Acknowledgments

Warm thanks are due to the anonymous referees and editor Professor Victoria Vampa who made much useful and detailed suggestions that helped the authors to correct some minor errors and improve the quality of the paper. This research was supported by 973 Program (2008CB317110), NSFC (60973015, 10771030), the Chinese Universities Specialized Research Fund for the Doctoral Program of Higher Education (20070614001), and the Project of National Defense Key Lab. (9140C6902030906).