Special Issue: Applications of Methods of Numerical Linear Algebra in Engineering
Spectral Properties of the Iteration Matrix of the HSS Method for Saddle Point Problem
We discuss spectral properties of the iteration matrix of the HSS method for saddle point problems and derive estimates for the region containing both the nonreal and real eigenvalues of the iteration matrix of the HSS method for saddle point problems.
Consider the following saddle point problem:
$$\mathcal{A}u \equiv \begin{bmatrix} A & B^{T} \\ -B & 0 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} f \\ -g \end{bmatrix} \equiv b, \tag{1}$$
with $A \in \mathbb{R}^{n \times n}$ symmetric positive definite and $B \in \mathbb{R}^{m \times n}$ ($m \le n$) with rank $m$. Under these assumptions, the coefficient matrix of (1) is nonsingular, so (1) has a unique solution. Systems of the form (1) arise in a variety of scientific and engineering applications, such as linear elasticity, fluid dynamics, electromagnetics, and constrained quadratic programming; more applications and numerical solution techniques for (1) can be found in the literature.
The HSS Method. Write $\mathcal{A} = H + S$ with
$$H = \begin{bmatrix} A & 0 \\ 0 & 0 \end{bmatrix}, \qquad S = \begin{bmatrix} 0 & B^{T} \\ -B & 0 \end{bmatrix},$$
the symmetric and skew-symmetric parts of $\mathcal{A}$. Let $u^{(0)}$ be an arbitrary initial guess. For $k = 0, 1, 2, \ldots$ until the sequence of iterates converges, compute the next iterate according to the following procedure:
$$\begin{cases} (\alpha I + H)\,u^{(k+1/2)} = (\alpha I - S)\,u^{(k)} + b, \\ (\alpha I + S)\,u^{(k+1)} = (\alpha I - H)\,u^{(k+1/2)} + b, \end{cases} \tag{2}$$
where $\alpha$ is a given positive constant.
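The two half-steps of the HSS method can be sketched in NumPy as follows. This is a minimal illustration, not the authors' code: the splitting $H = \operatorname{diag}(A, 0)$ with $S$ the skew-symmetric off-diagonal part is the standard one for the saddle point form, the function name `hss_step` is hypothetical, and the dense solves are purely illustrative (a practical implementation would factor $\alpha I + H$ and $\alpha I + S$ once and reuse the factorizations).

```python
import numpy as np

def hss_step(A, B, f, g, u, alpha):
    """One HSS sweep for the saddle point system [[A, B^T], [-B, 0]] z = [f, g],
    using the splitting H = [[A, 0], [0, 0]] (symmetric part) and
    S = [[0, B^T], [-B, 0]] (skew-symmetric part)."""
    n, m = A.shape[0], B.shape[0]
    b = np.concatenate([f, g])
    H = np.block([[A, np.zeros((n, m))],
                  [np.zeros((m, n)), np.zeros((m, m))]])
    S = np.block([[np.zeros((n, n)), B.T],
                  [-B, np.zeros((m, m))]])
    I = np.eye(n + m)
    # first half-step: (alpha*I + H) u_half = (alpha*I - S) u + b
    u_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ u + b)
    # second half-step: (alpha*I + S) u_next = (alpha*I - H) u_half + b
    return np.linalg.solve(alpha * I + S, (alpha * I - H) @ u_half + b)
```

A quick sanity check is that the exact solution of the system is a fixed point of `hss_step`.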
By eliminating the intermediate vector $u^{(k+1/2)}$, we obtain the iteration in fixed point form:
$$u^{(k+1)} = T(\alpha)\,u^{(k)} + c, \tag{3}$$
where $T(\alpha) = (\alpha I + S)^{-1}(\alpha I - H)(\alpha I + H)^{-1}(\alpha I - S)$ and $c = 2\alpha(\alpha I + S)^{-1}(\alpha I + H)^{-1} b$. Obviously, $T(\alpha)$ is the iteration matrix of the HSS iteration method.
In addition, if we introduce the matrices
$$M(\alpha) = \frac{1}{2\alpha}(\alpha I + H)(\alpha I + S), \qquad N(\alpha) = \frac{1}{2\alpha}(\alpha I - H)(\alpha I - S),$$
then $\mathcal{A} = M(\alpha) - N(\alpha)$ and $T(\alpha) = M(\alpha)^{-1}N(\alpha)$. Therefore, one can readily verify that the HSS method is also induced by the matrix splitting $\mathcal{A} = M(\alpha) - N(\alpha)$.
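The splitting identity is easy to check numerically. The sketch below, with illustrative random blocks and an arbitrary $\alpha$, verifies that $M(\alpha) - N(\alpha)$ recovers the coefficient matrix $H + S$ and that $M(\alpha)^{-1}N(\alpha)$ reproduces the two-half-step form of $T(\alpha)$:

```python
import numpy as np

# Illustrative check of the splitting A = M(alpha) - N(alpha), where
# M(alpha) = (1/(2*alpha)) (alpha*I + H)(alpha*I + S),
# N(alpha) = (1/(2*alpha)) (alpha*I - H)(alpha*I - S).
rng = np.random.default_rng(1)
n, m, alpha = 5, 2, 1.5
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)          # symmetric positive definite (1,1) block
B = rng.standard_normal((m, n))      # full row rank (generically)
H = np.block([[A, np.zeros((n, m))],
              [np.zeros((m, n)), np.zeros((m, m))]])
S = np.block([[np.zeros((n, n)), B.T],
              [-B, np.zeros((m, m))]])
I = np.eye(n + m)
M = (alpha * I + H) @ (alpha * I + S) / (2 * alpha)
N = (alpha * I - H) @ (alpha * I - S) / (2 * alpha)
# The splitting recovers the saddle point coefficient matrix H + S ...
assert np.allclose(M - N, H + S)
# ... and M(alpha)^{-1} N(alpha) equals the HSS iteration matrix T(alpha).
T_hss = np.linalg.solve(alpha * I + S,
                        (alpha * I - H) @ np.linalg.solve(alpha * I + H,
                                                          alpha * I - S))
assert np.allclose(np.linalg.solve(M, N), T_hss)
```

The second assertion relies on the fact that $(\alpha I - H)$ and $(\alpha I + H)^{-1}$ commute, since both are rational functions of $H$.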
The following theorem, established in the literature, describes the convergence property of the HSS method.

Theorem 1. Assume that $A$ is symmetric positive definite and $B$ has full rank. Then, for any $\alpha > 0$, the spectral radius of $T(\alpha)$ satisfies $\rho(T(\alpha)) < 1$; that is, the HSS iteration converges unconditionally to the unique solution of (1).
In fact, one can consult the literature for a comprehensive survey of the HSS method. As is known, the iteration (3) converges to the unique solution of the linear system (1) if and only if the spectral radius of the iteration matrix is less than 1. The spectral radius is decisive for convergence and stability: the smaller it is, the faster the iteration converges. In this paper, we discuss the spectral properties of the iteration matrix $T(\alpha)$ of the HSS method for saddle point problems and derive estimates for the regions containing its nonreal and real eigenvalues.
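As a concrete, purely illustrative way to see the role of the spectral radius, the sketch below forms $T(\alpha)$ for a small random saddle point matrix (not one of the test problems of Section 3) and evaluates $\rho(T(\alpha))$ on a grid of parameter values; every radius is below 1, and a grid search gives a rough estimate of the best parameter.

```python
import numpy as np

def hss_iteration_matrix(A, B, alpha):
    """Form T(alpha) = (alpha*I + S)^{-1}(alpha*I - H)(alpha*I + H)^{-1}(alpha*I - S)."""
    n, m = A.shape[0], B.shape[0]
    H = np.block([[A, np.zeros((n, m))],
                  [np.zeros((m, n)), np.zeros((m, m))]])
    S = np.block([[np.zeros((n, n)), B.T],
                  [-B, np.zeros((m, m))]])
    I = np.eye(n + m)
    return np.linalg.solve(alpha * I + S,
                           (alpha * I - H) @ np.linalg.solve(alpha * I + H,
                                                             alpha * I - S))

def spectral_radius(T):
    return np.max(np.abs(np.linalg.eigvals(T)))

rng = np.random.default_rng(2)
n, m = 8, 3
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)          # illustrative SPD (1,1) block
B = rng.standard_normal((m, n))      # illustrative full-rank block
alphas = [0.5, 1.0, 2.0, 4.0, 8.0]
radii = [spectral_radius(hss_iteration_matrix(A, B, a)) for a in alphas]
best = alphas[int(np.argmin(radii))]  # grid estimate of the fastest parameter
```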
Throughout the paper, $(\cdot)^{T}$ denotes the transpose of a matrix and $(\cdot)^{*}$ its conjugate transpose. $\lambda_{\min}(\cdot)$ and $\lambda_{\max}(\cdot)$ denote the smallest and largest eigenvalues of a symmetric positive semidefinite matrix, respectively. We denote by $\sigma_{1} \ge \sigma_{2} \ge \cdots \ge \sigma_{m}$ the decreasingly ordered singular values of $B$. $\mathrm{Re}(\lambda)$ and $\mathrm{Im}(\lambda)$ denote the real and imaginary parts of $\lambda$, respectively.
2. Main Results
In fact, the iteration matrix $T(\alpha)$ can be written as in (6); therefore, it suffices to study the spectral properties of the matrix in (6). That is, we consider the eigenvalue problem (7), for any eigenpair of that matrix. From (7), we have (8); noting that the relevant quantity is nonzero for all admissible values, from (8) we obtain (9). Introducing the substitution (10), equation (9) can be written as (11), that is, (12), which is equivalent to (13). It is easy to see that the two eigenproblems (7) and (13) have the same eigenvectors, while the eigenvalues are related by (10). Obviously, if the spectrum of (13) can be obtained, then the spectrum of (7) can also be derived.
From [5, Lemma 2.1], we have the following result.
Lemma 2. Assume that $A$ is symmetric positive definite and $B$ has full rank. For each eigenpair of (7), the eigenvalues of the iteration matrix fall into two cases: (1) in the first case, they are given by (14); (2) in the second case, they are given by (15).
From (14), it is easy to verify the bounds stated in each of the two cases.
In the sequel, we present the main result, Theorem 3.
Theorem 3. Under the hypotheses and notation of Lemma 2, all the eigenvalues of the iteration matrix satisfy the following: (1) in the first case of Lemma 2, the bound (16) holds; (2) in the second case, the bound (17) holds.
Proof. Let $u$ be an eigenvector associated with the eigenvalue under consideration. From (13), we obtain the two block equations (18) and (19). By (19), one block of $u$ can be expressed in terms of the other; substituting this expression into (18) yields (20). Multiplying (20) from the left by the conjugate transpose of the eigenvector, we arrive at the scalar quadratic equation (21). Since the matrices involved are symmetric, (21) has real coefficients, so its roots are given by (22).
Eigenvalues with nonzero imaginary part arise if the discriminant is negative.
If the discriminant in (22) is negative, then from (14) we obtain (23); in this case, (23) yields the stated bound on the nonreal eigenvalues. If the discriminant in (22) is nonnegative, then the roots of (22) are real. Combining (10) with (15) and carrying out a chain of elementary estimates then yields the stated bounds on the real eigenvalues. This completes the proof.
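The case distinction in the proof can be observed numerically. The sketch below (with illustrative random blocks, not the test matrices of Section 3) separates the spectrum of $T(\alpha)$ into real and nonreal eigenvalues and checks that every eigenvalue lies strictly inside the unit disk, as the convergence theory requires.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, alpha = 8, 3, 1.0
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)          # illustrative SPD (1,1) block
B = rng.standard_normal((m, n))      # illustrative full-rank (2,1) block
H = np.block([[A, np.zeros((n, m))],
              [np.zeros((m, n)), np.zeros((m, m))]])
S = np.block([[np.zeros((n, n)), B.T],
              [-B, np.zeros((m, m))]])
I = np.eye(n + m)
T = np.linalg.solve(alpha * I + S,
                    (alpha * I - H) @ np.linalg.solve(alpha * I + H,
                                                      alpha * I - S))
eigs = np.linalg.eigvals(T)
# Split the spectrum into the two cases of Lemma 2 / Theorem 3.
real_eigs = eigs[np.abs(eigs.imag) < 1e-12]
nonreal_eigs = eigs[np.abs(eigs.imag) >= 1e-12]
# Every eigenvalue, real or nonreal, lies strictly inside the unit disk.
assert np.all(np.abs(eigs) < 1.0)
```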
3. Numerical Experiments
In this section, we consider the following two examples to illustrate the above result.
Example 1 (see [6–9]). Consider the following classic incompressible steady Stokes problem:
$$-\Delta \mathbf{u} + \nabla p = \mathbf{f}, \qquad \nabla \cdot \mathbf{u} = 0 \quad \text{in } \Omega, \tag{34}$$
with suitable boundary conditions on $\partial\Omega$. That is to say, the velocity is zero on the three fixed walls, while the tangential velocity is one and the normal velocity is zero on the moving wall. The test problem is a "leaky" two-dimensional lid-driven cavity problem on a square domain. Using the IFISS software to discretize (34), the finite element subdivision is based on uniform grids of square elements, and the mixed finite element used is the bilinear-constant velocity-pressure pair with stabilization (the stabilization parameter is zero). The coefficient matrix generated by this package is singular because the block corresponding to the discrete divergence operator is rank deficient. A nonsingular matrix is obtained by dropping the first two rows of that block and the first two rows and columns of the associated matrix. Note that the (2,2) block of (1) is a null matrix in this case, and the problem dimensions $n$ and $m$ are determined by the grid used. For the Stokes problem, the (1,1) block of the coefficient matrix, corresponding to the discretization of the conservative term, is symmetric positive definite.
By calculation, the values given in Tables 1 and 2 are obtained, which verify the results of Theorem 3. In Tables 1 and 2, the listed quantities denote, respectively, the lower and upper bounds of all the eigenvalues of the iteration matrix.
From Tables 1 and 2, it is not difficult to see that the theoretical results are in line with the numerical experiments. Further, for the first grid, the average error in the lower bounds over 10 different values of $\alpha$ is 0.00112 and the average error in the upper bounds is 0.00047; for the second grid, the average error in the lower bounds is 0.0005 and the average error in the upper bounds is 0.000091. That is, Theorem 3 provides reasonably good bounds for the eigenvalue distribution of the iteration matrix of the HSS method when the iteration parameter is taken in different regions.
Example 2. The saddle point system comes from the discretization of a groundwater flow problem using mixed-hybrid finite elements. For this example, the problem dimensions and the extreme eigenvalues and singular values required by Theorem 3 are computed directly from the matrix.
In this case there are nonreal eigenvalues (except for very small values of $\alpha$). In Table 3 we list the upper bounds given by Theorem 3 for this case. From Table 3, it is not difficult to see that the theoretical results are in line with the numerical experiments; that is, Theorem 3 provides reasonably good bounds for the eigenvalue distribution of the iteration matrix when the iteration parameter is taken in different regions.
Conflict of Interests
The authors declare that they have no conflict of interests regarding the publication of this paper.
Acknowledgments

The authors would like to thank the reviewers for providing helpful suggestions, which greatly improved the paper. This research was supported by NSFC (no. 11301009), Science and Technology Development Plan of Henan Province (no. 122300410316), and Natural Science Foundations of Henan Province (no. 13A110022).