Mathematical Problems in Engineering

Volume 2015, Article ID 618380, 9 pages

http://dx.doi.org/10.1155/2015/618380

## Block Preconditioners for Complex Symmetric Linear System with Two-by-Two Block Form

School of Mathematics and Statistics, Anyang Normal University, Anyang 455000, China

Received 31 May 2015; Accepted 4 August 2015

Academic Editor: Chih-Cheng Hung

Copyright © 2015 Shi-Liang Wu and Cui-Xia Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Based on the previous work by Zhang and Zheng (A parameterized splitting iteration method for complex symmetric linear systems, Japan J. Indust. Appl. Math., 31 (2014) 265–278), three block preconditioners for complex symmetric linear systems with the two-by-two block form are presented. Spectral properties of the preconditioned matrices are discussed in detail. It is shown that all the eigenvalues of the preconditioned matrices are well-clustered. Numerical experiments are reported to illustrate the efficiency of the proposed preconditioners.

#### 1. Introduction

Consider the following complex symmetric linear system:

$$(W + iT)u = b, \qquad (1)$$

where $u, b \in \mathbb{C}^{n}$ and the matrices $W, T \in \mathbb{R}^{n \times n}$ are symmetric positive definite. Here $i = \sqrt{-1}$ denotes the imaginary unit.

Complex symmetric linear systems of the form (1) are important and arise in a variety of scientific computing and engineering applications, such as diffuse optical tomography [1], quantum chemistry and the eddy current problem [2, 3], structural dynamics [4–9], FFT-based solution of certain time-dependent PDEs [10], molecular dynamics and fluid dynamics [11], and lattice quantum chromodynamics [12]. See [13–16] for more examples and additional references.

To solve the complex symmetric linear system (1) efficiently, a common approach is to work in real arithmetic with one of several equivalent real formulations, thereby avoiding the complex linear system altogether. For example, writing $u = x + iy$ and $b = p + iq$, the complex symmetric linear system (1) can be equivalently written as

$$\begin{bmatrix} W & -T \\ T & W \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} p \\ q \end{bmatrix}. \qquad (2)$$

For other real equivalent formulations of the complex symmetric linear system (1), see [11, 13, 17]. The advantage of the form (2) is twofold. First, it can be solved directly and efficiently in real arithmetic by Krylov subspace methods (such as GMRES [18]), by alternating splitting iteration methods (such as PMHSS [19–21]), and by C-to-R iteration methods [17, 20]. Second, one can construct preconditioners that improve the convergence speed of Krylov subspace methods applied to the block two-by-two linear system (2). Concerning the latter, numerical experiments in [11] show that Krylov subspace methods with a standard ILU preconditioner can perform reasonably well on formulation (2). In [13], several types of block preconditioners are discussed, and it is argued that if either the real part or the symmetric part of the coefficient matrix is positive semidefinite, block preconditioners for real equivalent formulations may be a useful alternative to preconditioners for the original complex formulation. In [19], the PMHSS preconditioner is presented, and numerical experiments show that it is efficient and robust.
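The equivalence between (1) and (2) is easy to verify numerically. The following sketch uses small random symmetric positive definite matrices as stand-ins for $W$ and $T$ (the actual test problems appear in Section 3):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

def spd(n):
    # Random symmetric positive definite stand-in matrix
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

W, T = spd(n), spd(n)
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Complex form (1): (W + iT) u = b
u = np.linalg.solve(W + 1j * T, b)

# Real block form (2): [W -T; T W][x; y] = [p; q], with b = p + iq
A = np.block([[W, -T], [T, W]])
xy = np.linalg.solve(A, np.concatenate([b.real, b.imag]))
x, y = xy[:n], xy[n:]

assert np.allclose(x + 1j * y, u)  # u = x + iy recovers the complex solution
```

Working with (2) keeps all arithmetic real, at the price of doubling the dimension of the system.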

Recently, Zhang and Zheng [22] considered the application of a block triangular preconditioner, which contains a given positive constant $\alpha$, together with a Krylov subspace iterative solver. This preconditioner is positive (real) stable; that is, its eigenvalues are all real and positive. In [22], it is shown that all the eigenvalues of the corresponding preconditioned matrix are clustered and that the preconditioner performs better than the PMHSS preconditioner [19, 21] under certain conditions.

In this paper, based on the idea of Simoncini [23], we consider an indefinite block triangular preconditioner, which differs from the positive stable preconditioner of [22]. Theoretical analysis shows that all the eigenvalues of the resulting preconditioned matrix are well-clustered. To illustrate the efficiency of the indefinite block triangular preconditioner, we also consider two block diagonal preconditioners. Our numerical experiments show that the indefinite block triangular preconditioner is slightly more efficient than the positive stable block triangular preconditioner, and that both block triangular preconditioners are more efficient than the two block diagonal preconditioners. Moreover, for particular choices of the parameter $\alpha$, each block triangular preconditioner reduces to one of the block diagonal preconditioners.
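Whatever the specific blocks, a block upper triangular preconditioner of this type is applied by block back-substitution: one solve with the (2,2) block followed by one solve with the (1,1) block. A minimal sketch, assuming the generic form $[[W, -T], [0, \pm S]]$ with some approximation $S$ of the Schur complement (this generic form is an illustration, not the paper's exact definition; the sign distinguishes the positive stable and indefinite variants):

```python
import numpy as np

def apply_block_triangular_prec(W, T, S, r, indefinite=False):
    """Solve [[W, -T], [0, +/-S]] z = r by block back-substitution.

    W, T, S are n-by-n; r has length 2n. The generic form used here
    is an assumption for illustration only.
    """
    n = W.shape[0]
    r1, r2 = r[:n], r[n:]
    z2 = np.linalg.solve(-S if indefinite else S, r2)  # (2,2) block solve
    z1 = np.linalg.solve(W, r1 + T @ z2)               # (1,1) block solve
    return np.concatenate([z1, z2])
```

Each application thus costs one solve with $S$ and one solve with $W$, plus a matrix-vector product with $T$.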

The remainder of the paper is organized as follows. In Section 2, the eigenvalue analysis of the preconditioned matrices is presented in detail. In Section 3, the results of numerical experiments are reported. Finally, concluding remarks are given in Section 4.

#### 2. Eigenvalue Analysis

In general, the spectral properties of a preconditioned matrix give important insight into the convergence behavior of preconditioned Krylov subspace methods. In particular, for many linear systems arising in practice, a well-clustered spectrum usually results in rapid convergence of preconditioned Krylov subspace methods such as CG, MINRES, and GMRES [24]. Therefore, in this section we investigate the eigenvalue distributions of the preconditioned matrices induced by the block triangular and block diagonal preconditioners.
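As a concrete illustration of such clustering (a sketch under the assumption that the block triangular preconditioner uses the exact Schur complement $S = W + TW^{-1}T$ of the coefficient matrix in (2)): the preconditioned matrix is then similar to a unit lower block triangular matrix, so every eigenvalue equals 1.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

def spd(n):
    # Random symmetric positive definite stand-in matrix
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

W, T = spd(n), spd(n)
A = np.block([[W, -T], [T, W]])

# Block triangular preconditioner with the exact Schur complement
S = W + T @ np.linalg.solve(W, T)            # S = W + T W^{-1} T
P = np.block([[W, -T], [np.zeros((n, n)), S]])

eigs = np.linalg.eigvals(np.linalg.solve(P, A))
assert np.allclose(eigs, 1.0, atol=1e-6)     # perfect clustering at 1
```

In practice the exact Schur complement is too expensive to form, which is precisely why the preconditioners studied below use cheaper blocks at the cost of a less extreme (but still well-clustered) spectrum.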

The spectral distribution of the preconditioned matrix associated with the indefinite block triangular preconditioner is described in the following theorem.

Theorem 1. *Let $W$ and $T$ be symmetric positive definite. Then the eigenvalues of the preconditioned matrix induced by the indefinite block triangular preconditioner are $1$ (with algebraic multiplicity $n$) and the roots of a quadratic equation whose coefficients $a$ and $b$ are determined by $W$, $T$, and the corresponding eigenvector.*

*Proof. *Let $\lambda$ be an eigenvalue of the preconditioned matrix with corresponding eigenvector; then (6) holds. Equation (6) is equivalent to (7), and from (7) we obtain (8). Multiplying (8) by the indicated matrix yields (9), and combining (10) with (9) gives (11). Multiplying (11) by the eigenvector and using the stated normalizations, we obtain (12), that is, (13). The roots of (13) are given by (14), which completes the proof.

*Remark 2. *From Theorem 1, it is not difficult to see that the spectral distribution of the preconditioned matrix depends not only on the parameter $\alpha$ but also on the values of $a$ and $b$. For a special choice of $\alpha$, the preconditioned matrix has precisely two eigenvalues; however, this is an ideal situation that is difficult to realize, because $a$ and $b$ depend on the spectra of the matrices $W$ and $T$. That is to say, such a choice of $\alpha$ is questionable in actual implementations. Even if it were possible, the cost of computing the Schur complement may be high, because the Schur complement involves a matrix inverse.

In fact, based on the structure of the second group of eigenvalues, we can choose the value of $\alpha$ so that all the eigenvalues of the preconditioned matrix become more clustered. There is a natural choice of $\alpha$ that is independent of the spectra of the matrices $W$ and $T$. Obviously, with this choice, the preconditioned matrix has precisely two eigenvalues: $1$ and $-2$. This fact is stated precisely in the following corollary.

Corollary 3. *Let $W$ and $T$ be symmetric positive definite. If $\alpha$ is taken as the natural choice above, then the eigenvalues of the preconditioned matrix are $1$ (with algebraic multiplicity $n$) and $-2$ (with algebraic multiplicity $n$).*

With respect to the spectral distribution of the preconditioned matrix induced by the block triangular preconditioner of Zhang and Zheng, the following theorem is provided in [22].

Theorem 4. *Let $W$ and $T$ be symmetric positive definite. Then the eigenvalues of the preconditioned matrix induced by the block triangular preconditioner of [22] are $1$ (with algebraic multiplicity $n$) and the roots of a quadratic equation whose coefficients $a$ and $b$ are determined by $W$, $T$, and the corresponding eigenvector.*

*Remark 5. *Obviously, for the ideal choice of $\alpha$ in Theorem 4 [22], the preconditioned matrix has precisely one eigenvalue, $1$; however, this choice is questionable in actual implementations, for reasons similar to those given for the indefinite block triangular preconditioner. If instead we take the natural spectrum-independent choice of $\alpha$, the preconditioned matrix has precisely two eigenvalues: $1$ and $2$. Specifically, the following corollary is obtained.

Corollary 6. *Let $W$ and $T$ be symmetric positive definite. If $\alpha$ is taken as the natural choice above, then the eigenvalues of the preconditioned matrix are $1$ (with algebraic multiplicity $n$) and $2$ (with algebraic multiplicity $n$).*

From Corollaries 3 and 6, it is easy to see that both block triangular preconditioned matrices have precisely two distinct eigenvalues: $1$ and $-2$ for the former, and $1$ and $2$ for the latter. So, in general, any Krylov subspace method with an optimality or Galerkin property terminates in at most two steps if roundoff errors are ignored. Based on this brief discussion, in our numerical computations we investigate the spectral distributions of the two block triangular preconditioned matrices for two choices of the parameter $\alpha$: the natural spectrum-independent choice and an alternative value. The numerical results show that, from the viewpoint of eigenvalue clustering, the eigenvalue distributions obtained with the natural choice are slightly better than those obtained with the alternative value (specifically, see the next section).
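The two-step termination claim rests on the minimal polynomial: a diagonalizable preconditioned matrix $M$ with eigenvalues $\{1, -2\}$ satisfies $(M - I)(M + 2I) = 0$, so an optimal Krylov method stalls after degree two. The same mechanism can be checked directly for the ideal exact-Schur-complement block triangular preconditioner (used here purely as an illustrative assumption), for which $(M - I)^2 = 0$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5

def spd(n):
    # Random symmetric positive definite stand-in matrix
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

W, T = spd(n), spd(n)
A = np.block([[W, -T], [T, W]])
S = W + T @ np.linalg.solve(W, T)              # exact Schur complement
P = np.block([[W, -T], [np.zeros((n, n)), S]])

M = np.linalg.solve(P, A)                      # preconditioned matrix
N = M - np.eye(2 * n)
assert np.allclose(N @ N, 0)  # minimal polynomial (z-1)^2 => <= 2 GMRES steps
```

A minimal polynomial of degree two means the preconditioned residual is annihilated after two Krylov steps, matching the at-most-two-steps statement above.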

Concerning the spectral distribution of the first block diagonal preconditioned matrix, we have the following theorem.

Theorem 7. *Let $W$ and $T$ be symmetric positive definite. Then the eigenvalues of the first block diagonal preconditioned matrix are $1$ (with algebraic multiplicity $n$) and the roots of a quadratic equation whose coefficients $a$ and $b$ are determined by $W$, $T$, and the corresponding eigenvector.*

*Proof. *Let $\lambda$ be an eigenvalue of the preconditioned matrix with corresponding eigenvector; then (15) holds, which is equivalent to (16). From (16) we obtain (17), and multiplying (17) by the indicated matrix yields (18). Combining (18) with (19), we have (20). Multiplying (20) by the eigenvector and using the stated normalizations, we obtain (21), which is equal to (22). Both roots of (22) are given by (23), which completes the proof.

Similarly, the spectral distribution of the second block diagonal preconditioned matrix is described in the following theorem.

Theorem 8.

#### 3. Numerical Experiments

In this section, based on the above discussion, numerical experiments are reported to demonstrate the numerical behavior of the two block triangular and two block diagonal preconditioners, applied to the block two-by-two linear system (2) with the correspondingly preconditioned GMRES method. The four preconditioners are compared in terms of iteration counts (IT) and CPU times in seconds (CPU) on the following examples. In our numerical experiments, all computations are carried out in MATLAB 7.0.

*Example 1. *The complex symmetric linear system (1) is of the stated form, where two of the quantities involved are the first and the last unit vectors, respectively. The two coefficient matrices correspond to the five-point centered difference matrices approximating the negative Laplacian operator with homogeneous Dirichlet boundary conditions and with periodic boundary conditions, respectively, on a uniform mesh in the unit square with the indicated mesh size. The right-hand side vector is adjusted accordingly, with the reference vector taken to have all entries equal to 1 [5, 6].
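The Dirichlet and periodic five-point difference matrices used in this example can be assembled from one-dimensional second-difference matrices via Kronecker products. A sketch (omitting the scaling by the squared mesh size and the shifts that define $W$ and $T$, which are given in the example's formulas; the grid size here is chosen arbitrarily):

```python
import numpy as np

m = 8                                   # interior points per direction
I = np.eye(m)

# 1-D second difference, homogeneous Dirichlet boundary conditions
Vd = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)

# 1-D second difference, periodic boundary conditions (wrap-around corners)
Vp = Vd.copy()
Vp[0, -1] -= 1
Vp[-1, 0] -= 1

K = np.kron(I, Vd) + np.kron(Vd, I)     # five-point Laplacian, Dirichlet
C = np.kron(I, Vp) + np.kron(Vp, I)     # five-point Laplacian, periodic

assert np.all(np.linalg.eigvalsh(K) > 0)        # Dirichlet matrix is SPD
assert np.isclose(np.linalg.eigvalsh(C)[0], 0)  # periodic matrix is singular
```

The assertions highlight a structural difference between the two matrices: the Dirichlet matrix is symmetric positive definite, while the periodic one is only positive semidefinite (constant vectors lie in its null space).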

*Example 2. *Consider the complex symmetric linear system arising in structural dynamics, whose coefficient matrix is built from the inertia and stiffness matrices, the viscous and hysteretic damping matrices, and the driving circular frequency. In our numerical computations, the damping matrices are specified through a damping coefficient, and the stiffness matrix is taken as the five-point centered difference matrix approximating the negative Laplacian operator with homogeneous Dirichlet boundary conditions on a uniform mesh in the unit square with the indicated mesh size. In this case, the stiffness matrix possesses a tensor-product form. In addition, the right-hand side vector is adjusted accordingly, with the reference vector taken to have all entries equal to 1. For more details, see [4–6, 13].

First, the spectral distributions of the preconditioned matrices are examined, because the spectral properties of a preconditioned matrix give important insight into the convergence behavior of preconditioned Krylov subspace methods. To illustrate the results of Section 2, we test the eigenvalue distributions of the four preconditioned matrices. For convenience, all the matrices tested correspond to the same grid unless otherwise mentioned. For the eigenvalue distributions of the two block triangular preconditioned matrices, guided by Corollaries 3 and 6, we choose the natural spectrum-independent value of $\alpha$. Figures 1 and 2 plot the eigenvalue distributions of the four preconditioned matrices, where Figure 1 corresponds to Example 1 and Figure 2 corresponds to Example 2.