Mathematical Problems in Engineering
Volume 2015, Article ID 618380, 9 pages
http://dx.doi.org/10.1155/2015/618380
Research Article

Block Preconditioners for Complex Symmetric Linear System with Two-by-Two Block Form

School of Mathematics and Statistics, Anyang Normal University, Anyang 455000, China

Received 31 May 2015; Accepted 4 August 2015

Academic Editor: Chih-Cheng Hung

Copyright © 2015 Shi-Liang Wu and Cui-Xia Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Based on the previous work by Zhang and Zheng (A parameterized splitting iteration method for complex symmetric linear systems, Japan J. Indust. Appl. Math., 31 (2014), 265–278), three block preconditioners for complex symmetric linear systems with two-by-two block form are presented. The spectral properties of the preconditioned matrices are discussed in detail, and it is shown that all the eigenvalues of the preconditioned matrices are well-clustered. Numerical experiments are reported to illustrate the efficiency of the proposed preconditioners.

1. Introduction

Consider the following complex symmetric linear system:

where , , and the matrices are symmetric positive definite. Here denotes the imaginary unit.

Complex symmetric linear systems of this kind (1) are important and arise in a variety of scientific computing and engineering applications, such as diffuse optical tomography [1], quantum chemistry and eddy current problems [2, 3], structural dynamics [4–9], FFT-based solution of certain time-dependent PDEs [10], molecular dynamics and fluid dynamics [11], and lattice quantum chromodynamics [12]. See [13–16] for more examples and additional references.

To solve the complex symmetric linear system (1) efficiently, one can work in real arithmetic with one of its several equivalent real formulations and thereby avoid solving the complex linear system directly. For example, the complex symmetric linear system (1) can be equivalently written as

For other real equivalent formulations of the complex symmetric linear system (1), see [11, 13, 17]. This form has two advantages: first, it can be solved directly and efficiently in real arithmetic by Krylov subspace methods (such as GMRES [18]), by alternating splitting iteration methods (such as PMHSS [19–21]), or by C-to-R iteration methods [17, 20]; second, one can construct preconditioning matrices that accelerate Krylov subspace methods for the block two-by-two linear system (2), that is, one can apply preconditioned Krylov subspace methods to (2). Concerning the latter, numerical experiments in [11] show that Krylov subspace methods with a standard ILU preconditioner perform reasonably well on formulation (2). In [13], several types of block preconditioners were discussed, and it was argued that if either the real part or the symmetric part of the coefficient matrix is positive semidefinite, block preconditioners for real equivalent formulations may be a useful alternative to preconditioners for the original complex formulation. In [19], the PMHSS preconditioner was presented, and numerical experiments show that it is efficient and robust.
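As a concrete illustration of the equivalence between the complex and real formulations, the following sketch assumes the common model (W + iT)u = b with W and T symmetric positive definite; the symbol names W, T, b are our own illustrative choices, since the paper's notation is not reproduced here. It verifies that solving the block two-by-two real system recovers the complex solution:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
# Hypothetical SPD matrices W and T (illustrative; not the paper's test problems)
G1 = rng.standard_normal((n, n)); W = G1 @ G1.T + n * np.eye(n)
G2 = rng.standard_normal((n, n)); T = G2 @ G2.T + n * np.eye(n)
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Complex solve of (W + iT) u = b
u = np.linalg.solve(W + 1j * T, b)

# Equivalent real block two-by-two form: [[W, -T], [T, W]] [x; y] = [Re b; Im b]
A = np.block([[W, -T], [T, W]])
rhs = np.concatenate([b.real, b.imag])
xy = np.linalg.solve(A, rhs)
x, y = xy[:n], xy[n:]

# Real and imaginary parts of the complex solution are recovered
assert np.allclose(x + 1j * y, u)
```

The block matrix built here is one of several possible real formulations; the orderings and signs of the blocks differ across the references cited above.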

Recently, Zhang and Zheng [22] considered the application of the block triangular preconditioner

where is a given positive constant, together with a Krylov subspace iterative solver. It is obvious that is positive (real) stable; that is, its eigenvalues are all real and positive. In [22], it is shown that all the eigenvalues of the preconditioned matrix are clustered and that the preconditioner performs better than the PMHSS preconditioner [19, 21] under certain conditions.

In this paper, based on the idea of Simoncini [23], we consider the following preconditioner:

Obviously, the preconditioner is indefinite, which distinguishes it from the preconditioner . Theoretical analysis shows that all the eigenvalues of the preconditioned matrix are well-clustered. To illustrate the efficiency of the preconditioner , we also consider the two block diagonal preconditioners below:

Our numerical experiments show that the indefinite block triangular preconditioner is slightly more efficient than the positive stable block triangular preconditioner , and that both block triangular preconditioners are more efficient than the block diagonal preconditioners and . Obviously, when , the block triangular preconditioner reduces to the block diagonal preconditioner ; when , it reduces to the block diagonal preconditioner .
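The practical appeal of a block triangular preconditioner is that applying its inverse requires only two block solves rather than forming the inverse explicitly. The sketch below uses a generic block lower triangular matrix with placeholder blocks B, C, E (our own names, not the paper's specific preconditioner) to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
# Placeholder diagonal blocks (SPD for safe solvability) and off-diagonal block
G1 = rng.standard_normal((n, n)); B = G1 @ G1.T + n * np.eye(n)
G2 = rng.standard_normal((n, n)); C = G2 @ G2.T + n * np.eye(n)
E = rng.standard_normal((n, n))

def apply_block_triangular_inverse(r):
    """Compute z = P^{-1} r for P = [[B, 0], [E, C]] via block forward substitution."""
    r1, r2 = r[:n], r[n:]
    z1 = np.linalg.solve(B, r1)            # solve with the (1,1) block
    z2 = np.linalg.solve(C, r2 - E @ z1)   # substitute, then solve with the (2,2) block
    return np.concatenate([z1, z2])

# Check against a direct solve with the assembled block matrix
P = np.block([[B, np.zeros((n, n))], [E, C]])
r = rng.standard_normal(2 * n)
assert np.allclose(apply_block_triangular_inverse(r), np.linalg.solve(P, r))
```

Inside a Krylov method, only `apply_block_triangular_inverse` is needed per iteration; the assembled matrix `P` appears here purely for verification.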

The remainder of the paper is organized as follows. In Section 2, eigenvalue analysis for the preconditioned matrices , , and is described in detail. In Section 3, the results of numerical experiments are reported. Finally, in Section 4 we give some conclusions to end the paper.

2. Eigenvalue Analysis

In general, the spectral properties of the preconditioned matrix give important insight into the convergence behavior of the preconditioned Krylov subspace methods. In particular, for many linear systems arising in practice, a well-clustered spectrum usually results in rapid convergence of the preconditioned Krylov subspace methods, such as CG, MINRES, and GMRES [24]. Therefore, in this section, we investigate the eigenvalue distributions of the preconditioned matrices , , and .

The spectral distribution of is described in the following theorem.

Theorem 1. Let and be symmetric positive definite and let be the eigenvalue of the matrix . Then the eigenvalues of the preconditioned matrix are 1 (with algebraic multiplicity ) and with and , where is the corresponding eigenvector of the eigenvalue .

Proof. Let be an eigenvalue of with eigenvector . Then

Equation (6) is equivalent to

Based on (7), we get

Multiplying (8) by the matrix yields

Combining (10) with (9), we have

Multiplying (11) by , and noting that and , we have

which is

The roots of (13) are

which completes the proof.

Remark 2. From Theorem 1, it is not difficult to see that the spectral distribution of depends not only on the parameter but also on the values of and . If , then the preconditioned matrix has precisely two eigenvalues. However, this is an ideal situation that is difficult to achieve, because and depend on the spectra of the matrices and ; that is, this choice of is questionable in actual implementations. Even when it is possible, computing the Schur complement may be expensive because the matrix involves the inverse of the matrix .
In fact, based on the structure of the second eigenvalue, with and , we can choose the value of so that the eigenvalues of the preconditioned matrix become more clustered. A natural choice is , which is independent of the spectra of the matrices and . Obviously, in this case, the preconditioned matrix has precisely two eigenvalues: 1 and . This fact is stated precisely in the following corollary.

Corollary 3. Let and be symmetric positive definite and let be the eigenvalue of the matrix . If , then the eigenvalues of the preconditioned matrix are 1 (with algebraic multiplicity ) and (with algebraic multiplicity ).

With respect to the spectral distribution of , the following theorem is provided in [22].

Theorem 4. Let and be symmetric positive definite and let be the eigenvalue of the matrix . Then the eigenvalues of the preconditioned matrix are 1 (with algebraic multiplicity ) and with and , where is the corresponding eigenvector of the eigenvalue .

Remark 5. Obviously, if we choose in Theorem 4 [22], then the preconditioned matrix has precisely one eigenvalue, 1. However, this choice is questionable in actual implementations, for reasons similar to those given for the preconditioner . If we choose , then the preconditioned matrix has precisely two eigenvalues: 1 and . Specifically, the following corollary is obtained.

Corollary 6. Let and be symmetric positive definite and let be the eigenvalue of the matrix . If , then the eigenvalues of the preconditioned matrix are 1 (with algebraic multiplicity ) and 2 (with algebraic multiplicity ).

From Corollaries 3 and 6, it is easy to see that the preconditioned matrices and each have precisely two distinct eigenvalues: 1 and −2 for the former, and 1 and 2 for the latter. Hence, in exact arithmetic, any Krylov subspace method with an optimality or Galerkin property terminates in at most two steps. Based on this discussion, in our numerical computations we investigate the spectral distributions of the preconditioned matrices and in two cases: and . The numerical results for the preconditioner () show that the eigenvalue distributions of and with are slightly better clustered than those of and with (see the next section for details).
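The two-step termination claim follows from the minimal polynomial: a diagonalizable matrix with exactly two distinct eigenvalues, say 1 and 2, is annihilated by (z − 1)(z − 2), so its inverse is a degree-one polynomial in the matrix and the solution lies in a two-dimensional Krylov subspace. A minimal sketch with an illustrative matrix (all names our own):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
# Diagonalizable matrix with exactly two distinct eigenvalues, 1 and 2
S = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned eigenvector basis
d = np.array([1.0] * (n // 2) + [2.0] * (n - n // 2))
M = S @ np.diag(d) @ np.linalg.inv(S)
b = rng.standard_normal(n)

# The minimal polynomial (z-1)(z-2) annihilates M: M^2 - 3M + 2I = 0
assert np.allclose(M @ M - 3 * M + 2 * np.eye(n), 0, atol=1e-8)

# Hence M^{-1} = (3I - M)/2, so the exact solution of Mx = b lies in
# span{b, Mb}: an optimal Krylov method terminates in at most two steps.
x = (3 * b - M @ b) / 2
assert np.allclose(M @ x, b)
```

The same argument applies verbatim to spectra {1, −2}, with the minimal polynomial (z − 1)(z + 2) instead.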

Concerning the spectral distribution of , we have the following theorem.

Theorem 7. Let and be symmetric positive definite and let be the eigenvalue of the matrix . Then the eigenvalues of the preconditioned matrix are 1 (with algebraic multiplicity ) and with and , where is the corresponding eigenvector of the eigenvalue .

Proof. Let be an eigenvalue of with eigenvector . Then

which is equivalent to

Based on (16), we get

Multiplying (17) by the matrix yields

Combining (18) with (19), we have

Multiplying (20) by , and noting that and , we have

which is equal to

Both roots of (22) are

which completes the proof.

Similarly, the spectral distribution of is described in the following theorem.

Theorem 8. Let and be symmetric positive definite and let be the eigenvalue of the matrix . Then the eigenvalues of the preconditioned matrix are 1 (with algebraic multiplicity ) and with and , where is the corresponding eigenvector of the eigenvalue .

3. Numerical Experiments

In this section, based on the above discussion, some numerical experiments are reported to demonstrate the numerical behavior of the , , , and preconditioning matrices when the block two-by-two linear system (2) is solved with the correspondingly preconditioned GMRES methods. The four preconditioners are compared on the basis of iteration counts (IT) and CPU times in seconds (CPU) for the following examples. All computations are done with MATLAB 7.0.

Example 1. The complex symmetric linear system (1) is of the form

with

where , , and and are the first and the last unit vectors in , respectively. Here and correspond to the five-point centered difference matrices approximating the negative Laplacian operator with homogeneous Dirichlet boundary conditions and periodic boundary conditions, respectively, on a uniform mesh in the unit square with mesh size . The right-hand side vector is adjusted to be , with being the vector with all entries equal to 1 [5, 6].
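The Dirichlet difference matrix of Example 1 can be assembled from the standard 1D second-difference stencil via Kronecker products; the following sketch (symbol names V, K, m, h are our own; the paper's mesh size is not reproduced here) verifies that the resulting five-point matrix is symmetric positive definite:

```python
import numpy as np

m = 8                      # interior grid points per direction (illustrative)
h = 1.0 / (m + 1)          # mesh size on the unit square

# 1D second-difference matrix with homogeneous Dirichlet boundary conditions
V = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
     - np.diag(np.ones(m - 1), -1)) / h**2

# Five-point negative Laplacian via the tensor-product (Kronecker) form
I = np.eye(m)
K = np.kron(I, V) + np.kron(V, I)

assert np.allclose(K, K.T)                 # symmetric by construction
assert np.all(np.linalg.eigvalsh(K) > 0)   # positive definite
```

The periodic-boundary matrix in Example 1 differs only in the corner entries of the 1D stencil (and is singular on its own), so it is not reproduced here.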

Example 2. Consider the following complex symmetric linear system:

where and are the inertia and stiffness matrices, and are the viscous and hysteretic damping matrices, respectively, and is the driving circular frequency. In our numerical computations, we take with being a damping coefficient, , , , and the five-point centered difference matrix approximating the negative Laplacian operator with homogeneous Dirichlet boundary conditions, on a uniform mesh in the unit square with mesh size . In this case, the matrix possesses the tensor-product form with . In addition, the right-hand side vector is adjusted to be , with being the vector with all entries equal to 1. For more details, see [4–6, 13].

First, we examine the spectral distributions of the preconditioned matrices, since these give important insight into the convergence behavior of the preconditioned Krylov subspace methods. To illustrate the results of Section 2, we test the eigenvalue distributions of the preconditioned matrices , , , and . For convenience, all the matrices tested have the same size unless otherwise mentioned; that is, the mesh is a grid. For the eigenvalue distributions of the preconditioned matrices and , following Corollaries 3 and 6, we choose . Figures 1 and 2 plot the eigenvalue distributions of the preconditioned matrices , , , and , where Figure 1 corresponds to Example 1 and Figure 2 corresponds to Example 2.

Figure 1: Spectrum of , , , and for Example 1.
Figure 2: Spectrum of , , , and for Example 2.

Figures 1 and 2 show that the preconditioned matrices and have two clustering points, 1 and 2, while the preconditioned matrices and also have two clustering points, 1 and −2. From the viewpoint of eigenvalue clustering, the eigenvalue distributions of and with are slightly better than those of and .

To investigate the effect of different values of on the spectral distributions of the preconditioned matrices and , Figures 3 and 4 plot the eigenvalue distributions of the preconditioned matrices and in two cases: and . For , we choose ; for , we choose . The resulting eigenvalue distributions are shown in Figures 3 and 4.

Figure 3: Spectrum of and for Example 1.
Figure 4: Spectrum of and for Example 2.

Based on Figures 1–4, it is not difficult to see that the eigenvalue distributions of the preconditioned matrices and with are more clustered than those with under certain conditions. This may suggest that is the preferred choice of the parameter in the preconditioners and . That is to say, may yield good results when the preconditioners and are applied to solve the block two-by-two linear system (2) with the correspondingly preconditioned GMRES methods.

Second, we investigate the performance of the preconditioners , , , and . To this end, we use GMRES() preconditioned with the above four preconditioners to solve the system of linear equations (2). In the implementations, there is no general rule for choosing the restart parameter of GMRES(); in practice it is mostly a matter of experience. For simplicity, we set the restart parameter to 20. Based on the aforementioned discussion, we choose for the preconditioners and . The purpose of these experiments is to investigate the influence of the , , , and preconditioners on the convergence behavior of GMRES(20). All tests are started from the zero vector, and the GMRES(20) method terminates when the relative residual error satisfies
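The experimental setup described above (restarted GMRES with restart 20, a block preconditioner applied via inner solves, and a relative residual stopping test) can be sketched as follows. The PMHSS-style block diagonal preconditioner built from W + T is our own illustrative assumption, as are the symbol names; it is not necessarily one of the paper's four preconditioners:

```python
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

rng = np.random.default_rng(3)
n = 60
G1 = rng.standard_normal((n, n)); W = G1 @ G1.T + n * np.eye(n)
G2 = rng.standard_normal((n, n)); T = G2 @ G2.T + n * np.eye(n)

# Stand-in for the block two-by-two system (2), with W and T assumed SPD
A = np.block([[W, -T], [T, W]])
rhs = np.ones(2 * n)

# Block diagonal preconditioner diag(W + T, W + T), applied via two solves
WT = W + T
def apply_P_inv(r):
    return np.concatenate([np.linalg.solve(WT, r[:n]),
                           np.linalg.solve(WT, r[n:])])
M = LinearOperator((2 * n, 2 * n), matvec=apply_P_inv)

# Restarted GMRES with restart parameter 20, started from the zero vector
x, info = gmres(A, rhs, M=M, restart=20, atol=1e-8)
assert info == 0   # converged
assert np.linalg.norm(A @ x - rhs) <= 1e-3 * np.linalg.norm(rhs)
```

The block triangular preconditioners would replace `apply_P_inv` with a block forward substitution; everything else in the driver stays the same.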

In Tables 1 and 2, we present iteration results to illustrate the convergence behavior of the , , , and preconditioners, where “-GMRES()” denotes the GMRES() method preconditioned by the corresponding preconditioner.

Table 1: IT and CPU for Example 1.
Table 2: IT and CPU for Example 2.

From Tables 1 and 2, we observe that all four preconditioners for the system of linear equations (2) are feasible and competitive; in particular, the iteration counts of and are hardly sensitive to changes in the mesh size. More specifically, the iteration counts of the block triangular preconditioner are the same as those of the block triangular preconditioner , while its CPU times are lower. That is to say, in terms of iteration counts and CPU times, the indefinite block triangular preconditioner is slightly more efficient than the positive stable block triangular preconditioner . Both block triangular preconditioners and are more efficient than the block diagonal preconditioners and .

To investigate the efficiency of the block triangular preconditioners and with , we consider the two cases and . For , we choose ; for , we choose . Tables 3 and 4 list iteration results illustrating the convergence behavior of the block triangular preconditioners and .

Table 3: IT and CPU for Example 1 with and .
Table 4: IT and CPU for Example 2 with and .

From Tables 3 and 4, when or , the block triangular preconditioners and for the system of linear equations (2) are also feasible. In terms of iteration counts and CPU times, () is less efficient for the block triangular preconditioners and than . Through extensive numerical experiments, we find that may be more efficient than when the block triangular preconditioners and are applied to solve the system of linear equations (2). These results suggest that the optimal parameter for the block triangular preconditioners and is .

4. Conclusion

In this paper, we have analyzed the spectral properties and the computational performance of several types of block preconditioners for complex symmetric linear systems with two-by-two block form. In contrast to the triangular preconditioner discussed in [22], we have derived the spectral properties of the preconditioned matrices , , and . Theoretical analysis shows that all the eigenvalues of the preconditioned matrices , , and are well-clustered. For the block triangular preconditioners and , the parameter choice may deserve top priority. Our numerical experiments indicate that the block triangular preconditioners and are more effective than the symmetric indefinite block diagonal preconditioner and the symmetric positive definite block diagonal preconditioner under certain conditions.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This research was supported by NSFC (no. 11301009) and Natural Science Foundations of Henan Province (no. 15A110007).

References

  1. S. R. Arridge, “Optical tomography in medical imaging,” Inverse Problems, vol. 15, no. 2, pp. R41–R93, 1999.
  2. Z.-Z. Bai, “Block alternating splitting implicit iteration methods for saddle-point problems from time-harmonic eddy current models,” Numerical Linear Algebra with Applications, vol. 19, no. 6, pp. 914–936, 2012.
  3. T. Sogabe and S.-L. Zhang, “A COCR method for solving complex symmetric linear systems,” Journal of Computational and Applied Mathematics, vol. 199, no. 2, pp. 297–303, 2007.
  4. A. Feriani, F. Perotti, and V. Simoncini, “Iterative system solvers for the frequency analysis of linear mechanical systems,” Computer Methods in Applied Mechanics and Engineering, vol. 190, no. 13-14, pp. 1719–1739, 2000.
  5. Z.-Z. Bai, M. Benzi, and F. Chen, “Modified HSS iteration methods for a class of complex symmetric linear systems,” Computing, vol. 87, no. 3-4, pp. 93–111, 2010.
  6. Z.-Z. Bai, M. Benzi, and F. Chen, “On preconditioned MHSS iteration methods for complex symmetric linear systems,” Numerical Algorithms, vol. 56, no. 2, pp. 297–317, 2011.
  7. S.-L. Wu and C.-X. Li, “A splitting iterative method for the discrete dynamic linear systems,” Journal of Computational and Applied Mathematics, vol. 267, pp. 49–60, 2014.
  8. S.-L. Wu, “Several variants of the Hermitian and skew-Hermitian splitting method for a class of complex symmetric linear systems,” Numerical Linear Algebra with Applications, vol. 22, no. 2, pp. 338–356, 2015.
  9. S.-L. Wu and C.-X. Li, “On semi-convergence of modified HSS method for a class of complex singular linear systems,” Applied Mathematics Letters, vol. 38, pp. 57–60, 2014.
  10. D. Bertaccini, “Efficient solvers for sequences of complex symmetric linear systems,” Electronic Transactions on Numerical Analysis, vol. 18, pp. 49–64, 2004.
  11. D. Day and M. A. Heroux, “Solving complex-valued linear systems via equivalent real formulations,” SIAM Journal on Scientific Computing, vol. 23, no. 2, pp. 480–498, 2002.
  12. A. Frommer, T. Lippert, B. Medeke, and K. Schilling, Numerical Challenges in Lattice Quantum Chromodynamics, vol. 15 of Lecture Notes in Computational Science and Engineering, Springer, Heidelberg, Germany, 2000.
  13. M. Benzi and D. Bertaccini, “Block preconditioning of real-valued iterative algorithms for complex linear systems,” IMA Journal of Numerical Analysis, vol. 28, no. 3, pp. 598–618, 2008.
  14. Z.-Z. Bai, “Structured preconditioners for nonsingular matrices of block two-by-two structures,” Mathematics of Computation, vol. 75, no. 254, pp. 791–815, 2006.
  15. X.-M. Gu, M. Clemens, T.-Z. Huang, and L. Li, “The SCBiCG class of algorithms for complex symmetric linear systems with applications in several electromagnetic model problems,” Computer Physics Communications, vol. 191, pp. 52–64, 2015.
  16. X.-M. Gu, T.-Z. Huang, L. Li, H.-B. Li, T. Sogabe, and M. Clemens, “Quasi-minimal residual variants of the COCG and COCR methods for complex symmetric linear systems in electromagnetic simulations,” IEEE Transactions on Microwave Theory and Techniques, vol. 62, no. 12, pp. 2859–2867, 2014.
  17. O. Axelsson and A. Kucherov, “Real valued iterative methods for solving complex symmetric linear systems,” Numerical Linear Algebra with Applications, vol. 7, no. 4, pp. 197–218, 2000.
  18. Y. Saad and M. H. Schultz, “GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems,” SIAM Journal on Scientific and Statistical Computing, vol. 7, no. 3, pp. 856–869, 1986.
  19. Z.-Z. Bai, M. Benzi, F. Chen, and Z.-Q. Wang, “Preconditioned MHSS iteration methods for a class of block two-by-two linear systems with applications to distributed control problems,” IMA Journal of Numerical Analysis, vol. 33, no. 1, pp. 343–369, 2013.
  20. O. Axelsson, M. Neytcheva, and B. Ahmad, “A comparison of iterative methods to solve complex valued linear algebraic systems,” Numerical Algorithms, vol. 66, no. 4, pp. 811–841, 2014.
  21. Z.-Z. Bai, “On preconditioned iteration methods for complex linear systems,” Journal of Engineering Mathematics, 2014.
  22. G.-F. Zhang and Z. Zheng, “A parameterized splitting iteration method for complex symmetric linear systems,” Japan Journal of Industrial and Applied Mathematics, vol. 31, no. 2, pp. 265–278, 2014.
  23. V. Simoncini, “Block triangular preconditioners for symmetric saddle-point problems,” Applied Numerical Mathematics, vol. 49, no. 1, pp. 63–80, 2004.
  24. Y. Saad, Iterative Methods for Sparse Linear Systems, PWS Publishing Company, Boston, Mass, USA, 1996.