Mathematical Problems in Engineering
Volume 2014, Article ID 894242, 7 pages
https://doi.org/10.1155/2014/894242

Special Issue: Applications of Methods of Numerical Linear Algebra in Engineering

Research Article | Open Access

A Double-Parameter GPMHSS Method for a Class of Complex Symmetric Linear Systems from Helmholtz Equation

Academic Editor: Masoud Hajarian
Received 11 Nov 2013
Accepted 05 Feb 2014
Published 07 Apr 2014

Abstract

Based on the preconditioned MHSS (PMHSS) and generalized PMHSS (GPMHSS) methods, a double-parameter GPMHSS (DGPMHSS) method is presented for solving a class of complex symmetric linear systems arising from the Helmholtz equation. A parameter region ensuring the convergence of the DGPMHSS method is provided. From a practical point of view, we analyze and implement the inexact DGPMHSS (IDGPMHSS) iteration, which employs Krylov subspace methods as its inner processes. Numerical examples are reported to confirm the efficiency of the proposed methods.

1. Introduction

Let us consider the Helmholtz equation in the form (1), where σ₁ and σ₂ are real coefficient functions and the unknown u satisfies Dirichlet boundary conditions in the domain D.

Using the finite difference method to discretize the Helmholtz equation (1) with both σ₁ and σ₂ strictly positive leads to the following system of linear equations:

    Ax = b,  A = W + iT,                                   (2)

where x, b ∈ C^n and W, T ∈ R^(n×n) are real symmetric matrices satisfying certain conditions. Throughout the paper, for a real symmetric matrix W, W ≺ 0 means that W is symmetric negative definite and W ⪯ 0 means that W is symmetric negative semidefinite.

Systems such as (2) are important and arise in a variety of scientific and engineering applications, including structural dynamics [1–3], diffuse optical tomography [4], FFT-based solution of certain time-dependent PDEs [5], lattice quantum chromodynamics [6], molecular dynamics and fluid dynamics [7], quantum chemistry, and eddy current problems [8, 9]. One can see [10, 11] for more examples and additional references.

Based on the specific structure of the coefficient matrix A = W + iT, one can verify that the Hermitian and skew-Hermitian parts of the complex symmetric matrix A are, respectively, H = (A + A^H)/2 = W and S = (A − A^H)/2 = iT. Obviously, this Hermitian and skew-Hermitian splitting (HSS) of the coefficient matrix coincides with the splitting of A into its real and imaginary parts. Based on the HSS method [12], Bai et al. [2] designed the modified HSS (MHSS) method to solve the complex symmetric linear system (2). To generalize this method and accelerate its convergence rate, Bai et al. [3, 13] designed the preconditioned MHSS (PMHSS) method. It is noted that the MHSS and PMHSS methods can efficiently solve the linear system (2) when W is symmetric positive definite and T is symmetric positive semidefinite.
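As a quick numerical check of this observation, the snippet below forms a small complex symmetric matrix A = W + iT from random real symmetric matrices and verifies that its Hermitian part equals W and its skew-Hermitian part equals iT. The matrices here are illustrative placeholders rather than a Helmholtz discretization.

```python
import numpy as np

n = 6
rng = np.random.default_rng(0)

# Illustrative real symmetric matrices (placeholders, not a Helmholtz discretization).
W = rng.standard_normal((n, n))
W = (W + W.T) / 2
T = rng.standard_normal((n, n))
T = (T + T.T) / 2

A = W + 1j * T                   # complex symmetric: A == A.T, but A != A.conj().T in general

H = (A + A.conj().T) / 2         # Hermitian part
S = (A - A.conj().T) / 2         # skew-Hermitian part

print(np.allclose(H, W))         # True: the Hermitian part is the real part W
print(np.allclose(S, 1j * T))    # True: the skew-Hermitian part is i*T
```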

Recently, Xu [14] proposed the GPMHSS method for solving the complex symmetric linear system (2); it is described as follows.

The GPMHSS Method. Let x^(0) ∈ C^n be an arbitrary initial guess. For k = 0, 1, 2, …, until the sequence of iterates {x^(k)} converges, compute the next iterate x^(k+1) according to the two-half-step procedure (4), where α is a given positive constant and V is a prescribed symmetric positive definite matrix.

Theoretical analysis in [14] shows that the GPMHSS method converges unconditionally to the unique solution of the complex symmetric linear system (2). Numerical experiments were reported in [14] to show the effectiveness of the GPMHSS method.

In this paper, based on the asymmetric HSS and generalized preconditioned HSS methods in [15, 16], we design a double-parameter GPMHSS (DGPMHSS) iteration scheme, a natural generalization of the GPMHSS iteration scheme, for solving the complex symmetric linear system (2). That is to say, the DGPMHSS iterative scheme works as follows.

The DGPMHSS Method. Let x^(0) ∈ C^n be an arbitrary initial guess. For k = 0, 1, 2, …, until the sequence of iterates {x^(k)} converges, compute the next iterate x^(k+1) according to the two-half-step procedure (5), where α is a given nonnegative constant, β is a given positive constant, and V is a prescribed symmetric positive definite matrix.

Just as in the GPMHSS method (4), both half-step coefficient matrices in (5) are symmetric positive definite. Hence, the two linear subsystems in (5) can be solved effectively, either exactly by a sparse Cholesky factorization or inexactly by a preconditioned conjugate gradient scheme. Theoretical analysis shows that the iterative sequence produced by the DGPMHSS method converges to the unique solution of the complex symmetric linear system (2) under a loose restriction on the choices of α and β. The contraction factor of the DGPMHSS iteration can be bounded by a function that depends only on the choices of α and β and on the smallest eigenvalues of the two matrices involved in the splitting.
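To make the two solution strategies mentioned above concrete, the sketch below solves one generic symmetric positive definite subsystem both by a sparse direct factorization and by a preconditioned conjugate gradient iteration. The matrix M_sub is only a placeholder for a half-step coefficient matrix of (5); note also that SciPy provides a sparse LU factorization rather than a sparse Cholesky factorization, so splu is used here as a stand-in (a genuine sparse Cholesky is available from external packages such as scikit-sparse).

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu, cg, LinearOperator

# A generic sparse SPD stand-in for one half-step coefficient matrix of (5).
n = 100
M_sub = sp.diags([-np.ones(n - 1), 2.5 * np.ones(n), -np.ones(n - 1)], [-1, 0, 1], format="csc")
r = np.ones(n)                                   # right-hand side of the subsystem

# (a) "Exact" solve via a sparse direct factorization (LU here; a sparse
#     Cholesky factorization, e.g. from scikit-sparse, could be used instead).
y_direct = splu(M_sub).solve(r)

# (b) Inexact solve via conjugate gradients with a simple Jacobi (diagonal)
#     preconditioner supplied through a LinearOperator.
diag = M_sub.diagonal()
jacobi = LinearOperator((n, n), matvec=lambda v: v / diag)
y_pcg, info = cg(M_sub, r, M=jacobi)

print(info, np.linalg.norm(y_direct - y_pcg))    # info == 0 signals CG convergence
```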

This paper is organized as follows. In Section 2, we study the convergence properties of the DGPMHSS method. In Section 3, we discuss the implementation of the DGPMHSS method and the corresponding inexact DGPMHSS (IDGPMHSS) iteration method. In Section 4, numerical examples are reported to confirm the efficiency of the proposed methods. Finally, we end the paper with concluding remarks in Section 5.

2. Convergence Analysis for the DGPMHSS Method

In this section, the convergence of the DGPMHSS method is studied. The DGPMHSS iteration method can be cast into the two-step splitting iteration framework, and the following lemma is required to study its convergence rate.

Lemma 1 (see [13]). Let A = M1 − N1 = M2 − N2 be two splittings of A, and let x^(0) ∈ C^n be a given initial vector. If {x^(k)} is a two-step iteration sequence defined by

    M1 x^(k+1/2) = N1 x^(k) + b,
    M2 x^(k+1) = N2 x^(k+1/2) + b,  k = 0, 1, 2, …,

then

    x^(k+1) = M2^(-1) N2 M1^(-1) N1 x^(k) + M2^(-1)(I + N2 M1^(-1)) b.

Moreover, if the spectral radius ρ(M2^(-1) N2 M1^(-1) N1) < 1, then the iterative sequence {x^(k)} converges to the unique solution x* ∈ C^n of the system (2) for all initial vectors x^(0) ∈ C^n.
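Since Lemma 1 is a purely algebraic statement, it can be illustrated directly: the sketch below implements the generic two-step iteration for arbitrary splittings A = M1 − N1 = M2 − N2 and evaluates the spectral radius that governs its convergence. The splittings in the example are simple placeholders (Jacobi- and Gauss–Seidel-type), not the DGPMHSS splitting itself.

```python
import numpy as np

def two_step_iteration(M1, N1, M2, N2, b, x0, maxit=200, tol=1e-10):
    """Generic two-step splitting iteration of Lemma 1:
    M1 x^(k+1/2) = N1 x^(k) + b,   M2 x^(k+1) = N2 x^(k+1/2) + b."""
    x = x0.copy()
    for _ in range(maxit):
        x_half = np.linalg.solve(M1, N1 @ x + b)
        x_new = np.linalg.solve(M2, N2 @ x_half + b)
        if np.linalg.norm(x_new - x) <= tol * np.linalg.norm(x_new):
            return x_new
        x = x_new
    return x

# Small illustrative example with placeholder splittings, not the DGPMHSS one.
rng = np.random.default_rng(1)
n = 8
A = np.diag(4.0 * np.ones(n)) + 0.3 * rng.standard_normal((n, n))
M1 = np.diag(np.diag(A))      # Jacobi-type splitting: A = M1 - N1
N1 = M1 - A
M2 = np.tril(A)               # Gauss-Seidel-type splitting: A = M2 - N2
N2 = M2 - A
b = rng.standard_normal(n)

# Convergence is governed by the spectral radius of M2^{-1} N2 M1^{-1} N1.
G = np.linalg.solve(M2, N2) @ np.linalg.solve(M1, N1)
print("spectral radius:", np.max(np.abs(np.linalg.eigvals(G))))

x = two_step_iteration(M1, N1, M2, N2, b, np.zeros(n))
print("residual norm:", np.linalg.norm(b - A @ x))
```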

Applying this lemma to the DGPMHSS method, we obtain the following convergence theorem.

Theorem 2. Let A = W + iT be defined as in (2), let V be a prescribed symmetric positive definite matrix, let α be a nonnegative constant, and let β be a positive constant. Then the iteration matrix M(α, β) of the DGPMHSS method is given by (8), and its spectral radius satisfies the bound (9), where the maxima in (9) are taken over the spectral sets of the two symmetric matrices determined by W, T, and V.
In addition, let λ_min and μ_min, respectively, be the smallest eigenvalues of these two matrices. If α and β are chosen from the parameter region determined by λ_min and μ_min, then the DGPMHSS iteration (5) converges to the unique solution of the linear system (2).

Proof. Write the DGPMHSS iteration (5) in the two-step splitting form of Lemma 1. Obviously, the two splitting matrices are nonsingular for any nonnegative constant α and positive constant β, so formula (8) is valid.
The iteration matrix M(α, β) is similar to a matrix whose norm can be estimated directly. Since the two matrices in question are, respectively, symmetric positive definite and symmetric positive semidefinite, and the transformation matrix is nonsingular, the transformed matrices remain symmetric positive definite and symmetric positive semidefinite, respectively. Therefore, there exist orthogonal matrices that diagonalize them, with the eigenvalues λ_i and μ_i of the two matrices appearing on the diagonals.
Through simple calculations, we obtain the upper bound for ρ(M(α, β)) given in (9). The rest of the proof is similar to that of Theorem 1 in [17] and is omitted here.

Minimizing the upper bound in (9) is important from a theoretical point of view. However, it is not practical, since the parameters that minimize the bound do not necessarily minimize the actual spectral radius of the iteration matrix. How to choose suitable preconditioners and parameters for practical problems is still a great challenge.

3. The IDGPMHSS Iteration

In the DGPMHSS method, two systems of linear equations, whose coefficient matrices are the two symmetric positive definite half-step matrices of (5), must be solved at each iteration. However, this may be very costly and impractical in actual implementations. To overcome this disadvantage and improve the computational efficiency of the DGPMHSS iteration method, we propose to solve the two subproblems iteratively [12, 18], which leads to the IDGPMHSS iteration scheme. Its convergence can be shown in a way similar to that of the IHSS iteration method, using Theorem 3.1 of [12]. Since both coefficient matrices are symmetric positive definite, Krylov subspace methods such as CG can be employed to solve the subsystems easily. Of course, if good preconditioners for the two matrices are available, the preconditioned conjugate gradient (PCG) method can be used instead of CG for the two inner systems, which yields better performance of the IDGPMHSS method. If either coefficient matrix (or both) is Toeplitz, fast algorithms can be used to solve the corresponding subsystems [19].
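The following sketch shows how such inexact inner solves fit into the outer sweep: a generic two-half-step iteration in which each symmetric positive definite subsystem is solved only approximately by a few warm-started CG steps. The matrices and the splitting are illustrative placeholders (and the vectors are kept real here, whereas in the DGPMHSS setting the right-hand sides are complex while the coefficient matrices remain real symmetric positive definite).

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

# Generic SPD placeholders standing in for the two half-step coefficient
# matrices of (5); the true DGPMHSS matrices are problem dependent.
n = 200
lap = sp.diags([-np.ones(n - 1), 2.0 * np.ones(n), -np.ones(n - 1)], [-1, 0, 1], format="csr")
A = 2.0 * lap + 3.0 * sp.identity(n, format="csr")   # SPD model "outer" matrix
M1 = lap + 4.0 * sp.identity(n, format="csr")
M2 = lap + 3.0 * sp.identity(n, format="csr")
N1, N2 = M1 - A, M2 - A                              # A = M1 - N1 = M2 - N2
b = np.ones(n)

def inexact_sweep(x, inner_steps=8):
    """One outer sweep: each SPD subsystem is solved only approximately,
    here by at most a few warm-started CG steps."""
    x_half, _ = cg(M1, N1 @ x + b, x0=x, maxiter=inner_steps)
    x_new, _ = cg(M2, N2 @ x_half + b, x0=x_half, maxiter=inner_steps)
    return x_new

x = np.zeros(n)
for _ in range(12):
    x = inexact_sweep(x)
# The outer residual keeps decreasing until the accuracy of the inexact
# inner solves becomes the limiting factor.
print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```

As the comments indicate, the outer residual decreases until the accuracy of the inner solves becomes the limiting factor, which is why the inner tolerances are usually tightened as the outer iteration proceeds.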

4. Numerical Examples

In this section, we give some numerical examples to demonstrate the performance of the DGPMHSS and IDGPMHSS methods for solving the linear system (2). Numerical comparisons with the GPMHSS method are also presented to show the advantage of the DGPMHSS method.

In our implementation, the initial guess x^(0) is chosen to be the zero vector, and the outer iterations are stopped once the relative residual ‖b − Ax^(k)‖₂/‖b‖₂ falls below a prescribed tolerance. For the sake of comparison, the preconditioning matrix V used in the DGPMHSS method is chosen to be the same as the one used in the GPMHSS method. Since the numerical results in [2, 3] show that the PMHSS iteration method outperforms the MHSS and HSS iteration methods when they are employed as preconditioners for the GMRES method or its restarted variants [20], we only examine the efficiency of the DGPMHSS iteration method as a solver for the complex symmetric linear system (2), comparing the iteration counts (denoted by IT) and CPU times (in seconds, denoted by CPU(s)) of the DGPMHSS method with those of the GPMHSS method.
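As a small illustration, the helper below evaluates the relative residual used in this stopping test; the tolerance value in the usage example is illustrative rather than the threshold actually used in the experiments.

```python
import numpy as np

def relative_residual(A, x, b):
    """Outer stopping quantity RES = ||b - A x||_2 / ||b||_2."""
    return np.linalg.norm(b - A @ x) / np.linalg.norm(b)

# Tiny usage example with an illustrative tolerance.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.linalg.solve(A, b)
tol = 1e-6
print(relative_residual(A, x, b) <= tol)   # True for the exact solution
```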

Example 3 (see [5, 21–23]). Consider the Helmholtz equation (1), where σ₁ and σ₂ are real coefficient functions and u satisfies Dirichlet boundary conditions in D. This equation describes the propagation of damped time-harmonic waves. We take K to be the five-point centered difference matrix approximating the negative Laplacian operator on a uniform mesh with mesh size h = 1/(m + 1). The matrix K possesses the tensor-product form K = I ⊗ Vm + Vm ⊗ I with Vm = h⁻² tridiag(−1, 2, −1) ∈ R^(m×m). Hence, K is an n × n block-tridiagonal matrix with n = m². This leads to the complex symmetric linear system (2), where W and T are built from K and the chosen coefficients σ₁ and σ₂. In addition, the right-hand side vector b is defined in terms of the vector 1 whose entries are all equal to 1, and, as before, we normalize the system by multiplying both sides by h².
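The tensor-product construction described above can be reproduced directly with Kronecker products. In the sketch below, the grid parameter m and the coefficients σ₁ and σ₂ are illustrative placeholders, and the particular shifts used to build W and T from K follow the standard MHSS-type test problem rather than values quoted in this example.

```python
import numpy as np
import scipy.sparse as sp

m = 32                                # illustrative grid parameter; n = m^2
h = 1.0 / (m + 1)                     # uniform mesh size
n = m * m

# 1D second-difference matrix V_m = h^(-2) * tridiag(-1, 2, -1).
Vm = sp.diags([-np.ones(m - 1), 2.0 * np.ones(m), -np.ones(m - 1)], [-1, 0, 1]) / h**2

# Five-point negative Laplacian in tensor-product (Kronecker) form:
# K = I (x) Vm + Vm (x) I, an n-by-n block-tridiagonal matrix.
I_m = sp.identity(m)
K = sp.kron(I_m, Vm, format="csr") + sp.kron(Vm, I_m, format="csr")

# Illustrative real and imaginary parts of A = W + iT built from K with
# assumed shifts sigma1, sigma2 (the coefficients actually used in the
# experiments are not quoted above).
sigma1, sigma2 = 1.0, 1.0
W = (K + sigma1 * sp.identity(n, format="csr")) * h**2   # normalized by h^2
T = (sigma2 * sp.identity(n, format="csr")) * h**2
A = W.astype(complex) + 1j * T.astype(complex)

print(A.shape, abs(A - A.T).max())   # complex symmetric: equal to its non-conjugate transpose
```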

As is known, the spectral radius of the iteration matrix is decisive for the convergence of an iteration method, so it is necessary to consider the spectral radii corresponding to the two methods. Comparisons of the spectral radii of the iteration matrices of the GPMHSS and DGPMHSS methods for different mesh sizes are reported in Tables 1, 2, 3, and 4. In Tables 1–4, we use the experimentally optimal values of the parameters, denoted by α* for the GPMHSS method and by (α*, β*) for the DGPMHSS method. These parameters were obtained experimentally as the values yielding the smallest spectral radius of the corresponding iteration matrix (a simple grid search of this kind is sketched after Table 4).


Table 1

m           10       50       80       100
GPMHSS
  α*        1.1      1.5      2.2      2
  ρ         0.5009   0.5222   0.5667   0.6274
DGPMHSS
  α*        1.1      1.5      2.2      2
  β*        1        0.9      0.8      0.8
  ρ         0.5001   0.4986   0.5012   0.4901


Table 2

m           10       50       80       100
GPMHSS
  α*        1.1      1.5      2.2      1.8
  ρ         0.5010   0.5230   0.5697   0.6337
DGPMHSS
  α*        1.1      1.5      2.2      1.8
  β*        1        1        0.9      0.9
  ρ         0.5004   0.5063   0.5235   0.5097


Table 3

m           10       50       80       100
GPMHSS
  α*        1.1      1.5      2.2      1.6
  ρ         0.5011   0.5234   0.5708   0.6360
DGPMHSS
  α*        1.1      1.5      2.2      1.8
  β*        1        1        1        1
  ρ         0.5005   0.5081   0.5289   0.5091


Table 4

m           10       50       80       100
GPMHSS
  α*        1.1      1.5      2.2      1.7
  ρ         0.5011   0.5235   0.5714   0.6372
DGPMHSS
  α*        1.1      1.5      2.2      1.8
  β*        1        1        1        1
  ρ         0.5005   0.5089   0.5310   0.5139
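The experimentally optimal parameters reported in Tables 1–4 can be found by a brute-force search that evaluates the spectral radius of the iteration matrix over a grid of (α, β) values. The routine below performs such a search for a generic two-step splitting supplied by the caller; the builder function `toy` and the small matrix used with it are illustrative placeholders, not the DGPMHSS splitting.

```python
import numpy as np

def spectral_radius(M1, N1, M2, N2):
    """Spectral radius of the two-step iteration matrix M2^{-1} N2 M1^{-1} N1."""
    G = np.linalg.solve(M2, N2) @ np.linalg.solve(M1, N1)
    return np.max(np.abs(np.linalg.eigvals(G)))

def grid_search(build_splitting, alphas, betas):
    """Return (alpha, beta, rho) with the smallest spectral radius.
    build_splitting(alpha, beta) must return (M1, N1, M2, N2) for the
    method under study; it is supplied by the caller."""
    best = None
    for a in alphas:
        for be in betas:
            rho = spectral_radius(*build_splitting(a, be))
            if best is None or rho < best[2]:
                best = (a, be, rho)
    return best

# Toy usage with placeholder splittings (not the DGPMHSS splitting):
A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
def toy(alpha, beta):
    M1 = alpha * np.eye(3) + np.diag(np.diag(A))
    M2 = beta * np.eye(3) + np.diag(np.diag(A))
    return M1, M1 - A, M2, M2 - A

print(grid_search(toy, np.linspace(0.0, 2.0, 21), np.linspace(0.1, 2.0, 20)))
```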

From Tables 1–4, one can see that, as the mesh is refined, the experimentally optimal parameters of the GPMHSS and DGPMHSS methods remain relatively stable. We also observe that the optimal spectral radii of the DGPMHSS method are smaller than those of the GPMHSS method, which implies that the DGPMHSS method may outperform the GPMHSS method. To confirm this, we examine the efficiency of the GPMHSS and DGPMHSS methods for solving the systems of linear equations Ax = b described above.

In Tables 5, 6, 7, and 8, we list the numbers of iteration steps and the computational times for the GPMHSS and DGPMHSS iteration methods using the optimal parameters from Tables 1–4. In Tables 5–8, “RES” denotes the relative residual error.


Table 5

m           10       50       80       100
GPMHSS
  RES
  CPU(s)    0.015    0.016    0.016    0.016
  IT        20       21       24       30
DGPMHSS
  RES
  CPU(s)    0.015    0.015    0.015    0.013
  IT        20       19       18       17


Table 6

m           10       50       80       100
GPMHSS
  RES
  CPU(s)    0.031    0.046    0.046    0.063
  IT        20       21       24       29
DGPMHSS
  RES
  CPU(s)    0.031    0.032    0.031    0.031
  IT        20       20       20       19


Table 7

m           10       50       80       100
GPMHSS
  RES
  CPU(s)    0.094    0.094    0.11     0.125
  IT        20       21       24       29
DGPMHSS
  RES
  CPU(s)    0.078    0.078    0.094    0.094
  IT        20       20       21       20


Table 8

m           10       50       80       100
GPMHSS
  RES
  CPU(s)    0.172    0.172    0.203    0.235
  IT        20       21       25       28
DGPMHSS
  RES
  CPU(s)    0.156    0.156    0.187    0.172
  IT        20       20       22       21

From Tables 5–8, we see that the DGPMHSS method is the better of the two methods in terms of both the number of iteration steps and the computational time. For both the GPMHSS and DGPMHSS methods, the CPU time grows with the problem size, and the results in Tables 5–8 show that in all cases the DGPMHSS method is superior to the GPMHSS method. That is to say, under certain conditions, the DGPMHSS method may be given priority over the GPMHSS method for solving the complex symmetric linear system (2).

As already noted, in the two half-steps of the DGPMHSS iteration one needs to solve two systems of linear equations whose coefficient matrices are symmetric positive definite. This can be very costly and impractical in actual implementations, so we use the IDGPMHSS method to solve the system of linear equations (2) in practice; that is, the two subsystems arising in each DGPMHSS iteration are solved inexactly. Since their coefficient matrices are symmetric positive definite, the CG method can be employed to solve them.

In our computations, the inner CG iteration is terminated once the current residuals of the inner iterations for the two subsystems have been reduced relative to a prescribed tolerance τ at the kth outer IDGPMHSS iteration.

Some results are listed in Tables 9 and 10, namely, the number of outer IDGPMHSS iterations (it.s), the average number (avg1) of inner CG iterations for the first subsystem, and the average number (avg2) of inner CG iterations for the second subsystem (a simple way to collect such counts is sketched after Table 10).


Table 9

(α*, β*)    m     it.s   avg1   avg2    it.s   avg1   avg2    it.s   avg1   avg2
(1.1, 1)    10    51     20.9   18.2    68     26.1   23.7    84     30.4   28.3
(1.5, 1)    50    54     23.5   18.3    70     28.0   23.9    86     33.0   28.4
(2.2, 1)    80    57     25.0   20.1    72     30.0   25.3    88     34.2   30.0
(1.8, 1)    100   58     27.5   18.9    74     31.9   24.3    89     36.1   28.7


Table 10

(α*, β*)    m     it.s   avg1   avg2    it.s   avg1   avg2    it.s   avg1   avg2
(1.1, 1)    10    66     27.1   23.6    90     34.2   31.1    111    40.0   37.2
(1.5, 1)    50    70     30.6   23.7    93     37.5   31.3    114    43.4   37.4
(2.2, 1)    80    75     35.0   24.2    97     41.1   31.8    118    47.0   37.8
(1.8, 1)    100   77     36.1   25.3    98     42.2   32.4    119    48.0   38.3
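For completeness, averaged inner iteration counts such as avg1 and avg2 can be gathered with a simple callback counter around SciPy's CG routine, as sketched below; the subsystem used here is a generic symmetric positive definite placeholder rather than an actual DGPMHSS half-step matrix.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def cg_with_count(A, b):
    """Run CG (with its default tolerance) and return the solution together
    with the number of iterations actually performed."""
    count = {"it": 0}
    def callback(_xk):
        count["it"] += 1
    x, info = cg(A, b, callback=callback)
    return x, count["it"]

# Placeholder SPD subsystem standing in for one half-step matrix.
n = 500
A_sub = sp.diags([-np.ones(n - 1), 2.2 * np.ones(n), -np.ones(n - 1)], [-1, 0, 1], format="csr")
b = np.ones(n)

x, its = cg_with_count(A_sub, b)
print("inner CG iterations:", its)
```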

In our numerical computations, it is easy to see that the choice of the inner tolerance τ is important to the convergence rate of the IDGPMHSS method. According to Tables 9 and 10, the iteration numbers of the IDGPMHSS method generally increase as the mesh is refined, and they also grow across the column groups of the tables.

5. Conclusion

In this paper, we have generalized the GPMHSS method to the DGPMHSS method for a class of complex symmetric linear systems whose real and imaginary parts W and T are real symmetric matrices satisfying certain conditions. Theoretical analysis shows that, for any initial guess, the DGPMHSS method converges to the unique solution of the linear system for a wide range of the parameters. An inexact version has then been presented and implemented to save computational cost. Numerical experiments show that the DGPMHSS and IDGPMHSS methods are efficient and competitive.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank the anonymous referees for their helpful suggestions, which greatly improved the paper. The authors would like to thank Yan-Jun Liang, Ting Wang, Ming-Yue Shao, and Man-Man Wang for helpful discussions. This work is also supported by the National College Students' Innovative and Entrepreneurial Training Program (201310479069). This research was supported by NSFC (no. 11301009), Science and Technology Development Plan of Henan Province (no. 122300410316), and Natural Science Foundations of Henan Province (no. 13A110022).

References

  1. A. Feriani, F. Perotti, and V. Simoncini, “Iterative system solvers for the frequency analysis of linear mechanical systems,” Computer Methods in Applied Mechanics and Engineering, vol. 190, pp. 1719–1739, 2000.
  2. Z.-Z. Bai, M. Benzi, and F. Chen, “Modified HSS iteration methods for a class of complex symmetric linear systems,” Computing, vol. 87, no. 3-4, pp. 93–111, 2010.
  3. Z.-Z. Bai, M. Benzi, and F. Chen, “On preconditioned MHSS iteration methods for complex symmetric linear systems,” Numerical Algorithms, vol. 56, no. 2, pp. 297–317, 2011.
  4. S. R. Arridge, “Optical tomography in medical imaging,” Inverse Problems, vol. 15, no. 2, pp. R41–R93, 1999.
  5. D. Bertaccini, “Efficient preconditioning for sequences of parametric complex symmetric linear systems,” Electronic Transactions on Numerical Analysis, vol. 18, pp. 49–64, 2004.
  6. A. Frommer, T. Lippert, B. Medeke, and K. Schilling, Numerical Challenges in Lattice Quantum Chromodynamics, vol. 15 of Lecture Notes in Computational Science and Engineering, Springer, Heidelberg, Germany, 2000.
  7. D. Day and M. A. Heroux, “Solving complex-valued linear systems via equivalent real formulations,” SIAM Journal on Scientific Computing, vol. 23, no. 2, pp. 480–498, 2001.
  8. Z.-Z. Bai, “Block alternating splitting implicit iteration methods for saddle-point problems from time-harmonic eddy current models,” Numerical Linear Algebra with Applications, vol. 19, no. 6, pp. 914–936, 2012.
  9. T. Sogabe and S.-L. Zhang, “A COCR method for solving complex symmetric linear systems,” Journal of Computational and Applied Mathematics, vol. 199, no. 2, pp. 297–303, 2007.
  10. M. Benzi and D. Bertaccini, “Block preconditioning of real-valued iterative algorithms for complex linear systems,” IMA Journal of Numerical Analysis, vol. 28, no. 3, pp. 598–618, 2008.
  11. Z.-Z. Bai, “Structured preconditioners for nonsingular matrices of block two-by-two structures,” Mathematics of Computation, vol. 75, no. 254, pp. 791–815, 2006.
  12. Z.-Z. Bai, G. H. Golub, and M. K. Ng, “Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems,” SIAM Journal on Matrix Analysis and Applications, vol. 24, no. 3, pp. 603–626, 2003.
  13. Z.-Z. Bai, M. Benzi, F. Chen, and Z.-Q. Wang, “Preconditioned MHSS iteration methods for a class of block two-by-two linear systems with applications to distributed control problems,” IMA Journal of Numerical Analysis, vol. 33, no. 1, pp. 343–369, 2013.
  14. W.-W. Xu, “A generalization of preconditioned MHSS iteration method for complex symmetric indefinite linear systems,” Applied Mathematics and Computation, vol. 219, no. 21, pp. 10510–10517, 2013.
  15. L. Li, T.-Z. Huang, and X.-P. Liu, “Asymmetric Hermitian and skew-Hermitian splitting methods for positive definite linear systems,” Computers & Mathematics with Applications, vol. 54, no. 1, pp. 147–159, 2007.
  16. A.-L. Yang, J. An, and Y.-J. Wu, “A generalized preconditioned HSS method for non-Hermitian positive definite linear systems,” Applied Mathematics and Computation, vol. 216, no. 6, pp. 1715–1722, 2010.
  17. M. Dehghan, M. D. Madiseh, and M. Hajarian, “A GPMHSS method for a class of complex symmetric linear systems,” Mathematical Modelling and Analysis, vol. 18, no. 4, pp. 561–576, 2013.
  18. Z.-Z. Bai, G. H. Golub, and M. K. Ng, “On inexact Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems,” Linear Algebra and its Applications, vol. 428, no. 2-3, pp. 413–440, 2008.
  19. R. H. Chan and M. K. Ng, “Conjugate gradient methods for Toeplitz systems,” SIAM Review, vol. 38, no. 3, pp. 427–482, 1996.
  20. Y. Saad, Iterative Methods for Sparse Linear Systems, PWS Publishing Company, Boston, Mass, USA, 1996.
  21. R. W. Freund, “Conjugate gradient-type methods for linear systems with complex symmetric coefficient matrices,” SIAM Journal on Scientific and Statistical Computing, vol. 13, no. 1, pp. 425–448, 1992.
  22. X. Li, A.-L. Yang, and Y.-J. Wu, “Lopsided PMHSS iteration method for a class of complex symmetric linear systems,” Numerical Algorithms, 2013.
  23. S.-L. Wu, T.-Z. Huang, L. Li, and L.-L. Xiong, “Positive stable preconditioners for symmetric indefinite linear systems arising from Helmholtz equations,” Physics Letters A, vol. 373, no. 29, pp. 2401–2407, 2009.

Copyright © 2014 Cui-Xia Li and Shi-Liang Wu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
