Abstract

Based on the modified Hermitian and skew-Hermitian splitting (MHSS) and preconditioned MHSS (PMHSS) methods, a generalized preconditioned MHSS (GPMHSS) method for a class of complex symmetric linear systems is presented. Theoretical analysis gives an upper bound for the spectral radius of the iteration matrix. From a practical point of view, we analyze and implement the inexact GPMHSS (IGPMHSS) iteration, which employs Krylov subspace methods as its inner processes. Numerical experiments are reported to confirm the efficiency of the proposed methods.

1. Introduction

Consider the iterative solution of the following system of linear equations:
$$Ax = b,\qquad A \in \mathbb{C}^{n \times n},\quad x, b \in \mathbb{C}^{n},\tag{1}$$
where $A$ is a complex symmetric matrix of the following form:
$$A = W + iT,\tag{2}$$
with $W \in \mathbb{R}^{n \times n}$ being symmetric positive definite and $T \in \mathbb{R}^{n \times n}$ being symmetric positive semidefinite. Here and in the sequel, we use $i = \sqrt{-1}$ as the imaginary unit. One can readily verify that $A$ is non-Hermitian; that is to say, the linear system (1) is a non-Hermitian linear system. Systems such as (1) are important and arise in a variety of scientific and engineering applications, including structural dynamics [1–4], diffuse optical tomography [5, 6], FFT-based solution of certain time-dependent PDEs [7], lattice quantum chromodynamics [8], molecular dynamics and fluid dynamics [9], quantum chemistry, and eddy current problems [10, 11]. One can see [12, 13] for more examples and additional references. In order to solve (1) more effectively, many efficient numerical algorithms have been proposed in the literature; see [14–20].

Based on the specific structure of the coefficient matrix $A$, one can verify that the Hermitian and skew-Hermitian parts of the coefficient matrix $A$, respectively, are
$$H = \frac{1}{2}\left(A + A^{*}\right) = W,\qquad S = \frac{1}{2}\left(A - A^{*}\right) = iT.\tag{3}$$
Obviously, the above Hermitian and skew-Hermitian splitting of the coefficient matrix $A$ coincides with the splitting of $A$ into its real and imaginary parts. Based on the HSS method [21], Bai et al. [2] skillfully designed a modified HSS (MHSS) method to solve the complex symmetric linear system (1), which is described below.

The MHSS Method. Let $x^{(0)} \in \mathbb{C}^{n}$ be an arbitrary initial guess. For $k = 0, 1, 2, \ldots$, until the sequence of iterates $\{x^{(k)}\}$ converges, compute the next iterate $x^{(k+1)}$ according to the following procedure:
$$\begin{aligned}
(\alpha I + W)\,x^{(k+1/2)} &= (\alpha I - iT)\,x^{(k)} + b,\\
(\alpha I + T)\,x^{(k+1)} &= (\alpha I + iW)\,x^{(k+1/2)} - ib,
\end{aligned}\tag{4}$$
where $\alpha$ is a given positive constant and $I$ is the identity matrix.

The potential advantage of the MHSS method over the HSS method [21] for solving the complex symmetric linear system (1) is that only two linear subsystems with coefficient matrices $\alpha I + W$ and $\alpha I + T$, both being real and symmetric positive definite, need to be solved at each step. Therefore, in this case, these two linear subsystems can be solved efficiently using mostly real arithmetic, either exactly by a sparse Cholesky factorization or inexactly by the conjugate gradient (CG) scheme. That is to say, the MHSS method successfully avoids solving a shifted skew-Hermitian linear subsystem with coefficient matrix $\alpha I + iT$.
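To make the structure of a single MHSS sweep concrete, the following minimal dense-NumPy sketch implements (4). The matrices `W`, `T` and the parameter `alpha` are caller-supplied assumptions; a production implementation would instead factor the two real SPD matrices once (e.g., by sparse Cholesky) and reuse the factors, splitting the complex right-hand sides into real and imaginary parts so as to stay in real arithmetic.

```python
import numpy as np

def mhss_step(W, T, b, x, alpha):
    """One MHSS sweep (4) for A = W + 1j*T; a sketch, not a tuned solver."""
    I = np.eye(W.shape[0])
    # First half-step: (alpha*I + W) x^{k+1/2} = (alpha*I - 1j*T) x^k + b.
    x_half = np.linalg.solve(alpha * I + W, (alpha * I - 1j * T) @ x + b)
    # Second half-step: (alpha*I + T) x^{k+1} = (alpha*I + 1j*W) x^{k+1/2} - 1j*b.
    return np.linalg.solve(alpha * I + T, (alpha * I + 1j * W) @ x_half - 1j * b)
```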

Theoretical analysis in [2] shows that the MHSS method converges unconditionally to the unique solution of the complex symmetric linear system (1) when $W$ is real symmetric positive definite and $T$ is real symmetric positive semidefinite. The corresponding optimal parameter $\alpha^{*}$ is obtained by minimizing an upper bound on the spectral radius of the iteration matrix associated with (4).

The MHSS method immediately attracted considerable attention and resulted in many papers devoted to various aspects of the new algorithm, for instance, the preconditioned modified Hermitian and skew-Hermitian splitting (PMHSS) iteration in [3] and the lopsided preconditioned modified Hermitian and skew-Hermitian splitting (LPMHSS) iteration in [22], among others. On the other hand, the MHSS method was successfully extended to the solution of control problems in [23].

In this paper, based on the splitting (3), we generalize the MHSS iterative scheme into a new approach, called the generalized preconditioned MHSS (GPMHSS) iteration. By introducing two symmetric positive definite matrices $V_1$ and $V_2$, the GPMHSS iterative scheme works as follows.

The GPMHSS Method. Let $x^{(0)} \in \mathbb{C}^{n}$ be an arbitrary initial guess. For $k = 0, 1, 2, \ldots$, until the sequence of iterates $\{x^{(k)}\}$ converges, compute the next iterate $x^{(k+1)}$ according to the following procedure:
$$\begin{aligned}
(\alpha V_1 + W)\,x^{(k+1/2)} &= (\alpha V_1 - iT)\,x^{(k)} + b,\\
(\beta V_2 + T)\,x^{(k+1)} &= (\beta V_2 + iW)\,x^{(k+1/2)} - ib,
\end{aligned}\tag{5}$$
where $\alpha$ is a nonnegative constant and $\beta$ is a positive constant.

Note that the GPMHSS iteration method covers many existing variants of the standard MHSS iteration. For instance, when $V_1 = V_2 = I$ and $\alpha = \beta$, the GPMHSS iteration method is equivalent to the standard MHSS iteration method in [2]; when $V_1 = V_2 = V$ and $\alpha = \beta$, the GPMHSS iteration method is equivalent to the standard PMHSS iteration method in [3]; and when $\alpha = 0$ and $V_2 = V$, it leads to the LPMHSS iteration method in [22].
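For illustration, here is a minimal dense-NumPy sketch of one GPMHSS sweep (5); the auxiliary matrices `V1`, `V2` and the parameters `alpha`, `beta` are assumptions supplied by the caller. The reductions above can be exercised directly: passing identity matrices for `V1` and `V2` with `alpha == beta` recovers the MHSS sweep.

```python
import numpy as np

def gpmhss_step(W, T, V1, V2, b, x, alpha, beta):
    """One GPMHSS sweep (5) for A = W + 1j*T with SPD auxiliary matrices V1, V2."""
    # First half-step: (alpha*V1 + W) x^{k+1/2} = (alpha*V1 - 1j*T) x^k + b.
    x_half = np.linalg.solve(alpha * V1 + W, (alpha * V1 - 1j * T) @ x + b)
    # Second half-step: (beta*V2 + T) x^{k+1} = (beta*V2 + 1j*W) x^{k+1/2} - 1j*b.
    return np.linalg.solve(beta * V2 + T, (beta * V2 + 1j * W) @ x_half - 1j * b)
```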

Theoretical analysis gives an upper bound on the contraction factor of the GPMHSS iteration method, which shows the relationships among GPMHSS, MHSS, and other existing variants. From a practical point of view, we also discuss the inexact variants of the GPMHSS iteration method and their implementation. A number of numerical experiments are presented to illustrate the advantages of the GPMHSS and IGPMHSS methods.

This paper is organized as follows. In Section 2, we study the convergence properties of the GPMHSS iteration method. In Section 3, we discuss the implementation of the GPMHSS iteration method and the corresponding inexact GPMHSS (IGPMHSS) iteration method. Numerical experiments are reported to confirm the efficiency of the proposed methods in Section 4. Finally, we end the paper with concluding remarks in Section 5.

2. Convergence Analysis for the GPMHSS Method

In this section, the convergence of the GPMHSS method is studied and an upper bound for its contraction factor is derived. The GPMHSS iteration method can be cast into the two-step splitting iteration framework of [21]. The following lemma is required to study the convergence properties of the GPMHSS method.

The spectral radius of a matrix $X \in \mathbb{C}^{n \times n}$ is the nonnegative real number $\rho(X) = \max\{|\lambda| : \lambda \in \mathrm{sp}(X)\}$, where $\mathrm{sp}(X)$ denotes the spectrum of the matrix $X$. In fact, we have the basic property $\rho(XY) = \rho(YX)$ on the spectral radius of the product of two matrices, which is used in the proof of the following theorem.
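As a quick numerical illustration (not a proof) of the property $\rho(XY) = \rho(YX)$, with arbitrary matrix sizes chosen for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 5))
Y = rng.standard_normal((5, 5))
rho = lambda M: np.abs(np.linalg.eigvals(M)).max()  # spectral radius
print(rho(X @ Y), rho(Y @ X))  # identical up to roundoff
```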

Lemma 1 (see [21]). Let $A = M_1 - N_1 = M_2 - N_2$ be two splittings of the matrix $A$, and let $x^{(0)} \in \mathbb{C}^{n}$ be a given initial vector. If $\{x^{(k)}\}$ is a two-step iteration sequence defined by
$$\begin{aligned}
M_1 x^{(k+1/2)} &= N_1 x^{(k)} + b,\\
M_2 x^{(k+1)} &= N_2 x^{(k+1/2)} + b,
\end{aligned}\qquad k = 0, 1, 2, \ldots,\tag{6}$$
then
$$x^{(k+1)} = M_2^{-1} N_2 M_1^{-1} N_1 x^{(k)} + M_2^{-1}\left(I + N_2 M_1^{-1}\right) b,\qquad k = 0, 1, 2, \ldots.\tag{7}$$
Moreover, if the spectral radius $\rho(M_2^{-1} N_2 M_1^{-1} N_1) < 1$, then the iterative sequence $\{x^{(k)}\}$ converges to the unique solution $x^{*} \in \mathbb{C}^{n}$ of the system (1) for all initial vectors $x^{(0)} \in \mathbb{C}^{n}$.

Applying this lemma to the GPMHSS method, we get the convergence property in the following theorem.

Theorem 2. Let $V_1$ and $V_2$ be two symmetric positive definite matrices. Let $A = W + iT \in \mathbb{C}^{n \times n}$, with $W$ and $T$ symmetric positive definite and symmetric positive semidefinite, respectively, let $\alpha$ be a nonnegative constant, and let $\beta$ be a positive constant. Then the iteration matrix of the GPMHSS method is
$$M(\alpha, \beta) = (\beta V_2 + T)^{-1}(\beta V_2 + iW)(\alpha V_1 + W)^{-1}(\alpha V_1 - iT).\tag{8}$$
Denote by $\mathrm{sp}(V_1^{-1}W) = \{\lambda_j\}$, $\mathrm{sp}(V_1^{-1}T) = \{\tau_j\}$, $\mathrm{sp}(V_2^{-1}W) = \{\eta_j\}$, and $\mathrm{sp}(V_2^{-1}T) = \{\mu_j\}$ the spectral sets of the matrices $V_1^{-1}W$, $V_1^{-1}T$, $V_2^{-1}W$, and $V_2^{-1}T$, respectively, where $\lambda_j > 0$, $\eta_j > 0$, and $\tau_j, \mu_j \ge 0$. Then,
$$\rho(M(\alpha, \beta)) \le \kappa_2\!\left(V_2^{1/2} V_1^{-1/2}\right) \frac{\sqrt{\alpha^{2} + \tau_{\max}^{2}}}{\alpha + \lambda_{\min}} \cdot \frac{\sqrt{\beta^{2} + \eta_{\max}^{2}}}{\beta + \mu_{\min}},\tag{9}$$
where $\kappa_2(V_2^{1/2} V_1^{-1/2})$ denotes the spectral condition number of the matrix $V_2^{1/2} V_1^{-1/2}$, and $\lambda_{\min} = \min_j \lambda_j$, $\tau_{\max} = \max_j \tau_j$, $\eta_{\max} = \max_j \eta_j$, and $\mu_{\min} = \min_j \mu_j$.

Proof. Let $M_1 = \alpha V_1 + W$, $N_1 = \alpha V_1 - iT$, $M_2 = \beta V_2 + T$, and $N_2 = \beta V_2 + iW$. Obviously, $M_1$ and $M_2$ are nonsingular for any nonnegative constant $\alpha$ and positive constant $\beta$. So formula (8) is valid.
Let $\widetilde{W}_1 = V_1^{-1/2} W V_1^{-1/2}$, $\widetilde{T}_1 = V_1^{-1/2} T V_1^{-1/2}$, $\widetilde{W}_2 = V_2^{-1/2} W V_2^{-1/2}$, and $\widetilde{T}_2 = V_2^{-1/2} T V_2^{-1/2}$. Then
$$M(\alpha, \beta) = V_2^{-1/2}\,(\beta I + \widetilde{T}_2)^{-1}(\beta I + i\widetilde{W}_2)\;V_2^{1/2} V_1^{-1/2}\;(\alpha I + \widetilde{W}_1)^{-1}(\alpha I - i\widetilde{T}_1)\;V_1^{1/2}.$$
Hence, by the property $\rho(XY) = \rho(YX)$,
$$\rho(M(\alpha, \beta)) = \rho\!\left((\beta I + \widetilde{T}_2)^{-1}(\beta I + i\widetilde{W}_2)\;V_2^{1/2} V_1^{-1/2}\;(\alpha I + \widetilde{W}_1)^{-1}(\alpha I - i\widetilde{T}_1)\;V_1^{1/2} V_2^{-1/2}\right).$$
Further, we have
$$\rho(M(\alpha, \beta)) \le \left\|(\beta I + \widetilde{T}_2)^{-1}\right\|_2 \left\|\beta I + i\widetilde{W}_2\right\|_2 \left\|(\alpha I + \widetilde{W}_1)^{-1}\right\|_2 \left\|\alpha I - i\widetilde{T}_1\right\|_2 \left\|V_2^{1/2} V_1^{-1/2}\right\|_2 \left\|V_1^{1/2} V_2^{-1/2}\right\|_2.$$
Through simple calculations, noting that $\widetilde{W}_1$, $\widetilde{T}_1$, $\widetilde{W}_2$, and $\widetilde{T}_2$ are symmetric and similar to $V_1^{-1}W$, $V_1^{-1}T$, $V_2^{-1}W$, and $V_2^{-1}T$, respectively, we can get that
$$\left\|(\alpha I + \widetilde{W}_1)^{-1}\right\|_2 = \frac{1}{\alpha + \lambda_{\min}},\qquad \left\|\alpha I - i\widetilde{T}_1\right\|_2 = \sqrt{\alpha^{2} + \tau_{\max}^{2}},$$
$$\left\|(\beta I + \widetilde{T}_2)^{-1}\right\|_2 = \frac{1}{\beta + \mu_{\min}},\qquad \left\|\beta I + i\widetilde{W}_2\right\|_2 = \sqrt{\beta^{2} + \eta_{\max}^{2}},$$
and $\left\|V_2^{1/2} V_1^{-1/2}\right\|_2 \left\|V_1^{1/2} V_2^{-1/2}\right\|_2 = \kappa_2(V_2^{1/2} V_1^{-1/2})$. The above inequalities give the upper bound for $\rho(M(\alpha, \beta))$ in (9).
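The following small NumPy/SciPy script is a numerical sanity check (on synthetic random matrices, not part of the proof) that evaluates both sides of the bound (9) as stated above:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)
n, alpha, beta = 8, 0.7, 1.3

def spd(n):
    B = rng.standard_normal((n, n))
    return B @ B.T + n * np.eye(n)   # well-conditioned SPD test matrix

W, V1, V2 = spd(n), spd(n), spd(n)
B = rng.standard_normal((n, n))
T = B @ B.T                          # symmetric positive semidefinite

# Iteration matrix M(alpha, beta) from (8).
M = np.linalg.solve(beta * V2 + T,
                    (beta * V2 + 1j * W)
                    @ np.linalg.solve(alpha * V1 + W, alpha * V1 - 1j * T))
rho = np.abs(np.linalg.eigvals(M)).max()

# Ingredients of the bound (9).
lam_min = np.linalg.eigvals(np.linalg.solve(V1, W)).real.min()
tau_max = np.linalg.eigvals(np.linalg.solve(V1, T)).real.max()
eta_max = np.linalg.eigvals(np.linalg.solve(V2, W)).real.max()
mu_min = np.linalg.eigvals(np.linalg.solve(V2, T)).real.min()
kappa = np.linalg.cond(sqrtm(V2) @ np.linalg.inv(sqrtm(V1)))

bound = (kappa * np.sqrt(alpha**2 + tau_max**2) / (alpha + lam_min)
               * np.sqrt(beta**2 + eta_max**2) / (beta + mu_min))
print(rho, bound)   # rho <= bound holds up to roundoff
```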

Some remarks on Theorem 2 are given below.

(i) When $V_1 = V_2 = V$, GPMHSS reduces to a two-parameter PMHSS method. In this case, $\eta_j = \lambda_j$ and $\tau_j = \mu_j$, so that $\kappa_2(V_2^{1/2} V_1^{-1/2}) = 1$. In the meantime, the upper bound in (9) results in
$$\rho(M(\alpha, \beta)) \le \frac{\sqrt{\alpha^{2} + \mu_{\max}^{2}}}{\alpha + \lambda_{\min}} \cdot \frac{\sqrt{\beta^{2} + \lambda_{\max}^{2}}}{\beta + \mu_{\min}},$$
which not only gives an upper bound for the PMHSS method and complements the theoretical results of paper [24], but also includes the special case in [2] as follows:
$$\rho(M(\alpha)) \le \frac{\sqrt{\alpha^{2} + \mu_{\max}^{2}}}{\alpha + \lambda_{\min}} \cdot \frac{\sqrt{\alpha^{2} + \lambda_{\max}^{2}}}{\alpha + \mu_{\min}}$$
when $V = I$ and $\alpha = \beta$.

(ii) Based on (9), it is obvious that the convergence rate depends not only on the choices of the two parameters $\alpha$ and $\beta$ but also on the choices of the two auxiliary matrices $V_1$ and $V_2$. Obviously, the efficiency of GPMHSS is best when one can obtain iteration parameters and auxiliary matrices that minimize the upper bound on the spectral radius of the iteration matrix. That is to say, it is necessary to discuss the choices of $\alpha$, $\beta$, $V_1$, and $V_2$. In fact, under certain conditions, some optimal iteration parameters can be derived in theory. For example, if $\alpha = 0$ and $V_2 = V$, then the GPMHSS method reduces to the LPMHSS method and the corresponding optimal parameter is $\beta^{*} = \sqrt{\mu_{\min}\mu_{\max}}$, where $\mu_{\min}$ and $\mu_{\max}$, respectively, denote the smallest eigenvalue of $V^{-1}T$ and the largest eigenvalue of $V^{-1}T$ [22]. If $V_1 = V_2 = V$ and $\alpha = \beta$, then the GPMHSS method reduces to the PMHSS method and the corresponding optimal parameter is $\alpha^{*} = \sqrt{\lambda_{\min}\lambda_{\max}}$, where $\lambda_{\min}$ and $\lambda_{\max}$, respectively, denote the smallest and largest eigenvalues of $V^{-1}W$ [3] (see the sketch after this remark). Whereas, if $V_1 \neq V_2$ and $\alpha \neq \beta$, one cannot, in theory, find the optimal values of the iteration parameters $\alpha$ and $\beta$ in general. With respect to this point, one can see [24] for more details. Based on this fact, a conclusion may be obtained in theory; that is, one cannot find the optimal values of the iteration parameters $\alpha$ and $\beta$ for GPMHSS in general. But even so, the optimal values of the iteration parameters $\alpha$ and $\beta$ for GPMHSS may be obtained experimentally (see Section 4). In practice, to further improve the efficiency of the GPMHSS method, it is desirable to determine or find a good estimate of the optimal parameters that minimize the convergence factor. In fact, to find the actual optimal estimates $\alpha^{*}$ and $\beta^{*}$ for the GPMHSS method is a hard task, because the solution strongly depends on the particular structures and properties of the coefficient matrix $A$, as well as on the splitting matrices $V_1$ and $V_2$, and needs further in-depth study, from both the theoretical and the computational points of view. When the optimal parameter cannot be derived in theory, its value is selected following the statement on the choice of the iteration parameter in [25]; that is to say, experience suggests that in most applications and for an appropriate scaling of the problem, a "small" value of the parameter (usually between 0.01 and 0.5) may give good results. Whereas choosing a parameter so as to minimize the spectral radius of the iteration matrix is not necessarily the best choice when the algorithm is used as a preconditioner for a Krylov subspace method. Remarkably, it can be shown that, for certain problems, the alternating iteration results in an $h$-independent preconditioner for GMRES when the parameter is chosen sufficiently small, corresponding to a spectral radius very close to 1. In fact, if we define the optimal value of the parameter as the one that minimizes the total amount of work needed to compute an approximate solution, this will not necessarily be the same as the parameter that minimizes the number of (outer) iterations. Overall, the analytic determination of such an optimal value for the parameter appears to be daunting. With respect to the choices of the two auxiliary matrices $V_1$ and $V_2$, there is a considerable degree of freedom. For instance, if $T$ is positive definite, then $V_1 = W$ and $V_2 = T$ would be a natural choice for the preconditioners.
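As a small illustration of the PMHSS special case mentioned in this remark, the following hedged helper (the function name is ours) computes $\alpha^{*} = \sqrt{\lambda_{\min}\lambda_{\max}}$ over $\mathrm{sp}(V^{-1}W)$ numerically for SPD inputs $V$ and $W$:

```python
import numpy as np

def pmhss_optimal_alpha(V, W):
    """alpha* = sqrt(lambda_min * lambda_max) over sp(V^{-1} W), per [3]."""
    lam = np.linalg.eigvals(np.linalg.solve(V, W)).real  # real and positive in theory
    return np.sqrt(lam.min() * lam.max())
```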

(iii) When $V_1 = W$ and $V_2 = T$ (assuming $T$ is positive definite), the upper bound in (9) results in
$$\rho(M(\alpha, \beta)) \le \kappa_2\!\left(T^{1/2} W^{-1/2}\right) \frac{\sqrt{\alpha^{2} + \tau_{\max}^{2}}}{\alpha + 1} \cdot \frac{\sqrt{\beta^{2} + \tau_{\min}^{-2}}}{\beta + 1},$$
where $\tau_{\min}$ and $\tau_{\max}$ denote the smallest and largest eigenvalues of $W^{-1}T$.

The approach of minimizing the upper bound is very important from a theoretical viewpoint. However, it is not always practical, since the parameters minimizing the upper bound need not minimize the spectral radius of the iteration matrix itself. How to choose suitable preconditioners and parameters for practical problems is still a great challenge.

3. The IGPMHSS Iteration

In the GPMHSS method, it is required to solve two systems of linear equations whose coefficient matrices are $\alpha V_1 + W$ and $\beta V_2 + T$, respectively. However, this may be very costly and impractical in actual implementations. To overcome this disadvantage and improve the computational efficiency of the GPMHSS iteration method, we propose to solve the two subproblems iteratively [21, 26], which leads to the inexact GPMHSS (IGPMHSS) iterative scheme. Its convergence can be shown in a manner similar to that of the IHSS iteration method, using Theorem 3.1 of [21]. Since $\alpha V_1 + W$ and $\beta V_2 + T$ are symmetric positive definite, Krylov subspace methods (such as CG) can be employed to solve the two subsystems easily. Of course, if good preconditioners for the matrices $\alpha V_1 + W$ and $\beta V_2 + T$ are available, we can use the preconditioned conjugate gradient (PCG) method instead of CG for the two inner systems, which yields a better performance of the IGPMHSS method. If either $\alpha V_1 + W$ or $\beta V_2 + T$ (or both) is Toeplitz, we can use fast algorithms for the solution of the corresponding subsystems [27]. Here, just like the IHSS iteration, the IGPMHSS iterative scheme is presented in Algorithm 1.

$k = 0$; choose an initial guess $x^{(0)}$;
while (not convergent)
  $r^{(k)} = b - A x^{(k)}$;
  approximately solve $(\alpha V_1 + W)\, z^{(k)} = r^{(k)}$ by employing the CG method,
  such that the residual $p^{(k)} = r^{(k)} - (\alpha V_1 + W)\, z^{(k)}$ of the iteration satisfies
  $\|p^{(k)}\|_2 \le \varepsilon_k \|r^{(k)}\|_2$;
  $x^{(k+1/2)} = x^{(k)} + z^{(k)}$;
  $r^{(k+1/2)} = b - A x^{(k+1/2)}$;
  approximately solve $(\beta V_2 + T)\, z^{(k+1/2)} = -i\, r^{(k+1/2)}$ by employing the CG method,
  such that the residual $q^{(k)} = -i\, r^{(k+1/2)} - (\beta V_2 + T)\, z^{(k+1/2)}$ of the
  iteration satisfies $\|q^{(k)}\|_2 \le \eta_k \|r^{(k+1/2)}\|_2$;
  $x^{(k+1)} = x^{(k+1/2)} + z^{(k+1/2)}$;
  $k = k + 1$;
end
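A compact, self-contained sketch of Algorithm 1 follows. The plain (unpreconditioned) CG inner solver, the fixed inner tolerance, and the default parameter values are illustrative assumptions rather than the tuned choices used in the experiments of Section 4; note that both inner coefficient matrices are real SPD, so CG applies even though the right-hand sides are complex.

```python
import numpy as np

def cg(A, rhs, rel_tol, maxit=1000):
    """Plain CG for a real SPD matrix A with a (possibly complex) right-hand side."""
    x = np.zeros_like(rhs, dtype=complex)
    r = rhs.astype(complex).copy()
    p = r.copy()
    rs = np.vdot(r, r).real
    nrm = np.linalg.norm(rhs)
    for _ in range(maxit):
        if np.sqrt(rs) <= rel_tol * nrm:
            break
        Ap = A @ p
        a = rs / np.vdot(p, Ap).real     # p^H A p is real and positive for SPD A
        x += a * p
        r -= a * Ap
        rs_new = np.vdot(r, r).real
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def igpmhss(W, T, V1, V2, b, alpha, beta, inner_tol=1e-2, tol=1e-6, maxit=500):
    """IGPMHSS (Algorithm 1): the two half-steps of (5) solved inexactly by CG."""
    A = W + 1j * T
    x = np.zeros_like(b, dtype=complex)
    for _ in range(maxit):
        r = b - A @ x
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        # First half-step: (alpha*V1 + W) z = r^{(k)}, solved approximately.
        x = x + cg(alpha * V1 + W, r, inner_tol)
        r_half = b - A @ x
        # Second half-step: (beta*V2 + T) z = -i r^{(k+1/2)}, solved approximately.
        x = x + cg(beta * V2 + T, -1j * r_half, inner_tol)
    return x
```

The same routine works unchanged with dense NumPy arrays or SciPy sparse matrices for `W`, `T`, `V1`, and `V2`, since only matrix-vector products and additions are used.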

Remark 3. It is not difficult to see that if the inner systems are solved exactly, that is, with $\varepsilon_k = 0$ and $\eta_k = 0$, then the IGPMHSS iteration reduces to the GPMHSS iteration. In fact, to guarantee the convergence of the IGPMHSS iteration, it is not necessary for $\{\varepsilon_k\}$ and $\{\eta_k\}$ to go to zero as $k$ increases.
To derive the convergence properties of the IGPMHSS iteration, the following lemma, presented by Bai et al. [21], is required. Here $\|x\|_{M_2} = \|M_2 x\|_2$ for any $x \in \mathbb{C}^{n}$, which immediately induces the matrix norm $\|X\|_{M_2} = \|M_2 X M_2^{-1}\|_2$ for any $X \in \mathbb{C}^{n \times n}$.

Lemma 4. Let $A = M_1 - N_1 = M_2 - N_2$ be two splittings of the matrix $A$. If $\{x^{(k)}\}$ is an iteration sequence defined as follows:
$$x^{(k+1/2)} = x^{(k)} + z^{(k)},\qquad M_1 z^{(k)} = r^{(k)} - p^{(k)},$$
satisfying $\|p^{(k)}\|_2 \le \varepsilon_k \|r^{(k)}\|_2$, where $r^{(k)} = b - A x^{(k)}$, and
$$x^{(k+1)} = x^{(k+1/2)} + z^{(k+1/2)},\qquad M_2 z^{(k+1/2)} = r^{(k+1/2)} - q^{(k)},$$
satisfying $\|q^{(k)}\|_2 \le \eta_k \|r^{(k+1/2)}\|_2$, where $r^{(k+1/2)} = b - A x^{(k+1/2)}$, then $\{x^{(k)}\}$ is of the form
$$x^{(k+1)} = M_2^{-1} N_2 M_1^{-1} N_1 x^{(k)} + M_2^{-1}\left(I + N_2 M_1^{-1}\right) b - M_2^{-1} N_2 M_1^{-1} p^{(k)} - M_2^{-1} q^{(k)}.$$
Moreover, if $x^{*}$ is the exact solution of the system of linear equations (1), then we have
$$\left\|x^{(k+1)} - x^{*}\right\|_{M_2} \le \left(\sigma + \omega\theta\varepsilon_k + \eta_k\left(\varrho + \mu\theta\varepsilon_k\right)\right)\left\|x^{(k)} - x^{*}\right\|_{M_2},$$
where
$$\sigma = \left\|N_2 M_1^{-1} N_1 M_2^{-1}\right\|_2,\quad \omega = \left\|N_2 M_1^{-1}\right\|_2,\quad \theta = \left\|A M_2^{-1}\right\|_2,\quad \varrho = \left\|A M_1^{-1} N_1 M_2^{-1}\right\|_2,\quad \mu = \left\|A M_1^{-1}\right\|_2.$$
In particular, if
$$\sigma + \omega\theta\varepsilon + \eta\left(\varrho + \mu\theta\varepsilon\right) < 1,$$
then the iteration sequence $\{x^{(k)}\}$ converges to $x^{*}$, where $\varepsilon = \sup_k \varepsilon_k$ and $\eta = \sup_k \eta_k$.

Applying Lemma 4, the following theorem gives a convergence analysis of the IGPMHSS iteration.

Let
$$\sigma(\alpha, \beta) = \left\|(\beta V_2 + iW)(\alpha V_1 + W)^{-1}(\alpha V_1 - iT)(\beta V_2 + T)^{-1}\right\|_2,\qquad \omega = \left\|(\beta V_2 + iW)(\alpha V_1 + W)^{-1}\right\|_2,$$
$$\theta = \left\|A(\beta V_2 + T)^{-1}\right\|_2,\qquad \varrho = \left\|A(\alpha V_1 + W)^{-1}(\alpha V_1 - iT)(\beta V_2 + T)^{-1}\right\|_2,\qquad \mu = \left\|A(\alpha V_1 + W)^{-1}\right\|_2.$$
Then we have the following result.

Theorem 5. Let $V_1$ and $V_2$ be two symmetric positive definite matrices. Let $A = W + iT \in \mathbb{C}^{n \times n}$, with $W$ and $T$ symmetric positive definite and symmetric positive semidefinite, respectively, let $\alpha$ be a nonnegative constant, and let $\beta$ be a positive constant. If $\{x^{(k)}\}$ is an iteration sequence generated by the IGPMHSS iteration method (Algorithm 1) and if $x^{*}$ is the exact solution of the system of linear equations (1), then it holds that
$$\left\|x^{(k+1)} - x^{*}\right\|_{M_2} \le \left(\sigma(\alpha, \beta) + \omega\theta\varepsilon_k + \eta_k\left(\varrho + \mu\theta\varepsilon_k\right)\right)\left\|x^{(k)} - x^{*}\right\|_{M_2},$$
where $M_2 = \beta V_2 + T$. In particular, if
$$\sigma(\alpha, \beta) + \omega\theta\varepsilon + \eta\left(\varrho + \mu\theta\varepsilon\right) < 1,$$
then the iteration sequence $\{x^{(k)}\}$ converges to $x^{*}$, where $\varepsilon = \sup_k \varepsilon_k$ and $\eta = \sup_k \eta_k$.

In fact, replacing $M_1$, $N_1$, $M_2$, and $N_2$ in Lemma 4 with $\alpha V_1 + W$, $\alpha V_1 - iT$, $\beta V_2 + T$, and $\beta V_2 + iW$, respectively, we straightforwardly obtain the proof of Theorem 5.

Theorem 5 shows how the tolerances $\{\varepsilon_k\}$ and $\{\eta_k\}$ should be chosen to make the IGPMHSS iteration convergent. In fact, as mentioned previously, to guarantee the convergence of the IGPMHSS iteration, the tolerances $\{\varepsilon_k\}$ and $\{\eta_k\}$ are not required to approach zero as $k$ increases. However, the optimal tolerances $\{\varepsilon_k\}$ and $\{\eta_k\}$ are not easy to analyze.

4. Numerical Experiments

In this section, we give some numerical experiments to demonstrate the performance of the GPMHSS and IGPMHSS methods for solving the linear system (1)-(2). Numerical comparisons with the MHSS and PMHSS methods are also presented to show the advantage of the GPMHSS method.

In our implementations, the initial guess is chosen to be $x^{(0)} = 0$ and the stopping criterion for outer iterations (when the MHSS, PMHSS, and GPMHSS methods are used as solvers) is
$$\frac{\left\|b - A x^{(k)}\right\|_2}{\|b\|_2} \le 10^{-6}.$$

The preconditioner used in the PMHSS method is chosen to be $V = W$; for the sake of comparison, the corresponding preconditioners $V_1$ and $V_2$ used in the GPMHSS method are then chosen as $V_1 = V_2 = W$. Similarly, if the preconditioner used in the PMHSS method is chosen to be $V = T$, then the corresponding preconditioners $V_1$ and $V_2$ used in the GPMHSS method are chosen as $V_1 = V_2 = T$. Since the numerical results in [2, 3] show that the PMHSS method outperforms the MHSS and HSS methods when they are employed as preconditioners for the GMRES method or its restarted variants [28], we just examine the efficiency of the GPMHSS method as a solver for the complex symmetric linear system (1)-(2) by comparing the iteration numbers (denoted as IT) and CPU times (in seconds, denoted as CPU) of the GPMHSS method with those of the MHSS and PMHSS methods.

Example 6 (see [2, 3]). The complex symmetric linear system (1)-(2) is of the following form:
$$(W + iT)\, x = b,$$
with
$$W = I \otimes V + V \otimes I,\qquad T = I \otimes V_C + V_C \otimes I,$$
where $V = \mathrm{tridiag}(-1, 2, -1) \in \mathbb{R}^{m \times m}$, $V_C = V - e_1 e_m^{T} - e_m e_1^{T} \in \mathbb{R}^{m \times m}$, and $e_1$ and $e_m$ are the first and the last unit vectors in $\mathbb{R}^{m}$, respectively. We take the right-hand side vector $b$ to be $b = (1 + i) A \mathbf{1}$, with $\mathbf{1}$ being the vector of all entries equal to 1. Here $W$ and $T$ correspond to the five-point centered difference matrices approximating the negative Laplacian operator with homogeneous Dirichlet boundary conditions and periodic boundary conditions, respectively, on a uniform mesh in the unit square $[0, 1] \times [0, 1]$ with the mesh-size $h = 1/(m + 1)$.
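Under the assumptions spelled out above (and omitting any $1/h^{2}$ scaling of the difference stencils, which would only rescale $W$ and $T$ simultaneously), the test matrices can be assembled as follows:

```python
import numpy as np
import scipy.sparse as sp

def example6(m):
    """Assemble W, T, and b of Example 6 as reconstructed in the text."""
    V = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m)).tolil()
    Vc = V.copy()
    Vc[0, m - 1] -= 1.0   # V_C = V - e_1 e_m^T - e_m e_1^T (periodic coupling)
    Vc[m - 1, 0] -= 1.0
    I = sp.identity(m)
    W = (sp.kron(I, V) + sp.kron(V, I)).tocsr()     # Dirichlet Laplacian, SPD
    T = (sp.kron(I, Vc) + sp.kron(Vc, I)).tocsr()   # periodic Laplacian, SPSD
    one = np.ones(m * m)
    b = (1 + 1j) * (W @ one + 1j * (T @ one))       # b = (1 + i) A 1
    return W, T, b
```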

As is known, the spectral radius of the iteration matrix may be decisive for the convergence of an iteration method, so it is necessary to consider the spectral radius corresponding to each iteration method. Comparisons of the spectral radii of the three iteration matrices derived from the MHSS, PMHSS, and GPMHSS methods with different mesh-sizes are performed in Tables 1 and 2. In Tables 1 and 2, we use the optimal values of the parameters, denoted by $\alpha^{*}$ for the MHSS method, $\alpha^{*}$ for the PMHSS method, and $(\alpha^{*}, \beta^{*})$ for the GPMHSS method. These parameters are obtained experimentally as the values yielding the least spectral radius of the iteration matrix for each of the three methods.

From Tables 1 and 2, one can see that as the mesh is refined, the experimentally optimal parameter $\alpha^{*}$ of the MHSS method tends to decrease, whereas the experimentally optimal parameter of the PMHSS method remains essentially unchanged. Fixing the parameter $\beta$ in the GPMHSS method, the experimentally optimal parameter $\alpha^{*}$ of the GPMHSS method also tends to decrease. In both cases, we observe that the optimal spectral radius for the iteration matrices of the three methods grows with the problem size, and the optimal spectral radius of the GPMHSS method is still smaller than those of the MHSS and PMHSS methods. In this case, the GPMHSS method may outperform the MHSS and PMHSS methods. To this end, we need to examine the efficiencies of the MHSS, PMHSS, and GPMHSS methods for solving the systems of linear equations $Ax = b$, where $A$ is described above.

In Tables 3 and 4, we list the iteration numbers and computational times for MHSS, PMHSS, and GPMHSS iteration methods by using the optimal parameters in Tables 1 and 2. In Tables 3 and 4, β€œRES” denotes the relative residual error.

From Tables 3 and 4, we see that the GPMHSS iteration method is the best among the three methods in terms of iteration numbers and computational time, and the PMHSS scheme requires fewer iterations than the MHSS scheme. For the MHSS and GPMHSS methods, the number of iterations grows with the problem size. For the PMHSS method, the iteration numbers are relatively stable; that is to say, the PMHSS method does not show any growth in iteration numbers as the grid dimension increases. The results in Tables 3 and 4 show that in all cases GPMHSS is superior to the other two methods in terms of CPU time. That is to say, compared with the MHSS and PMHSS methods, the GPMHSS method may be given priority under certain conditions. In fact, the speed-up of GPMHSS (resp., PMHSS) with respect to MHSS is quite noticeable, where we define it by
$$\text{speed-up} = \frac{\text{CPU time of MHSS}}{\text{CPU time of GPMHSS (resp., PMHSS)}}.$$
Obviously, the efficiency of GPMHSS is superior to that of MHSS and PMHSS.

In particular, here we test the efficiency of the GPMHSS method when $V_1 = W$ and $V_2 = T$. In this case, some numerical results are given in Table 5, from which it is not difficult to find that the spectral radius grows with the problem size; simultaneously, the number of iterations grows with the problem size. The numerical results in Table 5 show that, under certain conditions, the GPMHSS method is feasible and efficient compared with the MHSS and PMHSS methods.

As already noted, in the two half-steps of the GPMHSS iteration, it is necessary to solve two systems of linear equations whose coefficient matrices are $\alpha V_1 + W$ and $\beta V_2 + T$, respectively. This can be very costly and impractical in actual implementations. We therefore use the IGPMHSS method to solve the system of linear equations (1)-(2) in the actual implementations; that is, the two subsystems with coefficient matrices $\alpha V_1 + W$ and $\beta V_2 + T$ are solved inexactly within the IGPMHSS iteration. It is easy to see that $\alpha V_1 + W$ and $\beta V_2 + T$ are symmetric positive definite, so the CG method can be employed for the two subsystems.

In our computations, the inner CG iteration is terminated if the current residual of the inner iterations satisfies (cf. Algorithm 1)
$$\frac{\left\|p^{(j)}\right\|_2}{\left\|r^{(k)}\right\|_2} \le \delta,\qquad \frac{\left\|q^{(j)}\right\|_2}{\left\|r^{(k+1/2)}\right\|_2} \le \delta,$$
where $p^{(j)}$ and $q^{(j)}$ are, respectively, the residuals of the $j$th inner CG iterations for the subsystems with coefficient matrices $\alpha V_1 + W$ and $\beta V_2 + T$, $r^{(k)}$ (resp., $r^{(k+1/2)}$) denotes the residual of the $k$th outer IGPMHSS iteration, and $\delta$ is a given tolerance.

Some results are listed in Tables 6 and 7, which report the numbers of outer IGPMHSS iterations (it.s), the average numbers (avg1) of inner CG iterations for the subsystems with coefficient matrix $\alpha V_1 + W$, and the average numbers (avg2) of inner CG iterations for the subsystems with coefficient matrix $\beta V_2 + T$.

In our numerical computations, it is easy to see that the choice of $\delta$ is important to the convergence rate of the IGPMHSS method. According to Tables 6 and 7, the iteration numbers of the IGPMHSS method generally increase as the mesh-size $h$ decreases. This increase could probably be eliminated by using a suitable preconditioner; note that no preconditioning is used for these inner iterations in our numerical computations.

5. Conclusion

In this paper, we have generalized the MHSS method into the GPMHSS method for a class of complex symmetric linear systems. Theoretical analysis shows that for any initial guess the GPMHSS method converges to the unique solution of the linear system for a wide range of the parameters. Then, an inexact version of GPMHSS (IGPMHSS) has been presented and implemented for saving the computational cost. Numerical experiments show that the GPMHSS and IGPMHSS methods are efficient and competitive.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank the anonymous referees for their helpful suggestions, which greatly improved the paper. This research was supported by the NSFC (no. 11301009), the Science and Technology Development Plan of Henan Province (no. 122300410316), and the Natural Science Foundations of Henan Province (no. 13A110022).