Abstract

A new preconditioner for complex symmetric linear systems, based on the Hermitian and skew-Hermitian splitting (HSS), is presented. It applies to the conjugate orthogonal conjugate gradient (COCG) and conjugate orthogonal conjugate residual (COCR) iterative solvers and does not require any estimate of the spectrum of the coefficient matrix. An upper bound on the condition number of the preconditioned linear system is provided. To reduce the computational cost, the preconditioner is approximated with an inexact variant based on an incomplete Cholesky decomposition or on orthogonal polynomials. Numerical results show that the preconditioner and its inexact variant are efficient and robust solvers for this class of linear systems. A stability analysis of the inexact polynomial version completes the description of the preconditioner.

1. Introduction

The focus of this paper is the solution of the complex linear system given by , where the symmetric complex matrix has the property that it can be written as with , two real symmetric positive semidefinite (semi-SPD) matrices and a symmetric positive definite (SPD) matrix. This kind of linear system arises, for example, in the discretization of problems in computational electrodynamics [1], in time-dependent Schrödinger equations, or in conductivity problems [2, 3].
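In generic notation (the symbols below are illustrative assumptions used only as a reminder, not the paper's own), the problem has the form

$$C\,x = b, \qquad C = A + \mathrm{i}\,B, \qquad A = A^{T} \succeq 0, \qquad B = B^{T} \succeq 0, \qquad x,\,b \in \mathbb{C}^{n},$$

with a suitable combination of the two real matrices (for instance their sum) assumed symmetric positive definite.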

If is Hermitian, a straightforward extension of the conjugate gradient (CG) algorithm can be used [4]. Unfortunately, the CG method cannot be directly employed when is only complex symmetric; thus, some specialized iterative methods must be adopted. An effective one is HSS with its modified variant (MHSS), which needs, at each iteration, the solution of two real linear systems. Other standard procedures for this problem are given by iterative methods based on Krylov subspaces and designed for complex symmetric linear systems: COCG [1], COCR [5], CSYM [6], and CMRH [7]. Some iterative methods for non-SPD linear systems, like BiCGSTAB [8], BiCGSTAB() [9, 10], GMRES [11], and QMR [12], can be adapted to complex symmetric matrices [4, 13, 14].

In this work a preconditioned version of COCG and COCR is proposed for the solution of complex linear systems, where the preconditioner is a single preconditioned MHSS iteration (PMHSS). The PMHSS step may be too costly or impracticable, so cheaper approximations must be used; several such approximations are considered here, and a polynomial preconditioner is presented as a possible approximation of the PMHSS step.

Methods based on Hermitian and skew-Hermitian splitting (HSS) [15–18] can be used as standalone solvers or combined, as preconditioners, with CG-like algorithms. The speed of convergence of CG-like iterative schemes depends on the condition number of the matrix ; thus, preconditioning is a standard way to improve convergence [19, 20]. Incomplete LU is a standard and widely accepted way to precondition linear systems. Despite its popularity, incomplete LU is potentially unstable, is difficult to parallelize, and lacks algorithmic scalability. Nevertheless, when incomplete LU is feasible and the preconditioned linear system is well conditioned, the resulting algorithm is generally the best performing.

In this work we focus on large problems, where incomplete LU preconditioning is too costly or not feasible. In this case, iterative methods like SSOR are used as preconditioners, but better performance is obtained using HSS iterative methods, which allow reducing the condition number effectively. However, HSS iterative methods need the solution of two SPD real systems at each step. Standard preconditioned CG methods can be used at each iteration [18], which can in turn be preconditioned with an incomplete Cholesky factorization, although for very large problems the incomplete Cholesky factorization may not be convenient or feasible. As an alternative, in the present paper a polynomial preconditioner is proposed, which allows solving the linear system for large matrices. Polynomial preconditioners do not have a good reputation [19, 20], and research on this subject was dropped in the late 80s. In fact, polynomial preconditioners based on Chebyshev polynomials need accurate estimates of the minimum and maximum eigenvalues, while least squares polynomials were not computed using stable recurrences, limiting the degree of available stable polynomials [21–25]. However, in recent years polynomial preconditioners have received attention again after the works of Lu et al. [26]. Notwithstanding these concerns, here we propose as a preconditioner the use of a polynomial approximation of a modified HSS step. A specialization for Chebyshev and Jacobi orthogonal polynomials is discussed, and the corresponding polynomial preconditioner is evaluated using a stable recurrence which permits using polynomials of very high degree.

The paper is structured as follows. Section 1.1 describes the problem and gives a brief summary of existing solution methods, with the fundamental results and variants that lead to the present method; in particular, the HSS is considered. Section 2 shows how to use one step of the MHSS method as a preconditioner and gives a bound on the condition number of the MHSS iteration matrix. Section 3 explains the iterative solution method, with the strategy to adopt when Cholesky factorization is possible and when it is not. Section 4 presents a scale transformation of the system in order to move the eigenvalues to the range . Section 5 describes the polynomial preconditioner, first in terms of least squares and then of orthogonal polynomials, and furnishes a stable recurrence for the computation of polynomial preconditioners of high degree. The specialization for Chebyshev and Jacobi orthogonal polynomials is presented. Section 6 studies the numerical stability of this process and Section 7 shows some numerical results; Section 8 concludes the paper.

1.1. The MHSS Iterative Solver

The complex linear system , where is complex symmetric, is solved via an iterative method based on a splitting algorithm (HSS). The preconditioner requires the solution, each time it is applied, of two real symmetric positive definite (SPD) linear systems.

The HSS scheme can be summarized in this manner: rewrite the vector of the unknowns as the sum of real and imaginary parts, , and accordingly the right hand side ; then the system is rewritten as

so that the two steps of the modified HSS method proposed in [15] result in

for suitable matrices and . The previous procedure can be rewritten as a single step of a splitting based scheme by posing

This iterative method converges if the iteration matrix , for example,

has spectral radius strictly less than one. It is well known that the choice , for a given positive constant , yields the standard HSS method [15–17], for which an estimate of the spectral radius is given by

where are the eigenvalues of the SPD matrix . The optimal value for can also be computed [15–17] and is

From the previous formulas, it is clear that those computations rely on the knowledge (or an estimate) of the minimum and maximum eigenvalues of the matrix , which is, in general, a hard problem.
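For reference, the classical HSS results alluded to above can be stated in standard notation (the symbols below are not the paper's own and are used only as a reminder of the Bai-Golub-Ng theory): if H is the SPD matrix involved in the splitting, with eigenvalues λ_i, the HSS iteration matrix M(α) satisfies

$$\rho\bigl(M(\alpha)\bigr)\;\le\;\max_{\lambda_i\in\sigma(H)}\frac{|\alpha-\lambda_i|}{\alpha+\lambda_i},\qquad \alpha>0,$$

and the bound is minimized by the choice

$$\alpha^{*}=\sqrt{\lambda_{\min}(H)\,\lambda_{\max}(H)},\qquad\text{giving}\qquad \rho\le\frac{\sqrt{\kappa(H)}-1}{\sqrt{\kappa(H)}+1},\quad \kappa(H)=\frac{\lambda_{\max}(H)}{\lambda_{\min}(H)},$$

which makes explicit the dependence on the extreme eigenvalues mentioned in the text.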

Another possible choice for and is and , which yields, when , a variant of the MHSS method of [15–17]. In the next lemma, an upper bound on the spectral radius of the iteration matrix is given.

Lemma 1. Let and in (4) with a SPD matrix and a semi-SPD matrix, ; then the spectral radius of satisfies the upper bound

and the minimum value of the upper bound is attained when where .

Proof. Substituting and in (4) yields

If is an eigenvalue of matrix , it satisfies

Thus, defined as

is a generalized eigenvalue; that is, it satisfies , and it is well known that must be nonnegative. Computing from (10), the function is found to be

which allows evaluating an upper bound of the spectral radius of :

There are optimal values of and that minimize for and . To find them, set and with ; then

If is fixed, the minimum of this last expression is attained for , corresponding to

The minimum of is attained for , which corresponds to . The computation of is then straightforward.

In this setting, since are optimal, from now on it is assumed and therefore . Using these values, the two-step method (2) is recast as the one-step method

where the simplified expressions for and are

Notice that is well defined and nonsingular provided that is not singular. Thus the assumptions of Lemma 1 can be weakened when , resulting in the next corollary.

Corollary 2. Let and be semi-SPD with not singular and and as defined in (16); then the spectral radius of satisfies the upper bound .

Remark 3. The spectral radius of the iteration matrix is bounded independently of its size; thus, once the tolerance is fixed, the maximum number of iterations is independent of the size of the problem.

The iterative method (15) can be reorganized as Algorithm 1.

(Algorithm 1: one sweep of the MHSS iteration. Starting from an initial guess, the while loop repeats, until convergence, the solution of the two real SPD linear systems and the update of the iterate.)

Remark 4. The MHSS iterative solver of Algorithm 1 requires at each iteration the solution of two real linear systems, one for the real and one for the imaginary part, whose coefficient matrices are SPD, namely, . For small matrices this can be done efficiently using a Cholesky decomposition. For large and sparse matrices a preconditioned conjugate gradient method is mandatory.
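A minimal MATLAB sketch of the two options in Remark 4, assuming S denotes one of the real SPD coefficient matrices and f the corresponding right hand side; the names, the size threshold, and the tolerances below are illustrative, not the paper's choices.

    function x = solve_spd(S, f)
      % Solve the real SPD system S*x = f, as needed at each MHSS iteration.
      n = size(S, 1);
      if n <= 5000                       % small problem: direct Cholesky factorization
        R = chol(S);                     % S = R'*R, R upper triangular
        x = R \ (R' \ f);
      else                               % large sparse problem: preconditioned CG
        L = ichol(S, struct('type', 'nofill'));      % incomplete Cholesky, no fill-in
        [x, flag] = pcg(S, f, 1e-10, 500, L, L');
        if flag ~= 0
          warning('pcg did not reach the requested tolerance');
        end
      end
    end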

Although Algorithm 1 can be used to solve the linear system (1), better performance is obtained by using one or more of its steps not as a solver but as a preconditioner for a conjugate gradient-like iterative solver such as COCG or COCR. The convergence rate estimate for these iterative schemes applied to a linear system depends on the condition number , where is the classic spectral norm of a matrix. The energy norm induced by the (real) SPD matrix is used to obtain the well known estimate

where is the solution of the linear system. In general, conjugate gradient-like iterative schemes perform efficiently if a good preconditioner makes the system well conditioned.

2. Use of MHSS as Preconditioner

In this section the effect of a fixed number of steps of Algorithm 1, used as a preconditioner for the linear system (1), is analyzed in terms of the reduction of the condition number. Performing steps of Algorithm 1 with is equivalent to computing , where

Matrix can be interpreted as an approximation of the matrix .

Thus, it is interesting to obtain an estimate of the condition number of the preconditioned matrix in order to check the effect of MHSS when used as preconditioner.

For the estimation, we need to recall some classical results about spectral radii and norms. For any matrix and any matrix norm, Gelfand's formula connects norm and spectral radius [27, 28]:

Notice that when , for large enough, .
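In standard notation (generic symbols, used here only for the reader's convenience), Gelfand's formula reads

$$\rho(A)\;=\;\lim_{k\to\infty}\bigl\|A^{k}\bigr\|^{1/k}$$

for any square matrix A and any matrix norm; in particular, if ρ(A) < 1, then ‖A^k‖ < 1 for all k large enough.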

Lemma 5. Let so that is such that ; then for any satisfying there is an integer such that

where is the condition number with respect to the norm and is defined by (18).

Proof. Observe that ; hence, using (18), the preconditioned matrix becomes . From Gelfand's formula (19) there exists such that

and from (21), by setting , the convergent series (see [29, 30]) gives a bound for the norm of the inverse

The thesis follows trivially from (21).

Corollary 6. Let so that has the property that ; then there exists a matrix norm such that the condition number of the matrix with respect to this norm satisfies

where .

Proof. Recall that for any matrix and there exists a matrix norm such that . This is a classical result of linear algebra; see, for example, Section 2.3 of [29] or Section 6.9 of [30]. Thus, given and choosing there exists a matrix norm such that . The proof follows from Lemma 5.

From Corollary 2 using the Euclidean norm and choosing such that for , the condition number of the preconditioned matrix satisfies

This estimate shows that using steps of MHSS, with large enough, the condition number of the preconditioned system can be bounded independently of the size of the linear system. In practice, when the reduction of the condition number is enough; in fact, Corollary 6 shows that, using the appropriate norm, the condition number of the preconditioned linear system is less than , independently of its size.

Remark 7. From [31], when the condition number is large, the estimate of the number of conjugate gradient (CG) iterations satisfies . The cost of the computation is proportional to the number of iterations, whereas the cost of each iteration is proportional to , where is the cost of an iteration of MHSS used as preconditioner, relative to the cost of a CG step. Thus, using from (24), when is large enough, a rough estimate of the computational cost is

Fixing , the cost (25), as a function of alone, is convex and has a minimum for which satisfies

The inverse function which satisfies is the right hand side of (26); moreover, is a monotone decreasing function, so that is monotone decreasing as well.
The table in equation (27) shows the constant as a function of , giving the critical values of such that steps of MHSS are better than steps,

This means that it is convenient to use steps in the MHSS preconditioner only if the cost of one step of MHSS is less than , that is, half of one step of the conjugate gradient method, a situation that never happens in practice.

According to Remark 7, only with is considered; that is, as preconditioner for the linear system (1).

3. Iterative Solution of Complex Linear System

From the previous section, it is clear that the use of one iteration step of MHSS is a good choice that lowers the condition number of the original complex linear system (1). The resulting preconditioner matrix is as defined in (16). Preconditioner will be used together with semi-iterative methods specialized in the solution of complex problems. Examples of those methods include COCG [4, 32] and COCR [5, 32–34]. They are briefly presented in Algorithms 2 and 3.

(Algorithm 2: preconditioned COCG. Initialization is followed by the main while loop, which at each pass updates the search direction, the approximate solution, the residual, and the preconditioned residual.)
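As a concrete illustration of Algorithm 2, the following MATLAB sketch implements a textbook preconditioned COCG iteration for a complex symmetric matrix A. The function handle applyPrec stands for the action of the preconditioner of (16); all names, the stopping test, and the tolerances are illustrative assumptions rather than the paper's notation.

    function x = cocg(A, b, applyPrec, tol, maxit)
      % Textbook preconditioned COCG for complex symmetric A (A = A.').
      % All inner products are bilinear (transpose, no complex conjugation).
      x  = zeros(size(b));
      r  = b - A*x;
      z  = applyPrec(r);                % z = P^{-1} r
      p  = z;
      rz = r.' * z;
      for k = 1:maxit
        Ap    = A*p;
        alpha = rz / (p.' * Ap);
        x     = x + alpha*p;
        r     = r - alpha*Ap;
        if norm(r) <= tol*norm(b), break; end
        z     = applyPrec(r);
        rznew = r.' * z;
        beta  = rznew / rz;
        p     = z + beta*p;
        rz    = rznew;
      end
    end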

(Algorithm 3: preconditioned COCR. Initialization is followed by the main while loop, which at each pass updates the search direction, the approximate solution, the residual, and the auxiliary matrix-vector products.)
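Along the same lines, a textbook COCR iteration in the spirit of [5] can be sketched in MATLAB as follows; for brevity the sketch is unpreconditioned, preconditioning being incorporated analogously to the COCG sketch above, and all names are illustrative.

    function x = cocr(A, b, tol, maxit)
      % Textbook COCR for complex symmetric A: conjugate residual recurrences
      % with the bilinear (unconjugated) inner product.
      x   = zeros(size(b));
      r   = b - A*x;
      p   = r;
      Ar  = A*r;
      Ap  = Ar;
      rAr = r.' * Ar;
      for k = 1:maxit
        alpha = rAr / (Ap.' * Ap);
        x = x + alpha*p;
        r = r - alpha*Ap;
        if norm(r) <= tol*norm(b), break; end
        Ar     = A*r;
        rArnew = r.' * Ar;
        beta   = rArnew / rAr;
        p   = r  + beta*p;
        Ap  = Ar + beta*Ap;
        rAr = rArnew;
      end
    end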

There are also other methods available for performing this task, for example, CSYM [6] or QMR [35].

The application of the preconditioner for those methods is equivalent to the solution of two real SPD systems depending on , as in Remark 4. Of course, one can use a direct method [20, 36–38], a conjugate gradient method [18] with incomplete Cholesky factorization, or an approximate inverse as a preconditioner [20, 39–43].

For very large linear systems, may be expensive or impractical to form; moreover, well suited variants of the incomplete Cholesky factorization or of the sparse inverse must be considered [44–46]. In the next section, a polynomial preconditioner based on orthogonal polynomials is presented; it allows solving very large complex linear systems with a simple algorithm that can be implemented in a few lines of code.

The suggested strategy to get the solution of the complex linear system of the form is to use an iterative method like COCG (Algorithm 2) or COCR (Algorithm 3) preconditioned with . The fully populated factorization of for large or huge linear systems is not practically computable, so some approximation must be used. The strategy can be split in two.

(1) Compute by using iterative methods. This can be organized as the solution of two SPD real systems, one for the real and one for the imaginary part. These systems can be solved efficiently by using a preconditioned conjugate gradient.
(2) Approximate with , "small", and use as an approximate preconditioner.

Approximations of strategy can be used as preconditioners for strategy . In strategy the approximation must not be too far from the inverse of ; otherwise the convergence will fail. In general, for strategy the choice of the preconditioner can be done as follows.

(i) If the complete Cholesky factorization of is available, use as preconditioner.
(ii) If the incomplete Cholesky factorization of is computationally not too expensive and is "small", then use as preconditioner.
(iii) If is "too large" or the solution of by means of PCG with as preconditioner is too costly, try some alternative. In particular try:
  (a) approximating using a sparse inverse [39, 47–50], algebraic multigrid [51–53], or any other good approximation;
  (b) using the polynomial preconditioner proposed in Section 5.

Incomplete LU decomposition is easy to set up and exhibits good performance, while better performance may be obtained by using an algebraic multigrid method (AMG); however, comparisons with AMG are difficult because of the large number of parameters to set up, such as the depth and shape of the W-cycle, the smoother (or relaxation method), and the coarse grid correction step.

If incomplete Cholesky fails or gives poor preconditioning, we propose a simple but effective polynomial preconditioner for which we give details such as stability and convergence properties. Notice that the optimal preconditioner is problem dependent, and a good preconditioner for one problem may be a bad preconditioner for another. The polynomial preconditioner proposed in Section 5, without being optimal, is extremely simple to implement, is easily parallelizable, and ensures convergence also for very large linear systems. Finally, polynomial preconditioners have the following benefits with respect to incomplete Cholesky and algebraic multigrid.

(i) If matrix-vector multiplication can be computed but the matrices and cannot be explicitly formed, the polynomial preconditioner is computable, while incomplete LU and algebraic multigrid cannot be built.
(ii) Polynomial preconditioners are easily parallelizable.
(iii) The polynomial preconditioner never breaks down, as can happen with incomplete Cholesky or algebraic multigrid.
(iv) The polynomial preconditioner can be extended also to problems with a singular matrix.

Strategy number 1, that is, computing by using iterative methods (e.g., PCG), is in general not competitive. In fact, to be competitive, the system should be solved with very few iterations, but this means that the preconditioner for is very good; therefore it can be used directly and successfully in strategy number 2.

4. Scaling the Complex Linear System

The polynomial preconditioner presented in the next section depends on the knowledge of an interval containing the eigenvalues. Scaling is a cheap procedure to recast the problem into one with eigenvalues in the interval . Using the diagonal matrix , the linear system (1) becomes

where is a real diagonal matrix with positive entries on the diagonal. The scaled system inherits the properties of the original one and still has the matrices and semi-SPD with SPD. The next lemma shows how to choose a good scaling factor, which is used in what follows.

Lemma 8. Let be a SPD matrix and a diagonal matrix with ; then the scaled matrix has the eigenvalues in the range .

Proof. Notice that is symmetric positive definite and is similar to . Moreover, the estimate follows trivially.
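A minimal MATLAB sketch of a diagonal scaling compatible with Lemma 8, under the assumption (made here only for illustration, since the actual choice of the diagonal is not restated above) that the diagonal entries are the absolute row sums of the SPD matrix, denoted S; the names S, A, B, and b are placeholders.

    % Diagonal scaling sketch: d(i) = sum_j |S(i,j)| guarantees, by a Gershgorin
    % argument, that the symmetrically scaled SPD matrix has eigenvalues in (0, 1].
    d     = full(sum(abs(S), 2));                       % absolute row sums of S
    n     = length(d);
    Dhalf = spdiags(1 ./ sqrt(d), 0, n, n);             % D^{-1/2}
    As    = Dhalf * A * Dhalf;                          % scaled real part
    Bs    = Dhalf * B * Dhalf;                          % scaled imaginary part
    bs    = Dhalf * b;                                  % scaled right hand side
    % After solving the scaled system for xs, the original unknown is x = Dhalf * xs.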

Assumption 9. From Lemma 8, the linear system (1) is scaled so as to satisfy the following:

(i) matrices and are semi-SPD;
(ii) matrix is SPD with eigenvalues in .

5. Preconditioning with Polynomials

On the basis of the results of the previous sections and of Assumption 9, the linear system to be preconditioned has the form , with and semi-SPD and SPD with eigenvalues distributed in the interval .

A good preconditioner for this linear system is one step of MHSS in Algorithm 1, which results in a multiplication by , where . Here the following polynomial approximation of is proposed:

The matrix polynomial must be an approximation of the inverse of , that is, , where is a polynomial of degree . A measure of the quality of the preconditioned matrix for a generic polynomial is the distance from the identity matrix:

where is the spectrum of . If, in particular, the preconditioned matrix is the identity matrix, then . Thus, to be effective, the polynomial preconditioner should concentrate the eigenvalues of around .

A preconditioner polynomial can be constructed by minimizing of (30) within the space of polynomials of degree at most . This requires the knowledge of the spectrum of the matrix , which is in general not available, making problem (30) unfeasible. The following approximation of quality measure (30) is feasible:

and it needs the knowledge of and of an interval , containing the spectrum of . The polynomial which minimizes for is well known and is connected to an appropriately scaled and shifted Chebyshev polynomial. The construction of such a solution is described in Section 5.1 and was previously considered by Ashby et al. [21, 22], Johnson et al. [23], Freund [54], Saad [24], and Axelsson [19]. The computation of needs the estimate of a positive lower bound of the minimum eigenvalue of , which is, in general, expensive or infeasible. The estimate cannot be used because for any polynomial . A different way to choose is analysed later in this section. Saad observed that the use of Chebyshev polynomials with the conjugate gradient method, that is, of the polynomial which minimizes the condition number of the preconditioned system, is in general far from being the best polynomial preconditioner, that is, the one that minimizes the CG iterations [24]. Practice shows that, although nonoptimal, Chebyshev preconditioners perform well in many situations. The following integral average quality measure, proposed in [22–24, 55], is a feasible alternative to (31):

The preconditioner polynomial proposed here is the solution of the minimization of quality measure (31) or (32):

The solution of problem (33) is detailed in the next sections. The first problem is solved by means of Chebyshev polynomials, while the second one is solved with the Jacobi weight.
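In generic notation (illustrative symbols, not the paper's own), the two minimization problems have the classical forms

$$\min_{\deg p\le m}\ \max_{\lambda\in[\varepsilon,\,1]}\bigl|1-\lambda\,p(\lambda)\bigr| \qquad\text{and}\qquad \min_{\deg p\le m}\ \int_{0}^{1}\bigl(1-\lambda\,p(\lambda)\bigr)^{2}\,w(\lambda)\,d\lambda,$$

where w is a nonnegative weight (the Jacobi weight of Section 5.2): the first is the uniform (min-max) criterion on an interval containing the spectrum, the second is the weighted least squares criterion.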

5.1. Chebyshev Polynomial Preconditioner

The solution of minimization problem (33) with quality measure is well known and can be written in terms of Chebyshev polynomials [21, 22]:

where is the th Chebyshev polynomial scaled to the interval . The polynomials satisfy the recurrence

where

From (34), the preconditioner polynomial becomes

and from (35) it is possible to give a recursive definition for too.

Lemma 10 (recurrence formula for the preconditioner). Given the polynomials defined by the recurrence

the polynomials and satisfy the recurrences

where

and satisfies the recurrence

Proof. Take the ratio

and notice that for all . The recurrence for is trivially deduced. From and by using (41),

dividing by , recurrence (39) is retrieved. The polynomials and are trivially computed.

Using Lemma 10, the polynomial preconditioner (34) satisfies recurrence (39) with

where is used and is computed by solving recurrence (35) for ; that is,

Numerical stability of recurrence (44) is discussed in Section 6. The estimation of is a complex task, and some authors perform it dynamically. As an alternative, the present approach is to move the eigenvalues of the coefficient matrix from the interval to a stripe , so that the condition number remains bounded. The value of is not determined from an estimate of the eigenvalues but from the degree of the preconditioner polynomial and from the amplitude of the stripe . Once is fixed, the higher the degree of the preconditioner, the lower the value of , which decreases to zero. Thus, if the degree of the preconditioner is high enough, the eigenvalues are moved into the interval . The important fact is that, even if the degree is not high enough to move the complete spectrum, the majority of the eigenvalues are moved into the desired stripe, improving the performance of the conjugate gradient method. An idea of this behaviour is shown in Figure 1 on the right. The end of this section is devoted to the explicit expression of the value of computed backwards from the value of : once the maximum condition number is fixed, it is possible to increase the degree of the polynomial preconditioner so that decreases until the whole spectrum of the matrix (or at least most of it) is contained in the specified range.

In the interval , the Chebyshev polynomial is bounded in the range , where , and solving for gives

5.2. Jacobi Polynomial Preconditioner

The solution of minimization problem (33) with quality measure is well known and can be written in terms of Jacobi orthogonal polynomials [24].

Definition 11. Given a nonnegative weight function (see [23, 24, 56]), the scalar product and the induced norm are defined as

where and are continuous functions. The orthogonal polynomials with respect to the scalar product are the polynomials which satisfy if .

The orthogonal polynomials with respect to the weight

defined in the interval , are the Jacobi polynomials, and they satisfy the recurrence (see [57]):

for

The class of polynomials of the form with can be thought of as polynomials with ; thus, the minimization problem (33) for can be recast as the following constrained minimization for :

The preconditioner polynomial is . The polynomial is expanded by means of Jacobi orthogonal polynomials with :

Making use of the orthogonality property with respect to the scalar product, one has

and thus the constrained minimum problem (51) is recast as

Problem (54) is solved by using a Lagrange multiplier; the first order conditions result in

Remark 12. The solution (55) is only formal and of little use from a computational point of view, because it requires computing the least squares polynomial explicitly. In fact, it is well known that the evaluation of a polynomial of high degree is a very unstable process. To make solution (55) practical, it is mandatory to obtain a stable recurrence formula that allows evaluating polynomials even of very high degree, for example, or more.

To find a recurrence for (55), it must be rewritten as a ratio of orthogonal polynomials, as for the Chebyshev preconditioner (34). To this end, some classical theorems and definitions on orthogonal polynomials are recalled here for convenience. The Christoffel-Darboux formulas and Kernel Polynomials, recalled without proofs (see [55, 57]), are used to build the recurrence.

Theorem 13 (Christoffel-Darboux formulas). Orthogonal polynomials with respect to the scalar product share the following identities:

Theorem 14 (Kernel Polynomials). Given orthogonal polynomials with respect to the scalar product , that is, with respect to the weight function , the polynomials

are orthogonal polynomials with respect to the scalar product defined as , that is, with respect to the weight function . Moreover, .

With the formulas of Christoffel-Darboux (56) and , , it is possible to rewrite (55) as

Using the Kernel Polynomials of the last theorem, expression (58) becomes

where are orthogonal polynomials with respect to the weight . In fact, the Kernel Polynomials with respect to satisfy .

The preconditioner polynomial can be computed recursively using Lemma 10, where the coefficients , , , and are computed from , , and . Given

the values of , , , and from Lemma 10 become

The only difficulty in the computation of the previous coefficients lies in the recursive solution of , which is omitted here for conciseness but which is a linear three-term recurrence with polynomial coefficients. Notice that ; thus the limit value of the above coefficients is evident.

5.3. Recurrence Formula for the Preconditioner

Looking at Algorithms 2 and 3, the polynomial preconditioner is applied to a vector, that is, . Thus, to avoid matrix-matrix multiplications, by defining and using Lemma 10 with (61), the following recurrence is obtained for :

where . Recurrence (62) is the proposed preconditioner, with coefficients given by (44) for the Chebyshev and by (61) for the Jacobi polynomials. Equation (62) yields Algorithm 4.

(Algorithm 4: application of the polynomial preconditioner (62) to a vector. After the initialization, a for loop over the polynomial degree carries out the three-term recurrence and the resulting vector is returned.)
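As an illustration of how a recurrence of this kind is applied in practice, the following MATLAB sketch uses the classical Chebyshev acceleration recurrence (see, e.g., Saad [24]): applying m steps of Chebyshev iteration to S z = v with zero initial guess realizes a degree-m Chebyshev polynomial approximation of the inverse of an SPD matrix S with eigenvalue bounds [lmin, lmax]. The coefficients below are the classical ones, not those of (44) or (61), and all names are illustrative.

    function z = cheb_prec(S, v, lmin, lmax, m)
      % Apply a degree-m Chebyshev polynomial approximation of inv(S) to v,
      % via m steps of classical Chebyshev acceleration on S*z = v, z0 = 0.
      theta = (lmax + lmin) / 2;        % center of the eigenvalue interval
      delta = (lmax - lmin) / 2;        % half width of the interval
      sigma = theta / delta;
      rho   = 1 / sigma;
      z = zeros(size(v));
      r = v;                            % residual of S*z = v for z = 0
      d = r / theta;
      for k = 1:m
        z = z + d;
        r = r - S*d;
        rhonew = 1 / (2*sigma - rho);
        d = rhonew*rho*d + (2*rhonew/delta)*r;
        rho = rhonew;
      end
    end

In the setting of Section 5, a routine of this kind can play the role of the approximate SPD solves required when the MHSS step is used as a preconditioner.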

6. Numerical Stability

Algorithm 4, that is, the application of preconditioner (62) to a vector , when rounding errors are taken into account, results in

where are the errors due to floating point operations, with an upper bound of such errors. The cumulative error satisfies the linear recurrence

The next definitions introduce the concepts of generalized and joint spectral radius, needed for the proof of the error bound; they can be found in [58].

Definition 15. A matrix set is bounded if there is a constant such that for all . An invariant subspace for is a vector space such that for all . The set is irreducible if the only invariant subspace is or .

Definition 16. The generalized spectral radius and the joint spectral radius of any set of matrices are defined as

Theorem 17. Let be a bounded and irreducible set of matrices with ; then there is a constant such that

for all .

Proof. It is Theorem 2.1 by [58] with a slight modification to match the present case.

Theorem 18. Recurrence (64) satisfies

where is the dimension of the linear system, is the amplitude of the stripe for the eigenvalues, and is an unknown constant coming from the norm inequalities, which is found experimentally to be small.

Proof. The matrix in (64) is, by Assumption 9, SPD with eigenvalues in . Thus , with orthogonal, that is, , and diagonal. Multiplying recurrence (64) on the left by , the following error estimate is obtained:

Focusing on the th component of the transformed error, and , a scalar recurrence is obtained:

Recurrence (69) is restated in matrix form as

with initial data . Notice that is bounded by

and thus and . From (70) it follows that

The set is bounded and irreducible; each matrix has spectral radius strictly less than 1 (see Lemma 19 for a proof); therefore the joint spectral radius is less than 1. From (72), with Theorem 17 and the infinity norm,

and, because of , it follows that . The term is bounded as

This shows that the error grows at most linearly.

The above relation shows that the recurrence is at worst linearly unstable; that is, the error grows at most linearly. The existence of is proved in the works of Rota and Strang [59], where the concept of joint spectral radius is introduced. The determination of is not possible in general, but practice reveals that it is small. In conclusion, even a polynomial preconditioner of very high degree can be employed with a stable computation.

Lemma 19. Given , , from (44) or (61), respectively, for the Chebyshev and the Jacobi preconditioner, the roots and of the characteristic polynomial of the homogeneous recurrence (69), that is,

satisfy and for all and .

Proof. Consider first the coefficients for the Jacobi polynomials defined in (61). If the roots are complex, then they must be conjugate, thus and , because the coefficients of the polynomial are real. In that case, the constant term of the polynomial is equal to the square of the modulus of the roots, ; thus it is easy to see that for all . Suppose now that the two roots and are real; multiplying the characteristic polynomial by yields, after some manipulation,

for

with , , , and strictly positive for all . The discriminant of the equation is and represents a convex parabola because . Its minimum is obtained for , which gives and so complex roots, but this case was already considered. Hence we can set . The maximum of is achieved at one of the extrema of the interval of definition of . A quick calculation shows that is maximum for , yielding a value of . Using and it is possible to bound the roots and :

The previous inequalities prove the lemma for the Jacobi preconditioner. Now consider the case of the coefficients of the Chebyshev polynomials defined in (44). Recall the expression for , and notice that, for , is bounded in , so that

is positive for even and negative for odd ; moreover, is monotone increasing for . In the case of complex roots, it was already shown that , with

In the rest of the proof it is useful to consider also the ratio , for , which corresponds to . The coefficients of (44), observing that , are simplified into

Polynomial (75) is rewritten as

and its roots are (using )

Looking at the discriminant , the minimum of the associated convex parabola is attained for . The corresponding value is

The value at the right extremum is also negative:

and in fact

For , with some manipulations, the roots of (82) are

Moreover, ; thus there exists such that . Thus for the roots are complex conjugate with modulus less than . For , where the roots are real, and its derivative satisfy

and thus for we have . Hence in a neighbourhood of , , and for there are no roots equal to 1 or 0; thus the roots of (82) are bounded in . In fact, by contradiction, let be a root; then by (82)

Moreover, is never a root of (82).
Thus the roots are bounded in the interval for and .

7. Numerical Tests

In this section a group of tests is proposed for the solution of a complex linear system of the form (1), that is, , where and are semi-SPD with SPD. The solvers used are COCG (Algorithm 2), COCR (Algorithm 3), and Matlab's standard QMR. The first two are tested together with different preconditioners, such as ILU/ILU0 and the present polynomial preconditioners. As a comparison, QMR is also added, with and without preconditioning. When feasible, the direct LU factorization is used; then two variants are proposed, an ILU(0) and an incomplete LU with threshold. More in detail, Table 1 presents the preconditioners:

(i) LU, the complete LU decomposition, for the matrix ;
(ii) ILU0, the incomplete ILU(0), for the matrix ;
(iii) ILU, the incomplete LU with threshold , for the matrix ;

and the use of the previous preconditioners with COCG, COCR, and QMR:

(i) COCG-ILU0 and COCG-ILU, the COCG solver preconditioned with ILU0 and ILU;
(ii) COCR-ILU0 and COCR-ILU, the COCR solver preconditioned with ILU0 and ILU;
(iii) QMR, the standard QMR implementation of Matlab, with no preconditioning;
(iv) QMR-ILU0 and QMR-ILU, the standard QMR implementation of Matlab, with ILU0 and ILU preconditioning.

Table 2 presents the results of the COCG and COCR iterative solvers with the preconditioner MHSS approximated with

(i) , COCG-MHSS-IC0, and COCG-MHSS-IC, the complete and incomplete Cholesky decompositions for the preconditioner defined in (16), the threshold used for IC being ;
(ii) , COCR-MHSS-IC0, and COCR-MHSS-IC, the complete and incomplete Cholesky decompositions for the preconditioner defined in (16), the threshold used for IC being .

Table 3 presents the results of the COCG and COCR iterative solvers with the preconditioner of Section 5:

(i) COCG-MHSS-J and COCG-MHSS-C, the Jacobi and Chebyshev polynomial approximations of MHSS used with the COCG iterative method;
(ii) COCR-MHSS-J and COCR-MHSS-C, the Jacobi and Chebyshev polynomial approximations of MHSS used with the COCR iterative method.

The incomplete LU decomposition was computed with the standard Matlab command ilu with parameters 'type' = 'crout' and 'droptol' = 1e-5; ILU0 was computed with ilu with parameter 'type' = 'nofill'; the incomplete Cholesky decomposition was computed with ichol with parameters 'type' = 'ict' and 'droptol' = 1e-5; for IC0 the parameter used was 'type' = 'nofill' (see the snippet below). The degrees used for the polynomial preconditioner are , , , , and . Due to the lack of complex symmetric matrices with SPD real and imaginary parts, it was decided to combine two real SPD matrices of similar dimensions (padding the smaller one with zeros, if necessary, to match the size of the bigger one). The real SPD matrices used are summarized in Table 4 and can be found in the NIST "Matrix Market" Sparse Matrix Collection [60] or in the University of Florida Sparse Matrix Collection [61]. As usual, "NNZ" means the number of nonzero elements, and it is understood that the matrices are square; hence only the number of rows is reported.
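For concreteness, the factorization calls described above take the following form in MATLAB, where K and S are placeholders for the (sparse) real matrices actually factorized in the tests:

    % Incomplete LU with threshold (ILU) and without fill-in (ILU0).
    [L,  U ] = ilu(K, struct('type', 'crout',  'droptol', 1e-5));
    [L0, U0] = ilu(K, struct('type', 'nofill'));

    % Incomplete Cholesky with threshold (IC) and without fill-in (IC0),
    % for the SPD matrix used in the MHSS preconditioner.
    Lc  = ichol(S, struct('type', 'ict',    'droptol', 1e-5));
    Lc0 = ichol(S, struct('type', 'nofill'));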

These matrices are used in pairs, where the first matrix of the pair corresponds to the real part and the second to the imaginary part. If the dimensions disagree, the smaller matrix is padded with zero rows and columns up to the size of the bigger one. The pairing of the matrices with the name of the corresponding test is summarized in Table 5.

The right hand side used for all tests, unless explicitly stated otherwise, is assumed to be , that is, for all components. A test from a real application is found in [2], from which the complex symmetric SPD matrices are provided with a specific right hand side. Size and name of the matrices are summarized in Table 6.

The list of the tests is as follows: for the first group (T tests) the right hand side was ; for the second group (S tests) the right hand side was the one prescribed in the paper [2].

7.1. Analysis of the Experiments

From the comparison of Tables 1, 2, and 3 it is clear that there is no preconditioner that is always better than the others. The optimal preconditioner is in general problem dependent; therefore, for general preconditioners, a good quality measure is robustness over different problems. Table 3 shows that the strategy presented in Section 2 is effective. In fact, when the ILU factorization is available and the iterations converge, the incomplete factorization preconditioner is faster than the polynomial preconditioner, but the proposed polynomial preconditioner is an effective alternative when ILU is not available or when it is not sufficient as a preconditioner. This is true also for the relatively small matrices with 5000 rows, where the number of iterations of the ILU-based methods is comparable with that of the polynomial preconditioners of low degree, for example, 10. Raising the degree of the polynomial corresponds to lowering the number of iterations needed by COCG/COCR. It is also apparent that it is not possible to go below a certain number of iterations even with a very high degree polynomial; this is evident, for example, in test T16k of Table 3 with both COCG and COCR and with both preconditioners. This is explained by the fact that the condition number of the preconditioned system is independent of the system size. Another behaviour common to most of the tests is the better performance of COCR over COCG: this can be appreciated by looking at Figures 2 and 3. They show the history of the residual for each iteration of both methods with the Jacobi and the Chebyshev preconditioners. In both cases the decrease of the residual is always slower for COCG than for COCR, and the number of iterations is higher.

8. Conclusions

A polynomial preconditioner was presented for the solution of the linear system , for complex symmetric, that is, such that , where , are real symmetric positive semidefinite matrices (semi-SPD) and is symmetric positive definite (SPD). Typical problems of this form come from the field of electrodynamics, where the involved matrices are complex but not Hermitian and standard methods cannot be used directly. This algorithm is suitable for large matrices, where the Cholesky decomposition, or its incomplete form, is too costly or infeasible. It works as a polynomial approximation of a single step of the MHSS method, but it is successfully applied as a preconditioner of conjugate gradient-like methods; in particular, it is shown how to use it together with COCG or COCR. Following the trend of recent years, but being aware of the criticism that arose in the 80s, the proposed preconditioner is computed through a recurrence of orthogonal polynomials and is proved to be stable. This allows employing polynomials of very high degree, and numerical tests confirm the good performance expected from the theory.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors wish to thank Professor R. Specogna for providing the matrices used for the Eddy Current test problems S13k, S32k, and S500k. They are also grateful for the positive and constructive feedback in testing the presented preconditioners on real problems.