Abstract

We study two parameterized preconditioners for iteratively solving the saddle point linear systems that arise from finite element discretization of the mixed formulation of the time-harmonic Maxwell equations in electromagnetic problems. We establish spectral properties of the preconditioned saddle point matrices, including their dependence on the choice of the parameter. We also provide results of numerical experiments that show the effectiveness of the proposed parameterized preconditioners.

1. Introduction

The time-harmonic Maxwell equations in lossless media with perfectly conducting boundaries [15] form a system of partial differential equations in mixed form with constant coefficients: find the vector field $u$ and the multiplier $p$ such that
$$\nabla \times (\nabla \times u) - k^{2}u + \nabla p = f \quad \text{in } \Omega, \qquad \nabla \cdot u = 0 \quad \text{in } \Omega, \qquad u \times n = 0, \quad p = 0 \quad \text{on } \partial\Omega. \tag{1}$$

Here, $\Omega \subset \mathbb{R}^{3}$ is a simply connected polyhedral domain with a connected boundary $\partial\Omega$, and $n$ denotes the outward unit normal on $\partial\Omega$. The datum $f$ is a given generic source (not necessarily divergence free), and the wave number $k \geq 0$ is determined by the frequency $\omega$ and a positive permeability parameter $\mu$.

If the lowest order Nédélec elements of the first kind [1] are used for the approximation of the electric field and standard nodal elements for the multiplier, then discretizing problem (1) yields the approximate solution of (1) through the saddle point linear system
$$\mathcal{A}\begin{pmatrix} x \\ y \end{pmatrix} \equiv \begin{pmatrix} A & B^{T} \\ B & 0 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} f \\ 0 \end{pmatrix}, \tag{2}$$
where $x$ and $y$ are finite arrays representing the finite element approximations of $u$ and $p$, and $f$ is the load vector associated with the datum. The matrix $A \in \mathbb{R}^{n \times n}$ is symmetric positive semidefinite with nullity $m$ and corresponds to the discrete curl-curl operator; $B \in \mathbb{R}^{m \times n}$ is a discrete divergence operator with full row rank.
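To make the block structure of (2) concrete, the following minimal sketch (in Python with SciPy, not the paper's MATLAB code) assembles a small synthetic saddle point matrix whose blocks have the stated properties; the blocks A and B here are random stand-ins for the finite element matrices, chosen only so that A is symmetric positive semidefinite with nullity m and B generically has full row rank.

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
n, m = 60, 12

# A = C^T C is symmetric positive semidefinite; with C of size (n-m) x n,
# its nullity is m for a generic C.
C = sp.random(n - m, n, density=0.2, random_state=0, format="csr")
A = (C.T @ C).tocsr()

# B: a generic m x n block, which has full row rank with probability 1.
B = sp.random(m, n, density=0.3, random_state=1, format="csr")

# Saddle point matrix of (2) and a right-hand side for the later sketches;
# for generic random blocks, K is nonsingular.
K = sp.bmat([[A, B.T], [B, None]], format="csr")
rhs = rng.standard_normal(n + m)
```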

For large, sparse, or structured matrices, iterative methods are an attractive alternative. Stationary iterative methods solve a linear system such as (2) with an operator approximating the original one; based on a measurement of the error in the result (the residual), a "correction equation" is formed, and the process is repeated. While these methods are simple to derive, implement, and analyze, convergence is guaranteed only for a limited class of matrices. Examples of stationary iterative methods are the Jacobi, Gauss-Seidel, and SOR methods [1, 2, 6–9] and the HSS splitting iterative method [7, 10]. Krylov subspace methods, by contrast, work by forming an orthogonal basis of the sequence of successive matrix powers applied to the initial residual (the Krylov sequence). The approximations to the solution are then formed by minimizing the residual over the subspace formed. Krylov subspace methods apply techniques that involve orthogonal projections onto subspaces of the form $\mathcal{K}_{j}(\mathcal{A}, r_{0}) = \operatorname{span}\{r_{0}, \mathcal{A}r_{0}, \ldots, \mathcal{A}^{j-1}r_{0}\}$, where $\mathcal{K}_{j}$ is the $j$th Krylov subspace associated with $\mathcal{A}$ and $r_{0}$ (see, e.g., [9, 11, 12]).
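As an illustration of the correction-equation viewpoint, here is a minimal Jacobi-type stationary iteration for dense NumPy arrays; the approximating operator (the diagonal of the matrix) is an arbitrary illustrative choice, and the iteration converges only for suitable matrices (e.g., strictly diagonally dominant ones).

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-8, maxit=500):
    """Stationary iteration with the splitting M = diag(A)."""
    D_inv = 1.0 / np.diag(A)          # approximating operator: diag(A)^{-1}
    x = x0.copy()
    for k in range(maxit):
        r = b - A @ x                 # residual: measures the current error
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k
        x = x + D_inv * r             # approximate solve of the correction equation
    return x, maxit
```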

The basic algorithm within this class is the conjugate gradient (CG) method, which has the nice properties that it uses only three vectors in memory and minimizes the error in the energy norm. However, the algorithm performs well mainly when the matrix is symmetric and positive definite. In cases where one of these two properties is violated, CG may break down. For nonsymmetric or indefinite linear systems, CG can be applied to the normal equations, since the resulting linear system is (positive) definite. Applying CG to the normal equations yields the CGNE and CGNR methods [12]. The CGNE and CGNR methods, which transform the system to a symmetric definite one and then apply the conjugate gradient method, are attractive for their coding simplicity, and the iterations are guaranteed to converge. The drawback is that the condition number of the normal equations equals the square of the condition number of the original matrix, which can slow down convergence drastically.
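A minimal CGNR-style sketch, applying SciPy's CG to the normal equations through a LinearOperator so that the (much denser) product $A^{T}A$ is never formed explicitly; it assumes A has full column rank, so that the normal equations are positive definite.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def cgnr(A, b, **kwargs):
    """Solve A x = b via CG on the normal equations A^T A x = A^T b."""
    n = A.shape[1]
    # Apply A^T A one factor at a time; its condition number is the
    # square of that of A, as noted in the text.
    normal = LinearOperator((n, n), matvec=lambda v: A.T @ (A @ v))
    x, info = cg(normal, A.T @ b, **kwargs)
    return x, info
```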

MINRES can also be used to solve indefinite symmetric linear systems, as can its generalization to the nonsymmetric case, GMRES [9, 12–14]. Both algorithms have the minimization property, but GMRES uses long recurrences. GMRES has the advantage that, theoretically, the algorithm does not break down unless convergence has been reached. The main problem with GMRES is that the amount of storage increases as the iteration number increases; therefore, the application of GMRES may be limited by computer storage. To remedy this problem, a restarted version, GMRES($m$), can be utilized. Since restarting removes the previous convergence history, GMRES($m$) is not guaranteed to converge, and there is no specific rule to determine the restart parameter $m$. In cases characterized by superlinear convergence, $m$ should often be chosen very large, which makes restarting much less attractive. Another way to remedy the storage problem in GMRES is to include a so-called "inner iteration", as in GMRESR and FGMRES [9, 12, 14].
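For illustration, a restarted GMRES run on the synthetic system assembled in the earlier sketch; the restart length 30 is an arbitrary choice, trading storage (one basis vector per inner step) against the loss of the convergence history at each restart.

```python
from scipy.sparse.linalg import gmres

# K and rhs as assembled in the earlier sketch.
x, info = gmres(K, rhs, restart=30, maxiter=1000)
# info == 0 signals convergence; info > 0 is the number of iterations
# performed without reaching the requested tolerance.
```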

The approximating operator that appears in stationary iterative methods can also be incorporated in Krylov subspace methods such as GMRES (alternatively, preconditioned Krylov methods can be considered as accelerations of stationary iterative methods), where it becomes a transformation of the original operator to a presumably better conditioned one. The construction of preconditioners is a large research area [3, 4, 8, 9, 13, 15–23]. Generally, preconditioning attempts to improve the spectral properties of the system matrix. For symmetric problems, the rate of convergence of Krylov subspace methods like CG or MINRES depends on the distribution of the eigenvalues of the coefficient matrix. A key to the rapid convergence of an iterative method for a linear system $\mathcal{A}z = b$ is the availability of an effective preconditioner $\mathcal{P}$. Each step of an outer iteration for solving the preconditioned linear system requires the solution of an inner linear system whose coefficient matrix is $\mathcal{P}$. Therefore, convergence of the outer iteration is fast if the eigenvalues of the preconditioned matrix $\mathcal{P}^{-1}\mathcal{A}$ are clustered, but careful attention must be paid to the conditioning and eigenvalue distribution of the matrix $\mathcal{P}$ itself, which determine the speed of convergence of the inner iteration; see [8] for a comprehensive survey.
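The following sketch shows the mechanics of supplying a preconditioner to a Krylov solver in SciPy: the operator M applies the action of $\mathcal{P}^{-1}$ via a sparse factorization. Here P is a placeholder for any nonsingular sparse approximation of the coefficient matrix (it is assumed given, not constructed here).

```python
from scipy.sparse.linalg import LinearOperator, gmres, splu

# P: a hypothetical nonsingular sparse approximation of K, assumed given.
P_lu = splu(P.tocsc())                      # inner solver: exact solves with P
M = LinearOperator(K.shape, matvec=P_lu.solve)
x, info = gmres(K, rhs, M=M, restart=30)    # outer iteration applies M once per step
```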

For solving the saddle point linear system (2), Greif and Schötzau [19] presented a block diagonal preconditioner of the form $\operatorname{diag}(A + B^{T}W^{-1}B,\; W)$, where $W \in \mathbb{R}^{m \times m}$ is a symmetric positive definite weight matrix. In 2007, Greif and Schötzau [3] applied this type of block diagonal preconditioner to the saddle point linear systems arising from the discretized time-harmonic Maxwell equations in mixed form (1). Rees and Greif [20] studied a block triangular preconditioner built from the same augmented $(1,1)$ block $A + B^{T}W^{-1}B$,
which has the property that the more ill conditioned the $(1,1)$ block $A$ of the saddle point matrix is, the faster a minimum residual solver, such as MINRES, converges. In 2008, Cao [21] gave two augmentation block triangular preconditioners
$$\mathcal{P}_{+} = \begin{pmatrix} A + B^{T}W^{-1}B & B^{T} \\ 0 & W \end{pmatrix}, \qquad \mathcal{P}_{-} = \begin{pmatrix} A + B^{T}W^{-1}B & B^{T} \\ 0 & -W \end{pmatrix},$$
and he has shown that if the nullity of $A$ is $m$, then the preconditioned matrices $\mathcal{P}_{+}^{-1}\mathcal{A}$ and $\mathcal{P}_{-}^{-1}\mathcal{A}$ each have only three distinct eigenvalues. Thus, preconditioned GMRES with either augmentation block triangular preconditioner converges within three iterations.
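A small dense check of this clustering claim, under the augmentation form displayed above, with the arbitrary choice W = I and the synthetic blocks A and B from the earlier sketch (dense arithmetic, small sizes only):

```python
import numpy as np

Ad, Bd = A.toarray(), B.toarray()     # small synthetic blocks from the earlier sketch
W = np.eye(m)                          # arbitrary illustrative weight matrix
AW = Ad + Bd.T @ np.linalg.solve(W, Bd)
P_plus = np.block([[AW, Bd.T], [np.zeros((m, n)), W]])
K_dense = np.block([[Ad, Bd.T], [Bd, np.zeros((m, m))]])

# Spectrum of the preconditioned matrix: for nullity(A) = m it collapses
# onto a few tightly clustered values, as the text describes.
eigs = np.linalg.eigvals(np.linalg.solve(P_plus, K_dense))
print(np.unique(np.round(eigs, 4)))
```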

Based on the preconditioners $\mathcal{P}_{+}$ and $\mathcal{P}_{-}$, we study two generalized block triangular preconditioners for iteratively solving the saddle point linear systems arising from finite element discretization of the mixed formulation of the time-harmonic Maxwell equations in electromagnetic problems. Spectral properties and computational performance of the generalized block triangular preconditioners are discussed in detail, including the choice of the parameter. We also give a practical choice of the optimal parameter. Finally, numerical experiments show the effectiveness of the proposed preconditioners.

The remainder of this paper is organized as follows. In Section 2, the new block triangular preconditioners with one parameter are proposed, and the spectral distribution of the new preconditioned matrices is analyzed. In Section 3, numerical experiments are provided to validate our results from Section 2. Finally, we draw some conclusions.

2. Block Triangular Preconditioners

It is well known that characterizing the rate of convergence of nonsymmetric preconditioned iterations can be a difficult task. In particular, eigenvalue information alone may not be sufficient to give meaningful estimates of the convergence rate of a method like preconditioned GMRES [9, 24]. Nevertheless, experience shows that, for many linear systems arising in practice, a well-clustered spectrum (away from zero) usually results in rapid convergence of the preconditioned iteration.

Next we develop some estimates for the eigenvalues of the preconditioned matrices $\mathcal{P}_{1}^{-1}\mathcal{A}$ and $\mathcal{P}_{2}^{-1}\mathcal{A}$, assuming exact solves for the augmented block $A + B^{T}W^{-1}B$. We show that, for this "ideal" version of the preconditioner, the eigenvalues of the preconditioned matrix become tightly clustered around $1$ when the nullity of $A$ is $m$.

2.1. Spectral Properties of the Preconditioned Matrices

We let $\mathcal{A} = \begin{pmatrix} A & B^{T} \\ B & 0 \end{pmatrix}$ represent the saddle point matrix of (2). We assume that $A$ is symmetric and positive semidefinite with nullity $r$ and that $B$ is of size $m \times n$ ($m < n$) and has full row rank. Note that the assumption that $\mathcal{A}$ is nonsingular implies that $\operatorname{null}(A) \cap \operatorname{null}(B) = \{0\}$ and $r \leq m$, which we use in our analysis below.

We present the following block triangular preconditioner:
$$\mathcal{P}_{1} = \begin{pmatrix} A + B^{T}W^{-1}B & B^{T} \\ 0 & tW \end{pmatrix}, \qquad t \neq 0.$$
We will study spectral properties and computational performance of the preconditioner $\mathcal{P}_{1}$. Numerical experiments also show that the preconditioner $\mathcal{P}_{1}$ with a parameter is more efficient than both $\mathcal{P}_{+}$ and $\mathcal{P}_{-}$ if the value of the parameter is chosen properly. Following the spirit of the proofs of [11, 25], we give the following result to describe the spectral distribution.

Theorem 1. Let $\mathcal{A}$ be nonsingular, and let its $(1,1)$ block $A$ be singular with nullity $r$. Then $1$ is an eigenvalue of $\mathcal{P}_{1}^{-1}\mathcal{A}$ of geometric multiplicity $n - m$, and each of the two roots $\lambda_{\pm}$ of $t\lambda^{2} + \lambda - 1 = 0$ is an eigenvalue of geometric multiplicity $r$. The remaining eigenvalues satisfy the relation
$$t\lambda^{2} + \bigl((t+1)\mu - t\bigr)\lambda - \mu = 0,$$
where $\mu$ are some generalized eigenvalues of the following generalized eigenvalue problem:
$$B^{T}W^{-1}Bx = \mu\,(A + B^{T}W^{-1}B)x.$$

Proof. Suppose that $\lambda$ is an eigenvalue of $\mathcal{P}_{1}^{-1}\mathcal{A}$ whose eigenvector is $(x^{T}, y^{T})^{T}$. Then it satisfies the generalized eigenvalue problem $\mathcal{A}(x^{T}, y^{T})^{T} = \lambda\,\mathcal{P}_{1}(x^{T}, y^{T})^{T}$, or
$$Ax + B^{T}y = \lambda\bigl((A + B^{T}W^{-1}B)x + B^{T}y\bigr), \qquad Bx = \lambda t W y.$$
As $\mathcal{P}_{1}$ is nonsingular, $\lambda \neq 0$. The second equality gives $y = (\lambda t)^{-1}W^{-1}Bx$, and substituting it into the first equality gives
$$\lambda t(1-\lambda)Ax = \bigl(t\lambda^{2} + \lambda - 1\bigr)B^{T}W^{-1}Bx.$$
If $Bx = 0$, then this relation implies that $(1-\lambda)Ax = 0$ with $Ax \neq 0$ (since $\operatorname{null}(A) \cap \operatorname{null}(B) = \{0\}$), from which it follows that $1$ is an eigenvalue of $\mathcal{P}_{1}^{-1}\mathcal{A}$ of geometric multiplicity $n - m$.
If $x \in \operatorname{null}(A)$, then the relation implies that $t\lambda^{2} + \lambda - 1 = 0$, from which we obtain that $\lambda_{\pm} = \bigl(-1 \pm \sqrt{1 + 4t}\bigr)/(2t)$ are two eigenvalues of $\mathcal{P}_{1}^{-1}\mathcal{A}$, each of geometric multiplicity $r$.
We have determined $n - m + 2r$ eigenvalues. Now we consider the remaining eigenvalues of $\mathcal{P}_{1}^{-1}\mathcal{A}$.
Suppose that $x \notin \operatorname{null}(A)$ and $Bx \neq 0$; then the relation above can be rewritten as
$$\lambda t(1-\lambda)\,(A + B^{T}W^{-1}B)x = \bigl((t+1)\lambda - 1\bigr)B^{T}W^{-1}Bx,$$
which implies that $B^{T}W^{-1}Bx = \mu\,(A + B^{T}W^{-1}B)x$ with $t\lambda^{2} + ((t+1)\mu - t)\lambda - \mu = 0$. Since $\mathcal{A}$ is assumed to be nonsingular, $A + B^{T}W^{-1}B$ is symmetric positive definite, and the matrix pencil $\bigl(B^{T}W^{-1}B,\, A + B^{T}W^{-1}B\bigr)$ is regular (cf. [11]). Thus, the generalized eigenvalue problem in Theorem 1 is well posed. It is easy to see that $\mu = 1$ is a generalized eigenvalue of geometric multiplicity $r$ and that $\mu = 0$ is a generalized eigenvalue of geometric multiplicity $n - m$. The remaining generalized eigenvalues determine the remaining eigenvalues of $\mathcal{P}_{1}^{-1}\mathcal{A}$ through the relation stated in the theorem.

If $t = 1$, then $\mathcal{P}_{1}$ apparently reduces to $\mathcal{P}_{+}$, and we have the following corollary.

Corollary 2. Let $\mathcal{A}$ be nonsingular, and let its $(1,1)$ block $A$ be singular with nullity $r$. Then $1$ is an eigenvalue of $\mathcal{P}_{+}^{-1}\mathcal{A}$ of geometric multiplicity $n - m$, and each of $(-1 \pm \sqrt{5})/2$ is an eigenvalue of geometric multiplicity $r$. The remaining eigenvalues satisfy the relation $\lambda^{2} + (2\mu - 1)\lambda - \mu = 0$, where $\mu$ are some generalized eigenvalues of the following generalized eigenvalue problem: $B^{T}W^{-1}Bx = \mu\,(A + B^{T}W^{-1}B)x$.

Corollary 3. Let $\mathcal{A}$ be nonsingular, and let its $(1,1)$ block $A$ be singular with nullity $m$. Then the preconditioned matrix $\mathcal{P}_{+}^{-1}\mathcal{A}$ has precisely three distinct eigenvalues: $1$ of geometric multiplicity $n - m$ and $(-1 + \sqrt{5})/2$ and $(-1 - \sqrt{5})/2$, both of multiplicity $m$.

If $t = -1$, then $\mathcal{P}_{1}$ apparently reduces to $\mathcal{P}_{-}$, and we have the following corollary.

Corollary 4. Let $\mathcal{A}$ be nonsingular, and let its $(1,1)$ block $A$ be singular with nullity $r$. Then $1$ is an eigenvalue of $\mathcal{P}_{-}^{-1}\mathcal{A}$ of geometric multiplicity $n - m$, and each of $(1 \pm \sqrt{3}i)/2$ is an eigenvalue of geometric multiplicity $r$. The remaining eigenvalues satisfy the relation $\lambda^{2} - \lambda + \mu = 0$, where $\mu$ are some generalized eigenvalues of the following generalized eigenvalue problem: $B^{T}W^{-1}Bx = \mu\,(A + B^{T}W^{-1}B)x$.

Corollary 5. Let $\mathcal{A}$ be nonsingular, and let its $(1,1)$ block $A$ be singular with nullity $m$. Then the preconditioned matrix $\mathcal{P}_{-}^{-1}\mathcal{A}$ has precisely three distinct eigenvalues: $1$ of geometric multiplicity $n - m$ and $(1 + \sqrt{3}i)/2$ and $(1 - \sqrt{3}i)/2$, both of multiplicity $m$.

We next consider another block triangular preconditioner:
$$\mathcal{P}_{2} = \begin{pmatrix} A + B^{T}W^{-1}B & (1+t)B^{T} \\ 0 & -tW \end{pmatrix}, \qquad t \neq 0.$$
We give the following result to describe the spectral distribution of the preconditioned matrix $\mathcal{P}_{2}^{-1}\mathcal{A}$.

Theorem 6. Let $\mathcal{A}$ be nonsingular, and let its $(1,1)$ block $A$ be singular with nullity $r$. Then $1$ is an eigenvalue of $\mathcal{P}_{2}^{-1}\mathcal{A}$ of geometric multiplicity $n$, and $1/t$ is an eigenvalue of geometric multiplicity $r$. The remaining eigenvalues satisfy the relation $\lambda = \mu/t$, where $\mu$ are some generalized eigenvalues of the following generalized eigenvalue problem:
$$B^{T}W^{-1}Bx = \mu\,(A + B^{T}W^{-1}B)x.$$

Proof. Suppose that $\lambda$ is an eigenvalue of $\mathcal{P}_{2}^{-1}\mathcal{A}$ whose eigenvector is $(x^{T}, y^{T})^{T}$. Then it satisfies the generalized eigenvalue problem $\mathcal{A}(x^{T}, y^{T})^{T} = \lambda\,\mathcal{P}_{2}(x^{T}, y^{T})^{T}$,
or
$$Ax + B^{T}y = \lambda\bigl((A + B^{T}W^{-1}B)x + (1+t)B^{T}y\bigr), \qquad Bx = -\lambda t W y.$$
As $\mathcal{P}_{2}$ is nonsingular, $\lambda \neq 0$. The second equality gives $y = -(\lambda t)^{-1}W^{-1}Bx$, and substituting it into the first equality gives
$$\lambda t(1-\lambda)Ax = (t\lambda - 1)(\lambda - 1)\,B^{T}W^{-1}Bx.$$
It is straightforward to see that any vector $x \in \mathbb{R}^{n}$, paired with $y = -t^{-1}W^{-1}Bx$, satisfies these equations with $\lambda = 1$, and thus $1$ is an eigenvalue of $\mathcal{P}_{2}^{-1}\mathcal{A}$. We claim that the eigenvalue $1$ has geometric multiplicity $n$, since for $\lambda = 1$ the second equality determines $y$ uniquely from $x$.
If $\lambda \neq 1$ and $x \in \operatorname{null}(A)$, then from the relation above we have $(t\lambda - 1)(\lambda - 1)B^{T}W^{-1}Bx = 0$. Since $\operatorname{null}(A) \cap \operatorname{null}(B) = \{0\}$, we must have $\lambda = 1/t$. Since $A$ is singular with nullity $r$, we know that $1/t$ is an eigenvalue of $\mathcal{P}_{2}^{-1}\mathcal{A}$ of geometric multiplicity $r$.
We have determined $n + r$ eigenvalues. Now we consider the remaining eigenvalues of $\mathcal{P}_{2}^{-1}\mathcal{A}$.
Suppose that $x \notin \operatorname{null}(A)$ and $\lambda \neq 1$; then the relation above can be rewritten as $\lambda t\,(A + B^{T}W^{-1}B)x = B^{T}W^{-1}Bx$, which implies that $\mu = t\lambda$. Since $\mathcal{A}$ is assumed to be nonsingular, the matrix pencil $\bigl(B^{T}W^{-1}B,\, A + B^{T}W^{-1}B\bigr)$ is regular (cf. [11]). Thus, the generalized eigenvalue problem in Theorem 6 is well posed. It is easy to see that $\mu = 1$ is a generalized eigenvalue of geometric multiplicity $r$, and $\mu = 0$ is a generalized eigenvalue of geometric multiplicity $n - m$. The remaining generalized eigenvalues determine the remaining eigenvalues of $\mathcal{P}_{2}^{-1}\mathcal{A}$ by $\lambda = \mu/t$.

If $t = -1$, then $\mathcal{P}_{2}$ apparently reduces to the block diagonal preconditioner $\operatorname{diag}(A + B^{T}W^{-1}B,\; W)$ of Greif and Schötzau [3, 19], and we have the following corollary.

Corollary 7. Let $\mathcal{A}$ be nonsingular, and let its $(1,1)$ block $A$ be singular with nullity $r$. Then $1$ is an eigenvalue of $\mathcal{P}_{2}^{-1}\mathcal{A}$ of geometric multiplicity $n$, and $-1$ is an eigenvalue of geometric multiplicity $r$. The remaining eigenvalues satisfy the relation $\lambda = -\mu$, where $\mu$ are some generalized eigenvalues of the following generalized eigenvalue problem: $B^{T}W^{-1}Bx = \mu\,(A + B^{T}W^{-1}B)x$.

Corollary 8. Let $\mathcal{A}$ be nonsingular, and let its $(1,1)$ block $A$ be singular with nullity $m$. Then $1$ is an eigenvalue of $\mathcal{P}_{2}^{-1}\mathcal{A}$ of geometric multiplicity $n$, and $-1$ is an eigenvalue of geometric multiplicity $m$.

If $t = 1$, then we have the following corollary.

Corollary 9. Let $\mathcal{A}$ be nonsingular, and let its $(1,1)$ block $A$ be singular with nullity $r$. Then $1$ is an eigenvalue of $\mathcal{P}_{2}^{-1}\mathcal{A}$ of geometric multiplicity $n$. The remaining eigenvalues satisfy the relation $\lambda = \mu$, where $\mu$ are some generalized eigenvalues of the following generalized eigenvalue problem: $B^{T}W^{-1}Bx = \mu\,(A + B^{T}W^{-1}B)x$.

Corollary 10. Let $\mathcal{A}$ be nonsingular, and let its $(1,1)$ block $A$ be singular with nullity $m$. Then the preconditioned matrix $\mathcal{P}_{2}^{-1}\mathcal{A}$ has precisely one eigenvalue, $1$, of geometric multiplicity $n$.

Remark 11. From Corollaries 8 and 10, if the nullity of $A$ is $m$, it is readily seen that the preconditioned matrix $\mathcal{P}_{2}^{-1}\mathcal{A}$ with $t = -1$ has precisely two distinct eigenvalues and that the preconditioned matrix $\mathcal{P}_{2}^{-1}\mathcal{A}$ with $t = 1$ has precisely one eigenvalue. Thus, we know that any preconditioned Krylov subspace method such as GMRES terminates in at most two steps if round-off errors are ignored.

Remark 12. From the proofs of Theorems 1 and 6, we see that we have not relied on any specific properties of the weight matrix; Theorems 1 and 6 hold for any positive definite matrix $W$. In the next section, we will discuss the choices of $W$.

2.2. Choices of the Weight Matrix

In this section we will discuss the choice of the weight matrix $W$. Given a weight matrix, an efficient method of factoring or iteratively solving systems with the preconditioner must be sought. These considerations are motivated by the fact that each iteration of a preconditioned Krylov subspace method requires solutions to linear systems of the forms $\mathcal{P}_{1}z = r$ and $\mathcal{P}_{2}z = r$; based on the block structure of $\mathcal{P}_{1}$ and $\mathcal{P}_{2}$, this requires solving systems with $A + B^{T}W^{-1}B$ and $W$.

If $W$ is diagonal or block diagonal with small blocks, the matrix $A + B^{T}W^{-1}B$ is also going to be sparse. A simple, one-parameter choice is a scaled identity. Letting $W = \gamma I$ with $\gamma > 0$, $\gamma$ could be chosen so that the augmenting term $B^{T}W^{-1}B$ is of norm comparable to that of $A$. See, for example, [18, 20] for a general algebraic discussion. For solving systems with $A + B^{T}W^{-1}B$ and $W$, an iterative method is possible. In an iterative scheme the inner iteration can be solved using the preconditioned GMRES method based on an incomplete Cholesky factorization.
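A sketch of this one-parameter choice follows; the Frobenius-norm scaling below is one plausible way to make the norms comparable, not a formula from this paper.

```python
import scipy.sparse as sp
from scipy.sparse.linalg import norm as spnorm

# ||B^T W^{-1} B|| ~ ||B||^2 / gamma, so matching it to ||A|| suggests:
gamma = spnorm(B, "fro") ** 2 / spnorm(A, "fro")   # hypothetical scaling rule
W = gamma * sp.identity(m, format="csr")
```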

In fact, for $W = \gamma I$, it follows from the block triangular structure that the action of the preconditioner on a given vector requires one application of $(A + B^{T}W^{-1}B)^{-1}$, one solve with $W$, and one sparse matrix-vector product with $B^{T}$. Clearly, the main issue is how to solve linear systems with coefficient matrix $A + B^{T}W^{-1}B$. For large problems these have to be solved by an inner iterative method. Even though the inner solves need not be performed to high accuracy, developing a robust and efficient iterative method for such problems is a nontrivial task. An effective multigrid method has been developed in [12]. We will further discuss the issue of inexact solvers in the section on numerical experiments.
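The back-substitution just described can be sketched as follows for the block triangular form with $A_{W} = A + B^{T}W^{-1}B$ and the scaled identity $W = \gamma I$ from the previous sketch; the solve with $A_{W}$ is performed here exactly by a sparse LU factorization.

```python
import numpy as np
from scipy.sparse.linalg import splu

# For W = gamma * I, the product W^{-1} B is just B / gamma.
AW = (A + B.T @ (B / gamma)).tocsc()
AW_lu = splu(AW)

def apply_precond(r):
    """Apply z = P^{-1} r for P = [[A_W, B^T], [0, W]] by back substitution."""
    r1, r2 = r[:n], r[n:]
    z2 = r2 / gamma                    # second block row: W z2 = r2
    z1 = AW_lu.solve(r1 - B.T @ z2)    # first block row:  A_W z1 = r1 - B^T z2
    return np.concatenate([z1, z2])
```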

Finally, we mention that other choices of $W$ are possible. For example, setting $W = BB^{T}$ has the advantage that the matrix $B^{T}(BB^{T})^{-1}B$ is an orthogonal projector onto the range of $B^{T}$, which is orthogonal to the null space of $B$, and since the null space of $A$ does not intersect with the null space of $B$ either, such a choice of $W$ is viable. See [11, 18, 20] for other choices of $W$.
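A quick numerical check of the projector property underlying this choice (dense arithmetic, small sizes only):

```python
import numpy as np

Bd = B.toarray()
# Q = B^T (B B^T)^{-1} B satisfies Q^2 = Q and Q^T = Q, i.e., it is the
# orthogonal projector onto range(B^T).
Q = Bd.T @ np.linalg.solve(Bd @ Bd.T, Bd)
print(np.allclose(Q @ Q, Q), np.allclose(Q, Q.T))   # True True
```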

3. Numerical Experiments

In this section, we illustrate the performance of our preconditioning approach on Maxwell equations in which the $(1,1)$ blocks of the associated matrices are highly singular. Our results include iteration counts, condition numbers, and some timings and eigenvalue plots. All the numerical experiments were performed with MATLAB 7.0 on a PC with a 2.2 GHz T7400 CPU. We first consider a finite element discretization of the time-harmonic Maxwell equations (1) on a square domain [17] and derive the saddle point linear system of the form (2).

In our first numerical examples the matrix $W$ in the augmentation block preconditioners is taken as the scaled identity $W = \gamma I$, whereas the positive parameter $\gamma$ is chosen so that the augmenting term is of norm comparable to that of $A$ (cf. [18]).

Figures 1, 2, 3, and 4 display the eigenvalues of the preconditioned matrices $\mathcal{P}_{1}^{-1}\mathcal{A}$ and $\mathcal{P}_{2}^{-1}\mathcal{A}$ for different values of $t$. As predicted by Theorems 1 and 6 and Corollaries 2–5 and 7–10, these figures show that the higher the nullity of the $(1,1)$ block $A$ is, the more strongly clustered the eigenvalues of the preconditioned matrices are; they are extremely close to $1$. Note that the clustering of the spectrum away from zero suggests that GMRES with the block triangular preconditioners will converge quickly.

3.1. Exact Solvers

In this section we first study the effectiveness of the preconditioners with "exact" solvers: that is, linear systems with coefficient matrix $A + B^{T}W^{-1}B$ are solved by a direct sparse LU factorization in combination with appropriate sparsity-preserving orderings.

In Tables 1–4, Dcond, PreDcond, and PreDcond1 denote the condition numbers of the system matrices $\mathcal{A}$, $\mathcal{P}_{1}^{-1}\mathcal{A}$, and $\mathcal{P}_{2}^{-1}\mathcal{A}$, respectively; Iter1, Iter2, and Iter3 denote the corresponding GMRES iteration counts. As is evident, our solver performs extremely well. From Tables 1 and 2, we can see that the iteration counts with the preconditioners $\mathcal{P}_{1}$ and $\mathcal{P}_{2}$ change only slightly as the parameter $t$ varies. From Table 1, the augmentation block triangular preconditioners $\mathcal{P}_{2}$ show more effective performance than the augmentation block triangular preconditioners $\mathcal{P}_{1}$. From Tables 3 and 4, we can see that the preconditioners $\mathcal{P}_{1}$ and $\mathcal{P}_{2}$ greatly reduce the condition number of the coefficient matrix $\mathcal{A}$ and that the iteration counts of preconditioned GMRES with $\mathcal{P}_{1}$ and $\mathcal{P}_{2}$ are greatly reduced on all grids. Moreover, from Tables 3 and 4, we can see that the preconditioned GMRES method with the preconditioner $\mathcal{P}_{2}$ is more effective than the preconditioned GMRES method with the preconditioner $\mathcal{P}_{1}$.

3.2. Inexact Solvers

In practice, using exact solvers in the application of the preconditioner may be too expensive. Here we consider replacing the exact solves with inexact ones, obtained via an inner preconditioned GMRES iteration. In this paper we explore the use of a drop tolerance-based incomplete LU factorization as the preconditioner for the inner iteration. During the implementation of our augmentation block preconditioners, we need the action $(A + B^{T}W^{-1}B)^{-1}r$ for a given vector $r$ or, equivalently, to solve the equation $(A + B^{T}W^{-1}B)z = r$, for which we use an incomplete LU factorization of $A + B^{T}W^{-1}B$ with a prescribed drop tolerance.
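A sketch of such an inner solver, combining SciPy's drop tolerance-based incomplete LU with an inner GMRES iteration for systems with $A_{W}$ (as formed in the earlier sketch); the drop tolerance 1e-3 is an illustrative value.

```python
from scipy.sparse.linalg import LinearOperator, gmres, spilu

# AW is the sparse CSC matrix A + B^T W^{-1} B from the earlier sketch.
ilu = spilu(AW, drop_tol=1e-3)              # incomplete LU with drop tolerance
M_inner = LinearOperator(AW.shape, matvec=ilu.solve)

def solve_AW(r1):
    # Inner preconditioned GMRES for A_W z = r1; the inner solve need not
    # be performed to high accuracy, as noted in the text.
    z, info = gmres(AW, r1, M=M_inner, maxiter=200)
    return z
```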

4. Conclusions

In this paper, we have considered preconditioned iterative methods applied to discretizations of the time-harmonic Maxwell equations in electromagnetic problems. We have analyzed the spectral properties as well as the computational performance of two types of block triangular augmentation preconditioners. A complete theoretical analysis shows that all eigenvalues of the preconditioned matrices are strongly clustered. A good parameter choice may substantially reduce the iteration counts and condition numbers. In particular, we have shown that, in cases where the $(1,1)$ block has high nullity, convergence of each of the two preconditioned GMRES iterations is guaranteed to be almost immediate. We have compared the performance of the various preconditioners with regard to the grids, condition numbers, and other problem parameters. Finally, the approach that we have investigated is parameter dependent, and it would be desirable to explore choices of the parameter that reduce the overall computational cost.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (nos. 10826056, 10901056, 11071079, and 11201362) and the Ningbo Natural Science Foundation (2012A610037).