Abstract
We provide new preconditioners with two variable relaxation parameters for the saddle point linear systems arising from finite element discretization of time-harmonic Maxwell equations in mixed form. The new preconditioners are of block-triangular form and are Schur complement-free. They extend the results in Cheng et al., 2009, Greif and Schötzau, 2007, and Huang et al., 2009. Theoretical analysis shows that all eigenvalues of the preconditioned matrices are tightly clustered, and numerical tests confirm our analysis.
1. Introduction
We consider preconditioning techniques for solving the saddle point linear systems arising from finite element discretization of the following time-harmonic Maxwell equations in mixed form [1–5]: find the vector field $u$ and the Lagrangian multiplier $p$ such that
$$\nabla\times(\nabla\times u) - k^2 u + \nabla p = f \ \ \text{in } \Omega, \qquad \nabla\cdot u = 0 \ \ \text{in } \Omega, \qquad u\times n = 0, \ \ p = 0 \ \ \text{on } \partial\Omega. \eqno(1.1)$$
Here, $\Omega$ is a simply connected polyhedral domain with a connected boundary $\partial\Omega$, and $n$ denotes the outward unit normal on $\partial\Omega$. The datum $f$ is a given source (not necessarily divergence-free), and the wave number $k$ satisfies $k^2 = \omega^2\varepsilon\mu$, where $\omega$ is the frequency, and $\varepsilon$ and $\mu$ are positive permittivity and permeability parameters, respectively.
In recent years, many techniques have been developed for solving Maxwell equations, such as geometric multigrid methods [6–8], algebraic multigrid methods [9], domain decomposition methods [4, 10–13], nodal auxiliary space preconditioning methods [14], and solution methods for the corresponding saddle-point linear systems [2, 3, 15]. Uzawa-type iterative methods [16, 17] and preconditioned Krylov subspace methods [18–24] can also be used to solve the saddle-point linear systems. Based on the previous works in [2, 3, 15], we further study solution methods for the saddle-point linear systems in this paper.
Using Nédélec elements of the first kind [25–27] for the approximation of the vector field and standard nodal elements for the Lagrangian multiplier yields the following saddle-point linear system:
$$\begin{pmatrix} A - k^2 M & B^T \\ B & 0 \end{pmatrix} \begin{pmatrix} u \\ p \end{pmatrix} = \begin{pmatrix} f \\ 0 \end{pmatrix}, \eqno(1.2)$$
where $A, M \in \mathbb{R}^{n\times n}$ and $B \in \mathbb{R}^{m\times n}$ are finite element matrices, and $f$ is a load vector associated with the source term. The matrix $A$ is symmetric positive semidefinite with nullity $m$ and corresponds to the curl-curl operator; $B$ is a discrete divergence operator with full row rank, and $M$ is a vector mass matrix.
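For concreteness, the block structure of (1.2) can be assembled as in the following sketch. The blocks are toy stand-ins (a one-dimensional difference operator and an identity mass matrix, with hypothetical sizes), not a real Nédélec/nodal discretization.

```python
import numpy as np
import scipy.sparse as sp

def assemble_saddle_point(A, M, B, k2):
    """Return the saddle-point matrix K = [[A - k2*M, B^T], [B, 0]] of (1.2)."""
    return sp.bmat([[A - k2 * M, B.T], [B, None]], format="csr")

# Toy stand-ins: A is symmetric positive semidefinite (nullity 1 here),
# M is a mass-matrix stand-in, and B has full row rank.
n, m = 5, 1
D = sp.diags([1.0, -1.0], [0, 1], shape=(n - 1, n))   # 1D difference operator
A = (D.T @ D).tocsr()                                 # SPSD "curl-curl" stand-in
M = sp.identity(n, format="csr")                      # mass-matrix stand-in
B = sp.csr_matrix(np.ones((m, n)))                    # "divergence" stand-in
K = assemble_saddle_point(A, M, B, k2=1.0)
assert K.shape == (n + m, n + m)
assert abs(K - K.T).max() < 1e-12    # K is symmetric (though indefinite)
```

Note that for $k^2 > 0$ the (1,1) block $A - k^2 M$ can be indefinite, which is exactly the difficulty this line of work addresses.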
For convenience, we denote the standard Euclidean inner product of vectors by $(\cdot,\cdot)$ and the null space of a matrix by $\mathrm{null}(\cdot)$. For a given positive (semi)definite matrix $W$ and a vector $x$, we define the (semi)norm
$$\|x\|_W = (Wx, x)^{1/2}. \eqno(1.3)$$
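In code, this (semi)norm can be written directly from the definition; a minimal sketch (the function name is ours):

```python
import numpy as np

def W_seminorm(W, x):
    """(Semi)norm ||x||_W = (W x, x)^{1/2} for positive (semi)definite W."""
    return float(np.sqrt(x @ (W @ x)))

W = np.diag([1.0, 4.0, 0.0])      # positive semidefinite, so only a seminorm
x = np.array([3.0, 0.0, 7.0])
assert W_seminorm(W, x) == 3.0    # the null direction of W does not contribute
```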
The matrices $A$, $M$, and $B$ have the following stability properties [3]. There exists an $\alpha > 0$, independent of the mesh size, such that
$$x^T A x \ge \alpha\, x^T M x \quad \text{for all } x \in \mathrm{null}(B). \eqno(1.4)$$
The matrix $B$ satisfies the discrete inf-sup condition
$$\inf_{0 \ne y} \sup_{0 \ne x} \frac{y^T B x}{\|x\|_{A+M}\,\|y\|_{L}} \ge \beta > 0, \eqno(1.5)$$
where the inf-sup constant $\beta$ depends only on the domain $\Omega$, and $L$ is the discrete Laplace operator introduced below.
If the wave number $k > 0$, then the (1,1) block $A - k^2 M$ of (1.2) is indefinite. For the difficulties of this problem and corresponding solution methods, we refer to [18, 28]. Recently, by using spectral equivalence properties similar to [4], Greif and Schötzau [3] constructed a block-diagonal preconditioner (1.6), in which the (2,2) block involves the discrete Laplace operator $L$ introduced in [3] and the (1,1) block is symmetric positive definite. As (1.6) is augmentation-free and Schur complement-free, this approach overcomes the difficulty of forming the Schur complement in general. However, the computational work of applying (1.6) may be too large. Using the fact that two of the matrices involved are spectrally equivalent, [3] considers the cheaper preconditioner (1.7) and shows that the eigenvalues of the preconditioned matrix are tightly clustered.
Based on the work of Greif and Schötzau [3], [2] gives block-triangular Schur complement-free preconditioners for the linear system (1.2), and it is shown that all eigenvalues of the proposed block-triangular preconditioning matrices are more tightly clustered. Compared with the restriction in [3], [2] considers the general case of the wave number. Furthermore, [15] provides block-triangular preconditioners with two variable relaxation parameters.
Based on the previous works [2, 3, 15] mentioned above, in this paper we give new preconditioners with two scaling parameters. The new block-triangular preconditioners for the general case contain the preconditioners discussed in [2]. Theoretical analysis shows that all eigenvalues of the preconditioned matrices are tightly clustered. Numerical experiments demonstrate the efficiency of the new method and show that the new block-triangular preconditioner is more efficient than the one proposed in [2].
The remainder of the paper is organized as follows. In Section 2, we establish new block-triangular preconditioners for the linear system (1.2) in the general case, and the corresponding spectral analysis is presented. In Section 3, we provide numerical examples to confirm our analysis. Finally, some conclusions are drawn in Section 4.
2. New Block-Triangular Preconditioners for Any Wave Number
We consider the saddle-point linear system (1.2) arising from the discretized time-harmonic Maxwell equations in mixed form (1.1) and assume that $k^2$ is not a Maxwell eigenvalue.
Greif and Schötzau [3] provide the block-diagonal Schur complement-free preconditioner as in (1.6). Using the fact that the matrices involved are spectrally equivalent, the augmentation-free and Schur complement-free preconditioner (1.7) is defined. Spectral analysis shows that the eigenvalues of the preconditioned saddle-point matrices are strongly clustered when $k$ is small.
Reference [2] provides block-triangular Schur complement-free preconditioners for the linear system (1.2). In particular, preconditioning matrices for the general case of the wave number are considered in (2.1).
For a particular case of the wave number, [15] provides the block-triangular preconditioner (2.2) for the linear system (1.2).
Based on the works in [2, 3, 15], we provide the following new block-triangular Schur complement-free preconditioners (2.3) and (2.4), which involve two scaling parameters. It is interesting to note that for particular values of the parameters, (2.3) and (2.4) reduce to the preconditioners of [2], respectively. We also see that, for certain parameter values, the preconditioner in (2.3) is different from that in (2.2).
We stress that (2.3) is not the preconditioner we eventually use in actual computation. It is only introduced to lay the theoretical basis and motivation for the preconditioner (2.4), which we use in practice. We note that the (1,1) block in (2.3) is symmetric positive definite when $k$ is sufficiently small [3], but this is not true when $k$ is large enough. However, this situation does not arise for the actual preconditioner: the (1,1) block in (2.4) is always symmetric positive definite. In this paper, we apply BiCGSTAB with the preconditioner (2.4) as an outer solver for the saddle-point system (1.2). The overall computational cost of the solution procedure then relies on how efficiently the two inner linear systems, one with the discrete Laplace operator and one with the (1,1) block, are solved. For the linear system arising from a standard scalar elliptic problem, many efficient solution methods exist. For solving the inner system with the (1,1) block, we refer to [6, 8, 9, 14], and some detailed numerical examples are provided in [29].
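To make the outer/inner structure concrete, the following sketch applies BiCGSTAB with a generic block-triangular preconditioner P = [[F, B^T], [0, -S]], applied through back-substitution. All matrices are illustrative stand-ins (F, B, and S are not the paper's blocks), the inner solves use direct factorizations rather than the multigrid solvers one would use in practice, and for compactness the demo forms the exact Schur complement S = B F^{-1} B^T, which the preconditioners studied here are precisely designed to avoid.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy saddle-point system K = [[F, B^T], [B, 0]]; names are illustrative.
n, m = 40, 10
rng = np.random.default_rng(0)
G = rng.standard_normal((n, n))
F = sp.csc_matrix(G @ G.T + n * np.eye(n))       # SPD (1,1)-block stand-in
B = sp.csc_matrix(rng.standard_normal((m, n)))   # full row rank (generically)
K = sp.bmat([[F, B.T], [B, None]], format="csr")
rhs = np.ones(n + m)

# "Inner solvers": one factorization of F, plus the exact Schur complement,
# formed here only because the demo is tiny.
F_solve = spla.factorized(F)
BT = B.T.toarray()
S = B @ np.column_stack([F_solve(BT[:, j]) for j in range(m)])

def apply_P_inv(r):
    """Back-substitution with the block-triangular P = [[F, B^T], [0, -S]]."""
    r1, r2 = r[:n], r[n:]
    z2 = np.linalg.solve(-S, r2)      # second block row: -S z2 = r2
    z1 = F_solve(r1 - BT @ z2)        # first block row:   F z1 = r1 - B^T z2
    return np.concatenate([z1, z2])

P_inv = spla.LinearOperator((n + m, n + m), matvec=apply_P_inv)
x, info = spla.bicgstab(K, rhs, M=P_inv, maxiter=200)
assert info == 0
assert np.linalg.norm(K @ x - rhs) <= 1e-4 * np.linalg.norm(rhs)
```

Replacing -S by a cheap spectrally equivalent matrix (the role played by the scalar Laplace operator in the preconditioners above) keeps the same back-substitution structure while avoiding the Schur complement.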
For the spectral analysis, we recall some results which are contained in the following lemma.
Lemma 2.1 (see [3]). The following relations hold: (i); (ii); (iii).
Theorem 2.2. Consider the saddle-point matrix in (1.2) preconditioned by (2.3). Then the preconditioned matrix has two distinct eigenvalues, with algebraic multiplicities n and m, respectively.
Proof. Suppose that is an eigenvalue of , whose eigenvector is . Then the corresponding eigenvalue problem is
From the second row we can obtain . By substituting it into the first row we have
It is straightforward to see that any vector satisfies (2.7) with the first eigenvalue, so it is indeed an eigenvalue of the preconditioned matrix. By a linear-independence argument similar to that in [3], we can show that this eigenvalue has algebraic multiplicity n.
Since there are linearly independent null vectors of , by Lemma 2.1,
By Lemma 2.1 (ii) and (iii), and using the inner product in (2.7) with , we have
Since the relevant quantity is nonzero, from (2.9) we obtain another eigenvalue of the preconditioned matrix, and we claim that this eigenvalue has algebraic multiplicity m.
Corollary 2.3. For a suitable choice of the parameters, the corresponding preconditioned matrix has only one eigenvalue, of algebraic multiplicity n + m.
Proof. From Theorem 2.2, we can easily obtain the corresponding conclusion.
Remark 2.4. From Theorem 2.2, the preconditioned matrix has precisely two distinct eigenvalues. Hence, if Krylov subspace methods are used to solve (1.2) with (2.3) as a preconditioner, the iteration requires merely two steps if round-off errors are ignored [30]. Moreover, from Corollary 2.3, for any wave number we can find a parameter value which makes the preconditioned matrix have only one eigenvalue. Therefore, our preconditioners are more efficient than the block-triangular preconditioner proposed in [2].
Remark 2.5. From (2.3) we know that for a particular parameter choice, the new preconditioner reduces to a block-diagonal preconditioner; we can then use MINRES to solve the linear system (1.2).
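When the preconditioner is symmetric positive definite and block-diagonal, the symmetric (indefinite) system (1.2) can indeed be solved with MINRES. A minimal sketch with illustrative blocks, again using the exact Schur complement only to keep the demo self-contained; by the classical Murphy-Golub-Wathen result the preconditioned matrix then has exactly three eigenvalues, 1 and (1 ± √5)/2:

```python
import numpy as np
import scipy.sparse.linalg as spla

# MINRES on a symmetric indefinite saddle-point matrix with an SPD
# block-diagonal preconditioner diag(F, S); blocks are toy stand-ins.
rng = np.random.default_rng(2)
n, m = 30, 8
G = rng.standard_normal((n, n))
F = G @ G.T + n * np.eye(n)                       # SPD (1,1)-block stand-in
B = rng.standard_normal((m, n))
S = B @ np.linalg.solve(F, B.T)                   # SPD (2,2) block
K = np.block([[F, B.T], [B, np.zeros((m, m))]])
P = np.block([[F, np.zeros((n, m))], [np.zeros((m, n)), S]])
P_inv = spla.LinearOperator((n + m, n + m),
                            matvec=lambda r: np.linalg.solve(P, r))
rhs = np.ones(n + m)
x, info = spla.minres(K, rhs, M=P_inv)
assert info == 0
assert np.linalg.norm(K @ x - rhs) <= 1e-4 * np.linalg.norm(rhs)
```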
Theorem 2.6. Consider the saddle-point matrix in (1.2) preconditioned by (2.4). Then two of the eigenvalues of the preconditioned matrix have algebraic multiplicity m each, and the rest of the eigenvalues satisfy (2.12), where the constant is defined as in (1.4).
Proof. Suppose that is an eigenvalue of , whose eigenvector is . Then the corresponding eigenvalue problem is
From the second row we can obtain . By substituting it into the first row we have
Consider the linearly independent null vectors of , by Lemma 2.1 (i),
where null(A) and null(B). By Lemma 2.1 (ii) and (iii), and taking the inner product in (2.14) with , we obtain
Since the relevant quantities are nonzero, these two values are eigenvalues of the preconditioned matrix, and by a linear-independence argument similar to that in [3], we claim that each eigenvalue has algebraic multiplicity m.
For the rest of eigenvectors we have . Noting that
by Lemma 2.1 (ii) and by taking the inner product in (2.14) with and using (2.17), we obtain
It is impossible to have the former case, since (2.18) then leads to a contradiction with our assumption. Nor can we have the latter case, since the left-hand side would be negative while the right-hand side is positive (by our assumption). Thus, we obtain the stated bound.
From (1.4), we recall that for any , with . Applying this to (2.18), we have . Since , we have and
Corollary 2.7. Let . Then the corresponding preconditioned matrix has only one eigenvalue with algebraic multiplicity 2m. The remaining eigenvalues satisfy (2.12).
Remark 2.8. From (2.4) we know that for a particular parameter choice, the new preconditioner reduces to a block-diagonal preconditioner; we can then use MINRES to solve the linear system (1.2).
Remark 2.9. From the proof of Theorem 2.6, we easily see that the new preconditioner is also efficient for other values of the wave number. From (2.12), we conclude that the closer the parameter is to its optimal value, the closer the eigenvalue bound is to 1; that is, the preconditioned matrix has more tightly clustered eigenvalues. For the general case, we can only obtain similar results under an additional restriction. The following numerical experiments show that the closer the parameter is to its optimal value, the fewer iterations are required for a fixed mesh. However, choosing one parameter too small may make the other too large and thus result in ill-conditioning of the preconditioner. So in practice we choose the parameters to be of moderate size.
3. Numerical Experiments
The test problem is the two-dimensional time-harmonic Maxwell equations in mixed form (1.1) on a square domain. We set the right-hand side function $f$ so that the exact solution is known analytically.
We consider five uniformly refined meshes, constructed by successively splitting each triangle into four triangles by joining the midpoints of its edges. Two of the five mesh grids are depicted in Figures 1 and 2. The lowest-order elements are used for the discretization. The matrix sizes on the different meshes are given in Table 1.
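The refinement step described above (each triangle split into four children via edge midpoints) can be sketched as follows; `verts`, `tris`, and the function name are illustrative, not from the paper's code.

```python
import numpy as np

def refine(verts, tris):
    """Split every triangle into four by joining the midpoints of its edges."""
    verts = [tuple(v) for v in verts]
    index = {v: i for i, v in enumerate(verts)}
    def midpoint(a, b):
        m = tuple((np.asarray(verts[a]) + np.asarray(verts[b])) / 2.0)
        if m not in index:               # reuse midpoints shared between triangles
            index[m] = len(verts)
            verts.append(m)
        return index[m]
    new_tris = []
    for a, b, c in tris:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_tris += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return np.array(verts), np.array(new_tris)

verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
tris = np.array([[0, 1, 2]])
v1, t1 = refine(verts, tris)
assert len(t1) == 4 and len(v1) == 6     # 4 children, 3 new midpoint vertices
```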
Our numerical experiments were performed using MATLAB on a PC with an Intel(R) Pentium(R) Dual CPU E2200 at 2.20 GHz and 1.00 GB of RAM. The purpose of our experiments is to investigate the convergence behavior of preconditioned BiCGSTAB for different choices of the parameters in the preconditioner. Thus, we apply exact inner solvers; the outer iteration starts from a zero initial guess and is stopped when the relative residual falls below a fixed tolerance.
From Theorem 2.6 and Corollary 2.7 we know that the preconditioned matrix has one eigenvalue of algebraic multiplicity 2m, and the remaining eigenvalues satisfy (2.12). Figure 3 depicts the eigenvalues of the preconditioned matrix for two parameter settings. We observe that the eigenvalue distribution denoted by the solid line is more tightly clustered than that denoted by the dotted line. From Remark 2.9 we know that the new preconditioner is also efficient for other values of the wave number. Figures 4, 5, 6, and 7 show the eigenvalue distributions of the preconditioned matrix for different parameter values. From Figures 4–7 we see that the closer the parameter is to its optimal value, the more tightly clustered the eigenvalues of the preconditioned matrix become.
Table 2 shows the outer iteration counts of BiCGSTAB with the block-triangular preconditioner for different parameter values. The iteration counts are denoted by Iter. We observe that, for a fixed mesh, the closer the parameter is to its optimal value, the fewer iterations are required. For comparison, we also give the outer iteration counts for the preconditioner of [2]; we refer to [2] for its definition. The results show that the new preconditioner is more efficient.
Tables 3 and 4 show the outer iteration counts of BiCGSTAB with the new preconditioner on different meshes, for two representative parameter settings. We observe that the outer iteration counts of the preconditioned BiCGSTAB are largely insensitive to changes in the mesh size.
4. Conclusions
We have investigated new block-triangular preconditioners with two variable relaxation parameters for solving the mixed formulation of the time-harmonic Maxwell equations. Our results extend the work in [2, 3, 15]. The preconditioned matrices are shown to have tightly clustered eigenvalues. We have also shown experimentally that the outer iteration counts of BiCGSTAB with the new preconditioner are largely insensitive to changes in the mesh size.
Acknowledgments
The authors thank the anonymous referees for their valuable comments and suggestions, which led to an improved presentation of this paper. This work is supported by the Chinese National Science Foundation Project (11161014), the Science and Technology Development Foundation of Guangxi (Grant no. 0731018), and the Innovation Project of Guangxi Graduate Education (Grant no. ZYC0430).