Abstract

An algebraic multigrid (AMG) method with an aggregation-based coarsening technique is applied in this paper to construct a better preconditioner for solving Helmholtz equations. The solution process consists of constructing the preconditioner by AMG and solving the preconditioned Helmholtz problems by Krylov subspace methods. In the setup phase of AMG, we employ the double pairwise aggregation (DPA) scheme, first proposed by Y. Notay (2006), as the coarsening method. We compare it with smoothed aggregation algebraic multigrid and also with shifted Laplacian preconditioners. The numerical results show that the DPA algorithm is a good coarsening choice in AMG for Helmholtz equations in terms of reducing time and memory. Spectral estimates of the systems preconditioned by the three methods and the influence of second-order and fourth-order accurate discretizations on the three techniques are also considered.

1. Introduction

In this paper, the time-harmonic wave equation in 2D homogeneous media is solved numerically. The underlying equation is the Helmholtz equation, which governs wave scattering and propagation phenomena arising in acoustic problems in many areas, such as geophysics, aeronautics, and optics. In particular, we seek solutions of the Helmholtz equation discretized by the finite difference method. The discrete problem becomes extremely large for very high wavenumbers, because the number of grid points per wavelength must be sufficiently large in order to obtain acceptable solutions. In this case, direct methods become too expensive, and iterative methods are the interesting alternative. However, Krylov subspace iterative methods are not competitive without a good preconditioner. In this paper, we consider an algebraic multigrid method with an aggregation scheme as a preconditioner to improve the convergence of Krylov subspace iterative methods.

In [1], Bayliss et al. proposed a preconditioner based on the Laplace operator for solving the discrete Helmholtz equation efficiently with CGNR. In [2], Laird and Giles proposed a preconditioner in which an extra term is added to the Laplace operator. Subsequently, in [3], Erlangga et al. generalized the above two kinds of preconditioners and obtained a new class of preconditioners, the so-called "shifted Laplacian" preconditioners of the form $-\Delta - (\beta_1 - \beta_2\mathrm{i})k^2$, where $\mathrm{i}$ is the imaginary unit. In 2006, Erlangga et al. [4] compared multigrid and incomplete LU factorization used to approximate the shifted Laplacian preconditioner to construct the final preconditioner for the inhomogeneous Helmholtz equation, and concluded that multigrid applied to approximate the shifted Laplacian preconditioner, combined with Bi-CGSTAB, results in a fast and robust method. In 2006, Erlangga et al. [5] proposed a multigrid cycle with de Zeeuw's prolongation operator, full weighting (FW) restriction, and damped Jacobi smoothing, and determined the smallest size of the parameter in front of the imaginary Helmholtz term in the preconditioner for which the multigrid method could still be employed successfully. In 2007, van Gijzen et al. [6] analyzed the spectrum of the discrete Helmholtz operator preconditioned with a shifted Laplacian and proposed an optimal value for the shift by combining the results of the spectral analysis with an upper bound on the GMRES residual norm. In 2009, Umetani et al. [7] proposed, based on [5], a multigrid cycle with an AMG prolongation operator, FW restriction, and incomplete LU postsmoothing for a fourth-order Helmholtz discretization. The fourth-order accurate shifted Laplacian preconditioner can easily be approximated by one multigrid cycle and allows a somewhat smaller imaginary shift parameter in the shifted Laplacian preconditioner, which improves the solver's convergence (especially for high wavenumbers on fine meshes). In 2007, Airaksinen et al. [8] proposed a preconditioner based on an algebraic multigrid approximation of the inverse of a shifted Laplacian for the Helmholtz equation; this is a generalization of the preconditioner proposed by Erlangga et al. in [5]. In 2010, Olson and Schroder [9] proposed a smoothed aggregation algebraic multigrid method for 1D and 2D scalar Helmholtz problems with exterior radiation boundary conditions, discretized by 1D finite differences and 2D discontinuous Galerkin methods. In the same year, Notay [10] proposed an aggregation-based algebraic multigrid (AGMG) method, in which the double pairwise aggregation scheme first proposed by Notay in [11] is used.

In this paper, we consider using the double pairwise aggregation algorithm for coarsening in the setup phase of a general algebraic multigrid method, in order to improve the solution of Helmholtz problems.

This paper is organized as follows. In Section 2, we discuss the Helmholtz equation, and the discrete finite difference formulations of second order and fourth order. The iterative solution methods with the three different kinds of preconditionings are presented in Section 3. Numerical results are reported in Section 4, and conclusions are given in Section 5.

2. The Helmholtz Equation and Discretizations

2.1. Mathematical Problem Definition

We solve wave propagation in a two-dimensional medium on a unit domain $\Omega$ governed by the Helmholtz equation
$$-\Delta \phi(x,y) - k^2(x,y)\,\phi(x,y) = g(x,y), \quad (x,y)\in\Omega, \qquad (2.1)$$
where $\Delta$ is the Laplace operator, $\phi$ represents the pressure field in the frequency domain, the source term is denoted by $g$, and $k$ is the wavenumber, which is a constant in a homogeneous domain. Otherwise, the wavenumber is space dependent because of a spatially dependent speed of sound $c(x,y)$ in a heterogeneous medium. With angular frequency $\omega = 2\pi f$ ($f$ is the frequency in Hertz), the wavelength is defined by $\lambda = c/f$, so the wavenumber is $k = \omega/c = 2\pi/\lambda$. The number of wavelengths in a domain of size $L$ equals $L/\lambda$. The number of points per wavelength, $n_p$, is typically chosen to be 10–12 points. A dimensionless discretization step reads $h = \lambda/n_p$, and therefore $kh = 2\pi/n_p$. With domain size $L = 1$, an accuracy requirement for second-order discretizations is that $kh$ be kept small enough for the chosen number of points per wavelength. We select combinations of the wavenumber $k$ and the discretization step $h$, displayed in Table 1, for the numerical experiments that follow.
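For illustration, the following small Matlab sketch chooses a mesh width for the unit square from a given wavenumber and a prescribed number of points per wavelength (the values of k and np are illustrative, not those of Table 1):

% Choose a mesh for the unit square from the wavenumber k and a desired
% number of points per wavelength np (illustrative values).
k  = 40;                      % wavenumber
np = 10;                      % points per wavelength
lambda = 2*pi/k;              % wavelength
n  = ceil(np/lambda);         % grid points per unit length, so that h <= lambda/np
h  = 1/n;                     % mesh width
fprintf('kh = %.3f (target 2*pi/np = %.3f)\n', k*h, 2*pi/np);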

The boundary condition can be either the Dirichlet boundary condition or the first-order radiation boundary condition, which read, respectively,
$$\phi = 0 \quad \text{on } \Gamma, \qquad (2.2)$$
$$\frac{\partial \phi}{\partial \nu} - \mathrm{i}k\phi = 0 \quad \text{on } \Gamma, \qquad (2.3)$$
where $\nu$ is the outward direction normal to the boundary $\Gamma$.

2.2. Finite Difference Discretizations

The Helmholtz equation is discretized either by a second-order or by a fourth-order finite difference scheme, resulting in a linear system of the form $A\phi_h = g_h$, where $\phi_h$ and $g_h$ represent the discrete frequency-domain pressure field and the source, respectively.

The well-known 5-point stencil corresponding to a second-order accurate discretization of the Helmholtz equation (2.1) reads
$$A_h \;\widehat{=}\; \frac{1}{h^2}\begin{bmatrix} 0 & -1 & 0\\ -1 & 4 - k^2h^2 & -1\\ 0 & -1 & 0 \end{bmatrix}.$$
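As an illustration, a standard Matlab assembly of this 5-point discretization on the unit square with homogeneous Dirichlet boundary conditions may look as follows (a sketch; the grid size n, the wavenumber k, and the point-source right-hand side are illustrative choices, not those of our experiments):

% Assemble the 5-point second-order Helmholtz matrix -Delta_h - k^2 I on the
% unit square, interior points only (homogeneous Dirichlet boundary values
% are eliminated).
n = 63;  h = 1/(n+1);  k = 40;                      % interior points per direction
e = ones(n,1);
T = spdiags([-e 2*e -e], -1:1, n, n);               % 1D second-difference matrix
L = (kron(speye(n), T) + kron(T, speye(n)))/h^2;    % 2D negative Laplacian
A = L - k^2*speye(n^2);                             % discrete Helmholtz operator
g = zeros(n^2,1);  g((n^2+1)/2) = 1/h^2;            % point source near the center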

The fourth-order accurate discretization, referred to as the HO discretization [12], of the Helmholtz equation (2.1) is given by a compact 9-point stencil; its coefficients can be found in [12].

Compared with a second-order discretization, a fourth-order discretization allows a smaller number of grid points per wavelength, and hence smaller matrices, for the same level of accuracy. This can lead to an algorithm that is more efficient in terms of solution accuracy versus computational cost. As shown in [7], the multigrid-based shifted Laplacian preconditioner combined with the fourth-order discretization is preferable.

The discretization of the boundary conditions is also considered; we can employ a first-order forward or backward difference scheme to approximate the first-order derivative in (2.3).

Next, substituting the boundary conditions, we obtain the linear system (2.7), $A\phi_h = g_h$, where $A$ is a large, sparse symmetric $N \times N$ matrix and $N$ is the number of grid points. The matrix $A$ remains positive definite as long as $k^2$ is smaller than the minimum eigenvalue of the discrete Laplacian, is indefinite with both positive and negative eigenvalues for large values of $k$, and is complex-valued in the presence of absorbing boundary conditions; in that case it is non-Hermitian. Clearly, the size of the linear system (2.7) becomes very large for high frequencies.

3. Iterative Solution Methods

3.1. Preconditioned Krylov Subspace Iterative Methods

Iterative methods for the linear system (2.7) within the class of Krylov subspace methods are based on the construction of iterates in the subspace
$$\phi^j \in \phi^0 + \mathcal{K}^j(A, r^0), \qquad \mathcal{K}^j(A, r^0) = \operatorname{span}\{r^0, Ar^0, \ldots, A^{j-1}r^0\},$$
where $\mathcal{K}^j(A, r^0)$ is the $j$th Krylov subspace associated with $A$ and $r^0$, and $r^0 = g_h - A\phi^0$ is the initial residual.

The Bi-CGSTAB algorithm [13] is one of the better-known Krylov subspace algorithms for non-Hermitian problems and has been used for Helmholtz problems, for example, in [5, 8]. One of the advantages of Bi-CGSTAB, compared to full GMRES, is its limited memory requirement. Bi-CGSTAB is based on the idea of computing two mutually biorthogonal bases for the Krylov subspaces associated with the matrix $A$ and its conjugate transpose, and it is easy to implement.

Without a preconditioner, Krylov subspace methods converge very slowly, or not at all, for the problems of interest [4]. So a preconditioner should be incorporated to improve the convergence of Krylov subspace methods. In left preconditioning, one solves the linear system premultiplied by a preconditioning matrix $M^{-1}$,
$$M^{-1}A\phi_h = M^{-1}g_h.$$

The challenge is to find a matrix $M$ whose inverse can be applied or approximated efficiently, such that $M^{-1}A$ has a spectrum that is favorable for Krylov subspace iterative solution methods.
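In Matlab, for instance, such a preconditioner can be passed to the built-in Krylov solvers either as a matrix or as a function handle that applies $M^{-1}$ to a vector; a minimal sketch, assuming A, g, and a preconditioning matrix M are available, is:

% Solve the discrete Helmholtz system with preconditioned Krylov methods
% (sketch; A and g as in Section 2, M an assumed preconditioning matrix).
tol = 1e-6;  maxit = 500;
% Preconditioner given as a matrix: the solver applies M\r internally.
[x1, flag1, relres1, it1] = bicgstab(A, g, tol, maxit, M);
% Preconditioner given as a function handle, e.g. one AMG cycle per call.
applyMinv = @(r) M \ r;                        % placeholder for an AMG cycle
[x2, flag2, relres2, it2] = gmres(A, g, [], tol, maxit, applyMinv);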

In this paper, we choose Bi-CGSTAB and GMRES with preconditioning to solve the discrete Helmholtz equations. In [3], GMRES is used to solve the Helmholtz equation with the shifted Laplacian preconditioner and is compared with Bi-CGSTAB. As a result, Bi-CGSTAB is preferable, since its convergence for heterogeneous high-wavenumber Helmholtz problems is typically faster than that of GMRES.

3.2. Shifted Laplacian (SL) Preconditioners

In [5], a shifted Laplacian operator was proposed as a preconditioner for the Helmholtz equation, with $M$ defined as a discretization of $-\Delta - (\beta_1 - \beta_2\mathrm{i})k^2$ and with boundary conditions set identically to those of the original Helmholtz equation. The reader may refer to the related material in [4, 7, 8, 14].

There are different choices of the parameter pair $(\beta_1, \beta_2)$: the Laplace preconditioner in [1], the Laird and Giles preconditioner in [2], the Erlangga et al. preconditioner in [3], and a basic parameter pair choice. In [5], Erlangga et al. evaluated the influence of the parameters $\beta_1$ and $\beta_2$ on the multigrid method that was applied to approximate the shifted Laplacian operator in constructing the final preconditioner, and proposed an optimal parameter pair for that solver. Based on [5], Umetani et al. [7] considered the fourth-order accurate finite difference discretization, for which a somewhat different parameter pair turns out to be the best choice. In this paper, we consider the shifted Laplacian preconditioners directly, without a multigrid or incomplete LU approximation, which is not the same as in [5, 7]. We therefore also include other values of $\beta_1$ and $\beta_2$ for comparison with the previously mentioned parameter pairs in the numerical experiments.
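With the shifted Laplacian written as above, the preconditioner is discretized in the same way as A. A Matlab sketch, reusing the negative Laplacian L and the wavenumber k from the assembly sketch in Section 2 and taking one parameter pair that is common in the literature (an illustrative choice, not necessarily the one used in our tables), reads:

% Shifted Laplacian preconditioner M = -Delta_h - (beta1 - 1i*beta2)*k^2*I (sketch).
beta1 = 1;  beta2 = 0.5;                          % a pair often used in the literature
M = L - (beta1 - 1i*beta2)*k^2*speye(size(L,1));
[x, flag, relres, iter] = gmres(A, g, [], 1e-6, 500, M);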

3.3. Smoothed Aggregation (SA) Preconditioning

In this section, we present an overview of the smoothed aggregation setup algorithm. The function of the SA setup phase is to construct coarse operators A_{l+1} through interpolation operators P_l of size n_l x n_{l+1}, where n_l and n_{l+1} are the sizes of two successively coarser grids. The initial matrix associated with the finest level equals A_0 = A. The inputs are the matrix A and one near-null-space candidate B that is intended to form the interpolation basis; B is an algebraically smooth mode that is slow to be reduced by relaxation on the fine mesh. The details are described in Algorithm 1.

(1) A_0 = A; B_0 = B; l = 0;
(2) While l < L
(3)  [neighbour, hoods] = aggregate(A_l);
(4)  [Phat_l, B_{l+1}] = qr(B_l, hoods);
(5)  S_l = smoother(A_l);
(6)  P_l = S_l * Phat_l;
(7)  A_{l+1} = P_l^H * A_l * P_l;
(8)  l = l + 1;
(9) Endwhile

When the current level l is not yet the coarsest level L, we obtain two sets, respectively named "neighbour" and "hoods." The set neighbour[i] contains the nodes that are strongly connected to node i, and hoods[j] contains the nodes that belong to the jth aggregate. Next, we restrict B_l to the aggregates according to "hoods," and a QR factorization is applied to it in order to obtain the tentative interpolation operator Phat_l and the next candidate B_{l+1}. Third, we obtain the smoothing operator S_l according to the corresponding algorithm in [15]. Fourth, we obtain the final interpolation operator P_l by applying the smoothing operator to the tentative interpolation operator, P_l = S_l * Phat_l. Finally, we can easily obtain the coarse operator A_{l+1} = P_l^H A_l P_l. Interested readers may consult the related material in [9]. We employ the SA setup algorithm in AMG to approximate the original system in order to obtain the preconditioner, and we call this process SA preconditioning.
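For illustration, the following Matlab sketch implements one level of this setup under simplifying assumptions: a single near-null-space candidate, an aggregate map agg assumed to be supplied by some aggregation routine, and a damped-Jacobi prolongation smoother with a user-chosen damping parameter omega (the names sa_level, agg, and omega are illustrative, not taken from our codes):

% One level of the smoothed aggregation setup (sketch).
function [P, Ac, Bc] = sa_level(A, B, agg, omega)
% A     : fine-level matrix (sparse, n x n)
% B     : near-null-space candidate (n x 1), e.g. ones(n,1)
% agg   : n x 1 vector, agg(i) = index of the aggregate containing node i
% omega : damping parameter of the prolongation smoother
n  = size(A,1);
nc = max(agg);                                  % number of aggregates = coarse size
% Tentative prolongator: column j spans B restricted to aggregate j,
% normalized so that Phat'*Phat = I (local QR for a single candidate).
Phat = sparse(1:n, agg, B, n, nc);
colnorm = sqrt(full(sum(abs(Phat).^2, 1)));
Phat = Phat * spdiags(1./colnorm.', 0, nc, nc);
Bc = colnorm.';                                 % next-level candidate (local R factors)
% Prolongation smoother: damped Jacobi applied to the tentative prolongator.
D = spdiags(full(diag(A)), 0, n, n);
P = Phat - omega * (D \ (A * Phat));
% Galerkin coarse operator.
Ac = P' * A * P;
end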

3.4. Double Pairwise Aggregation (DPA) Preconditioning

An algebraic coarsening algorithm sets up an interpolation operator P using only the information available in the fine-grid matrix A. The interpolation operator is an n x n_c matrix, where n_c is the number of coarse variables. By applying the interpolation operator, a vector defined on the coarse variable set is transferred to the fine grid. The coarse-grid matrix depends not only on the interpolation matrix P, but also on the restriction matrix R and the fine-grid matrix A. It is usual to take the restriction operator equal to the transpose of the interpolation operator, R = P^T. Therefore, the coarse-grid matrix is computed from the Galerkin formula
$$A_c = P^{T} A P.$$

Coarsening by aggregation needs to define aggregates $G_j$, which are disjoint subsets of the variable set. The number of these aggregates is the number of coarse variables $n_c$, and the interpolation operator is given by $P_{ij} = 1$ if $i \in G_j$ and $P_{ij} = 0$ otherwise.

Note that there is no need to explicitly form the interpolation operator P; the coarse-grid matrix is practically computed by
$$(A_c)_{kl} = \sum_{i \in G_k}\sum_{j \in G_l} a_{ij}.$$
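As an illustration, and in contrast with the smoothed prolongator of Section 3.3, the piecewise-constant prolongator and the Galerkin coarse matrix can be formed in Matlab as follows (a sketch assuming A and an aggregate map agg, with agg(i) the index of the aggregate containing node i; in practice A_c can be accumulated directly without forming P, but the explicit form is convenient for illustration):

n  = size(A,1);
nc = max(agg);                       % number of aggregates = number of coarse variables
P  = sparse(1:n, agg, 1, n, nc);     % (P)_{ij} = 1 if node i belongs to aggregate G_j
Ac = P.' * A * P;                    % each entry of Ac sums the fine-grid entries a_{ij}
                                     % over a pair of aggregates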

The double pairwise aggregation algorithm was first proposed by Notay in [11]; it employs two passes of a pairwise aggregation algorithm, with the result that the number of variables decreases by a factor slightly less than four in most cases. Accordingly, we adopt the double pairwise aggregation scheme in the setup process of AMG in order to construct a better preconditioner for Bi-CGSTAB applied to the discrete Helmholtz equations, and we call this process DPA preconditioning. The details of the aggregation algorithms can be found in [10, 11].
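The following Matlab sketch conveys the idea of two pairwise passes in a much-simplified form: each unaggregated node is greedily matched with its strongest remaining neighbour, and the second pass is applied to the auxiliary coarse matrix produced by the first. The coupling-strength tests and quality controls of the actual algorithm in [10, 11] are omitted, so this is only an illustration, not Notay's algorithm itself:

% Simplified double pairwise aggregation (sketch).
function agg = double_pairwise(A)
n    = size(A,1);
agg1 = pairwise(A);                        % first pairwise pass on A
P1   = sparse(1:n, agg1, 1, n, max(agg1)); % piecewise-constant prolongator
A1   = P1.' * A * P1;                      % auxiliary coarse matrix
agg2 = pairwise(A1);                       % second pairwise pass
agg  = agg2(agg1);                         % compose: aggregates of up to 4 nodes
end

function agg = pairwise(A)
n   = size(A,1);
agg = zeros(n,1);
nc  = 0;
for i = 1:n
    if agg(i) ~= 0, continue; end          % node already aggregated
    nc = nc + 1;
    agg(i) = nc;
    row = full(A(i,:));  row(i) = 0;
    cand = find(row ~= 0 & agg.' == 0);    % unaggregated neighbours of node i
    if ~isempty(cand)
        [~, p] = max(abs(row(cand)));      % strongest coupling in magnitude
        agg(cand(p)) = nc;                 % aggregate the pair {i, cand(p)}
    end
end
end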

4. Experiments

We consider the following problem on a uniform mesh with constant mesh size h in all directions. In the experiments, we compare the three preconditionings, combined with Bi-CGSTAB or GMRES, for solving the discrete Helmholtz equations: the shifted Laplacian preconditioner (SLP), the smoothed aggregation preconditioning (SAP), and the double pairwise aggregation preconditioning (DPAP), introduced in Section 3. The iterative method is terminated at the kth step once the relative residual drops below the prescribed tolerance. All numerical results were obtained by running the corresponding Matlab programs on a computer with an AMD Pentium Dual Core 4000+ CPU.

4.1. The Closed-off Problem and Numerical Results

We consider a problem in a rectangular homogeneous medium governed by the Helmholtz equation (2.1), with the Dirichlet condition $\phi = 0$ imposed at the boundaries.

Different grid resolutions are used to solve this problem for the various wavenumbers listed in Table 1.

Firstly, we observe the distribution of the eigenvalues of the systems preconditioned by the three different preconditionings in order to estimate which preconditioning combines best with Krylov subspace iterative methods. Since the spectral analysis of shifted Laplacian preconditioners with different parameter pairs has been considered in many papers [5–7, 14], we only present the distribution of eigenvalues of the system preconditioned by the shifted Laplacian preconditioner with one nearly optimal parameter pair, for both the second-order and the fourth-order accurate discretizations. For the aggregation-based preconditionings, we employ a two-grid process to approximate one cycle of AMG, so that the preconditioner combines the Galerkin coarse-grid correction with a smoothing step, where $I$ denotes the identity matrix and $S$ the smoothing matrix. Second-order and fourth-order accurate discretizations are taken into account. Next, we denote the preconditioners produced by the smoothed aggregation preconditioning and by the double pairwise aggregation preconditioning as $M_{\mathrm{SA}}$ and $M_{\mathrm{DPA}}$, respectively. Figures 1(a), 1(b), and 1(c) are, respectively, the spectral pictures of the systems preconditioned by the shifted Laplacian, $M_{\mathrm{SA}}$, and $M_{\mathrm{DPA}}$ with the second-order accurate discretization, and Figures 1(d), 1(e), and 1(f) are the corresponding spectral pictures with the fourth-order accurate discretization. Meanwhile, we compute the condition numbers of the preconditioned systems corresponding to Figures 1(a), 1(b), and 1(c).
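The spectra and condition numbers of Figure 1 can in principle be reproduced with a few lines of Matlab on a small grid (a sketch; A is the discrete Helmholtz matrix and M stands for any of the three preconditioners, applied here as an explicit matrix for simplicity):

% Spectrum and condition number of a preconditioned system on a small grid.
B = full(M) \ full(A);                 % explicitly form M^{-1}A (small problems only)
ev = eig(B);                           % eigenvalues of the preconditioned system
plot(real(ev), imag(ev), '.'); grid on
xlabel('Re(\lambda)'); ylabel('Im(\lambda)');
kappa = cond(B);                       % 2-norm condition number
fprintf('condition number = %.2e\n', kappa);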

Next, we present numerical results in Tables 2, 3, and 4 and explain the notation used in them. Tables 2, 3, and 4, respectively, show the computational performance of GMRES with SLPs, Bi-CGSTAB with SAP, and Bi-CGSTAB with DPAP for solving the closed-off problem with the finite difference discretizations, in terms of the number of iterations and the computational time needed to reach the specified convergence. We also tested Bi-CGSTAB with SLPs, but it leads to more iterations and a longer solve time, so we do not present the corresponding results. Meanwhile, we find that Bi-CGSTAB with SLPs is better than GMRES for part of the range of the shift parameters, and that the fourth-order accurate discretization speeds up the iterative solution. Nevertheless, we focus only on the second-order accurate discretization for the shifted Laplacian preconditioners, so the second-order accurate discretization is used in Table 2. "Iter" denotes the number of iterations, and "—" means that the corresponding experimental result is not given. "setup (s)" and "solve (s)" denote, respectively, the setup time of the preconditioner and the time of the iterative solution in seconds, and their prefixes distinguish the second-order and fourth-order finite difference discretizations described in Section 2. The values of "presmooth" and "postsmooth" denote the number of pre- and postsmoothing iterations in the solve process of AMG, "smoother" denotes the chosen smoothing method, and "setup_Alg" denotes the coarsening algorithm used in the setup process of AMG.

4.2. Analysis of Numerical Results

We analyze the numerical results in the following respects.

Firstly, from Figure 1, comparing pictures (a), (b), and (c), we find that the eigenvalues in (b) and (c) are farther from the origin and closer to one, but the eigenvalues in (b) include the origin. So we may deduce that DPAP and SAP are more beneficial for Krylov subspace iterative methods, and perhaps DPAP is better than SAP, since the eigenvalues in (b) include the origin. Next, we contrast (a), (b), and (c), respectively, with (d), (e), and (f). We notice that the higher-order accurate discretization may improve the solution efficiency of SLP, since the fourth-order discretization moves the eigenvalues in (d) closer to one; it is probably good for SAP, since the zero eigenvalue is avoided; and it is likely bad for DPAP, because a zero eigenvalue appears. As is well known, the larger the condition number of a matrix, the worse the numerical stability of the solution of the corresponding matrix equation. So the system preconditioned by the shifted Laplacian operator has worse numerical stability, which means that a small change of the right-hand side can bring a large change of the solution, since its condition number is much larger. This is perhaps the reason for using multigrid or incomplete LU to approximate the shifted Laplacian operator to obtain the final preconditioner, as in [4, 5, 7, 8].

Secondly, from Table 2, we can easily see that both the number of iterations and the iterative solution time grow considerably with increasing wavenumber. The shifted Laplacian preconditioner degenerates into the Laplace preconditioner when the shift parameters vanish. Next, we find that the number of iterations is smallest for one particular choice of the shift, but that choice is not the best in terms of the iterative solution time. In general, the best choice is the one that balances both the number of iterations and the iterative solution time.

Thirdly, from Table 3, we find that the fourth-order accurate discretization affects the iterative solution only slightly, being a little better than the second-order accurate discretization for some of the wavenumbers tested. However, only the low wavenumbers are computed in Table 3, because of the excessively long iterative solution time or the inadequate memory for higher wavenumbers. From Table 4, we see that the fourth-order accurate discretization does not evidently improve the efficiency of the iterative solution and that the second-order accurate discretization looks more stable with respect to the number of iterations. The fourth-order accurate discretization improves the efficiency of the iterative solution, both in setup time and in solve time, for the smaller wavenumbers, but costs more time and iterations for the higher wavenumbers. So we conclude that the second-order accurate discretization combined with DPAP is preferable.

Finally, comparing the results of Tables 2, 3, and 4 according to the number of iterations, we can easily see that Bi-CGSTAB with DPAP and the second-order accurate discretization is preferable. However, only a small number of presmoothing steps is chosen in Table 3, and when we increase the presmoothing for Bi-CGSTAB with SAP, the number of iterations becomes smaller but is still not better than that of Bi-CGSTAB with DPAP, while the added smoothing leads to a longer solve time. Next, we compare the solve time and the setup time with the second-order accurate discretization in Tables 2, 3, and 4. We see that GMRES with SLP needs the least solve time for the smallest wavenumbers tested, Bi-CGSTAB with SAP has the shortest iterative solution time for moderate wavenumbers, and Bi-CGSTAB with DPAP is the fastest for the largest wavenumbers. SAP has a much longer setup time than DPAP for the same wavenumber, and the setup time of SAP increases more rapidly than that of DPAP. Although the setup time makes the whole solution time longer, if several systems with the same coefficient matrix and different right-hand sides have to be solved, DPAP will excel for higher wavenumbers because the setup process is needed only once. The results for the largest wavenumber are only shown in Table 4; the reason is that the larger the wavenumber, the longer the solution time, so we do not present the results for the higher wavenumbers in Tables 2 and 3 due to the limited experimental conditions. Next, we observe that, with increasing wavenumber, the number of iterations grows between linearly and quadratically in Table 2, but it fluctuates between 4 and 7 in Table 3 and between 2 and 4 in Table 4. In other words, the number of iterations is essentially independent of the wavenumber when Bi-CGSTAB with DPAP or SAP is employed. In contrast to Bi-CGSTAB with SAP in Table 3, Bi-CGSTAB with DPAP in Table 4 makes the iterative solution time increase more slowly as the wavenumber increases.

5. Conclusions

As is well known, Helmholtz equations are difficult to solve, especially with high wavenumbers, because high wavenumbers make the corresponding matrices highly indefinite. Plain Krylov subspace iterative methods are not efficient for these problems, so preconditioning needs to be applied within Krylov subspace iterative methods in order to improve the solution efficiency. We consider using the algebraic multigrid method to construct the preconditioner and look for a concrete AMG method that is suitable for Helmholtz problems. According to the numerical results and analysis in Section 4, applying the double pairwise aggregation in the setup process of AMG finally gives better results. Meanwhile, the number of iterations is essentially independent of the wavenumber, so that Helmholtz equations with high wavenumbers can be solved more efficiently in terms of time and memory. Both Bi-CGSTAB with DPAP and Bi-CGSTAB with SAP require few iterations; however, SAP prolongs the setup time in contrast with DPAP. These two kinds of preconditionings are clearly more efficient than the shifted Laplacian preconditioners. In view of the above, we can draw the conclusion that the double pairwise aggregation algorithm in the setup process of AMG is a good choice for solving Helmholtz equations, especially with high wavenumbers. Although only the 2D closed-off problem with Dirichlet boundary conditions is tested in Section 4, this conclusion may also generalize to Helmholtz equations with other boundary conditions.

Acknowledgments

This research is supported by the NSFC (60973015, 61170311), the Sichuan Province Science and Technology Research Projects (2011JY0002, 12ZC1802), the Tianyuan Foundation, and the Chinese Universities Specialized Research Fund for the Doctoral Program (20110185110020).