International Scholarly Research Notices


Research Article | Open Access

Volume 2014 | Article ID 745849 | 15 pages | https://doi.org/10.1155/2014/745849

Preconditioned Krylov Subspace Methods for Sixth Order Compact Approximations of the Helmholtz Equation

Academic Editor: L. Hajdu
Received: 05 Aug 2013
Accepted: 10 Nov 2013
Published: 21 Jan 2014

Abstract

We consider an efficient iterative approach to the solution of the discrete Helmholtz equation with Dirichlet, Neumann, and Sommerfeld-like boundary conditions based on a compact sixth order approximation scheme and lower order preconditioned Krylov subspace methodology. The resulting systems of finite-difference equations are solved by different preconditioned Krylov subspace-based methods. In the analysis of the lower order preconditioning developed here, we introduce the term “kth order preconditioned matrix” in addition to the commonly used notion of an “optimal preconditioner.” The necessity of the new criterion is justified by the fact that the condition number of the preconditioned matrix in some of our test problems improves with the decrease of the grid step size; in a simple 1D case, we are able to prove this analytically. This new parameter could serve as a guide in the construction of new preconditioners. The lower order direct preconditioner used in our algorithms is based on a combination of the separation of variables technique and fast Fourier transform (FFT) type methods. The resulting numerical methods allow efficient implementation on parallel computers. Numerical results confirm the high efficiency of the proposed iterative approach.

1. Introduction

In recent years, the problem of increasing the resolution of existing numerical solvers has become an urgent task in many areas of science and engineering. Most of the existing efficient solvers for structured matrices were developed for lower-order approximations of partial differential equations. The need for improved accuracy of underlying algorithms leads to modified discretized systems and as a result to the modification of the numerical solvers (see, e.g., [1]).

The use of a lower-order preconditioner for the efficient implementation of high-order finite-difference and finite-element schemes has been under consideration for a long time (see, e.g., [2, 3]). In this paper, a compact sixth order finite-difference scheme is developed, and a lower-order direct solver is considered as a preconditioner for the efficient implementation of this compact scheme for the Helmholtz equation within the Krylov subspace framework. This approach allows us to utilize existing lower-order solvers, which significantly simplifies the implementation of higher resolution numerical methods.

The model problem considered in the paper is the numerical solution of the Helmholtz equation with the Dirichlet, Neumann, and/or Sommerfeld-like (radiation) boundary conditions where , is a complex valued constant, and , , and are different boundary sides of the rectangular computational domain .

It is known that for a given error level in the numerical approximation to the solution of the Helmholtz equation, the quantity needs to be constant, where is the order of the finite-difference scheme and is the grid size. This phenomenon is known as “pollution” [4, 5]. One way of reducing the pollution error is to increase the order of accuracy of the scheme. In this paper, a sixth order compact finite-difference scheme is considered to address this problem. In the case of the 3D Helmholtz equation with Dirichlet boundary conditions, this scheme was proposed by Sutmann in [6]. The 2D version of this scheme with Dirichlet and Neumann boundary conditions was developed in [7, 8]. In this paper, the method is extended to the explicit compact sixth order approximation of Neumann and Sommerfeld-like boundary conditions in the 3D case. The extension of the known approach to the Neumann boundary conditions is straightforward, but in the case of the Sommerfeld-like boundary conditions the method is nontrivial and requires the introduction of a new auxiliary function. In the case of the variable coefficient Helmholtz equation and Sommerfeld-type boundary conditions, a fourth order compact approximation scheme was considered in [9].
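The growth rate implied by the pollution rule can be checked with a short calculation. The sketch below is ours, not part of the paper, and assumes the pollution term has the form k^(p+1) h^p with some tolerance constant c; under that assumption, the number of grid points N = 1/h must grow like k^((p+1)/p), which is much slower for a sixth order scheme than for a second order one.

```python
import math

def grid_points_needed(k, p, c=1.0):
    """Smallest N = 1/h such that the assumed pollution term
    k**(p+1) * h**p stays below the tolerance c."""
    return math.ceil((k ** (p + 1) / c) ** (1.0 / p))

# Compare a second order (p = 2) and a sixth order (p = 6) scheme.
for k in (10, 40, 160):
    n2 = grid_points_needed(k, p=2)
    n6 = grid_points_needed(k, p=6)
    print(f"k = {k:4d}:  p=2 needs N ~ {n2:8d},  p=6 needs N ~ {n6:6d}")
```

For example, at k = 10 the second order scheme already needs roughly twice as many points per unit length as the sixth order scheme, and the gap widens rapidly as k grows.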

The resulting discretization leads to a system of linear equations with block 27-diagonal structure. In general, the matrix of this system is neither positive definite nor Hermitian. Hence, most iterative methods either fail to converge or converge too slowly, which is impractical. For the solution of this system we propose using a combination of Krylov subspace-based methods and the FFT preconditioner. Concerning other approaches for the solution of the problem (1)-(2), we refer to Bayliss et al. [10] for a preconditioned conjugate-gradient algorithm, Kim [11] for a domain decomposition method, and Douglas et al. [12] for an ADI algorithm. A very efficient multigrid method based on a shifted-Laplace preconditioner for the Helmholtz equation is presented in [13]. The analysis of the multigrid algorithms in the case of nonsymmetric and indefinite problems can be found in [14].

On the other hand, the solution of this problem by a direct method based on Gaussian elimination requires a prohibitive amount of additional storage and computer time and thus has limited use. The most promising results in the solution of a similar problem have been obtained by preconditioned Krylov subspace methods in [15]. In this paper we generalize some approaches developed for the second-order central difference discretization of the Helmholtz equation by Elman and O’Leary [16, 17], the author, and others [18] to the case of compact sixth order approximation scheme.

The key to the fast convergence of the suggested iterative method is the choice of the preconditioning matrix. In our algorithm we use a preconditioner based on the second order central difference approximation of the Helmholtz equation and corresponding boundary conditions. The inversion of the preconditioning matrix at each step of the Krylov subspace method is done by a direct solver based on the FFT technique (see, e.g., [19]) which requires operations, where is the number of grid points in each direction.
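The FFT-type direct solve behind the preconditioner can be illustrated in 1D. The sketch below is ours, not the authors' code, and is restricted to a real shift and Dirichlet conditions: the second order discrete Helmholtz operator is diagonalized by the type-I discrete sine transform, so each preconditioner solve costs O(N log N) operations.

```python
import numpy as np
from scipy.fft import dst

def helmholtz_dst_solve(f, h, ksq):
    """Solve (A2 - ksq*I) u = f, where A2 = tridiag(-1, 2, -1)/h^2
    is the 1D Dirichlet Laplacian on n interior points."""
    n = f.size
    j = np.arange(1, n + 1)
    # Eigenvalues of A2 - ksq*I in the sine eigenbasis.
    lam = (2.0 - 2.0 * np.cos(np.pi * j / (n + 1))) / h**2 - ksq
    fhat = dst(f, type=1)        # expand f in the sine eigenbasis
    uhat = fhat / lam            # divide by the eigenvalues
    return dst(uhat, type=1) / (2.0 * (n + 1))   # inverse DST-I
```

A quick way to validate such a solver is to compare its output against a dense `numpy.linalg.solve` on a small grid; the two agree to rounding error whenever ksq is not an eigenvalue of A2.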

Numerical experiments with test problems demonstrate the high resolution of the sixth order compact finite-difference scheme, as well as the computational efficiency of our preconditioned Krylov subspace-type numerical method. In most situations, the GMRES method demonstrates the best convergence properties. However, in the case of sufficiently fine grids, we propose using a simple stationary two-level method (see, e.g., [20]). This method arises naturally in the analysis of the convergence of the GMRES algorithm, so the choice of the parameters in this method is not based on the minimization of the spectral radius of the iteration matrix on each step but rather on the construction of a particular linear combination from a Krylov subspace. This is why, for the purpose of this paper, we took the liberty of calling this approach the simplified Krylov subspace (SKS) method. On sufficiently fine grids this method requires much less processor time than the GMRES method, though the number of iterations until convergence is larger than in the case of the GMRES method, as should be expected. Also, since the implementation of the SKS algorithm does not require the calculation of scalar products, it has greater potential than the GMRES method for implementation on parallel computers. A distinguishing feature of these approaches is that the number of iterations required for the convergence of the Krylov subspace iterations decreases as the size of the discretized system increases. To explain this fact, we introduce “the order of the preconditioned matrix” as a parameter quantifying the rate of convergence of the condition number of the preconditioned matrix to 1 on a sequence of grids. In some simple situations, we were able to find this parameter analytically. We believe that this parameter is more informative than the commonly used notion of “an optimal preconditioner” (see, e.g., [21, page 196]) in the case of lower-order preconditioners and may be used as a guide in the further development of preconditioners of similar type. We must note, however, that even in this paper this parameter has limited applicability.

This conclusion is based on the theoretical analysis of some simple situations and confirmed in our numerical experiments. Some preliminary numerical results on the application of this approach were presented at the 10th International Conference on Mathematical and Numerical Aspects of Waves, Vancouver, Canada, 24–29 July, 2011 [22].

The rest of the paper is organized as follows. In Section 2, the main idea of the proposed sixth order approximation compact method is presented in the case of the 1D problem. To analyze the convergence of the algorithms developed here, variations of known convergence estimates for the Krylov subspace methods are considered. In this section, simplified approaches based on the Krylov subspace iterations are also presented. Section 3 focuses on the development of the sixth order compact approximation scheme in the case of Neumann and Sommerfeld-like boundary conditions. In Section 4, the effectiveness of the proposed algorithms is demonstrated on a series of test problems.

2. A One-Dimensional Model Problem

Let and be matrices derived from the sixth and second order approximations to (1) and the Dirichlet boundary condition (2) using a mesh , and . The standard notation for the first and second order central differences at the th grid point is given by where . The finite difference operators , , , and used in the following sections are defined similarly. The sixth order approximation to the second derivative at the th grid point can be written as As usual, in the case of compact schemes, we need to use the original equation to find appropriate relations to eliminate the fourth and sixth derivatives in (4). By using the second order central difference of the fourth derivative of at and the expression for the fourth derivative of from (1), the relation (4) can be expressed in the form where . After using (7), the discretized system corresponding to the compact sixth order approximation scheme for (1)-(2) can be presented as where is the sixth order discrete approximation to . This system can also be rewritten in the form , where In a similar way, the preconditioning matrix based on the second order central difference approximation can be presented as Finally, the right preconditioned system can be written as

The main goal of this paper is to demonstrate the computational efficiency of a preconditioning technique based on the use of a lower-order approximation discrete system in the numerical implementation of higher order compact finite-difference schemes. To consider this construction in a more general form, we introduce the definition of a th order preconditioned system.

Definition 1. Given two nonsingular matrices and , a system in the form of (12) is said to be a th order preconditioned system if the matrix is diagonalizable and can be expressed as , where ; with , where does not depend on and is an matrix of eigenvectors of .

Using this definition we present the convergence analysis of the GMRES method applied to the system (12) in the following theorem.

Theorem 2. Let a system in the form of (12) be a th order preconditioned system. Then the th iteration of the GMRES method applied to this system satisfies the convergence estimate where and .

Proof. Since the matrix is diagonalizable, it is well known (see, e.g., [23, 24]) that the residual of the th iteration of the GMRES algorithm satisfies Consider the th degree polynomial , where and the coefficients satisfy . It is easy to see that the coefficients satisfy and the unique solution of this system is given by . Then the convergence estimate for the GMRES method becomes

The proof is somewhat trivial and could be significantly simplified, but we present it in this form so that it can be used later in the construction of the simplified iterative method based on it.

In the same way we can derive another useful estimate expressed in the following corollary.

Corollary 3. Let the matrix in the system (12) be expressible as , where , , and is the matrix of eigenvectors of . Then the th iteration of the preconditioned GMRES method applied to this system satisfies the convergence estimate

We can derive another well-known estimate based on a standard application of Chebyshev polynomials (see, e.g., [24]). We consider only the case of real valued matrices. The following theorem provides the details of the proposed technique.

Theorem 4. Let the matrix in the system (12) be diagonalizable and expressible in the form , where , , and is the matrix of eigenvectors of . Then the th iteration of the preconditioned GMRES method applied to this system satisfies the convergence estimate

Proof. This estimate follows immediately from the general result for complex valued matrices (see, e.g., [24, page 206]).

Next we apply the convergence estimates developed above to the analysis of the solution of the system (12).

Theorem 5. Given , let matrices and be defined by (9) and (11) with and , where is independent of . Then is a second order preconditioned system and one has the convergence estimate

Proof. The matrices and have the same set of eigenvectors as the matrix from (10). The eigenvectors are given by (see, e.g., [17])
The matrix is unitary, so . The eigenvalues of the matrices and are given by respectively. We can write the eigenvalues of the preconditioned matrix in the form The second term in the right-hand side can be estimated using , where and . So is a second order preconditioned matrix and using (13) we obtain the estimate stated in the theorem.
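The eigenstructure used in this proof can be verified numerically. The sketch below is illustrative (ours, for the 1D Dirichlet case): the second order matrix A2 = tridiag(-1, 2, -1)/h^2 has sine eigenvectors with eigenvalues (4/h^2) sin^2(j*pi*h/2), which is the standard result cited from [17].

```python
import numpy as np

n = 15
h = 1.0 / (n + 1)
# Second order 1D Dirichlet Laplacian.
A2 = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
      - np.diag(np.ones(n - 1), -1)) / h**2

i = np.arange(1, n + 1)
for j in (1, 5, 15):
    # Unit-norm sine eigenvector and the corresponding eigenvalue.
    v = np.sqrt(2.0 * h) * np.sin(i * j * np.pi * h)
    lam = (4.0 / h**2) * np.sin(j * np.pi * h / 2.0) ** 2
    assert np.allclose(A2 @ v, lam * v)
```

The same vectors diagonalize both the sixth and second order matrices in the paper, which is precisely why the ratio of their eigenvalues controls the preconditioned spectrum.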

The first condition of this theorem is a common natural restriction that comes into play even when one simply wants to visualize a numerical solution for qualitative analysis: with fewer than 5 points per half wavelength, the grid function is not an accurate representation of the solution. Moreover, to avoid the so-called “pollution” phenomenon, Bayliss et al. [5] introduced the restriction that should be constant; that is, to maintain a fixed accuracy of a scheme, the number of grid points must grow as , where is the accuracy order of the scheme. So, if one satisfies the restriction that avoids the “pollution” phenomenon, the first condition of the theorem is satisfied almost automatically and is not much of a restriction at all.

The second condition of the theorem is a requirement that avoids the discrete spectrum of the operator. Since the resolvent set of a bounded linear operator is open, we can always do this by introducing a small change to if necessary.

Next, we present how we can choose in some important cases.

In the simplest case of , we can take .

Consider how to get an estimate for in the case of a given and a sequence of grids with grid sizes . The first restriction of the theorem gives us the condition . We can see that if the in the argument of were to take on all real values from to , then the minimum of the expression under consideration would be zero.

Now let us denote and consider for which the expression is zero if . We consider the worst-case scenario to justify this approach for all . It is trivial to find that . So, it is clear that the minimum of the function occurs either at or at since is increasing on the interval . There exists a neighborhood of which includes both and such that which allows us to use the Taylor series to represent in this neighborhood. We use only the first two terms in the series; that is, , . Now we can substitute this expression. So, we just need to avoid the values of and for which one of the terms in the brackets is nonpositive. In some situations we would need to choose a sufficiently small grid step size for the coarsest grid in a sequence. For example, in the cases and , . This is independent of and can be used in our convergence analysis on a sequence of grids.

It follows from (19) that the number of iterations for the Krylov subspace algorithm decreases as the grid size decreases. This result proves that the algorithm developed here yields a very effective iterative technique for the implementation of a higher order approximation scheme. Later we will consider the extension of this method to 3D problems.

The proof of Theorem 2 suggests a simplified version of the Krylov subspace method for the solution of (12). Indeed, using the expression for the exact solution of the system (15), we have , for and the following recurrence expression for This allows us to derive the SKS algorithm presented as Algorithm 1.

(1) Let be the initial approximation, let the tolerance be ,
and let the maximum number of iterations be
(2) , and
(3)
(4) while   and   do
(5) 
(6)  
(7)  
(8)  
(9)  
(10)  
(11)  
(12) end while
(13) is the iterative solution of .

This is a simple stationary two-level method (see, e.g., [20]). However, the choice of the parameters in this algorithm is not based on the minimization of the spectral radius of the iteration matrix on each step but rather on the construction of a particular linear combination of vectors from a Krylov subspace. This is why, for the purpose of this paper, we call this approach the simplified Krylov subspace (SKS) method. The proposed method does not require an orthogonalization procedure; that is, it requires no inner products, which is attractive for implementation in a parallel computing environment. In addition, this method does not use estimates of the maximum and minimum eigenvalues of the preconditioned matrix, which is a drawback of the Chebyshev acceleration algorithm (see, e.g., [24]). The Chebyshev acceleration method is naturally constructed in the proof of Theorem 4, and we compare the numerical effectiveness of the two methods in Section 4. We note that in most numerical experiments the GMRES method exhibits the best convergence properties, but in some situations the SKS algorithm proves to be more efficient. Some such examples are considered in Section 4.
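The structure of such a stationary two-level iteration can be sketched as follows. This is a hedged illustration, not the SKS method itself: the actual SKS coefficients come from the proof of Theorem 2 (Algorithm 1), while here we use the simplest member of the family, preconditioned Richardson iteration x ← x + M⁻¹(b − A x) with a toy Jacobi-type preconditioner M = 4I, chosen only so that the iteration matrix I − M⁻¹A is a contraction.

```python
import numpy as np

def stationary_two_level(A, M_solve, b, tol=1e-10, maxit=500):
    """Return (x, iterations); M_solve(r) applies the preconditioner.
    Each step needs one preconditioner solve and no orthogonalization."""
    x = np.zeros_like(b)
    r = b - A @ x
    r0 = np.linalg.norm(r)
    its = 0
    while np.linalg.norm(r) > tol * r0 and its < maxit:
        x = x + M_solve(r)     # two-level correction step
        r = b - A @ x
        its += 1
    return x, its

# Toy usage: diagonally dominant A, so ||I - M^{-1}A|| < 1 for M = 4I.
n = 50
A = (np.diag(4.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))
b = np.ones(n)
x, its = stationary_two_level(A, lambda r: r / 4.0, b)
```

Note that, as for SKS, the inner loop contains no inner products between iteration vectors (the norm is only used for the stopping test), which is what makes such methods attractive on parallel machines.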

The SKS algorithm also gives a good criterion for evaluating the quality of a preconditioner in the form of the order of the preconditioned matrix (12). In this paper we calculate this order as follows. Consider two grids with the grid sizes and , where . Let the -norm of the residuals of the SKS method on the first two iterations be and on the first grid and and on the second grid. Let and . Now we can approximately calculate the order of the preconditioned matrix by We consider this parameter in the discussion of the results of the numerical experiments.
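The elided formula for this estimate can plausibly be read as follows; this is our assumption, stated explicitly, not the paper's exact expression: if q1 and q2 are the one-step residual reduction factors of the SKS method on two grids whose step sizes differ by a factor `ratio`, and q behaves like C·h^p, then p ≈ log(q1/q2)/log(ratio).

```python
import math

def preconditioning_order(r0_h, r1_h, r0_h2, r1_h2, ratio=2.0):
    """Estimate the order p from first-iteration residual norms
    on a grid with step h and on a refined grid with step h/ratio
    (assumes q = r1/r0 behaves like C * h**p)."""
    q1 = r1_h / r0_h      # reduction factor on the coarser grid
    q2 = r1_h2 / r0_h2    # reduction factor on the finer grid
    return math.log(q1 / q2) / math.log(ratio)
```

For instance, if halving the grid step cuts the reduction factor from 0.4 to 0.1, the estimated order is 2; a reduction factor that does not change with h gives order 0, the situation of a merely "optimal" preconditioner.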

3. Three-Dimensional Problems

3.1. A Sixth Order Approximation Compact Scheme

In this section we present a three-dimensional compact sixth order approximation finite-difference scheme that was first introduced in [6], and we develop a sixth order compact explicit approximation of the Neumann and Sommerfeld-like boundary conditions (2). In this discussion we consider a uniform grid .

First, we consider a finite-difference approximation of (1) in the form where and and

Using the appropriate derivatives of (1) we can write the sixth order compact approximation of the Helmholtz equation in the form For the two-dimensional case this scheme was proposed in [7]. In the three-dimensional case with Dirichlet boundary conditions, it was developed in [6]. We will consider the iterative implementation of this scheme based on preconditioned Krylov subspace-type algorithms.

3.2. Boundary Conditions

The implementation of the Dirichlet boundary conditions (2) is straightforward but the explicit compact approximation of Neumann and Sommerfeld-like boundary conditions requires careful consideration. The sixth order compact approximation of the Neumann boundary condition in the two-dimensional case was considered in [7]. Here we extend this approach to the three-dimensional problem.

Neumann Boundary Conditions. In this subsection the Neumann boundary conditions are considered in the form

We restrict our consideration to only one side of the computational domain . By using Taylor series, we can derive the sixth order approximation formula To express the third and fifth derivatives in (30) we can differentiate the original Helmholtz equation (1), assuming sufficient smoothness of the solution and the right-hand side. After substitution of these derivatives into (30), the resulting expression is

Next, we approximate the third order mixed derivatives in the second bracket of (31) by a fourth order approximation formula and in the second line using a second order approximation scheme, which yields Now we can simplify this and write

Finally, to complete the explicit scheme for the boundary conditions, we present the previous equation in the form Now by using (3) and replacing with , we can write

This formula allows us to eliminate the term in (28). The explicit implementation of the Neumann boundary conditions on the other boundary pieces can be conducted in a similar way.

Sommerfeld-Like Boundary Conditions. In this section we consider the implementation of the Sommerfeld-like boundary conditions (2) which are often used in scattering problems The difficulty in using similar methods to those described in the previous section can be seen in (35). The last term in this equation requires calculation of the fourth derivative of the right-hand side of (29) and if it depends on an unknown variable , the use of a compact sixth order approximation scheme becomes problematic. To avoid such a difficulty, we introduce a new variable . Then the Helmholtz equation becomes where . Now (36) can be rewritten in the form The sixth order approximation of (38) becomes Assuming sufficient smoothness of and and using (37) and (38), we can express the third and fifth derivatives of in the form

To preserve the sixth order compact approximation in (39), we need to use the fourth order approximation for in (40). The second order approximation is sufficient for the derivatives of in (41). First, let us consider the fourth order compact finite-difference approximation for . It can be presented in the form

Now we can use (38), (42) and the second order approximation of (41) to express (39) in the form

This equation gives an implicit compact sixth order approximation for (38). Next, the explicit implementation of the boundary conditions (38) similar to (35) is developed by using

Here, and are parameters. We also use and . We leave the first of these derivatives in the right-hand side but drop the latter two. Now, by using the second order finite-difference approximation in (44) and adding it to (43), we derive the formula

Next we choose parameters , , and to match the coefficients in (28). This choice is determined by the conditions

Using these parameters and replacing with , we write

This equation provides an explicit compact sixth order approximation for the boundary condition (36). At the upper boundary , the Sommerfeld-like boundary condition can be written as and the auxiliary substitution becomes . The rest of the derivation is very similar to the derivation for the case of the lower boundary. Implementations of the sixth order compact approximation of the boundary conditions in the other directions are also similar to the calculations already presented.

3.3. Fast Fourier Preconditioner

The main idea of this paper is to utilize the existing lower-order approximation direct solvers as preconditioners in the implementation of a higher resolution scheme. One of the most popular methods for the approximate solution of the Helmholtz equation is the second order central difference scheme. We consider the cases with Dirichlet or Neumann boundary conditions on the sides of the computational domain ; that is, or . At the top and the bottom of the computational domain, we assign one of the three boundary conditions under consideration: Dirichlet, Neumann, or Sommerfeld-like boundary conditions (2). In the case of Neumann and Sommerfeld-like boundary conditions, we use the second order central finite-difference approximation of the first derivative on all boundaries. To express the preconditioner in general form we introduce the matrices and , where , , and are parameters depending on boundary conditions. Next let and . Using Kronecker products, the preconditioning matrix can now be written in the form where in the case of Dirichlet boundary conditions at the sides of the computational domain and in the case of Neumann boundary conditions on the same boundaries. The number of grid points in the -direction is chosen in such a way that the grid step size is the same in all three directions. This condition is imposed so that the lower-order preconditioner and the sixth order approximation compact scheme developed in the previous section use the same grid points. Depending on the boundary conditions at the top and the bottom of the computational domain, the parameters and are defined in the following way: in the case of Dirichlet boundary conditions and ; in the case of Neumann boundary condition and ; and in the case of the Sommerfeld-like boundary conditions and . At each step of the iterative process, the solution to the second equation in (12) with the matrix (49) can be obtained by using FFT-type algorithms in operations.
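The Kronecker-product structure of this separable preconditioner can be sketched in code. The illustration below is ours: the boundary parameters alpha, beta, and gamma from the text are replaced by the pure Dirichlet case, so the operator reduces to the Kronecker sum A⊗I⊗I + I⊗A⊗I + I⊗I⊗A of 1D second order Laplacians. Its eigenvalues are all sums of triples of 1D eigenvalues, which is exactly the property that makes FFT-type direct solves possible.

```python
import numpy as np

def lap1d(n, h):
    """1D second order Dirichlet Laplacian on n interior points."""
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

n = 6
h = 1.0 / (n + 1)
A1 = lap1d(n, h)
I = np.eye(n)
# Kronecker sum = 7-point 3D Laplacian on the tensor-product grid.
P = (np.kron(np.kron(A1, I), I) + np.kron(np.kron(I, A1), I)
     + np.kron(np.kron(I, I), A1))

# Eigenvalues of the Kronecker sum are all sums lam_i + lam_j + lam_k.
lam1 = (4.0 / h**2) * np.sin(np.arange(1, n + 1) * np.pi * h / 2.0) ** 2
lam3 = np.sort((lam1[:, None, None] + lam1[None, :, None]
                + lam1[None, None, :]).ravel())
assert np.allclose(np.sort(np.linalg.eigvalsh(P)), lam3)
```

In the paper, the shift and the modified top/bottom block replace parts of this sum, but the separable tensor structure, and hence the fast transform solve, is preserved.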

3.4. Convergence Analysis of 3D Algorithm

In this section, the convergence of the proposed algorithms is considered in the case of the 3D Helmholtz equation with the Dirichlet boundary conditions (1) and (2) on the rectangular computational domain with uniform grid size . To ensure the uniqueness of the solution to the original boundary value problem, we assume that the coefficient is in the resolvent set of the Laplace operator defined on an appropriate function space. We also impose a common restriction on the number of grid points per wavelength; that is, . The convergence analysis of the proposed algorithms follows the same lines as in Theorem 5. The preconditioning matrix is given by (49) with and . To express the right-hand side of the sixth order approximation scheme given in (28), we use the notation introduced in (49). In addition, we define the matrix . Then the resulting matrix can be written as

Both matrices and have the same set of orthonormal eigenvectors (see, e.g., [17]):

The matrix of eigenvectors is unitary so the -norm of and is one. To present the eigenvalues of the matrices, we introduce the notation . Then the eigenvalues of the matrices and in the 3D case are given by

Unfortunately, the eigenvalues of these matrices do not satisfy the hypotheses of Theorem 2, but we can still apply Corollary 3 to prove the convergence of the GMRES method and the SKS algorithm by showing that , where , for all , in some simple cases.

Let us assume that ; that is, and . Then , for . It is also easy to see that and for the given values of . Now the eigenvalues of the preconditioned matrix can be written in the form

This gives us a presentation of the eigenvalues of the preconditioned matrix in the form , where , for all . Now we can apply Corollary 3 and obtain an estimate for the rate of convergence of the GMRES and SKS algorithms: where .

Similar to the 1D case, it is possible to show that this estimate holds for and a sequence of grids with the grid sizes if for some and sufficiently small . As in the case of the 1D problem, it follows from the fact that the resolvent set of a bounded linear operator is open. The proof is somewhat tedious but elementary, so we omit it. In Table 1 we present lower and upper bounds and for for different values of and for different grid step sizes. Also, the last two rows of Table 1 present possible choices for and for different .



Table 1 (last two rows):
8.00   4.00   1.80   1.12   2.9
1/64   1/128  1/256  1/256  1/512

This result is weaker than in the 1D case; that is, it guarantees that convergence is independent of the grid step, but it does not improve with the decrease of . In this case, is said to be “an optimal preconditioner” (see, e.g., [21, page 196]). This situation is typical for multigrid-type algorithms, but it is not what we would expect based on the 1D example. In the following section we present the results of numerical experiments on several test problems which confirm that the actual convergence of the presented methods accelerates with the decrease of the grid size; that is, the convergence in the numerical experiments significantly exceeds the estimate (54).

4. Numerical Results

In this section we present the results of numerical experiments which demonstrate the quality of the proposed numerical methods. The algorithms were implemented in Matlab 7.11.0 on an iMac with an Intel Core i7, 2.93 GHz processor. We also use the standard Matlab implementation of the GMRES method (the gmres function).
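The same preconditioned-GMRES setup can be reproduced outside Matlab. The sketch below (ours, not the authors' code) uses SciPy in 1D with real coefficients: the second order discrete operator serves as the preconditioner M, and, since the sixth order matrix from the paper is not reproduced here, a small hypothetical perturbation of M stands in for it.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import gmres, splu, LinearOperator

n = 200
h = 1.0 / (n + 1)
ksq = 25.0
# Second order discrete Helmholtz operator (the preconditioner).
A2 = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2 \
     - ksq * identity(n)
# Hypothetical stand-in for the higher order matrix: a small
# perturbation of A2, so the preconditioned spectrum clusters near 1.
A = A2 + 0.01 * identity(n)
b = np.ones(n)

lu = splu(A2.tocsc())   # direct preconditioner solve (FFT-type in the paper)
M = LinearOperator((n, n), matvec=lu.solve)
x, info = gmres(A, b, M=M, maxiter=100)
```

With the spectrum of M⁻¹A clustered near 1, GMRES converges in a handful of iterations, which is the behavior the paper's experiments exhibit on fine grids.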

4.1. 1D Test Problem

In the first series of numerical experiments, we consider the convergence of the compact sixth order approximation algorithm on a sequence of grids. In these tests we focus on the numerical solution of the 1D Dirichlet problem for the Helmholtz equation with zero boundary conditions on the interval and . The computational grid in our experiments is defined by