Abstract

We consider an efficient iterative approach to the solution of the discrete Helmholtz equation with Dirichlet, Neumann, and Sommerfeld-like boundary conditions based on a compact sixth order approximation scheme and lower order preconditioned Krylov subspace methodology. The resulting systems of finite-difference equations are solved by different preconditioned Krylov subspace-based methods. In the analysis of the lower order preconditioning developed here, we introduce the term “kth order preconditioned matrix” in addition to the commonly used notion of an “optimal preconditioner.” The need for the new criterion is justified by the fact that, in some of our test problems, the condition number of the preconditioned matrix improves as the grid step size decreases. In a simple 1D case, we are able to prove this analytically. This new parameter could serve as a guide in the construction of new preconditioners. The lower order direct preconditioner used in our algorithms is based on a combination of the separation of variables technique and fast Fourier transform (FFT) type methods. The resulting numerical methods allow efficient implementation on parallel computers. Numerical results confirm the high efficiency of the proposed iterative approach.

1. Introduction

In recent years, the problem of increasing the resolution of existing numerical solvers has become an urgent task in many areas of science and engineering. Most of the existing efficient solvers for structured matrices were developed for lower-order approximations of partial differential equations. The need for improved accuracy of the underlying algorithms leads to modified discretized systems and, as a result, to the modification of the numerical solvers (see, e.g., [1]).

The use of a lower-order preconditioner for the efficient implementation of high-order finite-difference and finite-element schemes has been under consideration for a long time (see, e.g., [2, 3]). In this paper, a compact sixth order approximation finite-difference scheme is developed, and a lower-order direct solver is considered as a preconditioner for the efficient implementation of this compact scheme for the Helmholtz equation within the Krylov subspace method framework. This approach allows us to utilize existing lower-order approximation solvers, which significantly simplifies the implementation process of higher resolution numerical methods.

The model problem considered in the paper is the numerical solution of the Helmholtz equation with the Dirichlet, Neumann, and/or Sommerfeld-like (radiation) boundary conditions where , is a complex-valued constant, and , , and are different boundary sides of the rectangular computational domain .

It is known that for a given error level in the numerical approximation to the solution of the Helmholtz equation, the quantity needs to be constant, where is the order of the finite-difference scheme and is the grid size. This phenomenon is known as “pollution” [4, 5]. One way of reducing the pollution error is to increase the order of accuracy of the scheme. In this paper, a sixth order compact finite-difference scheme is considered to address this problem. In the case of the 3D Helmholtz equation with Dirichlet boundary conditions, this scheme was proposed by Sutmann in [6]. The 2D version of this scheme with Dirichlet and Neumann boundary conditions was developed in [7, 8]. In this paper, the method is extended to an explicit compact sixth order approximation of the Neumann and Sommerfeld-like boundary conditions in the 3D case. The extension of the known approach to the Neumann boundary conditions is straightforward, but in the case of the Sommerfeld-like boundary conditions the method is nontrivial and requires the introduction of a new auxiliary function. In the case of the variable coefficient Helmholtz equation and Sommerfeld-type boundary conditions, a fourth order compact approximation scheme was considered in [9].
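
For reference, a commonly cited form of this condition is the standard pollution bound from the literature (the notation here is assumed, since the original formula is not reproduced: k denotes the wavenumber, h the grid step, and p the order of the scheme):

\[
k^{\,p+1} h^{\,p} = \text{const},
\]

so that, for example, a second order scheme requires k^3 h^2 to remain bounded, while a sixth order scheme relaxes this to k^7 h^6.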

The resulting discretization leads to a system of linear equations with block 27-diagonal structure. In general, the matrix of this system is neither positive definite nor Hermitian. Hence, most iterative methods either fail to converge or converge too slowly, which is impractical. For the solution of this system we propose using a combination of Krylov subspace-based methods and the FFT preconditioner. Concerning other approaches for the solution of the problem (1)-(2), we refer to Bayliss et al. [10] for a preconditioned conjugate-gradient algorithm, Kim [11] for a domain decomposition method, and Douglas et al. [12] for an ADI algorithm. A very efficient multigrid method based on a shifted-Laplace preconditioner for the Helmholtz equation is presented in [13]. The analysis of the multigrid algorithms in the case of nonsymmetric and indefinite problems can be found in [14].

On the other hand, the solution of this problem by a direct method based on Gaussian elimination requires a prohibitive amount of additional storage and computer time and thus has limited use. The most promising results in the solution of a similar problem have been obtained by preconditioned Krylov subspace methods in [15]. In this paper we generalize some approaches developed for the second-order central difference discretization of the Helmholtz equation by Elman and O’Leary [16, 17], the author, and others [18] to the case of the compact sixth order approximation scheme.

The key to the fast convergence of the suggested iterative method is the choice of the preconditioning matrix. In our algorithm we use a preconditioner based on the second order central difference approximation of the Helmholtz equation and corresponding boundary conditions. The inversion of the preconditioning matrix at each step of the Krylov subspace method is done by a direct solver based on the FFT technique (see, e.g., [19]), which requires operations, where is the number of grid points in each direction.
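
To make the preconditioning step concrete, the following sketch shows a 1D analogue of such an FFT-type solve: an exact solve with the second order central difference discretization of u'' + k^2 u = f under zero Dirichlet conditions, using a type-I discrete sine transform. It is written in Python/SciPy purely for illustration (the paper's implementation is in Matlab), and the function names and interface are assumptions of this sketch rather than anything taken from the paper.

import numpy as np
from scipy.fft import dst

def dst1(x):
    # type-I DST applied to real and imaginary parts separately,
    # so that complex data (complex k^2, complex residuals) is supported
    return dst(np.real(x), type=1) + 1j * dst(np.imag(x), type=1)

def helmholtz_fft_solve_1d(r, h, k2):
    # Exact solve of the second order discretization of u'' + k2*u = r on
    # n interior points with zero Dirichlet boundary conditions, h = 1/(n+1);
    # this plays the role of the preconditioning solve applied to a residual.
    n = r.size
    j = np.arange(1, n + 1)
    # eigenvalues of the second order operator in the sine basis
    lam = k2 - (4.0 / h**2) * np.sin(j * np.pi * h / 2.0) ** 2
    rhat = dst1(r)                        # transform the residual
    uhat = rhat / lam                     # divide by the eigenvalues
    return dst1(uhat) / (2.0 * (n + 1))   # DST-I is its own inverse up to 2(n+1)

In 3D the same idea is applied direction by direction, which is what gives the operation count quoted above.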

Numerical experiments with test problems demonstrate the high resolution of the sixth order compact finite-difference scheme, as well as the computational efficiency of our preconditioned Krylov subspace-type numerical method. In most situations, the GMRES method demonstrates the best convergence properties. However, in the case of sufficiently fine grids, we propose using a simple stationary two-level method (see, e.g., [20]). This method arises naturally in the analysis of the convergence of the GMRES algorithm. Thus, the choice of the parameters in this method is not based on the minimization of the spectral radius of the iteration matrix at each step but rather on the construction of a particular linear combination from a Krylov subspace. This is why, for the purpose of this paper, we took the liberty of calling this approach the simplified Krylov subspace (SKS) method. On sufficiently fine grids this method requires much less processor time than the GMRES method, though the number of iterations until convergence is still larger than for the GMRES method, as should be expected. Also, since the implementation of the SKS algorithm does not require the calculation of scalar products, it has greater potential than the GMRES method for implementation on parallel computers. A distinguishing feature of these approaches is that the number of iterations required for the convergence of the Krylov subspace iterations decreases as the size of the discretized system increases. To explain this fact, we introduce “the order of the preconditioned matrix” as a parameter that quantifies the rate of convergence of the condition number of the preconditioned matrix to 1 on a sequence of grids. In some simple situations, we were able to find this parameter analytically. We believe that this parameter is more informative than the commonly used notion of an “optimal preconditioner” (see, e.g., [21, page 196]) in the case of lower-order preconditioners and may be used as a guide in the further development of preconditioners of similar type. We must note, however, that even in this paper this parameter has limited application.

This conclusion is based on the theoretical analysis of some simple situations and confirmed in our numerical experiments. Some preliminary numerical results on the application of this approach were presented at the 10th International Conference on Mathematical and Numerical Aspects of Waves, Vancouver, Canada, 24–29 July, 2011 [22].

The rest of the paper is organized as follows. In Section 2, the main idea of the proposed sixth order approximation compact method is presented in the case of the 1D problem. To analyze the convergence of the algorithms developed here, variations of known convergence estimates for the Krylov subspace methods are considered. In this section, simplified approaches based on the Krylov subspace iterations are also presented. Section 3 focuses on the development of the sixth order compact approximation scheme in the case of Neumann and Sommerfeld-like boundary conditions. In Section 4, the effectiveness of the proposed algorithms is demonstrated on a series of test problems.

2. A One-Dimensional Model Problem

Let and be matrices derived from sixth and second order approximations to (1) and the Dirichlet boundary condition (2) using a mesh , and . The standard notation for the first and second order central differences at the th grid point is given by where . The finite-difference operators , , , and used in the following sections are defined similarly. The sixth order approximation to the second derivative at the th grid point can be written as As usual, in the case of compact schemes, we need to use the original equation to find appropriate relations to eliminate the fourth and sixth derivatives in (4). By using the second order central difference of the fourth derivative of at and the expression for the fourth derivative of from (1), the relation (4) can be expressed in the form where . After using (7), the discretized system corresponding to the compact sixth order approximation scheme for (1)-(2) can be presented as where is the sixth order discrete approximation to . This system can also be rewritten in the form , where In a similar way, the preconditioning matrix based on the second order central difference approximation can be presented as Finally, the right-preconditioned system can be written as
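
Since the displayed formulas are not reproduced above, we note the presumable structure of the right-preconditioned system (12) suggested by the surrounding text (a hedged reconstruction, with A_h denoting the sixth order matrix, B_h the second order matrix, and F_h the discrete right-hand side):

\[
A_h B_h^{-1} v_h = F_h, \qquad u_h = B_h^{-1} v_h,
\]

so that each Krylov iteration requires one multiplication by the compact sixth order matrix and one exact FFT-type solve with the second order matrix.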

The main goal of this paper is to demonstrate the computational efficiency of a preconditioning technique based on the use of a lower-order approximation discrete system in the numerical implementation of higher order compact finite-difference schemes. To consider this construction in a more general form, we introduce the definition of a th order preconditioned system.

Definition 1. Given two nonsingular matrices and , a system in the form of (12) is said to be a th order preconditioned system if the matrix is diagonalizable and can be expressed as , where ; with , where does not depend on and is an matrix of eigenvectors of .
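
A plausible reading of this definition, written out under the assumption that the omitted formulas have the form suggested by the proofs that follow (the symbols Q, D_h, and C are introduced here only for illustration), is

\[
A_h B_h^{-1} = Q\,(I + D_h)\,Q^{-1}, \qquad \|D_h\|_2 \le C h^{k},
\]

where Q is the matrix of eigenvectors of the preconditioned matrix and the constant C does not depend on h; in other words, the eigenvalues of the preconditioned matrix cluster around 1 at the rate O(h^k).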

Using this definition we present the convergence analysis of the GMRES method applied to the system (12) in the following theorem.

Theorem 2. Let a system in the form of (12) be a th order preconditioned system. Then the th iteration of the GMRES method applied to this system satisfies the convergence estimate where and .

Proof. Since the matrix is diagonalizable, it is well known (see, e.g., [23, 24]) that the residual of the th iteration of the GMRES algorithm satisfies Consider the polynomial of th degree , where and the coefficients satisfy . It is easy to see that the coefficients satisfy and the unique solution of this system is given by . Then the convergence estimate for the GMRES method becomes
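
The well-known estimate invoked at the start of the proof is presumably the standard GMRES bound for diagonalizable matrices (see, e.g., [23, 24]); in the present setting it reads

\[
\|r_m\|_2 \le \kappa_2(Q) \, \min_{\substack{p \in \mathcal{P}_m \\ p(0)=1}} \, \max_{j} |p(\lambda_j)| \; \|r_0\|_2,
\]

where \mathcal{P}_m is the set of polynomials of degree at most m, \lambda_j are the eigenvalues of the preconditioned matrix, and \kappa_2(Q) is the condition number of its eigenvector matrix.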

The proof is somewhat trivial and could be significantly simplified, but we present it in this form so that it can be used later in the construction of the simplified iterative method based on this proof.

In the same way we can derive another useful estimate expressed in the following corollary.

Corollary 3. Let the matrix in the system (12) be expressible as , where , , and is the matrix of eigenvectors of . Then the th iteration of the preconditioned GMRES method applied to this system satisfies the convergence estimate

We can derive another well-known estimate based on a standard application of Chebyshev polynomials (see, e.g., [24]). We consider only the case of real-valued matrices. The following theorem provides the details of the proposed technique.

Theorem 4. Let the matrix in the system (12) be diagonalizable and expressible in the form , where , , and is the matrix of eigenvectors of . Then the th iteration of the preconditioned GMRES method applied to this system satisfies the convergence estimate

Proof. This estimate follows immediately from the general result for complex-valued matrices (see, e.g., [24, page 206]).

Next, we apply the convergence estimates developed above to the analysis of the solution of the system (12).

Theorem 5. Given , let matrices and be defined by (9) and (11) with and , where is independent of . Then is a second order preconditioned system and one has the convergence estimate

Proof. The matrices and have the same set of eigenvectors as the matrix from (10). The eigenvectors are given by (see, e.g., [17])
The matrix is unitary, so . The eigenvalues of the matrices and are given by respectively. We can write the eigenvalues of the preconditioned matrix in the form The second term on the right-hand side can be estimated using , where and . So is a second order preconditioned matrix, and using (13) we obtain the estimate stated in the theorem.
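
For completeness, the eigenpairs referred to in the proof are presumably the standard ones for the 1D second difference operator with zero Dirichlet conditions on n interior points with h = 1/(n+1) (these formulas are quoted from the standard theory, not transcribed from the paper, so signs and normalization may differ from the author's conventions):

\[
q_j^{(i)} = \sqrt{2h}\,\sin(i j \pi h), \qquad
\lambda_j(B_h) = k^2 - \frac{4}{h^2}\sin^2\!\left(\frac{j\pi h}{2}\right), \qquad i, j = 1,\dots,n.
\]

The sixth order matrix A_h shares these eigenvectors, and its eigenvalues differ from \lambda_j(B_h) by a relative perturbation of size O(h^2), which is precisely the second order preconditioning property claimed in the theorem.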

The first condition of this theorem is a common natural restriction that comes into play even when one simply wants to visualize a numerical solution for qualitative analysis: the numerical solution is just not an accurate representation of the true solution if there are fewer than 5 points per half wavelength. Moreover, to avoid the so-called “pollution” phenomenon, Bayliss et al. [5] introduced the restriction that should be constant; that is, to maintain a fixed accuracy of a scheme, the number of grid points must grow as , where is the accuracy order of the scheme. So, if one satisfies the restriction that avoids the “pollution” phenomenon, the first condition of the theorem is satisfied almost automatically and is not much of a restriction at all.

The second condition of the theorem is a requirement that the coefficient stay away from the discrete spectrum of the operator. Since the resolvent set of a bounded linear operator is open, this condition can always be satisfied by introducing a small change to if necessary.

Next, we present how we can choose in some important cases.

In the simplest case of , we can take .

Consider how to obtain an estimate for in the case of a given and a sequence of grids with grid sizes . The first restriction of the theorem gives us the condition . We can see that if the in the argument of were to take on all real values from to , then the minimum of the expression under consideration would be zero.

Now let us denote and consider for which the expression is zero if . We consider the worst-case scenario to justify this approach for all . It is trivial to find that . So, it is clear that the minimum of the function occurs either at or at , since is increasing on the interval . There exists a neighborhood of which includes both and such that , which allows us to use a Taylor series to represent in this neighborhood. We use only the first two terms in the series; that is, , . Now we can substitute this expression So, we just need to avoid the values for and for which one of the terms in the brackets is nonpositive. In some situations we would need to choose a sufficiently small grid step size for the coarsest grid in a sequence. For example, in the cases and , . This is independent of and can be used in our convergence analysis on a sequence of grids.

It follows from (19) that the number of iterations for the Krylov subspace algorithm decreases as the grid size decreases. This result proves that the algorithm developed here yields a very effective iterative technique for the implementation of a higher order approximation scheme. Later we will consider the extension of this method to 3D problems.

The proof of Theorem 2 suggests a simplified version of the Krylov subspace method for the solution of (12). Indeed, using the expression for the exact solution of the system (15), we have , for and the following recurrence expression for This allows us to derive the SKS algorithm described in the table labelled Algorithm 1.

(1) Let be the initial approximation, let the tolerance be ,
and let the maximum number of iterations be
(2) , and
(3)
(4) while   and   do
(5) 
(6)  
(7)  
(8)  
(9)  
(10)  
(11)  
(12) end while
(13) is the iterative solution of .

This is a simple stationary two-level method (see, e.g., [20]). However, the choice of the parameters in this algorithm is not based on the minimization of the spectral radius of the iteration matrix at each step but rather on the construction of a particular linear combination of vectors from a Krylov subspace. This is why, for the purpose of this paper, we call this approach the simplified Krylov subspace (SKS) method. The proposed method does not require an orthogonalization procedure; that is, it requires no inner products, which is attractive for an implementation in a parallel computing environment. In addition, this method does not use estimates of the maximum and minimum eigenvalues of the preconditioned matrix, which is a drawback of the Chebyshev acceleration algorithm (see, e.g., [24]). The Chebyshev acceleration method is naturally constructed in the proof of Theorem 4, and we will compare the numerical effectiveness of the methods in Section 4. We note that in most numerical experiments the GMRES method exhibits the best convergence properties, but in some situations the SKS algorithm proves to be more efficient. Some such examples are considered in Section 4.
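
A minimal sketch of the SKS iteration described above, read as a unit-weight two-level (preconditioned Richardson-type) correction, is given below in Python for illustration only; the interface names apply_A and solve_B are assumptions of this sketch (the paper's implementation is in Matlab), with solve_B standing for the exact FFT-type solve with the lower order matrix.

import numpy as np

def sks_solve(apply_A, solve_B, f, u0, tol=1e-6, maxit=100):
    # Sketch of the stationary two-level (SKS) iteration:
    #   u_{m+1} = u_m + B_h^{-1} (f - A_h u_m),
    # which uses no orthogonalization and no inner products
    # beyond the residual norm in the stopping test.
    u = u0.copy()
    r = f - apply_A(u)
    r0_norm = np.linalg.norm(r)
    m = 0
    while np.linalg.norm(r) > tol * r0_norm and m < maxit:
        w = solve_B(r)            # lower order (FFT-type) preconditioning solve
        u = u + w                 # unit-weight correction
        r = f - apply_A(u)        # updated residual
        m += 1
    return u, m

In the 1D model problem, solve_B could be the helmholtz_fft_solve_1d routine sketched in the Introduction, while apply_A multiplies by the compact sixth order matrix.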

The SKS algorithm also gives a good criterion for evaluating the quality of a preconditioner in the form of the order of the preconditioned matrix (12). In this paper we calculate it as follows. Consider two grids with grid sizes and , where . Let the -norm of the residuals of the SKS method on the first two iterations be and on the first grid and and on the second grid. Let and . Now we can approximately calculate the order of the preconditioned matrix by We consider this parameter in the discussion of the results of the numerical experiments.
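
As a worked illustration of this criterion (a reconstructed reading of the omitted formula: the per-iteration residual reduction factor of the SKS method is assumed to behave like C h^q, so comparing it on two grids whose step sizes differ by a factor sigma > 1 yields q), one can compute:

import numpy as np

def preconditioning_order(r0_coarse, r1_coarse, r0_fine, r1_fine, sigma):
    # Estimate the order q of the preconditioned matrix from the first two
    # SKS residual norms on a coarse grid (step sigma*h) and a fine grid (step h),
    # assuming the reduction factor rho(h) ~ C*h^q, so rho_coarse/rho_fine ~ sigma^q.
    rho_coarse = r1_coarse / r0_coarse
    rho_fine = r1_fine / r0_fine
    return np.log(rho_coarse / rho_fine) / np.log(sigma)

For example, with sigma = 2 and observed reduction factors 0.04 on the coarse grid and 0.01 on the fine grid, this gives q = 2.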

3. Three-Dimensional Problems

3.1. A Sixth Order Approximation Compact Scheme

In this section we present a three-dimensional compact sixth order approximation finite-difference scheme that was first introduced in [6], and we develop a sixth order compact explicit approximation of the Neumann and Sommerfeld-like boundary conditions (2). In this discussion we consider a uniform grid .

First, we consider a finite-difference approximation of (1) in the form where and and

Using the appropriate derivatives of (1) we can write the sixth order compact approximation of the Helmholtz equation in the form For the two-dimensional case this scheme was proposed in [7]. In the three-dimensional case with Dirichlet boundary conditions, it was developed in [6]. We will consider the iterative implementation of this scheme based on preconditioned Krylov subspace-type algorithms.

3.2. Boundary Conditions

The implementation of the Dirichlet boundary conditions (2) is straightforward but the explicit compact approximation of Neumann and Sommerfeld-like boundary conditions requires careful consideration. The sixth order compact approximation of the Neumann boundary condition in the two-dimensional case was considered in [7]. Here we extend this approach to the three-dimensional problem.

Neumann Boundary Conditions. In this subsection the Neumann boundary conditions are considered in the form

We restrict our consideration to only one side of the computational domain . By using Taylor series, we can derive the sixth order approximation formula To express the third and fifth derivatives in (30) we can differentiate the original Helmholtz equation (1), assuming sufficient smoothness of the solution and the right-hand side. After substitution of these derivatives into (30), the resulting expression is

Next, we approximate the third order mixed derivatives in the second bracket of (31) by a fourth order approximation formula, and those in the second line by a second order approximation scheme, which yields Now we can simplify this and write

Finally, to complete the explicit scheme for the boundary conditions, we present the previous equation in the form Now by using (3) and replacing with , we can write

This formula allows us to eliminate the term in (28). The explicit implementation of the Neumann boundary conditions on the other boundary pieces can be conducted in a similar way.

Sommerfeld-Like Boundary Conditions. In this section we consider the implementation of the Sommerfeld-like boundary conditions (2), which are often used in scattering problems The difficulty in using methods similar to those described in the previous section can be seen in (35). The last term in this equation requires the calculation of the fourth derivative of the right-hand side of (29), and if it depends on the unknown variable , the use of a compact sixth order approximation scheme becomes problematic. To avoid this difficulty, we introduce a new variable . Then the Helmholtz equation becomes where . Now (36) can be rewritten in the form The sixth order approximation of (38) becomes Assuming sufficient smoothness of and and using (37) and (38), we can express the third and fifth derivatives of in the form

To preserve the sixth order compact approximation in (39), we need to use the fourth order approximation for in (40). The second order approximation is sufficient for the derivatives of in (41). First, let us consider the fourth order compact finite-difference approximation for . It can be presented in the form

Now we can use (38), (42) and the second order approximation of (41) to express (39) in the form

This equation gives an implicit compact sixth order approximation for (38). Next, the explicit implementation of the boundary conditions (38) similar to (35) is developed by using

Here, and are parameters. We also use and . We leave the first of these derivatives on the right-hand side but drop the latter two. Now, by using the second order finite-difference approximation in (44) and adding it to (43), we derive the formula

Next we choose parameters , , and to match the coefficients in (28). This choice is determined by the conditions

Using these parameters and replacing with , we write

This equation provides an explicit compact sixth order approximation for the boundary condition (36). At the upper boundary , the Sommerfeld-like boundary condition can be written as and the auxiliary substitution becomes . The rest of the derivation is very similar to the derivation for the case of the lower boundary. Implementations of the sixth order compact approximation of the boundary conditions in the other directions are also similar to the calculations already presented.

3.3. Fast Fourier Preconditioner

The main idea of this paper is to utilize existing lower-order approximation direct solvers as preconditioners in the implementation of a higher resolution scheme. One of the most popular methods for the approximate solution of the Helmholtz equation is the second order central difference scheme. We consider the cases with Dirichlet or Neumann boundary conditions on the sides of the computational domain ; that is, or . At the top and the bottom of the computational domain, we assign one of the three boundary conditions under consideration: Dirichlet, Neumann, or Sommerfeld-like boundary conditions (2). In the case of Neumann and Sommerfeld-like boundary conditions, we use the second order central finite-difference approximation of the first derivative on all boundaries. To express the preconditioner in a general form, we introduce the matrices and , where , , and are parameters depending on the boundary conditions. Next, let and . Using Kronecker products, the preconditioning matrix can now be written in the form where in the case of Dirichlet boundary conditions at the sides of the computational domain and in the case of Neumann boundary conditions on the same boundaries. The number of grid points in the -direction is chosen in such a way that the grid step size is the same in all three directions. This condition is imposed so that the lower-order preconditioner and the sixth order approximation compact scheme developed in the previous section use the same grid points. Depending on the boundary conditions at the top and the bottom of the computational domain, the parameters and are defined in the following way: in the case of Dirichlet boundary conditions, and ; in the case of Neumann boundary conditions, and ; and in the case of the Sommerfeld-like boundary conditions, and . At each step of the iterative process, the solution to the second equation in (12) with the matrix (49) can be obtained by using FFT-type algorithms in operations.
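
A sketch of one application of this preconditioner (separation of variables in the horizontal directions via sine transforms, followed by independent tridiagonal solves in the vertical direction) is given below in Python/SciPy for illustration. The argument Tz is assumed to hold, in banded (3, nz) storage, the tridiagonal vertical operator: the second differences in z together with the coefficient term and the chosen top/bottom boundary rows. All names and the interface are assumptions of this sketch rather than the paper's Matlab code, and the Dirichlet case on the sides is assumed.

import numpy as np
from scipy.fft import dstn
from scipy.linalg import solve_banded

def dstn1(x, axes):
    # DST-I on the chosen axes, applied to real and imaginary parts separately
    # so that complex data (complex coefficient, complex residuals) is handled
    return dstn(np.real(x), type=1, axes=axes) + 1j * dstn(np.imag(x), type=1, axes=axes)

def fft_preconditioner_3d(r, h, Tz):
    # One application of the lower-order solve to a residual r of shape (nx, ny, nz):
    # sine transforms in x and y diagonalize the horizontal second differences,
    # leaving one tridiagonal system in z per horizontal Fourier mode.
    nx, ny, nz = r.shape
    rhat = dstn1(r, axes=(0, 1))
    i = np.arange(1, nx + 1)
    j = np.arange(1, ny + 1)
    lam_x = -(4.0 / h**2) * np.sin(i * np.pi / (2.0 * (nx + 1))) ** 2
    lam_y = -(4.0 / h**2) * np.sin(j * np.pi / (2.0 * (ny + 1))) ** 2
    uhat = np.zeros_like(rhat)
    for a in range(nx):
        for b in range(ny):
            band = Tz.astype(complex)                  # banded storage, shape (3, nz)
            band[1, :] += lam_x[a] + lam_y[b]          # shift the main diagonal
            uhat[a, b, :] = solve_banded((1, 1), band, rhat[a, b, :])
    # invert the sine transforms (DST-I is its own inverse up to 2(n+1) per axis)
    return dstn1(uhat, axes=(0, 1)) / (4.0 * (nx + 1) * (ny + 1))

The cost is dominated by the two sine transforms and the nx*ny tridiagonal solves, which matches the operation count quoted at the end of the paragraph above.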

3.4. Convergence Analysis of 3D Algorithm

In this section, the convergence of the proposed algorithms is considered in the case of the 3D Helmholtz equation with the Dirichlet boundary conditions (1) and (2) on the rectangular computational domain with uniform grid size . To ensure the uniqueness of the solution to the original boundary value problem, we assume that the coefficient is in the resolvent set of the Laplace operator defined on an appropriate function space. We also impose a common restriction on the number of grid points per wavelength; that is, . The convergence analysis of the proposed algorithms follows the same lines as in Theorem 5. The preconditioning matrix is given by (49) with and . To express the right-hand side of the sixth order approximation scheme given in (28), we use the notation introduced in (49). In addition, we define the matrix . Then the resulting matrix can be written as

Both matrices and have the same set of orthonormal eigenvectors (see, e.g., [17]):

The matrix of eigenvectors is unitary so the -norm of and is one. To present the eigenvalues of the matrices, we introduce the notation . Then the eigenvalues of the matrices and in the 3D case are given by

Unfortunately, the eigenvalues of these matrices do not satisfy the hypotheses of Theorem 2, but we can still apply Corollary 3 to prove the convergence of the GMRES method and the SKS algorithm by showing that , where , for all in some simple cases.

Let us assume that ; that is, and . Then , for . It is also easy to see that and for the given values of . Now the eigenvalues of the preconditioned matrix can be written in the form

This gives us a presentation of the eigenvalues of the preconditioned matrix in the form , where , for all . Now we can apply Corollary 3 and obtain an estimate for the rate of convergence of the GMRES and SKS algorithms: where .

Similar to the 1D case, it is possible to show that this estimate holds for and a sequence of grids with grid sizes if for some and sufficiently small . As in the case of the 1D problem, this follows from the fact that the resolvent set of a bounded linear operator is open. The proof is somewhat tedious but elementary, so we skip it. In Table 1 we present the lower and upper bounds and for for different values of and for different grid step sizes. Also, the last two rows of Table 1 present possible choices of and for different .

This result is weaker than in the 1D case; that is, it says that convergence is independent of the grid step, but it does not improve with the decrease of . In this case, is said to be “an optimal preconditioner” (see, e.g., [21, page 196]). This situation is typical for multigrid-type algorithms, but it is not what we would expect based on the 1D example. In the following section we present the results of numerical experiments on several test problems which confirm that the actual convergence of the presented methods accelerates with the decrease of the grid size; that is, the convergence observed in the numerical experiments significantly exceeds the estimate (54).

4. Numerical Results

In this section we present the results of numerical experiments which demonstrate the quality of the proposed numerical methods. These algorithms were implemented in Matlab 7.11.0 on an iMac with an Intel Core i7, 2.93 GHz processor. We also use the standard Matlab implementation of the GMRES method (the gmres function).

4.1. 1D Test Problem

In the first series of numerical experiments, we consider the convergence of the compact sixth order approximation algorithm on a sequence of grids. In these tests we focus on the numerical solution of the 1D Dirichlet problem for the Helmholtz equation with zero boundary conditions on the interval and . The computational grid in our experiments is defined by , where and is the number of grid points. We consider the function as the exact solution of the original boundary value problem with the right-hand side . The first column of Table 2 displays the step size of the grids in our experiments. Columns 2–4 of the table present the number of iterations until convergence required by the preconditioned GMRES method, the SKS method (Algorithm 1), and the Chebyshev acceleration (CA) algorithm [24]. In all our experiments we use the restarted GMRES method with restart value 20. We use the stopping criterion , where is the iterative solution at the th iteration. Column 5 reports the approximation error of the numerical solution . In Column 6, the second order approximation error of the numerical solution obtained by using the preconditioner as a solver is presented for comparison. The number of grid points is chosen to satisfy the requirement . The values of the minimum and maximum eigenvalues of the preconditioned matrix used by the Chebyshev acceleration algorithm are calculated exactly by (22).
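
For readers who want to reproduce this setup outside of Matlab, the corresponding restarted GMRES call in Python/SciPy would look roughly as follows; this is illustrative only, build_sixth_order_matrix is a hypothetical assembly routine standing in for the compact scheme, helmholtz_fft_solve_1d is the preconditioning solve sketched in the Introduction, and the numerical values of n and k2 are placeholders rather than the values used in the paper.

import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n, k2 = 255, 50.0 ** 2                        # illustrative values; h = 1/(n+1)
h = 1.0 / (n + 1)
A = build_sixth_order_matrix(n, h, k2)        # hypothetical: sixth order matrix
f = np.ones(n)                                # placeholder right-hand side
M = LinearOperator((n, n), dtype=complex,
                   matvec=lambda r: helmholtz_fft_solve_1d(r, h, k2))

# restarted GMRES(20); rtol plays the role of the relative-residual stopping
# criterion (the keyword is rtol in recent SciPy, tol in older versions)
u, info = gmres(A, f, restart=20, maxiter=200, M=M, rtol=1e-6)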

These experiments demonstrate the sixth order convergence of the numerical solution to the exact solution of the original boundary value problem. Theorem 5 suggests that the residual at the th iteration of the presented iterative algorithms decreases with the increase in the number of grid points. The results of the numerical experiments confirm this conclusion. We have already shown that the order of the preconditioned matrix in this case is two. The numerical value of the constant calculated by using (25) is . The 1D numerical experiments confirm the effectiveness of the iterative approaches under consideration and allow us to expect that the same properties hold for 3D problems.

4.2. 3D Test Problems

In the next series of experiments we consider different boundary value problems for the 3D Helmholtz equations (1) and (2). The computational domain is given by with a uniform grid in all three directions and . The numbers of grid points , , and in the -, -, and -directions are chosen to provide a uniform grid size.

Dirichlet Problem. First, we consider the Helmholtz equation with boundary conditions and . This test is similar to the 1D case with and . As the solution of the boundary value problem we choose the function , where , and .

In the first series of tests we consider the convergence of the compact high order approximation finite-difference scheme (28) on a sequence of grids as well as convergence properties of the GMRES, SKS, and CA iterative algorithms. The iterative process is stopped when the initial residual is reduced by a factor of . The values of the minimum and maximum eigenvalues of the preconditioned matrix used by the Chebyshev acceleration algorithm are calculated exactly using (53). The description of the columns of Table 3 is similar to the description of Table 2.

The test results confirm the sixth order approximation of the compact finite-difference scheme. Although in our theoretical analysis (53) we could not prove the desired th order preconditioning property in the case of the 3D Dirichlet problem, the SKS algorithm still exhibits acceleration of convergence as the grid size decreases. We also mention that the total computer time required for convergence is approximately the same for the SKS and GMRES methods, while the Chebyshev acceleration algorithm requires about four times more computer time until convergence than the other two iterative algorithms. The SKS algorithm also has greater potential for efficient implementation on parallel computers than the GMRES method does.

One potential application of the proposed methods is to electromagnetic scattering problems. In these problems, the typical value of the coefficient is in the range of and . For example, the propagation of an electromagnetic wave with 1 GHz frequency in a vacuum is described by the Helmholtz equation with . In Table 4 we present the convergence of the algorithms presented here on a sequence of grids with different values . In every cell of the table we present the number of iterations for the three methods under consideration: GMRES, SKS, and CA. The stopping criterion is the same as in the previous series of experiments. If any of these three methods fails to converge in 100 iterations, we indicate this by the symbol “>100”, and in the case of divergence we use the symbol “div”.

From these numerical experiments we can see that the preconditioned GMRES and SKS methods demonstrate excellent convergence properties. We also observe that the convergence of the proposed algorithms improves with the increase in the number of grid points. In the analysis of the algorithm’s convergence (54), it was shown that the number of iterations does not increase with the increase in the number of grid points. But the last row in Table 4 indicates that the parameter (25) is approximately 2 for all values of , which indicates that, similar to the 1D case, the convergence of the 3D algorithm not only does not depend on the grid size but actually accelerates with the increase in the number of grid points. This also indicates that even in the case when the SKS method does not converge, one can expect convergence on a finer grid. In all numerical tests, the parameter is calculated by using the first two iterations of the SKS method on the grids with grid sizes and .

From the results presented, it is clear that slow convergence and the difficulties arising in the choice of the parameters and in the Chebyshev acceleration algorithm (18) make this method a poor choice for the implementation of the developed high-order approximation scheme. These numerical tests illustrate the fact that the Chebyshev polynomial is not optimal on the discrete spectrum, and this has a dramatic effect on the iterative methods based on this polynomial. So, in the subsequent numerical experiments we focus on the convergence properties of the GMRES and SKS methods in our numerical framework.

Dirichlet-Neumann Boundary Conditions. In the next series of experiments we replace the Dirichlet boundary conditions in (2) with at the top of the computational domain () and with the Neumann boundary conditions at . On the other boundaries of the rectangular computational domain we still use the Dirichlet boundary conditions . As a test function that satisfies the boundary conditions we consider , where , and . Also, . The number of grid points is chosen to provide the same grid step in all three directions; that is, . The stopping criterion is the same as in the previous numerical experiments. Table 5 provides the results of the numerical tests on a sequence of grids with the coefficient . As already mentioned, we focus only on the two best algorithms: GMRES and SKS. The rest of Table 5 is similar to Table 3.

The results of this series of numerical tests indicate the sixth order convergence of the resulting compact finite-difference scheme, including the explicit approximation of the Neumann boundary conditions. As expected, the number of iterations decreases with the increase in the number of grid points for both the GMRES and SKS approaches. In Table 6 we present the number of iterations required for convergence of both algorithms on a sequence of grids and with different coefficients in the Helmholtz equation. The content of Table 6 is similar to the content of Table 4, except for the number of iterative approaches under consideration: we present only the number of iterations for the GMRES and SKS methods.

It follows from this table that the GMRES approach is much more efficient on coarse grids, but on finer grids the processor time for the SKS method is the same as or even smaller than that for the GMRES algorithm. For example, on the grid with and , the corresponding processor times for the GMRES and SKS methods are 1467 sec and 1052 sec. Also, as we mentioned before, the SKS method has much greater potential than the GMRES algorithm for efficient implementation on parallel computers. We also expect that even if the SKS method does not converge on coarser grids, convergence of this algorithm can be achieved by reducing the grid size. We can see this in the example of the convergence of the methods under consideration in the case of . The values of the parameter vary significantly for different values of but still indicate that the convergence of the iterative algorithms improves with the increase in the number of grid points.

Dirichlet-Sommerfeld-Like Boundary Conditions. In the last series of numerical experiments, we consider the boundary value problem for the Helmholtz equation (1) with a combination of Sommerfeld-like (radiation) boundary conditions (2) at and , and Dirichlet boundary conditions at all other boundaries of the rectangular computational domain . The source function in (1) is selected so that the true solution is , where , , and . Note that the analytic solution satisfies the radiation boundary condition (2). The numbers of grid points in the -directions are chosen such that . First we consider the convergence of our sixth order approximation scheme on a sequence of grids with . The main goal of this series of numerical experiments is to investigate the convergence properties of the new explicit compact sixth order approximation of the Sommerfeld-like boundary conditions (47) proposed in this paper. As in previous test runs, the iterative process is stopped when the -norm of the initial residual is reduced by a factor of . The results are presented in Table 7. The description of the data presented in this table is the same as in the case of Table 5.

In this series of numerical experiments, both algorithms exhibit the same convergence properties as in the previous series of tests; that is, the number of iterations decreases as the number of grid points increases. From Table 7, we can also see the sixth order convergence of the approximate solution to the exact solution of the boundary value problem on a sequence of grids. These results confirm the sixth order approximation of the compact explicit scheme proposed for the numerical implementation of the Sommerfeld-like boundary conditions.

In Table 8, we present the number of iterations required for the convergence of the GMRES and SKS methods for different coefficients in the Helmholtz equation with the same boundary conditions used in the previous series of numerical tests. Data presented in Table 8 are similar to the data in Table 6.

In the last column of Table 8, we present the results of calculations when is a complex-valued coefficient. The value of this coefficient approximately corresponds to the case of propagation of electromagnetic waves with a 1 GHz frequency in dry soil. These results confirm the effective implementation of the compact sixth order approximation scheme by using the proposed Krylov subspace type algorithms in the framework with the FFT-based low order preconditioners. The last row of the table suggests that the acceleration of the convergence with the increase in the number of grid points is much stronger than the acceleration in the previous series of experiments. It appears to depend strongly on the boundary conditions used in the numerical tests. In all our experiments, there is a clear connection between the parameter and the improvement of convergence with the decrease of the step size. But the usefulness of this parameter in the analysis of the quality of a preconditioner requires further study, since in the majority of our numerical experiments the preconditioner does not satisfy the condition in Definition 1.

The series of test problems considered suggests that in the majority of situations the preconditioned GMRES method is the most efficient choice for an effective implementation of the compact sixth order approximation scheme on coarse grids, but in the case of finer grids the SKS method in combination with the lower-order approximation preconditioner presents an efficient alternative to the GMRES method. This alternative could become even more valuable when the GMRES method experiences stagnation.

5. Conclusions

New 3D compact sixth order explicit finite-difference schemes for the approximation of Neumann and Sommerfeld-like boundary conditions on rectangular computational domains with uniform grid size were developed and implemented. Together with the compact sixth order approximation scheme for the Helmholtz equation proposed in [6], these algorithms represent highly accurate methods for the solution of boundary value problems for Helmholtz equations.

A new rapid iterative method based on preconditioned Krylov subspace methodology was developed for the implementation of the proposed compact finite-difference schemes. The strategy is based on a combination of higher order approximation schemes and a lower-order approximation preconditioner. The analysis of some typical test problems reveals the attractive properties of the developed methods, such as the decrease of the number of iterations until convergence with the increase of the number of grid points, that is, of the size of the resulting matrix. This approach is especially attractive in situations in which a lower-order approximation solver already exists and the original boundary value problem calls for a more accurate approximation.

The typical time to produce the sixth order accuracy solution of the 3D Helmholtz equation with a combination of the Dirichlet and Sommerfeld boundary conditions on a grid was just 30 minutes on an iMac using only one 2.93 GHz Intel Core i7 processor. The method was tested for realistic parameter ranges typical for electromagnetic scattering problems. We must note that the Sommerfeld-like boundary condition is just a first order approximation of the Sommerfeld condition on an unbounded domain. So, direct application of the higher order approximation of the Sommerfeld-like boundary condition to the solution of a scattering problem is not always justified. However, a straightforward extension of the approximation approach presented in this paper could be applied to the approximation of absorbing boundary conditions (see, e.g., [13]) or used in the implementation of perfectly matched layer (PML) boundary conditions (see, e.g., [25]).

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The author thanks Professor T. Payne and Professor D. Stowe for valuable comments and discussions.