Preconditioning Techniques for Sparse Linear Systems
A Modified SSOR Preconditioning Strategy for Helmholtz Equations
The finite difference discretization of Helmholtz equations usually leads to large sparse linear systems. Since the coefficient matrix is frequently indefinite, these systems are difficult to solve iteratively. In this paper, a modified symmetric successive overrelaxation (MSSOR) preconditioning strategy is constructed from the coefficient matrix and employed to speed up the convergence of iterative methods. The idea is to increase the values of the diagonal elements of the coefficient matrix so as to obtain a better preconditioner for the original linear system. Compared with the SSOR preconditioner, the MSSOR preconditioner improves the convergence rate of iterative methods at no additional computational cost. Numerical results demonstrate that this method significantly reduces both the number of iterations and the computational time, while the preconditioner remains cheap to construct and apply.
The finite difference method is one of the most effective and popular techniques in computational electromagnetics and seismology, with applications such as time-harmonic wave propagation, scattering phenomena arising in acoustic and optical problems, and electromagnetic scattering from a large cavity. More information about applications of this method in electromagnetics can be found in [1–5].
In this paper, we focus on the form of the complex Helmholtz equation given in (1.1). Here the domain is a bounded region in the plane, the coefficient functions are real and continuous on the domain, and the right-hand side data are given continuous functions on the domain and its boundary, respectively.
To find numerical solutions of (1.1) conveniently, the Laplace operator is approximated by the second-order accurate 5-point difference stencil. Making use of this stencil, the linear system (1.3) is obtained, whose coefficient matrix is the sum of a diagonal matrix, with diagonal elements equal to the values of the coefficient function at the mesh points, and the symmetric positive definite M-matrix arising from the discrete Laplace operator, which has block tridiagonal form.
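As an illustration, a discrete operator of this kind can be assembled with Kronecker products. This is a sketch under the assumption of a constant coefficient k on the unit square with homogeneous Dirichlet boundary conditions; the function name and parameters are illustrative, not from the paper:

```python
import numpy as np
import scipy.sparse as sp

def helmholtz_matrix(n, k):
    """Assemble A = L - k^2 I for -Delta u - k^2 u on an n x n interior grid."""
    h = 1.0 / (n + 1)
    # 1-D second-difference matrix: tridiagonal with stencil (-1, 2, -1)
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
    I = sp.identity(n)
    # Block tridiagonal 2-D discrete Laplacian via a Kronecker sum
    lap = (sp.kron(I, T) + sp.kron(T, I)) / h**2
    # Subtracting k^2 on the diagonal makes A indefinite once k is large
    return (lap - k**2 * sp.identity(n * n)).tocsr()

A = helmholtz_matrix(20, 30.0)
```

For k = 30 on a 20 x 20 interior grid, the shift exceeds the smallest Laplacian eigenvalue, so the assembled matrix is symmetric and indefinite, matching the situation described in the text.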
Obviously, the coefficient matrix of the linear system (1.3) is complex symmetric. It becomes highly indefinite and ill-conditioned when the wavenumber is sufficiently large, so a large amount of computation time and memory is needed to solve (1.3) efficiently.
As is well known, both direct and iterative methods can be employed to solve the linear system (1.3). Direct methods are widely used when the order of the coefficient matrix is not too large, and they are usually regarded as robust. For large sparse linear systems, however, their memory and computational requirements may seriously challenge even the most efficient direct solvers available today. Iterative methods are currently the popular choice for such systems, because they are easier to implement efficiently on high-performance computers than direct methods. In practice, therefore, iterative methods are the natural choice for large sparse linear systems.
At present, Krylov subspace methods are considered among the most important and efficient iterative techniques for solving large sparse linear systems, because they are cheap to implement and fully exploit the sparsity of the coefficient matrix. It is well known that the convergence speed of Krylov subspace methods depends on the distribution of the eigenvalues of the coefficient matrix. When the coefficient matrix is extremely ill-conditioned and highly indefinite, the convergence of Krylov subspace methods can be unacceptably slow; in this case, they are not competitive without a good preconditioner. That is, preconditioning is a key ingredient in the success of Krylov subspace methods in applications. The idea of preconditioning is to consider a modified linear system with the same solution as the original equation. The difficulty is that each preconditioning technique suits a different type of problem; to date, no preconditioning technique has proved robust for all, or even most, problem types. Finding a good preconditioner for a given large sparse linear system is often viewed as a combination of art and science.
In recent years, a great deal of effort has been invested in solving indefinite linear systems from the discrete Helmholtz equations. Most of the work has been aimed at developing effective preconditioning techniques. In general, there exist two classes of preconditioners for Helmholtz equations: the “operator-based" preconditioning technique and the “matrix-based" preconditioning technique.
The former is built from an operator, such as the Laplace preconditioner [2, 3, 7–9], the analytic ILU preconditioner, and the separation-of-variables preconditioner. The purpose of this class of preconditioners is to cluster the spectrum of the preconditioned matrix favorably. Its advantage is that the operator does not have to represent the inverse of the Helmholtz operator.
The latter is established from an approximation of the inverse of the coefficient matrix. For this class, one of the simplest and most natural ways of constructing a preconditioner is to take the diagonal or a block diagonal of the coefficient matrix as the preconditioner; however, these two diagonal preconditioners yield no remarkable reduction in iteration count or CPU time. Another simple approach is an incomplete LU (ILU) factorization of the coefficient matrix, obtained from an incomplete Gaussian elimination process; see the survey and the related references therein.
When the coefficient matrix of the linear system (1.3) is complex symmetric and indefinite, it is difficult to solve iteratively. Using symmetric successive overrelaxation (SSOR) as a preconditioner preserves the symmetry of the iteration matrix and incurs little initialization cost, which in some cases makes it preferable to other factorization methods such as ILU. So far, several variant SSOR preconditioning techniques have been proposed to improve the convergence rate of iterative methods. Mazzia and McCoy introduced a shift parameter to develop a shifted SSOR preconditioner for linear systems from an electromagnetics application. Bai used a (block) diagonal matrix in place of the diagonal of the coefficient matrix to establish a modified (block) SSOR preconditioner for second-order self-adjoint elliptic boundary value problems. Chen et al. used a diagonal matrix with a relaxation parameter in place of the diagonal of the coefficient matrix to establish a modified block SSOR preconditioner for Biot's consolidation equations. Ran and Yuan also discussed a class of modified (block) SSOR preconditioners for linear systems from steady incompressible viscous flow problems. We refer the reader to [18, 19] for a general discussion.
Although the SSOR preconditioner combined with Krylov subspace methods can improve convergence, its disadvantage is that the convergence rate may still remain unsatisfactorily slow in some cases, especially for indefinite linear systems. In this paper, a modified symmetric successive overrelaxation (MSSOR) preconditioning strategy is presented, which significantly improves both the convergence speed and the CPU time. Our motivation arises from the solution of complex symmetric and indefinite linear systems from discrete Helmholtz equations. The idea is to increase the values of the diagonal elements of the coefficient matrix to obtain a better preconditioner for the original linear system, which differs from [14–17]. This modification requires no significant computational cost compared with the original SSOR preconditioner and no additional storage.
The remainder of this paper is organized as follows. In Section 2, the MSSOR preconditioner for solving the resulting linear system is presented. In Section 3, numerical experiments are given to illustrate the efficiency of the presented preconditioner. Finally, in Section 4 some conclusions are drawn.
2. Modified SSOR Preconditioner
To improve the convergence rate of iterative methods, an appropriate preconditioner should be incorporated. That is, it is often preferable to solve the preconditioned linear system (2.1), where the preconditioner is a nonsingular matrix. The choice of the preconditioner plays an important role in practical implementations. In general, the preconditioner is chosen such that the condition number of the preconditioned matrix is smaller than that of the original matrix. Based on the excellent survey by Benzi, a good preconditioner should meet the following requirements: (1) the preconditioned system should be easy to solve; (2) the preconditioner should be cheap to construct and apply.
Certainly, the best choice for the preconditioner is the inverse of the coefficient matrix itself. However, this is impractical, because computing the inverse may be prohibitively expensive. If the coefficient matrix is symmetric positive definite, its inverse can be approximated by SSOR or multigrid. In fact, however, the Helmholtz equation often leads to an indefinite linear system, for which SSOR and multigrid are not guaranteed to converge.
To introduce the modified SSOR preconditioner, a brief review of the classical and well-known SSOR preconditioner is needed. The SSOR preconditioner is derived from the SSOR iterative method, which is a symmetric version of the well-known SOR iterative method. The coefficient matrix is split as in (2.2) into its diagonal part and its strictly lower triangular part. According to this splitting, the standard SSOR preconditioner [6, 20] is defined by (2.3). It is difficult to analyze the behavior of a preconditioner theoretically when the coefficient matrix is large, sparse, and symmetric indefinite; in that case the SSOR iterative method need not converge, but it may still be used as a preconditioner. By a simple modification of the original indefinite linear system (1.3), we establish the coefficient matrix (2.4), which is obviously symmetric and positive stable. To increase the values of the diagonal elements of the coefficient matrix, and thus obtain a better preconditioner while reducing computation time and memory, the MSSOR preconditioner is defined by (2.6) with (2.7), based on (2.4). This idea amounts to an absolute diagonal scaling technique, which is cheap and easy to implement.
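The construction can be sketched as follows, assuming the standard SSOR factored form with the diagonal of the coefficient matrix replaced by its absolute values, which is one natural reading of the absolute diagonal scaling described above; the exact MSSOR formulas (2.6) and (2.7) are not reproduced here, and the names below (ssor_factors, modified) are illustrative:

```python
import numpy as np
import scipy.sparse as sp

def ssor_factors(A, omega=1.0, modified=False):
    """Lower triangular SSOR factor F = D/omega + L and the diagonal d,
    so that, up to a constant factor, M = F * diag(1/d) * F^T."""
    d = A.diagonal()
    if modified:
        d = np.abs(d)          # MSSOR idea: make the diagonal positive (larger)
    L = sp.tril(A, k=-1, format='csr')   # strictly lower triangular part of A
    F = (sp.diags(d) / omega + L).tocsr()
    return F, d

# small indefinite illustration: a 1-D Laplacian with negated diagonal
A = sp.diags([-1.0, -2.0, -1.0], [-1, 0, 1], shape=(6, 6)).tocsr()
F, d = ssor_factors(A, omega=1.0, modified=True)
M = (F @ sp.diags(1.0 / d) @ F.T).toarray()   # the (scaled) preconditioner
```

Note that taking absolute values costs a single pass over the diagonal, consistent with the claim that MSSOR adds no construction cost over SSOR.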
Since the coefficient matrix of the linear system (1.3) is neither positive definite nor Hermitian when the wavenumber is sufficiently large, the conjugate gradient (CG) method may break down. To solve complex symmetric linear systems, van der Vorst and Melissen proposed the conjugate orthogonal conjugate gradient (COCG) method, which can be regarded as an extension of the CG method.
To solve the linear system (1.3) efficiently, it is transformed into the preconditioned form (2.8) using the MSSOR preconditioner. The MSSOR preconditioned COCG (PCOCG) method can then be employed to solve the preconditioned linear system (2.8).
In the following, we give the MSSOR preconditioned COCG method for solving the linear system (2.8). The MSSOR preconditioned COCG (PCOCG) algorithm is described as follows.

Algorithm PCOCG: given an initial guess x_0,
(1) r_0 = b − A x_0;
(2) z_0 = M^{−1} r_0;
(3) p_0 = z_0, ρ_0 = r_0^T z_0;
(4) for k = 0, 1, 2, ...
(5) w_k = A p_k;
(6) σ_k = p_k^T w_k;
(7) if σ_k = 0, then quit (failure); otherwise α_k = ρ_k / σ_k;
(8) x_{k+1} = x_k + α_k p_k;
(9) r_{k+1} = r_k − α_k w_k;
(10) if r_{k+1} is accurate enough, then quit (convergence);
(11) z_{k+1} = M^{−1} r_{k+1};
(12) ρ_{k+1} = r_{k+1}^T z_{k+1}; if |ρ_{k+1}| is too small, then quit (failure);
(13) β_k = ρ_{k+1} / ρ_k, p_{k+1} = z_{k+1} + β_k p_k;
(14) end for k.
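A minimal runnable sketch of the algorithm above, assuming M_solve applies the inverse of the preconditioner to a vector; the unconjugated dot products (np.dot on complex vectors sums products without conjugation) implement the plain transpose required by COCG:

```python
import numpy as np

def pcocg(A, b, M_solve, tol=1e-8, maxit=500):
    """COCG for complex symmetric A (A equals its transpose, not its
    conjugate transpose), preconditioned by the solve routine M_solve."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_solve(r)
    p = z.copy()
    rho = np.dot(r, z)              # unconjugated dot product r^T z
    bnorm = np.linalg.norm(b)
    for _ in range(maxit):
        w = A @ p
        sigma = np.dot(p, w)        # p^T A p; breakdown test of line (7)
        if abs(sigma) < 1e-300:
            break                   # failure: p nonzero but p^T A p vanishes
        alpha = rho / sigma
        x = x + alpha * p
        r = r - alpha * w
        if np.linalg.norm(r) <= tol * bnorm:
            break                   # convergence
        z = M_solve(r)
        rho_new = np.dot(r, z)      # r^T z; breakdown test of line (12)
        if abs(rho_new) < 1e-300:
            break                   # failure: rho too small
        p = z + (rho_new / rho) * p
        rho = rho_new
    return x

# demo: small complex symmetric (non-Hermitian) system, identity preconditioner
n = 8
T = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A = T + 0.5j * np.eye(n)
b = np.ones(n, dtype=complex)
x = pcocg(A, b, M_solve=lambda r: r, tol=1e-10)
```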
It is not difficult to see that the main computation of algorithm PCOCG involves one matrix-vector multiplication and two triangular solves per iteration. These computations are very easy to implement. The main advantage is that the construction of the MSSOR preconditioner incurs no extra computational cost.
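The preconditioner application mentioned above amounts to a forward triangular solve, a diagonal scaling, and a backward triangular solve. A sketch, assuming the factored form M = F diag(1/d) F^T with F lower triangular (the names F, d, and the helper function are illustrative):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve_triangular

def apply_mssor_inverse(F, d, r):
    """Solve M z = r with M = F diag(1/d) F^T: forward solve, scale, back solve."""
    y = spsolve_triangular(F, r, lower=True)                 # F y = r
    return spsolve_triangular(sp.csr_matrix(F.T), d * y,     # F^T z = d .* y
                              lower=False)

# tiny real illustration (omega = 1): F = |D| + L for a shifted 1-D Laplacian
n = 5
A = sp.diags([-1.0, -2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()
d = np.abs(A.diagonal())
F = (sp.diags(d) + sp.tril(A, k=-1)).tocsr()
z = apply_mssor_inverse(F, d, np.ones(n))
```

Because only triangular solves and a diagonal scaling are involved, each preconditioner application costs about as much as a matrix-vector product with A.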
Note that the transpose (rather than the conjugate transpose) in all dot products of this algorithm is essential. Meanwhile, note that two different breakdowns of this algorithm may occur: one when ρ_{k+1} is too small although the solution exists (line 12), and the other when the search direction p_k is nonzero but p_k^T A p_k vanishes (line 7). Such breakdowns can be remedied to some extent by restarting the process, as in restarted GMRES. In actual computations for the Helmholtz equation, however, breakdown scarcely happens.
3. Numerical Experiments
In this section, some numerical experiments are given to compare the performance of the SSOR and MSSOR preconditioners for solving the Helmholtz equation.
Example 3.1. Consider the complex Helmholtz equation (3.1), where the coefficients are real constants. Discretizing (3.1) with the approach described in the introduction, we obtain the complex symmetric indefinite linear system (1.3); the constants are adjusted so that the discrete system is indefinite.
All tests are started from the zero vector and performed in MATLAB with machine precision. The COCG iteration terminates when the relative residual error satisfies the prescribed tolerance or when the iteration number exceeds 500.
In Tables 1, 2, 3, and 4, we present iteration results illustrating the convergence behavior of the COCG method preconditioned by SSOR and by MSSOR for the complex symmetric indefinite linear systems with different parameter values. In Tables 1–4, "CPU(s)" denotes the time (in seconds) required to solve a problem, and "IT" denotes the number of iterations.
From Tables 1–4, it is not difficult to see that, when the COCG method is preconditioned by SSOR and by MSSOR, the MSSOR preconditioner is more efficient in both iteration count and CPU time. That is, the MSSOR preconditioner outperforms the SSOR preconditioner and may be the preferred choice under the conditions tested.
4. Conclusions

In this paper, the MSSOR preconditioned COCG algorithm has been applied to the complex symmetric indefinite systems arising from Helmholtz equations. Owing to the reduction in iteration count and CPU time, the presented MSSOR preconditioner is feasible and effective. Without extra cost, the MSSOR preconditioner is more efficient than the SSOR preconditioner.
Acknowledgments

The authors are grateful to the referees and the editors for their helpful suggestions to improve the quality of this paper. This research was supported by the NSFC Tianyuan Mathematics Youth Fund (11026040).
Y. Saad, Iterative Methods for Sparse Linear Systems, PWS, Boston, Mass, USA, 1996.
J. Gozani, A. Nachshon, and E. Turkel, "Conjugate gradient coupled with multigrid for an indefinite problem," in Advances in Computer Methods for Partial Differential Equations, R. Vichnevetsky and R. S. Stepleman, Eds., vol. 5, pp. 425–427, IMACS, New Brunswick, NJ, USA, 1984.
A. L. Laird, "Preconditioned iterative solution of the 2D Helmholtz equation," First Year's Report 02/12, St Hugh's College, Oxford, UK, 2002.
M. Gander, "AILU for Helmholtz problems: a new preconditioner based on the analytic parabolic factorization," Journal of Computational Acoustics, vol. 9, no. 4, pp. 1499–1506, 2001.
F. Mazzia and R. A. McCoy, "Numerical experiments with a shifted SSOR preconditioner for symmetric matrices," Tech. Rep. TR/PA/98/12, CERFACS, 1998.
O. Axelsson, "A generalized SSOR method," BIT, vol. 13, pp. 443–467, 1972.
O. Axelsson, Iterative Solution Methods, Cambridge University Press, New York, NY, USA, 1995.
H. A. van der Vorst and J. B. M. Melissen, "A Petrov-Galerkin type method for solving Ax = b, where A is symmetric complex," IEEE Transactions on Magnetics, vol. 26, pp. 706–708, 1990.