The implementation and solution of large computational models have become a common effort in both engineering and the applied sciences. The accurate and cost-effective numerical solution to the sequence of linearized algebraic systems of equations arising from these models is often the most memory-demanding and time-consuming task. Direct methods for sparse linear systems are the de facto solvers in many commercial codes because of their robustness and reliability. However, these methods scale poorly with the matrix size, especially for three-dimensional problems. For such large systems, iterative methods based on Krylov subspaces can be a much more attractive option. A significant number of general-purpose Krylov subspace, or conjugate gradient-like, solvers were developed from the 1970s through the 1990s, and interest in these solvers is growing in many areas of engineering and scientific computing. Nonetheless, to become truly competitive with direct solvers, iterative methods typically require appropriate preconditioning to achieve convergence in a reasonable number of iterations and amount of time.

The term “preconditioning” refers to “the art of transforming a problem that appears intractable into another whose solution can be approximated rapidly” [L.N. Trefethen and D. Bau, Numerical Linear Algebra, SIAM, Philadelphia, 1997], while the “preconditioner” is the operator responsible for such a transformation. It is widely recognized that preconditioning is the key factor in increasing the robustness and computational efficiency of iterative methods. Generally speaking, there are three basic requirements for a good preconditioner: (i) the preconditioned matrix should have an eigenspectrum clustered away from 0; (ii) the preconditioner should be as cheap to compute as possible; (iii) its application to a vector should be cost-effective. It goes without saying that these are conflicting requirements, and an appropriate trade-off must be found for any specific problem at hand. Unfortunately, theoretical results are few, and somewhat “empirical” algorithms frequently work surprisingly well despite the lack of a rigorous foundation. This is why finding a good preconditioner for solving a sparse linear system can be viewed as “a combination of art and science” [Y. Saad, Iterative Methods for Sparse Linear Systems, SIAM, Philadelphia, 2003] rather than a rigorous mathematical exercise.
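As a concrete illustration of the transformation involved (a generic textbook formulation, not drawn from any particular paper in this issue), left preconditioning replaces the original system with an equivalent one,
$$
A x = b \quad\longrightarrow\quad M^{-1} A x = M^{-1} b ,
$$
where the preconditioner $M \approx A$ is chosen so that $M^{-1} A$ has the clustered eigenspectrum required by (i), while keeping the construction of $M$ (requirement (ii)) and the application of $M^{-1}$ to a vector (requirement (iii)) inexpensive. Right and split variants, such as $A M^{-1} y = b$ with $x = M^{-1} y$, follow the same idea.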

Research on the construction of effective preconditioners has grown significantly over the last two decades. Currently, preconditioning appears to be a much more active and promising research field than either direct or iterative solution methods, particularly in the context of the rapid evolution of hardware technology. On the one hand, this is due to the understanding that there are virtually no limits to the available options for obtaining a good preconditioner. On the other hand, it is also generally recognized that an optimal general-purpose preconditioner is unlikely to exist. There is undoubtedly fertile ground for research in improving the solution efficiency of a specific problem within a specific computing environment. From this viewpoint, knowledge of the governing physical processes, the structure of the resulting discrete system, and the prevailing computer technology are essential aspects that cannot be ignored when developing an appropriate preconditioning technique.

The present special issue of the Journal of Applied Mathematics is intended to provide some insight into efficient preconditioning techniques in computational engineering and sciences. This special issue should not be viewed as a comprehensive survey or an exhaustive compilation of all current trends in preconditioning research. However, the papers included in this volume cover a range of algorithms and applications, thus offering an overview of the recent advances achieved in this field and suggesting potential directions for future research.

The special issue contains 11 articles addressing the numerical solution to sparse linear systems and eigenproblems arising from different applications, including the Helmholtz equation in electromagnetics and seismology, steady and unsteady Navier-Stokes equations for incompressible flow, soil consolidation with nonassociated plasticity, optimal control problems governed by coupled elliptic-type equations, graph approaches for electric power networks, industrial processes and traffic models, and the equilibrium of multibody discrete systems. The linear systems and eigenproblems arising from these applications are addressed using (a) algebraic preconditioners, that is, general-purpose algorithms requiring knowledge of the coefficient matrix only, and (b) problem-specific preconditioners, that is, specialized methods exploiting the particular structure of the mathematical problem at hand. In group (a), novel formulations of the algebraic multigrid (AMG) method are proposed and tested, using aggregation and parallel wavelet techniques as smoothers in a parallel environment; modified symmetric successive overrelaxation (MSSOR) iterations are introduced in both electromagnetics and soil consolidation; and factorized sparse approximate inverse (FSAI) based algorithms are developed for large symmetric positive definite parallel eigenanalyses. In group (b), an important role is played by saddle-point matrices, which arise in several different applications. Such problems are typically addressed by specialized algorithms, which may in turn use algebraic preconditioners as a kernel. Three papers are devoted to the development and testing of methods for saddle-point matrices, using the Hermitian/skew-Hermitian splitting (HSS) approach, the semi-implicit method for pressure-linked equations (SIMPLE), and a relaxed splitting preconditioner. Finally, problem-specific approaches based on the mathematical structure of the native application are considered for the solution to optimal control problems discretized with spectral elements, the equilibrium of multibody discrete systems, and the observability of physical processes through a graph approach.
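As a purely illustrative sketch of how an algebraic preconditioner of type (a) is combined with a Krylov solver (this example is generic and not taken from any of the papers in the issue; the test matrix, the SciPy routines, and the Jacobi scaling are assumptions made only for the illustration), consider diagonal (Jacobi) preconditioning of the conjugate gradient method applied to a badly scaled symmetric positive definite system:

```python
# Illustrative only: Jacobi (diagonal) preconditioning of CG on a badly
# scaled SPD system built from a 2D Laplacian. The matrix and parameters
# are hypothetical choices, not taken from the special issue.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 50                                               # grid points per direction
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A0 = sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))   # SPD 2D Laplacian

rng = np.random.default_rng(0)
d = 10.0 ** rng.uniform(-1.0, 1.0, A0.shape[0])      # symmetric bad scaling
D = sp.diags(d)
A = (D @ A0 @ D).tocsr()
b = np.ones(A.shape[0])

# Jacobi preconditioner: M = diag(A), cheap to build and cheap to apply.
diag = A.diagonal()
M = spla.LinearOperator(A.shape, matvec=lambda v: v / diag)

def cg_iterations(M=None):
    # Count CG iterations via the callback, with or without preconditioning.
    count = [0]
    spla.cg(A, b, M=M, callback=lambda xk: count.__setitem__(0, count[0] + 1))
    return count[0]

print("CG iterations without preconditioning:", cg_iterations())
print("CG iterations with Jacobi scaling:    ", cg_iterations(M))
```

The iteration count is expected to drop substantially once the diagonal scaling is removed by the preconditioner, at a negligible setup and application cost; the papers in group (a) pursue the same trade-off with far more sophisticated algebraic constructions (AMG, MSSOR, FSAI).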

Acknowledgments

We would like to thank the authors for their valuable contributions and the anonymous reviewers for their thorough and insightful comments, which brought this special issue to fruition.

Massimiliano Ferronato
Edmond Chow
Kok-Kwang Phoon