
Journal of Electrical and Computer Engineering

Volume 2010 (2010), Article ID 972794, 33 pages

http://dx.doi.org/10.1155/2010/972794

## Milestones in the Development of Iterative Solution Methods

Owe Axelsson

Institute of Geonics AS CR, 70800 Ostrava, Czech Republic

Received 10 June 2010; Accepted 27 August 2010

Academic Editor: Christian Schlegel

Copyright © 2010 Owe Axelsson. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Iterative solution methods to solve linear systems of equations were originally formulated as basic iteration methods of defect-correction type, commonly referred to as Richardson's iteration method. These methods developed further into various versions of splitting methods, including the successive overrelaxation (SOR) method. Later, immensely important developments included convergence acceleration methods, such as the Chebyshev and conjugate gradient iteration methods, and preconditioning methods of various forms. A major aim has been to find methods with a total computational complexity of optimal order, that is, proportional to the number of degrees of freedom involved in the equation. Methods that have turned out to be particularly important for the further development of linear equation solvers are surveyed. Some of them are presented in greater detail.

#### 1. Introduction

In many applications of quite different types appearing in various sciences, engineering, and finance, large-scale linear algebraic systems of equations arise. A particular type of problem appears in signal processing. This also includes nonlinear systems of equations, which are normally solved by linearization at each outer nonlinear iteration step, but they will not be discussed further in this paper.

Owing to their high demands on computer memory and computing time, which can grow rapidly with increasing problem size, direct solution methods, such as Gaussian elimination, are in general not feasible unless the size of the problem is relatively small. In the early computer age, when the available size of computer central memories was very small and the speed of arithmetic operations slow, this was found to be the case even for quite modest-sized problems.

Even for modern computers with exceedingly large memories and very fast arithmetic, this is still an issue, because nowadays one wants to solve much more involved problems of much larger sizes, for instance to enable a sufficient resolution of partial differential equation problems with highly varying (material) coefficients, such as are found in heterogeneous media. Presently, problems with up to billions of degrees of freedom (d.o.f.) are solved. For instance, if an elliptic equation of elasticity type is discretized and solved on a 3D mesh with 512 meshpoints in each coordinate direction, then a system of such a size arises.

A basic iteration method to solve a linear system
$$Ax = b,$$
where $A$ is nonsingular, has the following form.

Given an initial approximation $x^0$, for $k = 0, 1, \ldots$ until convergence, let
$$r^k = b - Ax^k, \qquad x^{k+1} = x^k + \tau\, r^k .$$
Here, $\tau$ is a parameter to be chosen.

This method can be described either as a defect-correction method or, alternatively, as a method to compute the stationary solution of the evolution equation $dx/dt = b - Ax$ by timestepping with time-step $\tau$, that is,
$$x^{k+1} = x^k + \tau\,(b - Ax^k), \qquad k = 0, 1, \ldots .$$
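As a concrete illustration, the basic defect-correction iteration can be sketched as follows. The matrix, right-hand side, and value of $\tau$ are our own illustrative choices, not taken from the paper.

```python
def richardson(A, b, tau, x0, tol=1e-10, maxit=10000):
    """Basic defect-correction iteration x^{k+1} = x^k + tau*(b - A x^k)."""
    n = len(b)
    x = list(x0)
    for k in range(maxit):
        # residual (defect) r^k = b - A x^k
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        if max(abs(ri) for ri in r) < tol:
            return x, k
        x = [x[i] + tau * r[i] for i in range(n)]
    return x, maxit

# Small SPD model problem (hypothetical example):
A = [[2.0, -1.0, 0.0],
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 2.0]]
b = [1.0, 0.0, 1.0]   # exact solution is (1, 1, 1)
# The extreme eigenvalues of A are 2 - sqrt(2) and 2 + sqrt(2), so the
# optimal parameter is tau = 2/(lambda_min + lambda_max) = 0.5.
x, its = richardson(A, b, tau=0.5, x0=[0.0, 0.0, 0.0])
```

With the optimal $\tau$ the iteration converges here in a few dozen sweeps; a poorly chosen $\tau$ (e.g., $\tau \ge 2/\lambda_{\max}$) would stall or diverge.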

Such methods are commonly referred to as Richardson iteration methods (e.g., see [1–4]). However, already in 1823 Gauss [5] wrote, “Fast jeden Abend mache ich eine neue Auflage des Tableau, wo immer leicht nachzuhelfen ist. Bei der Einförmigkeit des Messungsgeschäfts gibt dies immer eine angenehme Unterhaltung; man sieht daran auch immer gleich, ob etwas Zweifelhaftes eingeschlichen ist, was noch wünschenswert bleibt usw. Ich empfehle Ihnen diesen Modus zur Nachahmung. Schwerlich werden Sie je wieder direct eliminieren, wenigstens nicht, wenn Sie mehr als zwei Unbekannte haben. Das indirecte Verfahren läßt sich halb im Schlafe ausführen oder man kann während desselben an andere Dingen denken.” (Freely translated, “I recommend this modus operandi. You will hardly eliminate directly anymore, at least not when you have more than two unknowns. The indirect method can be pursued while half asleep or while thinking about other things.”)

It holds that $e^{k+1} = (I - \tau A)\,e^k$, or $e^k = (I - \tau A)^k e^0$, where $e^k = x^k - x$ is the iteration error and $x$ is the solution of (1).

Hence,
$$\|e^k\| \le \|I - \tau A\|^k\, \|e^0\| .$$

For convergence of the method, that is, $e^k \to 0$, the parameter $\tau$ must in general be chosen such that $\|I - \tau A\| < 1$, where $\|\cdot\|$ is a matrix norm, subordinate to the chosen vector norm. (We remark here that this is not possible if $A$ is indefinite.)

Let $\rho(\cdot)$ denote the spectral radius of a matrix, that is, the maximal absolute value of the eigenvalues of the matrix.

If $A$ is self-adjoint, then it can be shown that $\|A\|_2 = \rho(A)$, where $\|\cdot\|_2$ denotes the matrix norm subordinate to the Euclidean vector norm. For general, nonsymmetric matrices it has been shown (e.g., see [6, page 162]) that there exist matrix norms that are arbitrarily close to the spectral radius. These can, however, correspond to an unnatural scaling of the matrix.

The rate of convergence is determined by the convergence factor $\|I - \tau A\|$. For symmetric positive definite matrices, the optimal value of $\tau$ to minimize $\|I - \tau A\|_2$ is $\tau = 2/(\lambda_1 + \lambda_n)$, where $\lambda_1$, $\lambda_n$ are the extreme eigenvalues of $A$. Normally, however, the eigenvalues are not available.

As an example, for second-order elliptic diffusion-type problems, using a standard central difference or a finite element method, the spectral condition number $\kappa(A) = \lambda_n/\lambda_1$ is of order $O(h^{-2})$, where $h$ is the (constant) meshsize parameter. Hence, the number of iterations to reach a relative accuracy $\varepsilon$ is of order $O(h^{-2}\ln(1/\varepsilon))$.
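The $O(h^{-2})$ growth can be checked numerically for the one-dimensional model Laplacian (our illustrative stand-in for the elliptic example; the eigenvalue formula below is the standard one for `tridiag(-1, 2, -1)`):

```python
import math

def kappa_1d_laplacian(n):
    """Spectral condition number of the n x n matrix tridiag(-1, 2, -1),
    whose eigenvalues are 4*sin^2(k*pi/(2*(n+1))), k = 1..n."""
    lam = lambda k: 4.0 * math.sin(k * math.pi / (2.0 * (n + 1))) ** 2
    return lam(n) / lam(1)

# Halving the meshsize h = 1/(n+1) roughly quadruples the condition number.
k100 = kappa_1d_laplacian(100)
k200 = kappa_1d_laplacian(201)   # n = 201 gives h exactly half of n = 100
ratio = k200 / k100
```

The ratio is close to 4, consistent with $\kappa = O(h^{-2})$.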

Since each iteration uses $O(n)$ elementary arithmetic operations, where $n$ is the number of unknowns, this shows that the total number of operations needed to reduce the error to a given tolerance is of order $O(n\,h^{-2}\ln(1/\varepsilon))$. This is in general smaller than for a direct solution method when $n$ is large, but still far from the optimal order, $O(n)$, that we aim at.

To improve on this, often a splitting of the matrix is used. It is readily shown that for any initial vector, the number of iterations required to get a relative residual $\|r^k\|/\|r^0\| \le \varepsilon$, for some $\varepsilon$, $0 < \varepsilon < 1$, is at most $[\ln(1/\varepsilon)/\ln(1/\varrho)] + 1$, where $[\cdot]$ denotes the integer part and $\varrho$ is the convergence factor. Frequently, $\varrho = 1 - c\,h^p$, where $c$ is a constant, $p$ is a positive integer, often $p = 2$, and $h$ is a small number, typically the meshsize, which decreases with increasing problem size $n$. This implies that the number of iterations is proportional to $h^{-p}\ln(1/\varepsilon)$, which number increases rapidly when $h \to 0$.

For this purpose, the splitting of $A$ in two terms, $A = C - R$, is used, where $C$ is nonsingular. The iterative method (4) then takes the form
$$C\,x^{k+1} = R\,x^k + b, \qquad k = 0, 1, \ldots .$$
Method (7) is convergent if $\rho(C^{-1}R) < 1$. Splitting methods will be discussed in Section 2.

Let $\sigma = \|C^{-1}R\|$. If $\sigma$ is known and $\sigma < 1$, we can use the following theorem to get a test when the iteration error is small enough, that is, when to stop the iterations.

Theorem 1. *Let $\sigma < 1$, $e^k = x^k - x$, and let $x^k$ be defined by (7). Then,
$$\|e^k\| \le \frac{\sigma}{1 - \sigma}\,\|x^k - x^{k-1}\| .$$
*

*Proof. *From (7) follows $e^k = C^{-1}R\,e^{k-1}$ and, by recursion,
$$e^k = (C^{-1}R)^k\, e^0 .$$
Note now that $e^{k-1} = (x^{k-1} - x^k) + e^k$. Hence, by the triangle inequality and (9),
$$\|e^{k-1}\| \le \|x^k - x^{k-1}\| + \sigma\,\|e^{k-1}\|, \quad\text{that is,}\quad \|e^{k-1}\| \le \frac{1}{1-\sigma}\,\|x^k - x^{k-1}\| .$$
Letting $\|e^k\| \le \sigma\,\|e^{k-1}\|$ and noting that $\sigma < 1$, (8) follows.
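The stopping test of Theorem 1 can be checked numerically for a Jacobi splitting ($C$ = the diagonal of $A$); the matrix and right-hand side below are our own illustrative choices:

```python
def jacobi_step(A, b, x):
    """One step of the splitting iteration C x_new = R x + b with C = diag(A)."""
    n = len(b)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

# Diagonally dominant SPD example; exact solution is (1, 1, 1).
A = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [3.0, 2.0, 3.0]
sigma = 0.5   # infinity-norm of C^{-1}R: max row sum of |off-diagonal|/diagonal
x_exact = [1.0, 1.0, 1.0]

x = [0.0, 0.0, 0.0]
bounds_hold = True
for k in range(30):
    x_new = jacobi_step(A, b, x)
    err = max(abs(x_new[i] - x_exact[i]) for i in range(3))
    # a posteriori bound of Theorem 1: ||e^k|| <= sigma/(1-sigma)*||x^k - x^{k-1}||
    est = sigma / (1.0 - sigma) * max(abs(x_new[i] - x[i]) for i in range(3))
    bounds_hold = bounds_hold and (err <= est + 1e-14)
    x = x_new
```

At every step the computable quantity `est` bounds the true (normally unknown) error, which is what makes the test usable as a stopping criterion.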

The basic iteration method (4) or the splitting methods in Section 2, can be improved in various ways. This will be the major topic of this paper.

Note first that application of the splitting in (7) requires in general that the matrix is given in explicit form, which can make the method less viable.

The most natural way to improve (4) is to introduce an approximation $C$ of $A$, to be used when the correction in (4) is computed. The relation $x^{k+1} = x^k + \tau r^k$ is then replaced by
$$C\,(x^{k+1} - x^k) = \tau\, r^k .$$

Such a matrix $C$ is mostly called a preconditioner since, by a proper choice, it can significantly improve the condition number, that is, $\kappa(C^{-1}A) \ll \kappa(A)$, where $\kappa(\cdot)$ denotes the spectral condition number.

Clearly, in practice, the matrix $C$ must be chosen such that linear systems with $C$ can be solved with relatively little expense compared to a solution method for $A$. In particular, the expense of solving with $C$ must be much smaller than that of solving the given system using a direct solution method.

For badly scaled matrices a simple, but often practically useful, choice of $C$ is the (block) diagonal part of $A$. Much more efficient choices will be discussed later in the paper.
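The effect of diagonal scaling on a badly scaled matrix can be seen on a $2 \times 2$ example (our own construction), where the eigenvalues of a symmetric $2 \times 2$ matrix are available in closed form:

```python
import math

def eig2(a, b, c):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    m = 0.5 * (a + c)
    d = math.sqrt((0.5 * (a - c)) ** 2 + b * b)
    return m - d, m + d

# badly scaled SPD matrix (hypothetical example)
a11, a12, a22 = 1.0, 0.5, 1.0e4
lo, hi = eig2(a11, a12, a22)
kappa_A = hi / lo

# symmetrically scaled matrix D^{-1/2} A D^{-1/2} has unit diagonal
s12 = a12 / math.sqrt(a11 * a22)
lo2, hi2 = eig2(1.0, s12, 1.0)
kappa_scaled = hi2 / lo2
```

The condition number drops from about $10^4$ to about $1.01$; the bad conditioning here was purely a scaling artifact, which is exactly the situation the diagonal preconditioner cures.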

Early suggestions to use such a matrix can be found in papers by D'Yakonov [7] and Gunn [8]. For an automatic scaling procedure, see [9] and references therein.

In the present paper, we will survey various choices of $C$ which have proven to be useful in practice. The paper attempts to give a more personal account of the development of iterative solution methods. It is not our ambition to present the current state-of-the-art but rather to describe the unfolding of the field.

In the remainder of the paper, we discuss, in order, methods based on splitting of the given matrix, the accelerated iterative methods of the Chebyshev and (generalized) conjugate gradient types, pointwise and block incomplete factorization preconditioning methods, symmetrized preconditioners of SSOR and ADI type, approximate inverses, and augmented subspace preconditioning methods. If space had allowed it, it would have been followed by presentation of geometric and algebraic multigrid methods, two-level and multilevel methods, elementwise preconditioning methods, and domain decomposition methods. Also, iteration error estimates and influence of rounding errors, and preconditioners for matrices of saddle point type would have been included. The paper ends with some concluding remarks.

The following notations will be used: if $A = [a_{ij}]$, then $A^T$ denotes the transpose of $A$, while $A^*$ denotes the Hermitian transpose.

#### 2. Splitting Methods

A comprehensive, early presentation of splitting methods, and much more on iterative solution methods, is found in Varga [10]. Often there is a natural splitting of the given matrix as
$$A = C - R,$$

where $C$ is nonsingular. This can be used to formulate an iterative solution method in the form
$$C\,x^{k+1} = R\,x^k + b, \qquad k = 0, 1, \ldots .$$

This method converges if $\rho(C^{-1}R) < 1$.

*Definition 1. *(a) A matrix $A$ is said to be *monotone* if $A$ is nonsingular and $A^{-1} \ge 0$ (componentwise).

(b) $A = C - R$ is called a *regular splitting* [10], if $C$ is monotone and $R \ge 0$.

(c) a *weak regular splitting* [11], if $C$ is monotone and $C^{-1}R \ge 0$.

(d) a *nonnegative splitting* [12], if $C$ is nonsingular and $C^{-1}R \ge 0$.

The following holds, see, for example, [6].

Theorem 2. * Let be a nonnegative splitting of . Then, the following properties are equivalent: *(a)*, that is, is a convergent splitting, *(b)* is monotone, *(c)* is nonsingular and .*(d)* is nonsingular and , where .*

Corollary 1. *If $A = C - R$ is a weak regular splitting, then the splitting is convergent if and only if $A$ is monotone.*

*Proof. *(see, e.g., [6]).

A splitting method that became popular in the fifties is the SOR method. Here, $A = D - L - U$, where $D$ is the (block) diagonal and $-L$, $-U$ are the (block) strictly lower and upper triangular parts of $A$, respectively. The successive relaxation method takes the form
$$(D - \omega L)\,x^{k+1} = \big[(1 - \omega)D + \omega U\big]\,x^k + \omega b, \qquad k = 0, 1, \ldots,$$
where $\omega$ is a parameter, called the relaxation parameter. For $\omega = 1$ one gets the familiar Gauss-Seidel method (Gauss 1823 [5] and Seidel 1874 [13]) and for $\omega > 1$ the successive overrelaxation (SOR) method (Frankel 1950 [14] and Young 1950 [15]).
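A componentwise SOR sweep can be sketched as follows, comparing Gauss-Seidel ($\omega = 1$) with the relaxed version on a small model problem (matrix, right-hand side, and the optimal $\omega$ formula for this matrix are our own illustrative choices):

```python
import math

def sor(A, b, omega, x0, tol=1e-10, maxit=10000):
    """SOR sweep: x_i <- (1-w)*x_i + w*(b_i - sum_{j != i} a_ij x_j)/a_ii."""
    n = len(b)
    x = list(x0)
    for k in range(1, maxit + 1):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - s) / A[i][i]
        res = max(abs(b[i] - sum(A[i][j] * x[j] for j in range(n)))
                  for i in range(n))
        if res < tol:
            return x, k
    return x, maxit

# Model problem tridiag(-1, 2, -1); exact solution is (1, 1, 1).
A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
b = [1.0, 0.0, 1.0]
mu = math.cos(math.pi / 4)          # spectral radius of the Jacobi matrix here
w_opt = 2.0 / (1.0 + math.sqrt(1.0 - mu * mu))
x_gs, its_gs = sor(A, b, 1.0, [0.0] * 3)      # Gauss-Seidel
x_opt, its_opt = sor(A, b, w_opt, [0.0] * 3)  # optimally relaxed SOR
```

Even on this tiny problem the optimally relaxed sweep needs noticeably fewer iterations than Gauss-Seidel; on fine meshes the gap becomes dramatic ($O(h^{-1})$ versus $O(h^{-2})$ iterations).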

For the iteration matrix
$$\mathcal{L}_\omega = (D - \omega L)^{-1}\big[(1 - \omega)D + \omega U\big]$$
in (14), it holds that $\rho(\mathcal{L}_\omega) \ge |\omega - 1|$, where the bound is sharp. Therefore, the relaxation method is divergent for $\omega \le 0$ and for $\omega \ge 2$ (see, e.g., [6, 16]).

An optimal value of $\omega$ can be determined as follows. Assume that $A$ has property $(A^\pi)$, that is, there exists a permutation matrix $P$ such that $PAP^T$ is a block tridiagonal matrix. The following lemma holds.

Lemma 1 (see [15]). *Assume that $A$ has property $(A^\pi)$ and let $\omega \ne 0$. Let $B = D^{-1}(L + U)$ be the (block) Jacobi iteration matrix. Then,*(a)*if $\mu$ is an eigenvalue of $B$ and $\lambda$ satisfies
$$(\lambda + \omega - 1)^2 = \lambda\,\omega^2\mu^2,$$
then $\lambda$ is an eigenvalue of $\mathcal{L}_\omega$,*(b)*if $\lambda$ is an eigenvalue of $\mathcal{L}_\omega$ and $\mu$ satisfies
$$(\lambda + \omega - 1)^2 = \lambda\,\omega^2\mu^2,$$
then $\mu$ is an eigenvalue of $B$.*

*Proof. *For a short proof, see [6].

Theorem 3. * Assume that *(a)* $A$ has property $(A^\pi)$, and *(b)*the block Jacobi matrix $B$ has only real eigenvalues. ** Then, the SOR method converges for any initial vector if and only if $\bar\mu = \rho(B) < 1$ and $0 < \omega < 2$. Further, we have
$$\omega_{\mathrm{opt}} = \frac{2}{1 + \sqrt{1 - \bar\mu^2}},$$
for which the asymptotic convergence factor is given as
$$\rho(\mathcal{L}_{\omega_{\mathrm{opt}}}) = \omega_{\mathrm{opt}} - 1 = \left(\frac{\bar\mu}{1 + \sqrt{1 - \bar\mu^2}}\right)^{2} .$$
*

*Proof. *For a short proof, see [6].

The eigenvalues of $\mathcal{L}_\omega$ are in general complex, and for the optimal $\omega$ it can be shown that they are distributed around a circle in the complex plane. This implies that the method cannot be polynomially accelerated. (See Section 3 for a presentation of polynomial acceleration methods.) Further, the efficiency of the SOR method turns out to be critically dependent on the choice of $\omega$.

A result similar to Theorem 3 has been shown in [6] (see also [17]) that holds even if $A$ does not have property $(A^\pi)$ but is Hermitian.

Theorem 4. *Let be Hermitian and positive definite and let
**
where , and let . Then,
**
where
**
Further, if , then
**
minimizes the upper bound of and we have
*

For a proof, see [6].

In Section 4, we present a symmetric version of the SOR method where acceleration is possible.

#### 3. Accelerated Iterative Methods

In this section, the important Chebyshev and conjugate gradient iteration methods are presented.

Consider first the iterative method (4) with variable time-steps $\tau_k$,
$$x^{k+1} = x^k + \tau_k\,(b - Ax^k), \qquad k = 0, 1, \ldots .$$
Here, $\{\tau_k\}$ is a sequence of iteration (acceleration) parameters. If $\tau_k = \tau$, $k = 0, 1, \ldots$, we talk about a stationary iterative method, otherwise about a nonstationary or semiiterative method.

Let $e^k = x^k - x$, the iteration error. Then, it follows from (25) that $e^{k+1} = (I - \tau_k A)\,e^k$, $k = 0, 1, \ldots$, so
$$e^k = \prod_{j=0}^{k-1}(I - \tau_j A)\; e^0 = P_k(A)\,e^0$$
(and $r^k = P_k(A)\,r^0$). Here, $P_k(\lambda) = \prod_{j=0}^{k-1}(1 - \tau_j\lambda)$ is a polynomial of degree $k$ having zeros at $\lambda = 1/\tau_j$ and satisfying $P_k(0) = 1$.

We want to choose the parameters $\tau_j$ such that $\|e^k\|$ is minimized. However, this would mean that in general the parameters would depend on $e^0$, which is not known. Also, the eigenvalues of $A$ are not known. We then take the approach of minimizing over the whole spectrum; that is, we want to minimize $\max_{\lambda \in \lambda(A)} |P_k(\lambda)|$.

##### 3.1. The Chebyshev Iterative Method

In case the eigenvalues of $A$ are real and positive, and if a positive lower bound $a$ and an upper bound $b$ of the spectrum are known, then we see that $P_k$ should be chosen such that $\max_{a \le \lambda \le b}|P_k(\lambda)|$ is minimized over all $P_k \in \pi_k^0$, that is, over the set of polynomials of degree $k$ satisfying $P_k(0) = 1$.

The solution to this min-max problem is well known,
$$P_k(\lambda) = T_k\!\left(\frac{b + a - 2\lambda}{b - a}\right)\Big/\, T_k\!\left(\frac{b + a}{b - a}\right),$$
where $T_k$ are the Chebyshev polynomials of the first kind. The corresponding values of $\tau_j$ satisfy
$$\frac{1}{\tau_j} = \frac{b + a}{2} + \frac{b - a}{2}\cos\frac{(2j - 1)\pi}{2k}, \qquad j = 1, \ldots, k,$$
which are the zeros of the polynomial. The corresponding method is referred to as the Chebyshev (one-step) acceleration method, see, for example, [10, 18]. It is an easy matter to show that
$$\max_{a \le \lambda \le b}|P_k(\lambda)| = 1\Big/\, T_k\!\left(\frac{b + a}{b - a}\right) \le 2\left(\frac{1 - \sqrt{a/b}}{1 + \sqrt{a/b}}\right)^{k} .$$

This implies that $\max_{a \le \lambda \le b}|P_k(\lambda)| \le \varepsilon$ if the number of iterations satisfies $k \ge \frac{1}{2}\sqrt{b/a}\,\ln(2/\varepsilon)$.

The disadvantage with this method is that to make it effective one needs accurate estimates of and , and we need to determine beforehand (which, however, can be done by (29)). The method cannot utilize any special distribution of the eigenvalues in the spectrum (as opposed to the conjugate gradient method, see below). More important, however, is that this method is actually numerically unstable (similarly to an explicit time stepping method for initial value problems when the time steps are too large). This is due to the fact that is much larger than unity for several of the values . However, one may prove that with some particular permutation of the parameters, their instability effect can be avoided.

There is an alternative to the choice (25). Namely, there exists a three term form of the Chebyshev acceleration method

where .

Here, the parameters are chosen as ,

Hence, we do not have to determine the number of steps beforehand. More importantly, it has been shown in [18] that this method is numerically stable. (For some related remarks, see [6]). A similar form of the method was proposed a long time ago, see Golub and Varga [19] and the references cited therein.
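The three-term recurrence can be sketched as follows. The paper's exact parameter formulas are not reproduced above, so this follows one standard formulation found in textbook treatments of Chebyshev iteration; the test matrix and spectrum bounds are our own illustrative choices.

```python
import math

def chebyshev(A, b, lmin, lmax, x0, maxit=200):
    """Three-term Chebyshev iteration for A x = b, given bounds
    0 < lmin <= lambda(A) <= lmax (one standard formulation)."""
    n = len(b)
    mv = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    theta = 0.5 * (lmax + lmin)   # center of the spectral interval
    delta = 0.5 * (lmax - lmin)   # half-width
    sigma1 = theta / delta
    rho = 1.0 / sigma1
    x = list(x0)
    r = [b[i] - yi for i, yi in enumerate(mv(x))]
    d = [ri / theta for ri in r]
    for _ in range(maxit):
        x = [x[i] + d[i] for i in range(n)]
        r = [b[i] - yi for i, yi in enumerate(mv(x))]
        rho_new = 1.0 / (2.0 * sigma1 - rho)
        # three-term update of the correction direction
        d = [rho_new * rho * d[i] + (2.0 * rho_new / delta) * r[i]
             for i in range(n)]
        rho = rho_new
    return x

A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
b = [1.0, 0.0, 1.0]   # exact solution (1, 1, 1)
x = chebyshev(A, b, 2.0 - math.sqrt(2.0), 2.0 + math.sqrt(2.0),
              [0.0] * 3, maxit=60)
```

Note that, in line with the text, no inner products are needed and the number of steps need not be fixed in advance; only the spectrum bounds enter the recurrence.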

It is interesting to note that the parameters approach stationary values. If the eigenvalues of the iteration matrix are contained in $[-\varrho, \varrho]$, where $\varrho$ is the spectral radius, then the stationary value coincides with the parameter of the optimal SOR method (see Section 2). Young [20] has proven that the asymptotic rate of convergence is retained even if one uses the stationary values throughout the iterations.

For the case of complex eigenvalues of the matrix with positive real parts and contained in an ellipse, one may choose the parameters similarly. See [6, 21, 22] for details. For comments on the optimality of the method, see [23]. For application of the method to nonsymmetric problems, see [6, 24].

Perhaps the main thrust during the 1970s has been in using the conjugate gradient method as an acceleration method. Already much has been written on the subject; we refer to [25–27] for a historical account, to [18, 28–33] for an exposition of the preconditioned conjugate gradient (PCG) method, and to [18, 34] for a survey of generalized and truncated gradient methods for nonsymmetric and indefinite matrix problems.

The advantage with conjugate gradient methods is that they are self-adaptive; the optimal parameters are calculated by the algorithm so that the error in energy norm is minimized. This applies to a problem where $A$ and $C$ are symmetric and positive definite (SPD) or, more generally, if the preconditioned matrix is similarity equivalent to an SPD matrix. Hence, there is no need to know any bounds for the spectrum. Since the method converges at least as fast as the Chebyshev method, the number of iterations to reach a relative accuracy $\varepsilon$ is at most $O(\sqrt{\kappa}\,\ln(2/\varepsilon))$.

We describe now the conjugate gradient method. Thereby we follow the presentations in [29, 35].

##### 3.2. The Preconditioned Conjugate Gradient Method

During the past 40 years or so, the preconditioned conjugate gradient method has become the major iterative solution method for linear systems of algebraic equations, in particular those arising in science and engineering. The author of these notes became interested in the method by the beginning of 1970 (cf. [18]).

The conjugate gradient algorithm to solve a system of linear equations $Ax = b$, where $A$ is symmetric and positive definite, was originally introduced by Hestenes and Stiefel [25] in 1952. Before we discuss the properties of the CG method, we describe its implementation. Namely, the algorithm consists of the steps in Algorithm 1.

What one sees from a first glance is that the CG algorithm is quite simple. Each iteration consists of one matrix-vector multiplication, two vector updates and two scalar products. Apart from the initial guess (which can be taken to be the zero vector) and stopping tolerance, there are no other method parameters to be determined or tuned by the user. Thus, the method is easily programmable, cheap in terms of arithmetic operations and performs as a black box.
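Algorithm 1 is not reproduced in this copy, so the following sketch gives the standard textbook form of the Hestenes-Stiefel algorithm; the test problem is our own small example.

```python
def cg(A, b, x0, tol=1e-12, maxit=1000):
    """Conjugate gradient method for SPD A (standard textbook form)."""
    n = len(b)
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    mv = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = list(x0)
    r = [b[i] - yi for i, yi in enumerate(mv(x))]   # residual r = b - A x
    d = list(r)                                     # first search direction
    rr = dot(r, r)
    for k in range(1, maxit + 1):
        Ad = mv(d)
        alpha = rr / dot(d, Ad)                     # exact line search
        x = [x[i] + alpha * d[i] for i in range(n)]
        r = [r[i] - alpha * Ad[i] for i in range(n)]
        rr_new = dot(r, r)
        if rr_new ** 0.5 < tol:
            return x, k
        beta = rr_new / rr         # makes the new direction A-orthogonal
        d = [r[i] + beta * d[i] for i in range(n)]
        rr = rr_new
    return x, maxit

A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
b = [1.0, 0.0, 1.0]   # exact solution (1, 1, 1)
x, its = cg(A, b, [0.0] * 3)
```

As the text notes: one matrix-vector product, a few vector updates, two inner products per step, and no user-tuned parameters.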

For some problems the standard (unpreconditioned) CG method performs impressively well and this can be explained by some particular properties of this powerful algorithm.

The CG method is best described as a method to minimize the quadratic functional
$$f(x) = \tfrac{1}{2}\,x^T A x - b^T x$$
over a set of vectors. If $A$ is nonsingular, then we can rewrite $f$ in the form
$$f(x) = \tfrac{1}{2}\,(x - A^{-1}b)^T A\,(x - A^{-1}b) - \tfrac{1}{2}\,b^T A^{-1} b,$$
so, minimizing the quadratic functional is equivalent to solving the system $Ax = b$. If $A$ is singular and $A^{-1}$ in (35) is replaced by a generalized inverse of $A$, then the above equivalence still holds if the minimization takes place on a subspace in the orthogonal complement to the null-space of $A$.

Given an initial approximation $x^0$ and the corresponding residual $r^0 = b - Ax^0$, the minimization in the conjugate gradient method takes place successively on a subspace
$$\mathcal{K}_k = \mathrm{span}\{r^0, Ar^0, \ldots, A^{k-1}r^0\}$$
of growing dimension. This subspace is referred to as the *Krylov set. *

In the derivation of the algorithm, the next approximate solution is constructed as
$$x^{k+1} = x^k + \alpha_k d^k,$$
where $\alpha_k$ is chosen as the value of $\alpha$

which minimizes the function $f(x^k + \alpha d^k)$, $\alpha \in \mathbb{R}$. Also, the gradient of $f$ at $x^{k+1}$ is made orthogonal to the search direction $d^k$. This is seen from the relation
$$(r^{k+1}, d^k) = 0, \qquad r^{k+1} = b - Ax^{k+1} = -\nabla f(x^{k+1}) .$$

As in Fourier-type minimization methods, it turns out to be efficient to work with orthogonal ($A$-orthogonal) search directions which, since $A$ is symmetric, can be determined from a three-term recursion,

or equivalently, from
$$d^{k+1} = r^{k+1} + \beta_k d^k, \qquad d^0 = r^0 .$$

This recursive choice of search directions is done so that at each step the solution has smallest error in the $A$-norm, $\|e^k\|_A = \big((e^k)^T A\, e^k\big)^{1/2}$, where $e^k = x - x^k$ is the iteration error. As mentioned, the minimization takes place over the set of (Krylov) vectors and, as is readily seen,
$$\mathcal{K}_k = \mathrm{span}\{r^0, Ar^0, \ldots, A^{k-1}r^0\} = \mathrm{span}\{d^0, d^1, \ldots, d^{k-1}\} .$$

To summarize, the CG method possesses the following remarkable properties.

Theorem 5. *Let the CG Algorithm 1 be applied to a symmetric positive definite matrix $A$. Then, in exact arithmetic the following properties hold: *(1)*the iteratively constructed residuals are mutually orthogonal, that is, $(r^i, r^j) = 0$, $i \ne j$;*(2)*the search directions are $A$-orthogonal (or conjugate), that is, $(d^i, A d^j) = 0$, $i \ne j$; *(3)*as long as the method has not converged, that is, $r^k \ne 0$, the algorithm proceeds with no breakdown and (42) holds; *(4)*as long as the method has not converged, the newly constructed approximation $x^{k+1}$ is the unique point in $x^0 + \mathcal{K}_{k+1}$ that minimizes $f$; *(5)*the convergence is monotone in the $A$-norm, that is, $\|e^{k+1}\|_A \le \|e^k\|_A$, and $e^k = 0$ will be achieved for some $k \le n$.*

For a proof of the above theorem consult, for instance, [29] or [6].

Since the method is optimal, that is, it gives the smallest error on a subspace of growing dimension, it *terminates* with the exact solution (ignoring round-off errors) in at most $n$ steps (the dimension of the whole vector space). In fact, it can be readily seen that the CG algorithm terminates after $m$ steps, where $m$ is the degree of the minimal polynomial of $A$ with respect to the initial residual vector; in other words, $m$ is the smallest degree of all polynomials $Q$ for which $Q(A)\,r^0 = 0$. Therefore, the CG method can be viewed also as a direct solution method. However, in practice we want convergence to occur to an acceptable accuracy in much fewer steps than $n$ or $m$. Thus, we use CG as an iterative method.

For further discussions of the CG methods, see [6, 34, 35].

When one experiments with CG to solve systems with various matrices one observes some phenomena which need special attention. This can be illustrated by a simple example.

Consider the solution of by the standard conjugate gradient, where

The exact solution is $x = A^{-1}b$. Starting with $x^0 = 0$, one finds that after $k$ iterations only the first $k$ components of the iterate have changed. Hence, the information travels one step at a time from left to right and it takes $n$ steps before the last component has changed at all. The algorithm converges exactly in $n$ steps and terminates due to the finite recurrence property of the method.
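The one-step-at-a-time propagation can be observed directly. The example matrix is elided in this copy, so the sketch below assumes the classic choice used for this illustration: the 1D Laplacian `tridiag(-1, 2, -1)` with right-hand side $e_1$.

```python
def cg_history(A, b, steps):
    """Run unpreconditioned CG (x0 = 0) for a fixed number of steps,
    recording the iterate after each step."""
    n = len(b)
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    mv = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = [0.0] * n
    r = list(b)            # x0 = 0, so r0 = b
    d = list(r)
    rr = dot(r, r)
    history = []
    for _ in range(steps):
        Ad = mv(d)
        alpha = rr / dot(d, Ad)
        x = [x[i] + alpha * d[i] for i in range(n)]
        r = [r[i] - alpha * Ad[i] for i in range(n)]
        rr_new = dot(r, r)
        history.append(list(x))
        if rr_new == 0.0:
            break
        d = [r[i] + (rr_new / rr) * d[i] for i in range(n)]
        rr = rr_new
    return history

# assumed illustration matrix: 1D Laplacian with b = e_1
n = 5
A = [[2.0 if i == j else (-1.0 if abs(i - j) == 1 else 0.0) for j in range(n)]
     for i in range(n)]
b = [1.0] + [0.0] * (n - 1)
hist = cg_history(A, b, n)
```

Because $A$ is tridiagonal and $r^0 = e_1$, the Krylov vectors gain exactly one new nonzero component per step, so after $k$ steps the components $k+1, \ldots, n$ of the iterate are still exactly zero.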

Another detail one observes is that the norm of the error, $\|x - x^k\|$, can be much larger than the norm of the iteratively computed residual.

These examples illustrate the fact that although the method has an optimal order of convergence rate in the *energy norm*, its actual convergence rate in spectral norm can be different and depends both on the distribution of the eigenvalues of the (preconditioned) system matrix and on the initial approximation (or residual). (For comparison, we note that the rate of convergence for steepest descent depends only on the ratio of the extremal eigenvalues of .) Faster convergence for the CG method is expected when the eigenvalues are clustered.

One way to get a better eigenvalue distribution is to precondition $A$ by a proper preconditioner $C$. Hence, in order to achieve a better eigenvalue distribution it is crucial in practice to use some form of preconditioning, that is, a matrix $C$ which approximates $A$ in some sense, with which it is relatively cheap to solve systems, and for which the spectrum of $C^{-1}A$ (equivalently, of $C^{-1/2}AC^{-1/2}$ if $C$ is s.p.d.) is more favorable for the convergence of the CG method. As it turns out, if $C$ is symmetric and positive definite, the corresponding preconditioned version, the PCG method, is best derived by replacing the inner product with the one induced by $C$. It takes the following form, see Algorithm 2.

Here, $z^k = C^{-1}r^k$ denotes the action of $C^{-1}$, that is, one does not multiply with the inverse matrix $C^{-1}$, but normally solves a linear system with matrix $C$.
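Since Algorithm 2 is not reproduced in this copy, the sketch below gives the standard PCG form, with the preconditioner action passed as a solve routine (the badly scaled test matrix and the Jacobi preconditioner are our own illustrative choices):

```python
def pcg(A, b, solve_C, x0, tol=1e-9, maxit=1000):
    """Preconditioned CG; solve_C(r) returns z = C^{-1} r (a solve, not a
    multiplication with an explicitly inverted matrix)."""
    n = len(b)
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    mv = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = list(x0)
    r = [b[i] - yi for i, yi in enumerate(mv(x))]
    z = solve_C(r)                       # preconditioned residual
    d = list(z)
    rz = dot(r, z)
    for k in range(1, maxit + 1):
        Ad = mv(d)
        alpha = rz / dot(d, Ad)
        x = [x[i] + alpha * d[i] for i in range(n)]
        r = [r[i] - alpha * Ad[i] for i in range(n)]
        if max(abs(ri) for ri in r) < tol:
            return x, k
        z = solve_C(r)
        rz_new = dot(r, z)
        beta = rz_new / rz
        d = [z[i] + beta * d[i] for i in range(n)]
        rz = rz_new
    return x, maxit

# badly scaled SPD system (illustrative), diagonal (Jacobi) preconditioner
A = [[1.0, 0.2, 0.0], [0.2, 100.0, 0.3], [0.0, 0.3, 10000.0]]
b = [1.2, 100.5, 10000.3]   # chosen so that the exact solution is (1, 1, 1)
jacobi = lambda r: [r[i] / A[i][i] for i in range(3)]
x, its = pcg(A, b, jacobi, [0.0] * 3)
```

The only changes relative to plain CG are the extra solve with $C$ per step and the inner products $(r, z)$ replacing $(r, r)$.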

In order to understand what is wanted of a *good* preconditioning matrix, we discuss first some issues of major importance related to the rate of convergence of the CG method. Thereby it becomes clear that the standard spectral condition number is often too simple to explain the detailed convergence behaviour. In particular we discuss the sub- and superlinear convergence phases frequently observed in the convergence history of the conjugate gradient method.

A preconditioner can be applied in two different manners, namely, by solving systems with a matrix $C \approx A$ or by multiplying with a matrix $G \approx A^{-1}$. The first form implies the necessity to solve a system with $C$ at each iteration step, while the second form implies a matrix-vector multiplication with $G$ (a *multiplicative preconditioner*). In the latter case, $G$ can be seen as an approximate inverse of $A$. One can also use a hybrid form.

The presentation here is limited to symmetric positive semidefinite matrices. It is based mainly on the articles [29, 31].

##### 3.3. On the Rate of Convergence Estimates of the Conjugate Gradient Method

Let $A$ be symmetric positive semidefinite and consider the solution of $Ax = b$ by a preconditioned conjugate gradient method. In order to understand how an efficient preconditioner $C$ to $A$ should be chosen, we must first understand some general properties of the rate of convergence of conjugate gradient methods.

###### 3.3.1. Rate of Convergence Estimates Based on Minimax Approximation

As is well known (see, e.g., [6, 30]), the conjugate gradient method is a norm minimizing method. For the preconditioned standard CG method, we have
$$\|e^k\|_A = \min_{P_k \in \pi_k^0} \|P_k(C^{-1}A)\, e^0\|_A,$$
where $e^k = x^k - x$ is the iteration error and $\pi_k^0$ denotes the set of polynomials of degree $k$ which are normalized at the origin, that is, $P_k(0) = 1$. This is a norm on the subspace orthogonal to the nullspace of $A$, that is, on the whole space, if $A$ is nonsingular.

Consider the $C$-inner product $(x, y)_C = x^T C y$, and note that $C^{-1}A$ is symmetric with respect to this inner product. Let $\{v_j\}$ be orthonormal eigenvectors and $\{\lambda_j\}$, $j = 1, \ldots, n$, the corresponding eigenvalues of $C^{-1}A$. Let
$$e^0 = \sum_{j=1}^{n} a_j v_j$$
be the eigenvector expansion of the initial error. Note further that the eigenvectors are both $C$- and $A$-orthogonal. Then, by the construction of the CG method, it follows that
$$\|e^k\|_A^2 = \min_{P_k \in \pi_k^0} \big\|P_k(C^{-1}A)\,e^0\big\|_A^2 = \min_{P_k \in \pi_k^0} \sum_{j=1}^{n} \lambda_j\, a_j^2\, P_k(\lambda_j)^2$$

and, using the nonnegativity of the eigenvalues, we find
$$\|e^k\|_A^2 \le \min_{P_k \in \pi_k^0}\, \max_j P_k(\lambda_j)^2 \sum_{j=1}^{n} \lambda_j\, a_j^2 = \min_{P_k \in \pi_k^0}\, \max_j P_k(\lambda_j)^2\; \|e^0\|_A^2 .$$
Here we have used the $A$-orthogonality of the eigenvectors. Similarly, using the $C$-orthogonality, we find the corresponding estimate in the $C$-norm.

Due to the minimization property (45), there follows from (48) the familiar bound
$$\frac{\|e^k\|_A}{\|e^0\|_A} \le \min_{P_k \in \pi_k^0}\, \max_j |P_k(\lambda_j)| .$$
Estimate (50) is sharp in the respect that for every $k$ there exists an initial vector for which equality is attained. In fact, for such a vector we necessarily have that equality holds if and only if the maximum is taken on a set of $k+1$ points of the spectrum (the so-called Haar condition). For such an initial vector, (49) shows that, if the eigenvalues are positive, the corresponding equality holds in the $C$-norm as well.

The rate of convergence of the iteration error is measured by the average convergence factor
$$\left(\frac{\|e^k\|_A}{\|e^0\|_A}\right)^{1/k} .$$

Inequality (50) shows that this can be majorized with an estimate of the rate of convergence of a best polynomial approximation problem (namely, the best approximation of the zero function by polynomials in $\pi_k^0$) in maximum norm on the discrete set formed by the spectrum of $C^{-1}A$. Clearly, multiple eigenvalues are treated as single, so the actual approximation problem is
$$\min_{P_k \in \pi_k^0}\, \max_{1 \le j \le q} |P_k(\lambda_j)|,$$
where the distinct positive eigenvalues have been ordered in increasing value, $\lambda_1 < \lambda_2 < \cdots < \lambda_q$, and $q$ is the number of such eigenvalues. However, the solution of this problem requires knowledge of the spectrum, which is not available in general. Even if it is known, the estimate (53) can be troublesome in practice, since it involves approximation on a general discrete set of points.

Besides being costly to apply, such estimates do not give any qualitative insight in the behaviour of the conjugate gradient method for various typical eigenvalue distributions.

That is why we make some further assumptions on the spectrum in order to simplify the approximation problem and at the same time, present estimates which can be used both to estimate the number of iterations and to give some insight in the qualitative behaviour of the iteration method.

###### 3.3.2. Standard Condition Number: Linear Convergence

If the eigenvalues are (densely) located in an interval $[a, b]$, where $0 < a \le b$, we can majorize the best approximation problem on the discrete set with the best approximation problem on the interval and frequently still get a good estimate. We have
$$\min_{P_k \in \pi_k^0}\, \max_j |P_k(\lambda_j)| \le \min_{P_k \in \pi_k^0}\, \max_{a \le \lambda \le b} |P_k(\lambda)| .$$

The solution to this problem is well known and uses Chebyshev polynomials. One finds that
$$\min_{P_k \in \pi_k^0}\, \max_{a \le \lambda \le b} |P_k(\lambda)| = \frac{1}{T_k\!\big(\frac{b + a}{b - a}\big)} = \frac{2\sigma^k}{1 + \sigma^{2k}} \le 2\sigma^k, \qquad \sigma = \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1},\quad \kappa = \frac{b}{a} .$$
Hence, the average rate of convergence of the upper bound approaches $\sigma$ as $k \to \infty$. Also, it is readily found (see [35]) that the relative iteration error satisfies $\|e^k\|_A/\|e^0\|_A \le \varepsilon$ if
$$k \ge \Big\lceil \tfrac{1}{2}\sqrt{\kappa}\,\ln\frac{2}{\varepsilon} \Big\rceil .$$
Here $\lceil\cdot\rceil$ denotes the smallest integer not less than the argument.

It turns out that the above holds more generally if is nonsymmetric but the eigenvalues are contained in an ellipse with foci , , where , if one replaces with , where is the eccentricity of the ellipse, (i.e., the ratio of the semiaxes) and . Also, in a similar way, the case of eigenvalues contained in two separate intervals or ellipses can be analysed, see, for example, [35] for further details.

When , , and the following upper bound becomes an increasingly accurate replacement of (56),
The above estimate of the rate of convergence and of the number of iterations show that they depend only on the condition number and on the eccentricity of the ellipse, containing the eigenvalues. Therefore, except in special cases, this estimate is not very accurate. When we use a more detailed information of the spectrum and the initial error vector, sometimes substantially better estimates can be derived. This holds for instance when there are well separated small and/or large eigenvalues. Before we consider this important case, we mention briefly another similar minimax result which holds when we use *different norms * for the iteration error vector and for the initial vector.

By (48), we have

If the initial vector is such that Fourier coefficients for the highest eigenvalue modes are dominating, then may exist and take not too large values even for some . We consider the interesting case where , for which the following theorem holds (see [6, 36]).

Theorem 6. *Let denote the set of polynomials of degree k such that . Then for and for any such that is an integer, it holds
*

*Remark 1. * For it holds
for and for , it holds
for where and are the Chebyshev polynomials of degree of the first and second kind, respectively.

For other values (59) is an upper bound only, that is, not sharp. At any rate, it shows that the error converges (initially) at least as fast as , that is, as for and as for .

Note that this convergence rate does not depend on the eigenvalues, in particular not on the spectral condition number.

Conclusion. * By computing the initial approximation vector from a coarse mesh, the components for the first Fourier modes will be small and may take on values that are not very large. Therefore, there is an initial decay of the residual, independent of the condition number. Note, however, that the actual errors may not have decayed sufficiently even if the residual has.*

We consider now upper bound estimates which show how the convergence history may enter a superlinear rate.

*An Estimate to Show a Superlinear Convergence Rate Based on the K-Condition Number*

A somewhat rough, but simple and illustrative superlinear convergence estimate can be obtained in terms of the so-called $K$-condition number (see [37, 38]),
$$K = K(A) = \frac{\big(\tfrac{1}{n}\operatorname{tr}(A)\big)^{n}}{\det(A)},$$
where we assume that $A$ is s.p.d.

Note that $K^{1/n}$ equals the quotient between the arithmetic and geometric averages of the eigenvalues. This quantity is similar to the spectral condition number in that it is never smaller than 1, and is equal to 1 if and only if all eigenvalues coincide, that is, $A = cI$ (recall that $A$ is symmetrizable).
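The definition can be sketched directly from the eigenvalues (our own toy numbers below), using the fact that the trace and determinant are the sum and product of the eigenvalues:

```python
import math

def k_condition(eigs):
    """K(A) = ((1/n) tr(A))^n / det(A), computed from the eigenvalues as
    the n-th power of the ratio of arithmetic to geometric mean."""
    n = len(eigs)
    arith = sum(eigs) / n
    geo = math.exp(sum(math.log(t) for t in eigs) / n)
    return (arith / geo) ** n

# equal eigenvalues (A = c*I): K attains its minimum value 1
k_equal = k_condition([2.0, 2.0, 2.0])
# spread eigenvalues: trace = 6, det = 4 gives K = 2^3 / 4 = 2
k_spread = k_condition([1.0, 1.0, 4.0])
```

Unlike the spectral condition number, $K$ reacts to the whole spectrum, not just its endpoints, which is what makes it suitable for superlinear convergence estimates.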

Based on the -condition number, a superlinear convergence result can be obtained as follows.

Theorem 7. *Let be even and . Then,
*

*Proof. *Let and the polynomial be of a simplest possible form, that is, let it vanish at the smallest and largest eigenvalues of . As follows from (48), we have then
The latter follows from with . Using now twice the inequality between the arithmetic and geometric mean values, one has
Using exp , , we get the required estimate,

A somewhat better result of the same type exists; it was obtained using more complicated techniques, see [6, 37] and the references quoted therein. Note that, asymptotically as $k \to \infty$, the simpler upper bound in (63) is worse than this bound (albeit in a different norm).

The upper bounds in the above estimates involve a convergence factor which decreases with increasing iteration number and hence show a superlinear rate of convergence. Note, however, that $K$ grows with the spectral condition number $\kappa$ (see [6]), so the $K$-condition number may take very large values when $\kappa$ is large.

The estimates based on the $K$-condition number involve only “integral” characteristics of the preconditioned matrix (the trace and the determinant). Sometimes, it is possible to obtain a practical estimate of $K$ which can be useful for the a priori construction of good preconditioners and for the a posteriori assessment of their quality; see Section 5 for further details.

The estimate shows that the required number of iterations $k$ to reach a relative accuracy $\varepsilon$ depends essentially only on $K$; that is, it depends little on $\varepsilon$, which indicates a fast superlinear convergence when $K$ is not too large.

When actually estimating the number of iterations, Theorem 7 gives a useful result only when the quotient between the arithmetic and geometric averages of the eigenvalues, which equals $K^{1/n}$, is close to unity, that is, when the eigenvalues are very well clustered; otherwise the estimated number of iterations will be of order $n$, which is normally a useless result. The next example illustrates this further.

*Example 1.* Consider a geometric distribution of the eigenvalues of $B$ with some positive ratio. Using Stirling's formula, one finds the asymptotic behavior of $K^{1/n}$; hence, the ratio must be sufficiently close to unity for the estimate in Theorem 7 to be useful. On the other hand, the simple estimate based on the square root of the spectral condition number $\kappa$ gives, asymptotically, a smaller upper bound in this case. For further discussions on the superlinear rate of convergence, see [39].

##### 3.4. Generalized Conjugate Gradient Methods

The rate of convergence estimates given above hold for a restricted class of matrices: symmetric or, more generally, normal matrices.

To handle more general classes of problems for which optimal rate of convergence results such as (45) hold, one needs more involved methods. Much work has been devoted to this problem. This includes methods like the generalized minimum residual (GMRES) method, see Saad and Schultz [40], the generalized conjugate residual (GCR) method, and the generalized conjugate gradient (GCG) method, see [6] and, for further details, [41]. As opposed to the standard conjugate gradient method, they require a long version of updates for the search directions, since the newest search direction at each stage is, in general, not automatically (in exact precision) orthogonal to the previous search directions but must be orthogonalized against them at each step. This makes the computational expense per step grow linearly, and the total expense grow quadratically, with the iteration index. In addition, due to finite precision, there is a tendency toward loss of orthogonality, even for symmetric problems, when many iterations are required. One suggested remedy is to use the method only for a few steps, say 10, and then restart it with the current approximation as the initial approximation.

Clearly, however, in this way, the optimal convergence property over the whole Krylov set of vectors is lost. For this and other possibilities, see, for example, [42].
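To make the restarting idea concrete, the following sketch implements GMRES($m$) in plain NumPy: Arnoldi with full orthogonalization for at most $m$ steps, then a restart from the current approximation. The function name and its defaults are illustrative assumptions, not a library routine:

```python
import numpy as np

def gmres_restarted(A, b, x0, m=10, tol=1e-10, max_restarts=50):
    """GMRES(m): Arnoldi with full orthogonalization, restarted every m steps."""
    x = x0.astype(float).copy()
    for _ in range(max_restarts):
        r = b - A @ x
        beta = np.linalg.norm(r)
        if beta < tol:
            break
        n = b.size
        Q = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        Q[:, 0] = r / beta
        j_used = m
        for j in range(m):
            w = A @ Q[:, j]
            # Long recurrence: orthogonalize against ALL previous directions,
            # so the cost per step grows linearly with the step index.
            for i in range(j + 1):
                H[i, j] = Q[:, i] @ w
                w -= H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-14:   # lucky breakdown: solution lies in the span
                j_used = j + 1
                break
            Q[:, j + 1] = w / H[j + 1, j]
        # Small least-squares problem: min_y || beta*e1 - H y ||.
        e1 = np.zeros(j_used + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[: j_used + 1, : j_used], e1, rcond=None)
        x += Q[:, :j_used] @ y
    return x
```

The inner loop over `i` is precisely what makes the expense per step grow with the iteration index; restarting caps this at $m$, at the price of losing optimality over the full Krylov space, as noted above.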

Another important version of the generalized conjugate gradient methods occurs when one uses variable preconditioners, that is, a preconditioner that changes from one iteration step to the next. Such preconditioners are used in many contexts.

For instance, one can use a variable drop tolerance, computed adaptively, in an incomplete factorization method (see Section 4). When the given matrix is partitioned in a two-by-two block form, it can be efficient to use inner iterations when solving the arising systems for one, or both, of the diagonal block matrices; see, for example, [43], and the flexible conjugate gradient method in Saad [44, 45].

Due to space limitations, the above topics will not be further discussed in this paper.

#### 4. Incomplete Factorization Methods

There exist two classes of preconditioning methods that are closely related to direct solution methods. In this paper, we survey only their main ingredients and omit many of the particular aspects.

The first method is based on incomplete factorization, where some entries arising during a triangular factorization are neglected to save memory. The deletion can be based on a drop-tolerance criterion or on a, normally a priori chosen, sparsity pattern. The factorization based on a drop tolerance takes the following form: during the elimination (or, equivalently, the triangular factorization), off-diagonal entries are accepted only if they are not too small in modulus relative to the drop-tolerance parameter. Such methods may lead to too much fill-in (i.e., nonzero entries in positions where the original entry was zero), because, to be robust, they may require near machine-precision drop tolerances. Furthermore, like direct solution methods, they are difficult to parallelize efficiently.
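A minimal dense sketch of the drop-tolerance idea follows; it is a toy illustration, not the exact criterion of the original presentation (the precise drop formula is not reproduced here), and the function name and the row-norm-relative threshold are assumptions:

```python
import numpy as np

def ilu_drop(A, tau):
    """Incomplete LU with a drop tolerance (dense toy version).

    During the elimination, an updated entry of row i is kept only if its
    modulus is at least tau times the norm of the original row i; smaller
    off-diagonal entries are dropped (set to zero).  tau = 0 gives the
    exact LU factorization (no pivoting).
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    LU = A.copy()
    row_norms = np.linalg.norm(A, axis=1)
    for i in range(1, n):
        for k in range(i):                 # eliminate using previous rows
            if LU[i, k] == 0.0:
                continue
            LU[i, k] /= LU[k, k]           # multiplier, stored in the L part
            LU[i, k + 1:] -= LU[i, k] * LU[k, k + 1:]
        # Drop small off-diagonal entries of the finished row i.
        for j in range(n):
            if j != i and abs(LU[i, j]) < tau * row_norms[i]:
                LU[i, j] = 0.0
    L = np.tril(LU, -1) + np.eye(n)
    U = np.triu(LU)
    return L, U
```

Dropped entries in row $i$ are simply treated as zeros when later rows are eliminated, which is what makes the factorization incomplete and the factors sparser than in exact Gaussian elimination.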

The incomplete factorization method can readily be extended to matrices partitioned in block form. Often, instead of a drop tolerance, one prescribes the sparsity pattern of the triangular factors in the computed preconditioner; that is, entries arising outside the chosen pattern are ignored. An early presentation of such incomplete factorization methods was given by Meijerink and van der Vorst [46]. One can make a diagonal compensation for the neglected entries, that is, add them to the diagonal entry in the same row, possibly after first multiplying them by some scalar. For discussions of such approaches, see [29, 30, 47, 48]. This frequently moves the small eigenvalues, corresponding to the smoother harmonics, so that they cluster away from the origin, in this way sometimes improving the spectral condition number by an order of magnitude (see [6, 47]).

The other class of methods is based on approximate inverses, for instance, an approximate inverse chosen to minimize a Frobenius norm of the corresponding error matrix; see Section 5 for further details. To be sufficiently accurate, these methods frequently lead to nearly full matrices. This can be understood since the matrices we want to approximate are often sparse discretizations of diffusion problems. The inverse of such an operator is a discrete Green's function which, as is well known, often has a significantly sized support on nearly the whole domain of definition.
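The Frobenius-norm minimization decouples column by column, which is what makes these methods attractive in practice. The following sketch assumes a fixed a priori sparsity pattern (the function name and the pattern interface are illustrative assumptions): each column of the approximate inverse $M$, minimizing $\|I - AM\|_F$ over the pattern, solves a small independent least-squares problem.

```python
import numpy as np

def spai_fixed_pattern(A, pattern):
    """Approximate inverse M minimizing ||I - A M||_F over a fixed pattern.

    pattern[j] lists the row indices allowed to be nonzero in column j of M.
    Since ||I - A M||_F^2 = sum_j ||e_j - A m_j||_2^2, each column m_j is an
    independent small dense least-squares problem.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.zeros((n, n))
    I = np.eye(n)
    for j in range(n):
        cols = pattern[j]
        # min over z of || e_j - A[:, cols] z ||_2
        z, *_ = np.linalg.lstsq(A[:, cols], I[:, j], rcond=None)
        M[cols, j] = z
    return M
```

With the full pattern this reproduces the exact inverse; with a sparse pattern (e.g., diagonal or tridiagonal) it trades accuracy for sparsity, which is exactly the tension described above.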

However, we can use an additive approximation of the inverse involving two, or more, terms which approximate the inverse on different vector subspaces. By defining the preconditioner in this way, recursively on a sequence of lower-dimensional subspaces, it may preserve the accurate approximation property of the full inverse method while still needing only actions of sparse operators.

Frequently, the given matrices are partitioned in a natural way in a two-by-two block form. For such matrices, it can be seen that the two approaches are similar. Consider namely
$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix},$$
where we assume that $A_{11}$ and the Schur complement matrix $S = A_{22} - A_{21} A_{11}^{-1} A_{12}$ are nonsingular. (This holds, in particular, if $A$ is symmetric and positive definite.) We can construct either a block approximate factorization of $A$ or an approximation of the inverse of $A$ on additive form. As the following shows, the approaches are related. First, a block matrix factorization of $A$ is
$$A = \begin{bmatrix} I_1 & 0 \\ A_{21} A_{11}^{-1} & I_2 \end{bmatrix} \begin{bmatrix} A_{11} & A_{12} \\ 0 & S \end{bmatrix},$$
where $I_1$, $I_2$ denote the unit matrices of proper order. For its inverse, it holds
$$A^{-1} = \begin{bmatrix} A_{11}^{-1} & -A_{11}^{-1} A_{12} S^{-1} \\ 0 & S^{-1} \end{bmatrix} \begin{bmatrix} I_1 & 0 \\ -A_{21} A_{11}^{-1} & I_2 \end{bmatrix}.$$
A straightforward computation then reveals the additive form
$$A^{-1} = \begin{bmatrix} A_{11}^{-1} & 0 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} -A_{11}^{-1} A_{12} \\ I_2 \end{bmatrix} S^{-1} \begin{bmatrix} -A_{21} A_{11}^{-1} & I_2 \end{bmatrix}.$$

Let $M_{11}$ be an approximation of $A_{11}$ (for which linear systems are simpler to solve than for $A_{11}$) and let a sparse approximate inverse of the Schur complement $S$ be given. Then, making these replacements in the block factorization yields a preconditioner to $A$, while the same replacements in the additive form yield an approximate inverse, and a convergence estimate for one method can be directly applied to the other method as well. For further discussions of block matrix preconditioners, see, for example, [49–52]. As can be seen from the above, Schur complement matrices play a major role in both matrix factorizations. For sparse approximations of the Schur complement matrix, in particular approximations of element type, see, for example, [53–55].
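The block factorization and the additive form of the inverse can be verified numerically; in this sketch the matrix is an arbitrary s.p.d. example and the block sizes are chosen for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((6, 6))
A = X @ X.T + 6 * np.eye(6)   # s.p.d., so A11 and S are nonsingular
p = 3                          # size of the leading block
q = A.shape[0] - p
A11, A12 = A[:p, :p], A[:p, p:]
A21, A22 = A[p:, :p], A[p:, p:]

# Schur complement S = A22 - A21 A11^{-1} A12.
S = A22 - A21 @ np.linalg.solve(A11, A12)

# Block triangular factorization A = L_blk @ U_blk.
I1, I2 = np.eye(p), np.eye(q)
L_blk = np.block([[I1, np.zeros((p, q))],
                  [A21 @ np.linalg.inv(A11), I2]])
U_blk = np.block([[A11, A12],
                  [np.zeros((q, p)), S]])
assert np.allclose(L_blk @ U_blk, A)

# Additive form of the inverse (A symmetric, so the right factor is W^T):
# A^{-1} = [[A11^{-1}, 0], [0, 0]] + W S^{-1} W^T,  W = [-A11^{-1} A12; I2].
W = np.vstack([-np.linalg.solve(A11, A12), I2])
Ainv = np.zeros_like(A)
Ainv[:p, :p] = np.linalg.inv(A11)
Ainv += W @ np.linalg.solve(S, W.T)
assert np.allclose(Ainv, np.linalg.inv(A))
```

Replacing $A_{11}^{-1}$ and $S^{-1}$ here by approximations turns the first identity into a block preconditioner and the second into an approximate inverse, which is the relation described above.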

We consider now multilevel extensions of the additive approximate inverse subspace correction method. It is illustrative to consider first the exact inverse and its relation to Gaussian (block matrix) elimination.

##### 4.1. The Exact Inverse on Additive Form

Consider a matrix in a sequence defined recursively, where each pivot block is nonsingular, being a block diagonal part of a symmetric positive definite matrix. Hence, a corresponding recursion holds between consecutive levels, involving the identity matrix on each level.