Journal of Applied Mathematics


Research Article | Open Access

Volume 2013 | Article ID 408167 | 16 pages | https://doi.org/10.1155/2013/408167

Exploiting the Composite Step Strategy to the Biconjugate A-Orthogonal Residual Method for Non-Hermitian Linear Systems

Academic Editor: Zhongxiao Jia
Received: 15 Oct 2012
Accepted: 19 Dec 2012
Published: 20 Feb 2013

Abstract

The Biconjugate A-Orthogonal Residual (BiCOR) method, carried out in finite precision arithmetic by means of the biconjugate A-orthonormalization procedure, may suffer from two sources of numerical instability, known as two kinds of breakdowns, similar to those of the Biconjugate Gradient (BCG) method. This paper naturally exploits the composite step strategy employed in the development of the composite step BCG (CSBCG) method within the BiCOR method to cure one of these breakdowns, the so-called pivot breakdown. Analogously to the CSBCG method, the resulting variant, obtained with only a minor modification to the usual implementation of the BiCOR method, is able to avoid near pivot breakdowns and to compute all the well-defined BiCOR iterates stably, on the assumption that the underlying biconjugate A-orthonormalization procedure does not break down. Another benefit is that the new variant provides some further practically desired smoothing of the convergence history of the residual norms, which is confirmed by numerical experiments. In addition, the proposed method inherits the empirically observed stability and fast convergence of the BiCOR method over the BCG method, so that it outperforms the CSBCG method to some extent.

1. Introduction

The computational cost of many simulations with integral equations or partial differential equations that model the underlying process is dominated by the solution of systems of linear equations of the form Ax = b. The field of iterative methods for solving linear systems has seen an explosion of activity spurred by demand due to extraordinary technological advances in engineering and the sciences over the past half century [1]. Krylov subspace methods are among the most widespread and extensively accepted techniques for the iterative solution of today's large-scale linear systems [2]. With respect to "the greatest influence on the development and practice of science and engineering in the 20th century," as written by Dongarra and Sullivan [3], Krylov subspace methods are considered one of the "Top Ten Algorithms of the Century."

Many advances in the development of Krylov subspace methods have been inspired and made by the study of ever more effective approaches to linear systems. Various methods differ in the way they extract information from Krylov subspaces. A basis for the underlying Krylov subspace used to form the iterative recurrences involved is usually constructed with the state-of-the-art Arnoldi orthogonalization procedure or the Lanczos biorthogonalization procedure; see the recent excellent and thorough review articles by Simoncini and Szyld [2] and by Philippe and Reichel [4], or monographs such as those of Greenbaum [5] and Saad [6]. A novel Lanczos-type biconjugate A-orthonormalization procedure has recently been established, giving birth to a new family of efficient short-recurrence methods for large real nonsymmetric and complex non-Hermitian systems of linear equations, named the Lanczos biconjugate A-orthonormalization methods [7].

As observed from numerous numerical experiments carried out with the Lanczos biconjugate A-orthonormalization methods, it has been demonstrated that this family of solvers shows competitive convergence properties, is cheap in memory as it is derived from short-term vector recurrences, is parameter-free, and, unlike some other methods, does not require a symmetric preconditioner if A is symmetric indefinite [8–10]. The efficiency of the BiCOR method is highlighted by the performance profiles on a set of 14 sparse matrix problems of size up to 1.5 M unknowns shown in [10].

On the other hand, this family of solvers often exhibits apparently irregular convergence behavior, appearing as "spikes" in the convergence history of the residual norms, possibly leading to a substantial build-up of rounding errors and worse approximate solutions, or possibly even overflow [7, 9]. Therefore, it is quite necessary to tackle these irregular convergence properties to obtain more stabilized variants and so improve the accuracy of the desired numerical solutions. We emphasize that our main attention in this paper is focused on a straightforward, natural enhancement of the Biconjugate A-Orthogonal Residual (BiCOR) method, which is the basic underlying variant of the Lanczos biconjugate A-orthonormalization methods [7]. The improvement of the BiCOR method will simultaneously yield analogous improvements for the other two evolving variants, known as the Conjugate A-Orthogonal Residual Squared (CORS) method and the Biconjugate A-Orthogonal Residual Stabilized (BiCORSTAB) method.

By comparison of the biconjugate A-orthonormalization procedure [7] and the Lanczos biorthogonalization procedure [6], underlying respectively the BiCOR method [7] and the Biconjugate Gradient (BCG) method [11], as well as their corresponding implementation algorithms, it is observed that, when carried out in finite precision arithmetic, the BiCOR method does tend to suffer from two possible sources of numerical instability, known as two kinds of breakdowns, exactly as the BCG method does. We still call these two kinds of breakdowns Lanczos breakdown and pivot breakdown; they will be analyzed and discussed in the following sections. A large number of strategies designed to handle these two kinds of breakdowns have been proposed in the literature [12–16], including composite step techniques [11, 17–21] for pivot breakdown and look-ahead techniques [22–30] for Lanczos breakdown or both types. For comprehensive reviews and discussions of the two kinds of breakdowns mentioned above, refer to [2, 6, 16] and the references therein. As shown and analyzed in detail by Bank and Chan [18, 19], the composite step strategy employed in the development of the composite step biconjugate gradient (CSBCG) method can eliminate (near) pivot breakdowns of the BCG method caused by the (near) singularity of principal submatrices of the tridiagonal matrix generated by the underlying Lanczos biorthogonalization procedure, assuming that Lanczos breakdowns do not occur. Furthermore, besides simplifying implementations a great deal in comparison with existing look-ahead techniques, the composite step strategy is able to provide some smoothing of the convergence history of the residual norms without involving any arbitrary machine-dependent or user-supplied tolerances.

This paper revisits the composite step strategy taken for the CSBCG method [18, 19] to stabilize the convergence behavior of the BiCOR method from the pivot-breakdown treatment point of view. Throughout, we therefore suppose that the underlying biconjugate A-orthonormalization procedure does not break down; that is, we assume that the so-called Lanczos breakdown is never encountered during algorithm implementations. The resulting algorithm, named the composite step BiCOR (CSBiCOR) method, can also be considered a relatively simple modification of the regular BiCOR method.

The main objectives are twofold. First, the CSBiCOR method is devised to avoid near pivot breakdowns and to compute all the well-defined BiCOR iterates stably, with only minor modifications inspired by the advantages of the CSBCG method over the BCG method. Second, the CSBiCOR method can reduce the number of spikes in the convergence history of the residual norms, providing some further practically desired smoothing toward stabilizing the BiCOR method when its convergence is erratic. An additional aim is for the CSBiCOR method to inherit the empirically observed stability and fast convergence of the BiCOR method over the BCG method, so that it outperforms the CSBCG method to some extent.

The remainder of this work is organized as follows. A brief review of the biconjugate A-orthonormalization procedure is given, and the factorization of the resulting nonsingular tridiagonal matrices in the aforementioned procedure is considered, in Section 2, in preparation for the theoretical basis and relevant background of the CSBiCOR method. The breakdowns of the BiCOR method are presented in Section 3, while the CSBiCOR method is derived in Section 4. Section 5 analyzes the properties of the CSBiCOR method and provides a best approximation result for its convergence. In order to present the CSBiCOR method conveniently and simply, some implementation issues concerning both algorithm simplification and the stepping strategy are discussed in detail in Section 6. The improved performance of the CSBiCOR method is illustrated in Section 7, showing that the CSBiCOR method can smooth the convergence history through a reduction of the number of spikes when the BiCOR method has irregular convergence behavior. Finally, concluding remarks are made in the last section.

Throughout the paper, we denote by an overbar the complex conjugate of a scalar, vector, or matrix and by the superscript T the transpose of a vector or matrix. For a non-Hermitian matrix A, the Hermitian conjugate of A is denoted A^H. The standard Hermitian inner product of two complex vectors x and y is defined as ⟨x, y⟩ = x^H y. The nested Krylov subspace of dimension m generated by A from v is of the form K_m(A, v) = span{v, Av, ..., A^{m−1}v}. In addition, e_j denotes the jth column of the appropriate identity matrix.
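As a small concrete illustration of this notation, the following Python sketch (helper names are ours, not from the paper) computes the inner product ⟨x, y⟩ = x^H y and a plain, non-orthogonalized basis of the nested Krylov subspace K_m(A, v):

```python
import numpy as np

def inner(x, y):
    # Standard Hermitian inner product <x, y> = x^H y
    # (np.vdot conjugates its first argument).
    return np.vdot(x, y)

def krylov_basis(A, v, m):
    # Columns v, Av, ..., A^{m-1} v spanning K_m(A, v).
    cols = [v]
    for _ in range(m - 1):
        cols.append(A @ cols[-1])
    return np.column_stack(cols)

rng = np.random.default_rng(0)
n, m = 6, 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)

K = krylov_basis(A, v, m)
print(K.shape)  # (6, 3)
```

Note the conjugate symmetry ⟨x, y⟩ = conj(⟨y, x⟩) that the proofs in Section 6 rely on.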

When helpful, we will use the word "ideally" (or "mathematically") to refer to a result that holds in exact arithmetic, ignoring the effects of rounding errors, and "numerically" (or "computationally") for the result of a finite precision computation.

2. Theoretical Basis for the CSBiCOR Method

To discuss the theoretical basis and relevant background of the CSBiCOR method, the biconjugate A-orthonormalization procedure [7] is first briefly recalled as Algorithm 1, which can ideally build up a pair of biconjugate A-orthonormal bases for the dual Krylov subspaces K_m(A, v_1) and K_m(A^H, w_1), where v_1 and w_1 are chosen initially to satisfy ⟨w_1, Av_1⟩ = 1.

(1) Choose v_1, w_1 such that ⟨w_1, Av_1⟩ = 1
(2) Set β_1 = δ_1 = 0, v_0 = w_0 = 0
(3) for j = 1, 2, ... do
(4)  α_j = ⟨w_j, A(Av_j)⟩
(5)  ṽ_{j+1} = Av_j − α_j v_j − β_j v_{j−1}
(6)  w̃_{j+1} = A^H w_j − conj(α_j) w_j − conj(δ_j) w_{j−1}
(7)  δ_{j+1} = |⟨w̃_{j+1}, Aṽ_{j+1}⟩|^{1/2}; if δ_{j+1} = 0, stop
(8)  β_{j+1} = ⟨w̃_{j+1}, Aṽ_{j+1}⟩ / δ_{j+1}
(9)  v_{j+1} = ṽ_{j+1} / δ_{j+1}
(10) w_{j+1} = w̃_{j+1} / conj(β_{j+1})
(11) end for

Observe that the above algorithm can suffer a Lanczos-type breakdown whenever ⟨w̃_{j+1}, Aṽ_{j+1}⟩ vanishes while ṽ_{j+1} and w̃_{j+1} are not equal to 0 in line 8. To counteract such breakdowns, one may resort to remedies such as the so-called look-ahead strategies [22–30], which can enhance stability while increasing cost only modestly. That is outside the scope of this paper, and we will not pursue it here; for more details, please refer to [2, 6] and the references therein. Throughout the rest of the present paper, we suppose that no such Lanczos-type breakdown is encountered during algorithm implementations, because most of our considerations concern the exploitation of the composite step strategy [18, 19] to handle the pivot breakdown occurring in the BiCOR method for solving non-Hermitian linear systems.
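The biconjugate A-orthonormalization procedure recalled above can be sketched in a few lines of Python. This is a hedged transcription of one standard formulation of its three-term recurrences (function and variable names are ours, not the authors'); in exact arithmetic it should yield W^H A V = I:

```python
import numpy as np

def bi_a_orthonormalize(A, v1, w1, m):
    # Build V = [v_1..v_m], W = [w_1..w_m] with W^H A V = I_m via
    # three-term recurrences; <x, y> below is x^H y (np.vdot).
    n = len(v1)
    s = np.vdot(w1, A @ v1)                 # must be nonzero to start
    v, w = v1 / s, w1.astype(complex)       # scale so <w_1, A v_1> = 1
    v_old = w_old = np.zeros(n, dtype=complex)
    beta = delta = 0.0
    V, W = [], []
    for _ in range(m):
        V.append(v); W.append(w)
        alpha = np.vdot(w, A @ (A @ v))
        v_t = A @ v - alpha * v - beta * v_old
        w_t = A.conj().T @ w - np.conj(alpha) * w - np.conj(delta) * w_old
        t = np.vdot(w_t, A @ v_t)
        if t == 0:                          # Lanczos-type breakdown
            break
        delta, beta = np.sqrt(abs(t)), t / np.sqrt(abs(t))
        v_old, w_old = v, w
        v, w = v_t / delta, w_t / np.conj(beta)
    return np.column_stack(V), np.column_stack(W)

rng = np.random.default_rng(1)
n, m = 30, 6
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
v1 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
w1 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
V, W = bi_a_orthonormalize(A, v1, w1, m)
# Biconjugate A-orthonormality: W^H A V should be (close to) the identity.
print(np.allclose(W.conj().T @ A @ V, np.eye(m), atol=1e-6))
```

For a generic complex matrix and a modest number of steps, the computed W^H A V stays numerically close to the identity, mirroring Proposition 1 below.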

Next, some properties of the vectors produced by Algorithm 1 are reviewed [7] in the following proposition for the preparation of the theoretical basis of the composite step method.

Proposition 1. If Algorithm 1 proceeds m steps, then the right and left Lanczos-type vectors v_j, j = 1, 2, ..., m, and w_i, i = 1, 2, ..., m, form a biconjugate A-orthonormal system in exact arithmetic; that is,
⟨w_i, Av_j⟩ = δ_{ij}, 1 ≤ i, j ≤ m.
Furthermore, denote by V_m = [v_1, v_2, ..., v_m] and W_m = [w_1, w_2, ..., w_m] the matrices of the right and left Lanczos-type vectors and by T̃_m the (m+1) × m extended tridiagonal matrix of the form
T̃_m = [ T_m ; δ_{m+1} e_m^T ],
where T_m is the m × m tridiagonal matrix with diagonal entries α_j, superdiagonal entries β_{j+1}, and subdiagonal entries δ_{j+1}, whose entries are the coefficients generated during the algorithm implementation, and in which the α_j and β_{j+1} are complex while the δ_{j+1} are positive. Then with the biconjugate A-orthonormalization procedure, the following four relations hold:
AV_m = V_m T_m + δ_{m+1} v_{m+1} e_m^T = V_{m+1} T̃_m,
A^H W_m = W_m T_m^H + conj(β_{m+1}) w_{m+1} e_m^T,
W_m^H A V_m = I_m,
W_m^H A A V_m = T_m.

It is well known that the Lanczos biorthogonalization procedure can mathematically build up a pair of biorthogonal bases for the dual Krylov subspaces K_m(A, v_1) and K_m(A^H, w_1), where v_1 and w_1 are initially selected such that ⟨w_1, v_1⟩ = 1 [6]. By contrast, the biconjugate A-orthonormalization procedure presented as Algorithm 1 can ideally build up a pair of biconjugate A-orthonormal bases for the same dual Krylov subspaces, with v_1 and w_1 chosen initially to satisfy ⟨w_1, Av_1⟩ = 1. Keeping in mind these differences between the Krylov subspace bases constructed by the two underlying procedures, we will in parallel borrow some of the results of the CSBCG method [18] to establish the following counterparts for the CSBiCOR method; see [18] for the relevant proofs.

It should be emphasized that the present CSBiCOR method is derived in exactly the same way as the CSBCG method [19]. Hence, the analysis and theoretical background for the CSBiCOR method in this section are the counterparts of those for the CSBCG method in a different context, namely, the biconjugate A-orthonormalization procedure presented as Algorithm 1. They are included here for theoretical completeness and can be proved in the same way as the corresponding results for the CSBCG method analyzed in [18].

The following proposition states that the tridiagonal matrix T_m formed in Proposition 1 cannot have two successive singular leading principal submatrices if the coefficient matrix A is nonsingular.

Proposition 2. Let T_k, k = 1, 2, ..., m, be the upper left principal submatrices of the nonsingular tridiagonal matrix T_m appearing in Proposition 1. Then T_{k−1} and T_k cannot both be singular.

It is noted that the LDU factorization without pivoting of the tridiagonal matrix T_m formed in Proposition 1 may break down. The following results illustrate how to correct this problem with the occasional use of 2 × 2 block pivots.

Proposition 3. Let T_m be the nonsingular tridiagonal matrix formed in Proposition 1. Then T_m can be factored as T_m = L_m D_m U_m, where L_m is unit lower block bidiagonal, U_m is unit upper block bidiagonal, and D_m is block diagonal, with 1 × 1 and 2 × 2 diagonal blocks.

With the above two propositions, one can prove the following result, ensuring that even if an upper left principal submatrix T_k of T_m is singular, it can still be factored.

Corollary 4. Suppose one upper left principal submatrix T_k of the nonsingular tridiagonal matrix T_m formed in Proposition 1 is singular (so that T_{k−1} and T_{k+1} must be nonsingular, as stated in Proposition 2). Then T_k = L_k D_k U_k, where L_k is unit lower block bidiagonal, U_k is unit upper block bidiagonal, and D_k is block diagonal, with 1 × 1 and 2 × 2 diagonal blocks; in particular, the last block of D_k is the 1 × 1 zero matrix.

From Corollary 4, the singularity of T_k can be recognized by examining only the last diagonal element (and next potential pivot) of D_k. If this element vanishes, then the corresponding 2 × 2 block will be nonsingular and can be used as a pivot, where the entries of that block are the corresponding elements of T_m as shown in Proposition 1.

It can be concluded that appropriate combinations of 1 × 1 and 2 × 2 steps for the BiCOR method can skip over the breakdowns caused by singular principal submatrices of the tridiagonal matrix generated by the underlying biconjugate A-orthonormalization procedure presented as Algorithm 1, since the BiCOR method pivots implicitly with those submatrices. The next section investigates in detail the breakdowns that may occur in the BiCOR method.
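As a toy numerical illustration of Proposition 3 and Corollary 4 (the matrix and all names below are our own example, not taken from the paper), consider a symmetric tridiagonal T whose leading 1 × 1 principal submatrix is singular; a 2 × 2 block pivot rescues the factorization:

```python
import numpy as np

T = np.array([[0., 1., 0.],
              [1., 2., 1.],
              [0., 1., 3.]])

# T[0, 0] = 0: the 1x1 leading pivot is singular, but the 2x2 leading
# principal submatrix is not (cf. Proposition 2: no two successive
# singular leading principal submatrices), so use it as a block pivot.
D1 = T[:2, :2]
l = T[2, :2] @ np.linalg.inv(D1)      # row of the unit lower block factor
u = np.linalg.inv(D1) @ T[:2, 2]      # column of the unit upper block factor
d2 = T[2, 2] - l @ T[:2, 2]           # 1x1 Schur complement

L = np.eye(3); L[2, :2] = l           # unit lower block bidiagonal
U = np.eye(3); U[:2, 2] = u           # unit upper block bidiagonal
D = np.zeros((3, 3)); D[:2, :2] = D1; D[2, 2] = d2   # block diagonal

print(np.allclose(L @ D @ U, T))      # True: T = L D U with a 2x2 pivot
```

An LDU factorization with only 1 × 1 pivots would fail immediately at the zero leading entry, which is exactly the situation the composite (2 × 2) step is designed to jump over.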

3. Breakdowns of the BiCOR Method

Given an initial guess x_0 to the non-Hermitian linear system Ax = b, with associated initial residual r_0 = b − Ax_0, define the Krylov subspace K_m(A, v_1), where v_1 is a normalized multiple of r_0 as defined in Proposition 1, and w_1 is chosen arbitrarily such that ⟨w_1, Av_1⟩ ≠ 0. In practice, w_1 is often chosen as a polynomial in A^H applied to the initial vector, normalized so that ⟨w_1, Av_1⟩ = 1. It is worth noting that this choice for w_1 plays a significant role in establishing the empirically observed superiority of the BiCOR method over the BiCR method [31] as well as over the BCG method [7]. Thus, running Algorithm 1 for m steps, we can seek an mth approximate solution x_m from the affine subspace x_0 + K_m(A, v_1) of dimension m by imposing the Petrov-Galerkin condition
b − Ax_m ⊥ A^H K_m(A^H, w_1),
which can be mathematically written in matrix formulation as
W_m^H A (b − Ax_m) = 0.

Analogously, an mth dual approximation x*_m of the corresponding dual system A^H x* = b* is sought from the affine subspace x*_0 + K_m(A^H, w_1) of dimension m by satisfying
b* − A^H x*_m ⊥ A K_m(A, v_1),
which can be mathematically written in matrix formulation as
V_m^H A^H (b* − A^H x*_m) = 0,
where x*_0 is an initial dual approximate solution and V_m is defined in Proposition 1.

Consequently, the BiCOR iterates x_m can be computed by Algorithm 2, which is just the unpreconditioned BiCOR method, with the preconditioner in [7] taken as the identity matrix, rewritten in the algorithmic scheme of the unpreconditioned BCG method as presented in [6, 19].

(1) Compute r_0 = b − Ax_0 for some initial guess x_0.
(2) Choose r*_0 such that ⟨r*_0, Ar_0⟩ ≠ 0, where P(·) is a polynomial
   (e.g., r*_0 = P(A^H) r_0).
(3) Set p_0 = r_0, p*_0 = r*_0, q_0 = Ap_0, q*_0 = A^H p*_0, z_0 = Ar_0, ρ_0 = ⟨r*_0, z_0⟩.
(4) for j = 0, 1, ... do
(5)  σ_j = ⟨q*_j, q_j⟩
(6)  α_j = ρ_j / σ_j
(7)  x_{j+1} = x_j + α_j p_j
(8)  r_{j+1} = r_j − α_j q_j
(9)  r*_{j+1} = r*_j − conj(α_j) q*_j
(10) z_{j+1} = Ar_{j+1}
(11) z*_{j+1} = A^H r*_{j+1}
(12) ρ_{j+1} = ⟨r*_{j+1}, z_{j+1}⟩
(13) if ρ_{j+1} = 0, method fails
(14) β_j = ρ_{j+1} / ρ_j
(15) p_{j+1} = r_{j+1} + β_j p_j
(16) p*_{j+1} = r*_{j+1} + conj(β_j) p*_j
(17) q_{j+1} = z_{j+1} + β_j q_j
(18) q*_{j+1} = z*_{j+1} + conj(β_j) q*_j
(19) check convergence; continue if necessary
(20) end for
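For concreteness, the iteration of Algorithm 2 can be sketched in Python. This is a hedged transcription of the BCG-like BiCOR scheme (function and variable names are ours), using the common choice r*_0 = A^H r_0; it is a sketch under these assumptions, not the authors' reference implementation:

```python
import numpy as np

def bicor(A, b, x0, tol=1e-10, maxit=200):
    # <x, y> = x^H y throughout (np.vdot conjugates its first argument).
    x = x0.copy()
    r = b - A @ x
    rs = A.conj().T @ r                 # shadow residual r*_0 = A^H r_0
    p, ps = r.copy(), rs.copy()
    q, qs = A @ p, A.conj().T @ ps      # q = A p, q* = A^H p*
    rho = np.vdot(rs, A @ r)            # <r*, A r>
    for _ in range(maxit):
        sigma = np.vdot(qs, q)          # <A^H p*, A p>; zero => pivot breakdown
        alpha = rho / sigma
        x = x + alpha * p
        r = r - alpha * q
        rs = rs - np.conj(alpha) * qs
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z, zs = A @ r, A.conj().T @ rs
        rho_new = np.vdot(rs, z)        # zero with r, r* != 0 => Lanczos breakdown
        beta, rho = rho_new / rho, rho_new
        p = r + beta * p
        ps = rs + np.conj(beta) * ps
        q = z + beta * q                # maintains q = A p without an extra matvec
        qs = zs + np.conj(beta) * qs
    return x

rng = np.random.default_rng(2)
n = 40
A = rng.standard_normal((n, n)) + n * np.eye(n)   # nonsymmetric, diagonally dominant
b = rng.standard_normal(n)
x = bicor(A, b, np.zeros(n))
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b) < 1e-8)  # True
```

The two quotients in the loop are exactly the quantities whose vanishing defines the pivot and Lanczos breakdowns discussed next.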

Suppose Algorithm 2 runs successfully to step m; that is, all the scalar quotients encountered up to step m are well defined. The BiCOR iterates then satisfy the following properties [7].

Proposition 5. Let R_m = [r_0, r_1, ..., r_m] and R*_m = [r*_0, r*_1, ..., r*_m], and let P_m = [p_0, p_1, ..., p_m] and P*_m = [p*_0, p*_1, ..., p*_m]. One has the following. (1) span{r_0, ..., r_m} = span{p_0, ..., p_m} = K_{m+1}(A, r_0). (2) (R*_m)^H A R_m is diagonal. (3) (P*_m)^H A A P_m is diagonal.

Similarly to the breakdowns of the BCG method [19], it is observed from Algorithm 2 that there also exist two possible kinds of breakdowns for the BiCOR method: (1) ⟨r*_m, Ar_m⟩ = 0 while r_m and r*_m are not equal to 0, arising in the computation of β_m in line 14; (2) ⟨A^H p*_m, Ap_m⟩ = 0, the denominator appearing in line 6.

Although the computational formulae for the quantities where the breakdowns reside differ between the BiCOR method and the BCG method, we do not have better names for them and therefore still call the two cases of breakdowns described above Lanczos breakdown and pivot breakdown, respectively.

The Lanczos breakdown can be cured using look-ahead techniques [22–30], as mentioned in the first section, but such techniques require a careful and sophisticated treatment that makes them quite complicated to apply. This aspect of applying look-ahead techniques to the BiCOR method demands further research.

In this paper, we resort to the composite step idea employed for the CSBCG method [18, 19] to handle the pivot breakdown of the BiCOR method, under the assumption that the underlying biconjugate A-orthonormalization procedure depicted in Algorithm 1 does not break down; that is, the situation where ⟨r*_n, Ar_n⟩ = 0 while r_n ≠ 0 and r*_n ≠ 0 is assumed not to occur.

4. The Composite Step BiCOR Method

Suppose Algorithm 2 comes across a situation where the pivot ⟨A^H p*_n, Ap_n⟩ vanishes after successful implementation up to step n, with the assumption that ⟨r*_n, Ar_n⟩ ≠ 0, which indicates that the updates in step n + 1 are not well defined. Following the composite step idea, we avoid this division by skipping the (n+1)st update and exploiting a composite step update to obtain the quantities in step n + 2 directly, with scaled versions of r_{n+1} and r*_{n+1} as well as the previous primary search direction vector p_n and shadow search direction vector p*_n. The derivation of the CSBiCOR method below is the same as that of the CSBCG method [19], except for the different underlying procedures involved, which generate different Krylov subspace bases.

Analogously, define auxiliary vectors z_{n+1} and z*_{n+1} (scaled versions of r_{n+1} and r*_{n+1}), which are then used to seek the iterates x_{n+2} and x*_{n+2} in step n + 2. Correspondingly, the (n+2)nd primary residual r_{n+2} and shadow residual r*_{n+2} are computed from these quantities. The biconjugate A-orthogonality condition between the BiCOR primary residuals and shadow residuals, shown as Property (2) in Proposition 5, combined with (21) and (22), gives rise to two systems of linear equations, (24) and (25), for solving the respective update coefficients.

Similarly, the (n+2)nd primary search direction vector p_{n+2} and shadow search direction vector p*_{n+2} in a composite step are computed in the form given by (26) and (27).

The biconjugate A-orthogonality condition between the BiCOR primary search direction vectors and shadow search direction vectors, shown as Property (3) in Proposition 5, combined with (26) and (27), results in two further systems of linear equations, (29) and (30), for solving the corresponding coefficients.

Therefore, one is able to advance from step n to step n + 2, obtaining x_{n+2}, r_{n+2}, r*_{n+2}, p_{n+2}, and p*_{n+2} by solving the four linear systems represented as (24), (25), (29), and (30). With an appropriate combination of 1 × 1 and 2 × 2 steps, the CSBiCOR method can thus be obtained with only a minor modification to the usual implementation of the BiCOR method. Before the algorithm is presented explicitly, some properties of the CSBiCOR method are given in the following section.

5. Some Properties of the CSBiCOR Method

In order to show that the use of 1 × 1 and 2 × 2 steps is sufficient for the CSBiCOR method to compute exactly all those well-defined BiCOR iterates stably, provided that the underlying biconjugate A-orthonormalization procedure depicted in Algorithm 1 does not break down, it is necessary to establish some lemmas similar to those for the CSBCG method [19].

Lemma 6. Using either a 1 × 1 or a 2 × 2 step to obtain p_n, p*_n, the following hold:

Proof. If p_n, p*_n are obtained via a 1 × 1 step, as shown in lines 15 and 16 of Algorithm 2, then, by the biconjugate A-orthogonality condition between the BiCOR primary search direction vectors and shadow search direction vectors shown as Property (3) in Proposition 5, the asserted relations follow.
If p_n, p*_n are obtained via a 2 × 2 step, as presented in (26) and (27), then, according to the same biconjugate A-orthogonality conditions used for the 1 × 1 step, the asserted relations follow as well.

With the above lemma, we can show the relationships among the four coefficient matrices of the linear systems represented as in (24), (25), (29), and (30).

Lemma 7. The four coefficient matrices of the linear systems represented as in (24), (25), (29), and (30) have the following properties and relationships.(1)The coefficient matrices in (24) and (29) are identical, so are those matrices in (25) and (30).(2)All the coefficient matrices involved are symmetric.(3)The coefficient matrices in (24) and (25) (correspondingly in (29) and (30)) are Hermitian to each other.

Proof. The first relation is obvious by observation of the corresponding coefficient matrices in (24) and (29) (correspondingly in (25) and (30)).
For the second property, it suffices to show the symmetry of each coefficient matrix, which follows from the quantities defined in (18) and (19) together with Lemma 6.
The third fact directly comes from the Hermitian property of the standard Hermitian inner product of two complex vectors defined at the end of the first section; see, for instance, [6].

It is noticed that similar Hermitian relationships also hold for the corresponding matrices involved in (1), (2), (3), and (4) in [19] when the CSBCG method is applied in complex cases.

Now, an alternative result, stating that a 2 × 2 step is always sufficient to skip over the pivot breakdown of the BiCOR method when the current 1 × 1 pivot vanishes, just as stated at the end of Section 2, is shown in the next lemma.

Lemma 8. Suppose that the biconjugate A-orthonormalization procedure depicted in Algorithm 1 underlying the BiCOR method does not break down. If the current 1 × 1 pivot ⟨A^H p*_n, Ap_n⟩ vanishes, then the coefficient matrices in (24), (25), (29), and (30) are nonsingular.

Proof. It suffices to show that if the 1 × 1 pivot vanishes, then the corresponding 2 × 2 pivot is nonsingular, which follows from Lemma 7 and (18).

The next proposition shows some properties of the CSBiCOR iterates similar to those of the BiCOR method presented in Proposition 5, with some minor changes.

Proposition 9. For the CSBiCOR algorithm, let R_m = [r_0, r_1, ..., r_m] and R*_m = [r*_0, r*_1, ..., r*_m], where r_{n+1}, r*_{n+1} are replaced by their scaled versions z_{n+1}, z*_{n+1} appropriately if the (n+1)st step falls inside a composite step; and let P_m = [p_0, ..., p_m] and P*_m = [p*_0, ..., p*_m]. Assuming the underlying biconjugate A-orthonormalization procedure does not break down, one has the following properties. (1) span{r_0, ..., r_m} = span{p_0, ..., p_m} = K_{m+1}(A, r_0). (2) (R*_m)^H A R_m is diagonal. (3) (P*_m)^H A A P_m is block diagonal, where each block is of order 1 or 2 and the dimensions of the blocks sum to m + 1.

Proof. The properties can be proved within the same framework as those of Theorem 4.4 in [19].

Therefore, with the above properties, we can show that the use of 1 × 1 and 2 × 2 steps is sufficient for the CSBiCOR method to compute exactly all those well-defined BiCOR iterates stably, provided that the underlying biconjugate A-orthonormalization procedure depicted in Algorithm 1 does not break down.

Proposition 10. The CSBiCOR method is able to compute exactly all those well-defined BiCOR iterates stably without pivot breakdown assuming that the underlying biconjugate -orthonormalization procedure does not break down.

Proof. By construction, by induction, and by definition, the CSBiCOR residual at each well-defined step lies in the proper Krylov subspace, and, from Proposition 9, it is A-orthogonal to the shadow residuals. Therefore, it exactly satisfies the Petrov-Galerkin condition defining the BiCOR iterates.

Before ending this section, we present a best approximation result for the CSBiCOR method, which uses the same framework as that for the CSBCG method [19]. For discussions of convergence results for Krylov subspace methods, refer to [2, 6, 32, 33] and the references therein. To the best of our knowledge, this is the first attempt to study a convergence result for the BiCOR method [7].

First, let the trial and test spaces denote the Krylov subspaces generated by the biconjugate A-orthonormalization procedure after running Algorithm 1 for m steps; the norms associated with these spaces are defined through symmetric positive definite matrices. Then we have the following proposition with the initial guess x_0 of the linear system taken to be zero; the case of a nonzero initial guess can be treated analogously.

Proposition 11. Suppose that the continuity condition (37) holds with a constant independent of the iterates, and that the inf-sup condition (38) holds for those steps of the composite step Biconjugate A-Orthogonal Residual (CSBiCOR) method in which an approximation x_m is computed. Then the best approximation bound (39) holds.

Proof. From the Petrov-Galerkin condition (15), the bilinear form applied to the error vanishes against arbitrary test vectors; note that the comparison vector belongs to the trial Krylov subspace because of the assumption of a zero initial guess. Taking the supremum of both sides over all test vectors, using the inf-sup condition (38) to bound the left-hand side from below and the continuity assumption (37) to bound the right-hand side from above, and applying the triangle inequality yields the desired bound. The final assertion (39) follows immediately since the comparison vector is arbitrary.
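Schematically, the argument has the familiar quasi-optimality shape. In our own generic notation (a(·,·) for the underlying bilinear form, c for the continuity constant, γ for the inf-sup constant, and u ranging over the trial Krylov subspace — all labels ours):

```latex
\gamma \, \| x_m - u \|
  \;\le\; \sup_{w \ne 0} \frac{a(x_m - u, w)}{\| w \|}
  \;=\;   \sup_{w \ne 0} \frac{a(x - u, w)}{\| w \|}
  \;\le\; c \, \| x - u \| ,
\qquad
\| x - x_m \| \;\le\; \| x - u \| + \| x_m - u \|
            \;\le\; \Bigl( 1 + \frac{c}{\gamma} \Bigr) \| x - u \| .
```

The middle equality uses the Petrov-Galerkin condition a(x − x_m, w) = 0 for all test vectors w, and taking the infimum over u gives the best-approximation bound.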

6. Implementation Issues

For the purpose of presenting the CSBiCOR method conveniently and simply, we first prove some relations that simplify the solutions of the four linear systems represented as (24), (25), (29), and (30).

Lemma 12. For the four linear systems represented as in (24), (25), (29), and (30), the following relations hold:(1) and .(2) and , where .(3), where .

Proof. (1) If a 1 × 1 step is taken, the first equality follows because the last term vanishes by Proposition 9; the companion iteration similarly gives the second equality from Proposition 9 and the Hermitian property of the standard Hermitian inner product. If a 2 × 2 step is taken, the same equalities follow by the construction shown in (27) together with Proposition 9 and, analogously, the Hermitian property of the standard Hermitian inner product. The remaining equalities follow directly from Proposition 9. Finally, because the coefficient matrices of the symmetric systems in (24) and (25) are Hermitian to each other, as shown in Lemma 7, while the corresponding right-hand sides are conjugate to each other, as shown here, the stated conjugacy relation between the two solution pairs follows.
(2) From Proposition 9, the first relation can be verified directly. Next, note that if the 1 × 1 pivot vanishes, the corresponding auxiliary quantity must be nonzero; otherwise, the first equation in the linear system (24) could not hold, by Lemma 8 and part (1) above. Then, by the construction of (22) and Proposition 9, together with the fact proved in part (1), the second relation follows; by the construction of (21) and Proposition 9, the companion relation can be shown in the same way. Similarly to the relationships in part (1), the stated conjugacy relation follows because the coefficient matrices of the symmetric systems in (29) and (30) are Hermitian to each other, as shown in Lemma 7, while the corresponding right-hand sides are conjugate to each other, as shown here.
(3) By the construction of (19) and Proposition 9, the first chain of equalities follows. Because the corresponding coefficient matrices are Hermitian to each other, as already proved in Lemma 7, the remaining equalities follow as well. The proof is completed.

As a result of Lemmas 6 and 12, when a 2 × 2 step is taken, the solutions for the coefficients involved in the four linear systems represented as (24), (25), (29), and (30) can be obtained by solving only the following two linear systems:

It should be emphasized that these two systems have the same representations as for the CSBCG method [19] but differ in the detailed computational formulae for the corresponding quantities because of the different underlying procedures mentioned in Section 1. As a result, the two coefficient pairs can be represented explicitly.

Combining all the recurrences discussed above for either a 1 × 1 step or a 2 × 2 step, and adopting the strategy of reducing the number of matrix-vector multiplications by introducing an auxiliary vector recurrence and changing variables, as done for the BiCOR method [7] as well as for the BiCR method [31], leads to the CSBiCOR method. The pseudocode for the preconditioned CSBiCOR method with a left preconditioner is presented as Algorithm 3, where separate symbols denote the unpreconditioned and the preconditioned residuals. Note that the number of matrix-vector multiplications per step in this algorithm remains the same as in the BiCOR algorithm [7]; hence the extra cost of composite steps is negligible. That is, 2 × 2 composite steps cost approximately twice as much as 1 × 1 steps, just as stated for the CSBCG method [18]. Thus, from the point of view of choosing 1 × 1 or 2 × 2 steps, there is no significant difference in cost. Rather, the presented algorithm is motivated by obtaining smoother and, hopefully, faster convergence than the BiCOR method, besides eliminating pivot breakdowns.

(1) Compute r_0 = b − Ax_0 for some initial guess x_0.
(2) Choose r*_0 such that ⟨r*_0, Ar_0⟩ ≠ 0, where P(·) is a polynomial
   (e.g., r*_0 = P(A^H) r_0). Set the initial search directions and auxiliary vectors accordingly.
(3) Compute .
(4) Begin LOOP
(5)
(6)
(7)
(8)
(9)
(10)
(11)
(12) if 1 × 1 step then
(13)  
(14)  
(15)  
(16)  
(17)  
(18)  
(19)  
(20)  
(21)  
(22)  
(23) else
(24)  
(25)