Journal of Applied Mathematics
Volume 2013 (2013), Article ID 408167, 16 pages
Exploiting the Composite Step Strategy to the Biconjugate A-Orthogonal Residual Method for Non-Hermitian Linear Systems
1School of Mathematical Sciences, Institute of Computational Science, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, China
2Institute of Mathematics and Computing Science, University of Groningen, Nijenborgh 9, P.O. Box 407, 9700 AK Groningen, The Netherlands
Received 15 October 2012; Accepted 19 December 2012
Academic Editor: Zhongxiao Jia
Copyright © 2013 Yan-Fei Jing et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The Biconjugate A-Orthogonal Residual (BiCOR) method, carried out in finite precision arithmetic by means of the biconjugate A-orthonormalization procedure, may suffer from two sources of numerical instability, known as two kinds of breakdowns, similar to those of the Biconjugate Gradient (BCG) method. This paper naturally exploits the composite step strategy employed in the development of the composite step BCG (CSBCG) method within the BiCOR method to cure one of the breakdowns, called pivot breakdown. Analogously to the CSBCG method, the resulting variant, obtained with only a minor modification to the usual implementation of the BiCOR method, is able to avoid near pivot breakdowns and to compute all the well-defined BiCOR iterates stably, on the assumption that the underlying biconjugate A-orthonormalization procedure does not break down. A further benefit is that the variant appears to provide some practically desirable smoothing of the convergence history of the residual norms, which is confirmed by numerical experiments. In addition, the presented method inherits the empirically observed stability and fast convergence rate of the BiCOR method over the BCG method, so that it outperforms the CSBCG method to some extent.
The computational cost of many simulations based on integral equations or partial differential equations that model a physical process is dominated by the solution of systems of linear equations of the form Ax = b. The field of iterative methods for solving linear systems has seen an explosion of activity, spurred by demand due to extraordinary technological advances in engineering and the sciences over the past half century. Krylov subspace methods are among the most widespread and widely accepted techniques for the iterative solution of today's large-scale linear systems. With respect to "the greatest influence on the development and practice of science and engineering in the 20th century," as written by Dongarra and Sullivan, Krylov subspace methods are considered one of the "Top Ten Algorithms of the Century."
Many advances in the development of Krylov subspace methods have been inspired by the search for ever more effective approaches to linear systems. The various methods differ in the way they extract information from the Krylov spaces. A basis for the underlying Krylov subspace, used to form the iterative recurrences involved, is usually constructed by the state-of-the-art Arnoldi orthogonalization procedure or the Lanczos biorthogonalization procedure; see the recent excellent and thorough review articles by Simoncini and Szyld and by Philippe and Reichel, or monographs such as those of Greenbaum and Saad. A novel Lanczos-type biconjugate A-orthonormalization procedure has recently been established, giving birth to a new family of efficient short-recurrence methods for large real nonsymmetric and complex non-Hermitian systems of linear equations, named the Lanczos biconjugate A-orthonormalization methods.
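As a concrete illustration of how such a Krylov basis is constructed, the following sketch implements the Arnoldi orthogonalization procedure with modified Gram-Schmidt. It is written in Python/NumPy (rather than the MATLAB used later in the paper) and is purely illustrative; it is not part of the paper's own algorithms.

```python
import numpy as np

def arnoldi(A, v, m):
    """Arnoldi orthogonalization: build an orthonormal basis V of the
    Krylov subspace K_m(A, v) and a Hessenberg matrix H such that
    A @ V[:, :m] = V @ H (modified Gram-Schmidt variant)."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # orthogonalize against previous basis vectors
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:         # "happy breakdown": the subspace is A-invariant
            return V[:, :j + 1], H[:j + 1, :j]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H
```

The Lanczos biorthogonalization procedure replaces the full orthogonalization loop by a short three-term recurrence against a second, dual basis, which is the structural feature the biconjugate A-orthonormalization procedure below shares.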
As observed in numerous numerical experiments carried out with the Lanczos biconjugate A-orthonormalization methods, this family of solvers shows competitive convergence properties, is cheap in memory as it is derived from short-term vector recurrences, is parameter-free, and, unlike some other methods, does not require a symmetric preconditioner when A is symmetric indefinite [8-10]. The efficiency of the BiCOR method is highlighted by performance profiles on a set of 14 sparse matrix problems of size up to 1.5 M unknowns reported in the literature.
On the other hand, this family of solvers often exhibits apparently irregular convergence behavior, appearing as "spikes" in the convergence history of the residual norms, possibly leading to a substantial build-up of rounding errors and worse approximate solutions, or even to overflow [7, 9]. It is therefore necessary to tackle these irregular convergence properties and to obtain more stable variants, so as to improve the accuracy of the desired numerical solutions. We emphasize that our main attention in this paper is focused on a straightforward, natural enhancement of the Biconjugate A-Orthogonal Residual (BiCOR) method, which is the basic underlying variant of the Lanczos biconjugate A-orthonormalization methods. An improvement of the BiCOR method will simultaneously yield analogous improvements for the other two evolved variants, known as the Conjugate A-Orthogonal Residual Squared (CORS) method and the Biconjugate A-Orthogonal Residual Stabilized (BiCORSTAB) method.
Comparing the biconjugate A-orthonormalization procedure and the Lanczos biorthogonalization procedure underlying, respectively, the BiCOR method and the Biconjugate Gradient (BCG) method, as well as their corresponding implementation algorithms, it is readily observed that, when carried out in finite precision arithmetic, the BiCOR method does tend to suffer from two possible sources of numerical instability, known as two kinds of breakdowns, exactly analogous to those of the BCG method. We still call these two kinds of breakdowns Lanczos breakdown and pivot breakdown; they are analyzed and discussed in the following sections. A large number of strategies designed to handle these two kinds of breakdowns have been proposed in the literature [12-16], including composite step techniques [11, 17-21] for pivot breakdown and look-ahead techniques [22-30] for Lanczos breakdown or both types. For comprehensive reviews and discussions of the two kinds of breakdowns mentioned above, refer to [2, 6, 16] and the references therein. As shown and analyzed in detail by Bank and Chan [18, 19], the composite step strategy employed in the development of the composite step biconjugate gradient (CSBCG) method can eliminate (near) pivot breakdowns of the BCG method caused by (near) singularity of principal submatrices of the tridiagonal matrix generated by the underlying Lanczos biorthogonalization procedure, assuming that Lanczos breakdowns do not occur. Furthermore, besides simplifying implementations a great deal in comparison with existing look-ahead techniques, the composite step strategy is able to provide some smoothing of the convergence history of the residual norms without involving any arbitrary machine-dependent or user-supplied tolerances.
This paper revisits the composite step strategy taken for the CSBCG method [18, 19] to naturally stabilize the convergence behavior of the BiCOR method from the pivot-breakdown treatment point of view. We therefore suppose that the underlying biconjugate A-orthonormalization procedure does not break down; that is, we assume throughout that no so-called Lanczos breakdown is encountered during the algorithm implementations. The resulting algorithm, named the composite step BiCOR (CSBiCOR) method, can also be considered a relatively simple modification of the regular BiCOR method.
The main objectives are twofold. First, the CSBiCOR method is devised to avoid near pivot breakdowns and to compute all the well-defined BiCOR iterates stably with only minor modifications, inspired by the advantages of the CSBCG method over the BCG method. Second, the CSBiCOR method can reduce the number of spikes in the convergence history of the residual norms as far as possible, providing some further practically desirable smoothing toward stabilizing the BiCOR method when it exhibits erratic convergence. An additional aim is for the CSBiCOR method to inherit the empirically observed stability and fast convergence rate of the BiCOR method over the BCG method, so that it outperforms the CSBCG method to some extent.
The remainder of this work is organized as follows. A brief review of the biconjugate A-orthonormalization procedure is given, and the factorization of the resulting nonsingular tridiagonal matrices is considered, in Section 2, in preparation for the theoretical basis and relevant background of the CSBiCOR method. The breakdowns of the BiCOR method are presented in Section 3, while the CSBiCOR method is derived in Section 4. Section 5 analyzes the properties of the CSBiCOR method and provides a best approximation result for its convergence. In order to present the CSBiCOR method conveniently and concisely, some implementation issues concerning both algorithm simplification and the stepping strategy are discussed in detail in Section 6. The improved performance of the CSBiCOR method is illustrated in Section 7, from the perspective that the CSBiCOR method can hopefully smooth the convergence history by reducing the number of spikes when the BiCOR method has irregular convergence behavior. Finally, concluding remarks are given in the last section.
Throughout the paper, an overbar denotes the complex conjugate of a scalar, vector, or matrix, and the superscript T denotes the transpose of a vector or matrix. For a non-Hermitian matrix A, the Hermitian conjugate of A is denoted by A^H. The standard Hermitian inner product of two complex vectors x and y is defined as ⟨x, y⟩ = x^H y. The nested Krylov subspace of dimension m generated by A from v is of the form K_m(A, v) = span{v, Av, ..., A^{m-1}v}. In addition, e_i denotes the ith column of the appropriate identity matrix.
When it will be helpful, we will use the word “ideally” (or “mathematically”) to refer to a result that could hold in exact arithmetic ignoring effects of rounding errors, and “numerically” (or “computationally”) to a result of a finite precision computation.
2. Theoretical Basis for the CSBiCOR Method
For the sake of discussing the theoretical basis and relevant background of the CSBiCOR method, the biconjugate A-orthonormalization procedure is first briefly recalled as Algorithm 1, which can ideally build up a pair of biconjugate A-orthonormal bases for the dual Krylov subspaces K_m(A, v_1) and K_m(A^H, w_1), where v_1 and w_1 are chosen initially to satisfy certain conditions.
Observe that the above algorithm may suffer a Lanczos-type breakdown whenever the inner product in line 8 vanishes while the two vectors involved are nonzero. To counteract such breakdowns, one may resort to remedies such as the so-called look-ahead strategies [22-30], which can enhance stability while increasing cost only modestly. That, however, is outside the scope of this paper and we will not pursue it here; for more details, please refer to [2, 6] and the references therein. Throughout the rest of the paper, we suppose that no such Lanczos-type breakdown is encountered during the algorithm implementations, because most of our considerations concern the exploitation of the composite step strategy [18, 19] to handle the pivot breakdown occurring in the BiCOR method for solving non-Hermitian linear systems.
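To make the structure of such a procedure concrete, the following Python/NumPy sketch implements a Lanczos-type two-sided recurrence under the bilinear form ⟨w, v⟩_A = w^H A v, so that the computed bases satisfy W^H A V = I. The normalization and scaling choices here are illustrative assumptions and need not coincide exactly with those of Algorithm 1.

```python
import numpy as np

def biconjugate_a_orthonormalize(A, v1, w1, m, breakdown_tol=1e-13):
    """Lanczos-type procedure building dual bases V, W of K_m(A, v1) and
    K_m(A^H, w1) with W^H A V = I (biconjugate A-orthonormality).
    Illustrative sketch; scaling choices are assumptions."""
    n = A.shape[0]
    V = np.zeros((n, m), dtype=complex)
    W = np.zeros((n, m), dtype=complex)
    t = w1.conj() @ (A @ v1)
    if abs(t) < breakdown_tol:
        raise ValueError("cannot start: w1^H A v1 vanishes")
    V[:, 0] = v1
    W[:, 0] = w1 / np.conj(t)            # so that W[:,0]^H A V[:,0] = 1
    beta = delta = 0.0
    for j in range(m):
        Av = A @ V[:, j]
        alpha = W[:, j].conj() @ (A @ Av)        # diagonal recurrence coefficient
        vt = Av - alpha * V[:, j]
        wt = A.conj().T @ W[:, j] - np.conj(alpha) * W[:, j]
        if j > 0:
            vt -= beta * V[:, j - 1]
            wt -= np.conj(delta) * W[:, j - 1]
        if j == m - 1:
            break
        t = wt.conj() @ (A @ vt)
        if abs(t) < breakdown_tol:
            raise RuntimeError("Lanczos-type breakdown; look-ahead needed")
        delta = np.sqrt(abs(t))                  # kept positive, as in the text
        beta = t / delta
        V[:, j + 1] = vt / delta
        W[:, j + 1] = wt / np.conj(beta)
    return V, W
```

The key point visible in the sketch is that the operator pair (A, A^H) is self-adjoint with respect to the bilinear form ⟨w, v⟩_A, which is what makes the short three-term recurrence sufficient for full biconjugate A-orthonormality in exact arithmetic.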
Proposition 1. If Algorithm 1 proceeds m steps, then the right and left Lanczos-type vectors v_j and w_j, j = 1, ..., m, form a biconjugate A-orthonormal system in exact arithmetic; that is, ⟨w_i, A v_j⟩ = δ_ij for 1 ≤ i, j ≤ m.
Furthermore, denote by V_m and W_m the matrices whose columns are the right and left Lanczos-type vectors, respectively, and by the extended tridiagonal matrix the matrix whose entries are the coefficients generated during the algorithm implementation, in which the diagonal entries are complex while the off-diagonal entries are positive. Then, with the biconjugate A-orthonormalization procedure, four governing relations hold, expressing the action of A on V_m and of A^H on W_m in terms of the tridiagonal matrix together with the biconjugate A-orthonormality of the bases.
It is well known that the Lanczos biorthogonalization procedure can mathematically build up a pair of biorthogonal bases for the dual Krylov subspaces K_m(A, v_1) and K_m(A^H, w_1), where v_1 and w_1 are initially selected such that ⟨v_1, w_1⟩ ≠ 0, while the biconjugate A-orthonormalization procedure presented as Algorithm 1 can ideally build up a pair of biconjugate A-orthonormal bases for the same dual Krylov subspaces, with v_1 and w_1 chosen initially to satisfy ⟨w_1, A v_1⟩ = 1. Keeping in mind these differences in the Krylov subspace bases constructed by the two underlying procedures, we will borrow some of the results for the CSBCG method in order to establish the following counterparts for the CSBiCOR method; see the CSBCG literature for the relevant proofs.
It should be emphasized that the present CSBiCOR method is derived in exactly the same way as the CSBCG method. Hence, the analysis and theoretical background for the CSBiCOR method in this section are the counterparts of those for the CSBCG method in a different context, namely the biconjugate A-orthonormalization procedure of Algorithm 1. They are included here for theoretical completeness and can be proved in the same way as the corresponding results for the CSBCG method.
The following proposition states that the tridiagonal matrix formed in Proposition 1 cannot have two successive singular leading principal submatrices if the coefficient matrix A is nonsingular.
Proposition 2. Let T_j, j = 1, ..., m, be the upper left principal submatrices of the nonsingular tridiagonal matrix appearing in Proposition 1. Then T_{j-1} and T_j cannot both be singular.
It should be noted that a breakdown may occur in the triangular factorization without pivoting of the tridiagonal matrix formed in Proposition 1. The following results illustrate how to correct this problem with the occasional use of block pivots.
Proposition 3. Let T be the nonsingular tridiagonal matrix formed in Proposition 1. Then T can be factored as T = LDU, where L is unit lower block bidiagonal, U is unit upper block bidiagonal, and D is block diagonal with 1×1 and 2×2 diagonal blocks.
With the above two propositions, one can prove the following result, ensuring that even if an upper left principal submatrix of T is singular, it can still be factored.
Corollary 4. Suppose one upper left principal submatrix T_j of the nonsingular tridiagonal matrix formed in Proposition 1 is singular; then T_{j-1} must be nonsingular, as stated in Proposition 2. Then T_j can still be factored as T_j = LDU, where L is unit lower block bidiagonal, U is unit upper block bidiagonal, and D is block diagonal with 1×1 and 2×2 diagonal blocks, and in particular the last diagonal block of D is zero.
From Corollary 4, the singularity of T_j can be recognized by examining only the last diagonal element of D (the next potential pivot). If this pivot vanishes, then the 2×2 block formed from the adjacent entries of the tridiagonal matrix in Proposition 1 is nonsingular and can be used as a pivot instead.
It can be concluded that appropriate combinations of 1×1 and 2×2 steps for the BiCOR method can skip over the breakdowns caused by singular principal submatrices of the tridiagonal matrix generated by the underlying biconjugate A-orthonormalization procedure of Algorithm 1, since the BiCOR method pivots implicitly with those submatrices. The next section investigates in detail the breakdowns that may occur in the BiCOR method.
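The occasional use of 2×2 pivots in factoring a tridiagonal matrix can be sketched as follows. This is an illustrative Python/NumPy rendering of the idea behind Proposition 3 and Corollary 4, not the paper's own code; the tolerance and dense storage are assumptions made for clarity.

```python
import numpy as np

def block_ldu_tridiag(T, tol=1e-12):
    """Factor a nonsingular tridiagonal T as T = L @ D @ U, where L (U)
    is unit lower (upper) block bidiagonal and D is block diagonal with
    1x1 and 2x2 blocks: a 2x2 composite pivot is used whenever the
    current 1x1 pivot is (near) zero.  Illustrative sketch."""
    n = T.shape[0]
    S = T.astype(float).copy()                 # working copy for Schur updates
    L, U, D = np.eye(n), np.eye(n), np.zeros((n, n))
    k = 0
    while k < n:
        if abs(S[k, k]) > tol or k == n - 1:
            d = S[k, k]                        # ordinary 1x1 pivot
            if abs(d) <= tol:
                raise np.linalg.LinAlgError("T is singular")
            D[k, k] = d
            if k < n - 1:
                L[k + 1, k] = S[k + 1, k] / d
                U[k, k + 1] = S[k, k + 1] / d
                S[k + 1, k + 1] -= S[k + 1, k] * S[k, k + 1] / d
            k += 1
        else:
            P = S[k:k + 2, k:k + 2].copy()     # 2x2 composite pivot
            D[k:k + 2, k:k + 2] = P
            if k + 2 < n:
                # tridiagonal coupling: row/column k+2 touches index k+1 only
                L[k + 2, k:k + 2] = np.linalg.solve(P.T, [0.0, S[k + 2, k + 1]])
                U[k:k + 2, k + 2] = np.linalg.solve(P, [0.0, S[k + 1, k + 2]])
                S[k + 2, k + 2] -= S[k + 2, k + 1] * U[k + 1, k + 2]
            k += 2
    return L, D, U
```

Note that a vanishing 1×1 pivot triggers the 2×2 block, which (by Proposition 2) is nonsingular whenever T itself is, so the factorization always completes.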
3. Breakdowns of the BiCOR Method
Given an initial guess x_0 to the non-Hermitian linear system Ax = b, with associated initial residual r_0 = b - Ax_0, define the Krylov subspace K_m(A, r_0) as in Proposition 1, and choose the initial shadow residual arbitrarily such that its A-inner product with r_0 does not vanish. In practice, the shadow residual is often chosen in terms of r_0, and it is worth noting that this choice plays a significant role in establishing the empirically observed superiority of the BiCOR method over the BiCR method as well as over the BCG method. Thus, running Algorithm 1 for m steps, we can seek an mth approximate solution x_m from the affine subspace x_0 + K_m(A, r_0) of dimension m by imposing a Petrov-Galerkin condition on the residual, which can be written mathematically in matrix form.
Analogously, an mth dual approximation of the corresponding dual system is sought from the affine subspace of dimension m spanned by the dual Krylov vectors of Proposition 1, by satisfying the corresponding Petrov-Galerkin condition, which can also be written mathematically in matrix form, where the dual iteration starts from an initial dual approximate solution.
Consequently, the BiCOR iterates can be computed by Algorithm 2 below, which is just the unpreconditioned BiCOR method with the preconditioner taken as the identity matrix, rewritten in the algorithmic scheme of the unpreconditioned BCG method as presented in [6, 19].
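To fix ideas, a minimal unpreconditioned BiCOR iteration can be sketched as below in Python/NumPy. The coefficient formulas follow the BCG pattern transported to the A-inner products ⟨r̂, Ar⟩ and ⟨A^H p̂, Ap⟩; the choice of A r_0 for the shadow residual is an assumption made here for illustration, and Algorithm 2 in the paper remains the authoritative formulation.

```python
import numpy as np

def bicor(A, b, x0=None, tol=1e-10, maxit=None):
    """Sketch of an unpreconditioned BiCOR iteration (illustrative).
    The iteration enforces biconjugate A-orthogonality of the primary
    and shadow residuals; the shadow residual choice A @ r0 is an
    assumption, not necessarily the paper's exact choice."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    maxit = maxit or 2 * n
    r = b - A @ x
    rs = A @ r                          # shadow residual (assumed choice)
    p, ps = r.copy(), rs.copy()
    rho = rs.conj() @ (A @ r)           # <r_shadow, A r>
    for _ in range(maxit):
        q = A @ p
        qs = A.conj().T @ ps
        alpha = rho / (qs.conj() @ q)   # <A^H p_shadow, A p> in the denominator
        x += alpha * p
        r -= alpha * q
        rs -= np.conj(alpha) * qs
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        rho_new = rs.conj() @ (A @ r)
        beta = rho_new / rho
        rho = rho_new
        p = r + beta * p
        ps = rs + np.conj(beta) * ps
    return x
```

The denominator in alpha is the quantity referred to as the pivot below; its vanishing, with nonzero vectors, is exactly the pivot breakdown the composite step strategy is designed to cure.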
Proposition 5. Collect the primary and shadow residuals and search directions produced by Algorithm 2 into matrices. One has the following. (1) The residuals and search directions span the same Krylov subspaces as those of Proposition 1. (2) The matrix of mutual A-inner products of the shadow and primary residuals is diagonal. (3) The corresponding matrix for the shadow and primary search directions is diagonal.
Similarly to the breakdowns of the BCG method, it is observed from Algorithm 2 that there also exist two possible kinds of breakdowns for the BiCOR method: (1) the normalization inner product in line 14 vanishes while the two vectors involved are nonzero; (2) the pivot denominator in line 6 vanishes.
Although the computational formulae for the quantities in which the breakdowns reside differ between the BiCOR method and the BCG method, we do not have better names for them, and therefore we still call the two cases of breakdown described above Lanczos breakdown and pivot breakdown, respectively.
The Lanczos breakdown can be cured using look-ahead techniques [22-30], as mentioned in the first section, but such techniques require careful and sophisticated bookkeeping, which makes them rather complicated to apply. The application of look-ahead techniques to the BiCOR method demands further research.
In this paper, we resort to the composite step idea employed for the CSBCG method [18, 19] to handle the pivot breakdown of the BiCOR method, under the assumption that the underlying biconjugate A-orthonormalization procedure depicted in Algorithm 1 does not break down, that is, that the Lanczos breakdown situation does not occur.
4. The Composite Step BiCOR Method
Suppose Algorithm 2 encounters a vanishing pivot at step n after successful implementation up to that step, which indicates that the updates at step n+1 are not well defined. Adopting the composite step idea, we avoid division by the vanishing pivot by skipping this (n+1)st update and exploiting a composite step update to obtain the quantities at step n+2 directly, using scaled versions of the current residuals together with the previous primary search direction vector and shadow search direction vector. The following derivation of the CSBiCOR method is the same as that of the CSBCG method, except for the different underlying procedures involved, which generate different Krylov subspace bases.
Analogously, auxiliary vectors are first defined, and then used to seek the iterates at step n+2 with step coefficients to be determined; correspondingly, the (n+2)nd primary residual and shadow residual are computed from these updates. The biconjugate A-orthogonality condition between the BiCOR primary residuals and shadow residuals, shown as Property (2) in Proposition 5, combined with (21) and (22), gives rise to two systems of linear equations, (24) and (25), for the respective step coefficients.
Similarly, the (n+2)nd primary search direction vector and shadow search direction vector in a composite step are computed in the forms (26) and (27), with their own coefficients to be determined.
The biconjugate A-orthogonality condition between the BiCOR primary search direction vectors and shadow search direction vectors, shown as Property (3) in Proposition 5, combined with (26) and (27), results in two further systems of linear equations, (29) and (30), for the respective coefficients.
Therefore, one can advance from step n to step n+2, obtaining all the required iterates, residuals, and search directions, by solving the four linear systems (24), (25), (29), and (30). With an appropriate combination of 1×1 and 2×2 steps, the CSBiCOR method is thus obtained with only a minor modification to the usual implementation of the BiCOR method. Before presenting the algorithm explicitly, some properties of the CSBiCOR method are given in the following section.
5. Some Properties of the CSBiCOR Method
In order to show that the use of 1×1 and 2×2 steps is sufficient for the CSBiCOR method to compute exactly all the well-defined BiCOR iterates stably, provided that the underlying biconjugate A-orthonormalization procedure depicted in Algorithm 1 does not break down, it is necessary to establish some lemmas similar to those for the CSBCG method.
Lemma 6. Using either a 1×1 or a 2×2 step to obtain the search direction vectors, the biconjugate A-orthogonality relations between the primary and shadow search directions continue to hold.
Proof. If the search direction vectors are obtained via a 1×1 step, as shown in lines 15 and 16 of Algorithm 2, then the biconjugate A-orthogonality condition between the BiCOR primary search direction vectors and shadow search direction vectors, shown as Property (3) in Proposition 5, yields the claimed relations.
If they are obtained via a 2×2 step, as presented in (26) and (27), then the claimed relations follow in the same way from the biconjugate A-orthogonality conditions used above for the 1×1 step.
Lemma 7. The four coefficient matrices of the linear systems (24), (25), (29), and (30) have the following properties and relationships. (1) The coefficient matrices in (24) and (29) are identical, as are those in (25) and (30). (2) All the coefficient matrices involved are symmetric. (3) The coefficient matrices in (24) and (25) (correspondingly, in (29) and (30)) are Hermitian to each other.
Proof. The first relation is obvious by observation of the corresponding coefficient matrices in (24) and (29) (correspondingly in (25) and (30)).
To prove the second property, it suffices to show that the off-diagonal entries of each coefficient matrix coincide. From the auxiliary vectors respectively defined in (18) and (19), and using Lemma 6, the required equalities follow.
The third fact follows directly from the Hermitian property of the standard Hermitian inner product of two complex vectors defined at the end of the first section.
It is worth noting that similar Hermitian relationships also hold for the corresponding matrices involved in (1), (2), (3), and (4) of the CSBCG method when it is applied in complex cases.
Now, a result stating that a 2×2 step is always sufficient to skip over the pivot breakdown of the BiCOR method when the pivot vanishes, just as stated at the end of Section 2, can be shown in the next lemma.
Lemma 8. Suppose that the biconjugate A-orthonormalization procedure depicted in Algorithm 1, which underlies the BiCOR method, does not break down. If the pivot at step n vanishes, then the coefficient matrices in (24), (25), (29), and (30) are nonsingular.
The next proposition shows some properties of the CSBiCOR iterates similar to those of the BiCOR method presented in Proposition 5 with some minor changes.
Proposition 9. For the CSBiCOR algorithm, collect the residuals and search directions into matrices as in Proposition 5, with the skipped quantities replaced by their composite step counterparts whenever the corresponding step is a composite step. Assuming the underlying biconjugate A-orthonormalization procedure does not break down, one has the following properties. (1) The spanning property of Proposition 5 still holds. (2) The matrix of mutual A-inner products of the shadow and primary residuals is diagonal. (3) The corresponding matrix for the search directions is block diagonal, where each block is of order 1×1 or 2×2 and the sum of the block dimensions is m.
Proof. The properties can be proved within the same framework as those of Theorem 4.4 in the CSBCG literature.
Therefore, we can show with the above properties that the use of 1×1 and 2×2 steps is sufficient for the CSBiCOR method to compute exactly all the well-defined BiCOR iterates stably, provided that the underlying biconjugate A-orthonormalization procedure depicted in Algorithm 1 does not break down.
Proposition 10. The CSBiCOR method is able to compute exactly all those well-defined BiCOR iterates stably without pivot breakdown assuming that the underlying biconjugate -orthonormalization procedure does not break down.
Proof. By construction, by induction, and by definition of the composite step quantities, the CSBiCOR residual lies in the correct Krylov subspace; from Proposition 9, it is biconjugate A-orthogonal to the shadow residuals. Therefore, the CSBiCOR iterate exactly satisfies the Petrov-Galerkin condition defining the BiCOR iterates.
Before ending this section, we present a best approximation result for the CSBiCOR method, which uses the same framework as that for the CSBCG method. For a discussion of convergence results for Krylov subspace methods, refer to [2, 6, 32, 33] and the references therein. To the best of our knowledge, this is the first attempt to study a convergence result for the BiCOR method.
First, let K_m(A, r_0) and K_m(A^H, r̂_0) denote the Krylov subspaces generated by the biconjugate A-orthonormalization procedure after running Algorithm 1 for m steps. Norms associated with these two spaces are defined through symmetric positive definite matrices in the usual way. We then have the following proposition for the linear system with zero initial guess; the case of a nonzero initial guess can be treated analogously.
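For concreteness, the norms induced by symmetric positive definite matrices referred to above have the usual form ||x||_M = sqrt(x^H M x); that convention is assumed here, since the paper's displayed definition was lost to formatting. A small illustrative helper:

```python
import numpy as np

def m_norm(M, x):
    """Norm induced by an SPD (or Hermitian positive definite) matrix M:
    ||x||_M = sqrt(x^H M x).  The usual convention is assumed; for
    M = I this reduces to the Euclidean 2-norm."""
    q = x.conj() @ (M @ x)
    return float(np.sqrt(q.real))
```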
Proposition 11. Suppose that a continuity condition holds uniformly over the two subspaces, with a constant independent of the subspace dimension, and that an inf-sup condition holds for those steps of the composite step Biconjugate A-Orthogonal Residual (CSBiCOR) method at which an approximation is computed. Then the CSBiCOR approximation is, up to a constant, a best approximation from the Krylov subspace.
Proof. From the Petrov-Galerkin condition (15), the error is orthogonal in the relevant bilinear form to the dual subspace, yielding an identity valid for an arbitrary comparison element of the Krylov subspace; the error representation simplifies because of the assumption of a zero initial guess. Taking the supremum of both sides over the dual subspace, using the inf-sup condition (38) to bound the left-hand side from below and the continuity assumption (37) to bound the right-hand side from above, and applying the triangle inequality, the final assertion (39) follows immediately, since the comparison element is arbitrary.
6. Implementation Issues
For the purpose of presenting the CSBiCOR method conveniently and concisely, we first prove some relations simplifying the solution of the four linear systems (24), (25), (29), and (30).
Proof. (1) If a 1×1 step is taken, the first equality follows because the last term vanishes by Proposition 9, and the second follows similarly from Proposition 9 together with the Hermitian property of the standard Hermitian inner product. If a 2×2 step is taken, then by the construction in (27) and Proposition 9, a chain of equalities reduces the quantity to the claimed one; similarly, with Proposition 9 and the Hermitian property of the inner product, the companion identity can be shown. The remaining equalities follow directly from Proposition 9. Finally, because the coefficient matrices of the symmetric systems (24) and (25), respectively governing the two sets of coefficients, are Hermitian to each other by Lemma 7, while the corresponding right-hand sides are conjugate to each other as shown here, the two coefficient sets are conjugate to each other.

(2) From Proposition 9, the first identity can be verified directly. Next, note that if the pivot at step n vanishes, the corresponding inner product must also vanish; otherwise, the first equation of the linear system (24) could not hold, by Lemma 8 and part (1) above. Then, by the construction in (22) and Proposition 9, another chain of equalities establishes the second identity, and by the construction in (21) and Proposition 9 the analogous identity for the primary quantities follows. As in part (1), the coefficient sets of the symmetric systems (29) and (30) are conjugate to each other, because their coefficient matrices are Hermitian to each other by Lemma 7 while the corresponding right-hand sides are conjugate to each other.

(3) By the construction in (19) and Proposition 9, a final chain of equalities, combined with the identity already established in Lemma 7, yields the claimed relation. The proof is completed.
As a result of Lemmas 6 and 12, when a 2×2 step is taken, the solutions involved in the four linear systems (24), (25), (29), and (30) can be obtained by solving only two linear systems, the remaining coefficients following by conjugation.
It should be emphasized that these two systems have the same form as for the CSBCG method but differ in the detailed computational formulae for the corresponding quantities, because of the different underlying procedures mentioned in Section 1. As a result, the composite step coefficients can be represented explicitly in closed form.
Combining all the recurrences discussed above for either a 1×1 or a 2×2 step, and adopting the strategy of reducing the number of matrix-vector multiplications by introducing an auxiliary vector recurrence and changing variables, as done for the BiCOR method as well as for the BiCR method, together lead to the CSBiCOR method. Pseudocode for the preconditioned CSBiCOR method with a left preconditioner is given as Algorithm 3, where the unpreconditioned and preconditioned residuals are denoted separately. Note that the number of matrix-vector multiplications per step in this algorithm remains the same as in the BiCOR algorithm; the overhead of using composite steps is therefore negligible. That is, composite 2×2 steps cost approximately twice as much as 1×1 steps, just as stated for the CSBCG method, so from the point of view of choosing 1×1 or 2×2 steps there is no significant difference in algorithmic cost. The presented algorithm is, however, motivated by obtaining smoother and, hopefully, faster convergence behavior in comparison with the BiCOR method, besides eliminating pivot breakdowns.
For the issue of deciding between 1×1 and 2×2 updates, the heuristic based on the magnitudes of the residuals developed for the CSBCG method is borrowed for the CSBiCOR method, in order to choose the step size that maximizes numerical stability as well as to avoid overflow. The principle is to avoid "spikes" in the convergence history of the residual norms. A 2×2 update is chosen whenever condition (46) below is satisfied.
For completeness of the algorithm implementation, we recall the test procedure as follows. With suitably scaled versions of the corresponding quantities, the comparisons in (46) can be carried out without risk of overflow, and the test can be implemented with the code fragment shown in Algorithm 4.
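The spirit of the test, though not its exact scaled form in Algorithm 4, can be sketched as follows. The threshold comparison here is a simplified illustration of the Bank-Chan residual-magnitude heuristic, not the paper's precise condition (46).

```python
def choose_step(norm_rn, norm_r1, norm_r2):
    """Pick between a 1x1 update and a 2x2 composite update.
    norm_rn: current residual norm ||r_n||
    norm_r1: residual norm after a candidate 1x1 step, ||r_{n+1}||
    norm_r2: residual norm after a candidate 2x2 step, ||r_{n+2}||
    Take the single 1x1 step only if it does not create a spike above
    the neighboring residual norms (simplified illustration)."""
    if norm_r1 <= max(norm_rn, norm_r2):
        return "1x1"
    return "2x2"
```

In the actual algorithm the candidate norms are formed from scaled quantities precisely so that this comparison itself cannot overflow near a pivot breakdown.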
Analogously to the effect obtained with the CSBCG method in circumstances where (46) is satisfied, the use of 2×2 updates in the CSBiCOR method is also able to cut off spikes in the convergence history of the residual norms that would be caused by taking two 1×1 steps in such circumstances. The resulting CSBiCOR algorithm inherits the promising properties of the CSBCG method [18, 19]: the composite step strategy eliminates near pivot breakdowns while providing some smoothing of the convergence history of the residual norms, without involving any arbitrary machine-dependent or user-supplied tolerances. It should be emphasized, however, that the CSBiCOR method can eliminate only those spikes due to small pivots in the upper left principal submatrices of the tridiagonal matrix formed in Proposition 1, not all spikes, so the residual norm of the CSBiCOR method does not decrease monotonically. Finally, following the mathematically equivalent but numerically different variants of the CSBCG method, one could also develop such variants of the CSBiCOR method; this will be considered in future work.
7. Examples and Numerical Experiments
This section focuses on the analysis of the different numerical effects of the composite step strategy employed in the BiCOR method on a few typical, far from exhaustive, test problems arising from electromagnetics, discretizations of 2D/3D physical domains, and acoustics, described in Table 1. All of them are taken in MATLAB format from the University of Florida Sparse Matrix Collection provided by Davis, where the meanings of the column headers of Table 1 can be found. The analysis may suggest when use of the composite step strategy can be expected to make significant progress toward an exact solution, judging from the convergence history of the residual norms. The hope is that the stabilizing effect of the CSBiCOR method makes the residual norm plot smoother and, hopefully, faster decreasing, since far outlying iterates and residuals are avoided in the smoothed sequences.
In a realistic setting, one would use a preconditioner. With appropriate preconditioning techniques, such as the well-established methodologies based on approximate inverses [35–37], all the methods compared below are very attractive for solving relevant classes of non-Hermitian linear systems. However, preconditioning is not the point pursued here: all experiments are performed without preconditioning unless otherwise stated; that is, the preconditioner in Algorithm 3 is taken to be the identity matrix except where clarified. We refer to the survey by Benzi  and the book by Saad  for preconditioning techniques that improve the performance and reliability of Krylov subspace methods.
For all the test problems below, the BiCOR method is implemented using the same code as the CSBiCOR method, with the pivot test modified to always choose 1-step updates, so as to conveniently exhibit the stabilizing effect of the composite step strategy on the BiCOR method. The same is done for the BCG method, as implemented in [18, 19].
The experiments have been carried out in double precision floating point arithmetic in MATLAB 7.0.4 on a PC with a Pentium () D CPU at 3.00 GHz and 1 GB of RAM. We make comparisons in four respects: number of iterations (referred to as Iters), CPU time in seconds (referred to as CPU), and the updated and final true relative residual 2-norms, defined respectively as $\|r_k\|_2/\|r_0\|_2$ (referred to as Relres) and $\|b - A\tilde{x}\|_2/\|r_0\|_2$ (referred to as TRR), where $\tilde{x}$ denotes the final approximate solution. Iters records both the total number of iteration steps and the number of composite steps involved in the corresponding composite step methods. Numerical results in terms of Iters, CPU, and TRR are reported in tables, while the convergence histories are shown in figures with Iters (on the horizontal axis) versus Relres (on the vertical axis). The stopping criterion is that the 2-norm of the residual be reduced by a factor (referred to as TOL) of the 2-norm of the initial residual, that is, $\|r_k\|_2 \le \mathrm{TOL}\cdot\|r_0\|_2$, or that Iters exceeds the maximal iteration number (referred to as MAXIT). Here, we take . All tests are started with an initial guess equal to . Whenever the considered problem provides no right-hand side for the original linear system $Ax=b$, we let $b=Ae$, where $e$ is the vector whose elements are all equal to unity, so that $e$ is the exact solution. A symbol “*” indicates that the method did not meet the required TOL before MAXIT or did not converge at all.
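The experimental setup above (artificial right-hand side $b = Ae$ and the relative-residual stopping test) can be sketched as follows. The function name `setup_and_check` and the sample tolerance `tol=1e-8` are our own illustrative choices; the actual TOL used in the paper is not reproduced here.

```python
import numpy as np

def setup_and_check(A, x_approx, tol=1e-8):
    """Build the test right-hand side b = A*e (so e is the exact
    solution) and evaluate the true relative residual against tol."""
    n = A.shape[0]
    e = np.ones(n)            # exact solution by construction
    b = A @ e                 # right-hand side b = A*e
    r0 = b                    # initial residual for a zero initial guess
    trr = np.linalg.norm(b - A @ x_approx) / np.linalg.norm(r0)
    return trr, trr <= tol    # TRR and whether the TOL test is met

A = np.diag([1.0, 2.0, 3.0])
trr, converged = setup_and_check(A, np.ones(3))
print(trr, converged)         # exact solution: TRR = 0.0, converged
```

Note that Relres uses the recursively updated residual $r_k$ from the iteration, while TRR recomputes $b - A\tilde{x}$ explicitly; in finite precision the two can drift apart, which is why both are reported.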
7.1. Example 1: Dehghani/Light_in_Tissue
The first example is one in which both the BCG and BiCOR methods have fairly smooth convergence behaviors and few near breakdowns. Investigating Table 2 and Figure 1(b), the CSBiCOR method has exactly the same convergence behavior as the BiCOR method. In such circumstances, when the convergence history of the residual norms is smooth enough that the composite step strategy gains little for the CSBiCOR method, the plain BiCOR method is preferred and there is no need to employ the composite step strategy, since the MATLAB implementation probably penalizes the CSBiCOR code slightly with respect to CPU. Incidentally, from Figure 1(c), the CSBCG method works as predicted in [18, 19], smoothing the convergence history of the BCG method a little. In particular, the CSBCG method computes a subset of the BCG iterates by clipping three “spikes” in the BCG history by means of composite steps, as seen from the closer investigation given in the first two bottom plots of Figures 1(d) and 1(e) and in Table 2.
Another interesting observation is that the good properties of the BiCOR method, namely its fast convergence speed and smooth convergence behavior in comparison with the BCG method, are naturally inherited by the CSBiCOR method, so that the CSBiCOR method outperforms the CSBCG method in terms of Iters, CPU, and smoothing effect, as shown in Table 2 and Figure 1(f).
7.2. Example 2: Kim/Kim1
This is a good test example to vividly demonstrate the efficacy of the composite step strategy, as depicted in Figure 2. The BCG method, as observed in Figure 2(a), exhibits many local “spikes” in its convergence curve. In such a case, the CSBCG method does a remarkable job of smoothing the convergence history of the residual norms, as shown in Figure 2(c), although it does not lead to a monotonically decreasing convergence behavior. The CSBiCOR method does the same for the BiCOR method, as observed in Figure 2(d). Moreover, the CSBiCOR method seems to take less CPU than the BiCOR method while keeping approximately the same TRR, when the BiCOR method is implemented with the same code as the CSBiCOR method and the pivot test is modified to always choose 1-step updates. Finally, the CSBiCOR method is again shown to be competitive with the CSBCG method, as seen in Table 3 and Figure 2(b).
7.3. Example 3: HB/Young1c
In the third example, the BCG and BiCOR methods exhibit small irregularities and do not converge superlinearly, as reflected in Figure 3(a). The two methods seem to have almost the same “asymptotic” speed of convergence, while the BiCOR method appears a little smoother than the BCG method. From Figures 3(c) and 3(d), the composite step strategy has a surprisingly good stabilizing effect on both the BCG and BiCOR methods. Furthermore, the CSBCG and CSBiCOR methods take less CPU, respectively, than the BCG and BiCOR methods, as reported in Table 4. We stress that although the CSBiCOR method consumes slightly more Iters and CPU than the CSBCG method, it presents a slightly smoother convergence behavior than the latter, as displayed in Figure 3(b), again owing to the empirically observed stability and fast convergence rate of the BiCOR method over the BCG method.
8. Concluding Remarks
We have presented a new variant of the BiCOR method for solving non-Hermitian systems of linear equations. Our approach is naturally based on, and inspired by, the composite step strategy taken for the CSBCG method [18, 19]. Although exact pivot breakdowns are rare in practice, near breakdowns can cause severe numerical instability. The resulting CSBiCOR method is demonstrated, both theoretically and numerically, to avoid near pivot breakdowns and to compute all the well-defined BiCOR iterates stably with only minor modifications, under the assumption that the underlying biconjugate -orthonormalization procedure does not break down. Besides reducing the number of spikes in the convergence history of the residual norms to the greatest extent, the CSBiCOR method provides some further practically desired smoothing towards stabilizing the BiCOR method when the latter has erratic convergence behavior. Additionally, the CSBiCOR method appears superior to the CSBCG method to some extent, owing to the inherited stability and fast convergence rate empirically observed for the BiCOR method over the BCG method.
Since the BiCOR method is the most basic variant of the family of Lanczos biconjugate -orthonormalization methods, its improvement will analogously lead to similar improvements for the CORS and BiCORSTAB methods, which is under investigation.
It is important to note the nonnegligible added complexity of deciding whether to take a composite step: some extra computation must be performed before the decision is made. Therefore, when the composite step is not taken after the decision test, this extra computation, and the corresponding CPU time, is wasted. Optimizing the implementation of the CSBiCOR method therefore seems critical.
The authors would like to thank Professor Randolph E. Bank for showing us the “PLTMG” software package and for his insightful and beneficial discussions and suggestions. Furthermore, the authors wish to acknowledge Professor Zhongxiao Jia and the anonymous referees for their suggestions in the presentation of this article. This research is supported by NSFC (60973015, 61170311, 11126103, 11201055, 11171054), the Specialized Research Fund for the Doctoral Program of Higher Education (20120185120026), Sichuan Province Sci. & Tech. Research Project (2011JY0002), and the Fundamental Research Funds for the Central Universities.
- Y. Saad and H. A. van der Vorst, “Iterative solution of linear systems in the 20th century,” Journal of Computational and Applied Mathematics, vol. 123, no. 1-2, pp. 1–33, 2000.
- V. Simoncini and D. B. Szyld, “Recent computational developments in Krylov subspace methods for linear systems,” Numerical Linear Algebra with Applications, vol. 14, no. 1, pp. 1–59, 2007.
- J. Dongarra and F. Sullivan, “Guest editors’ introduction to the top 10 algorithms,” Computing in Science & Engineering, vol. 2, no. 1, pp. 22–23, 2000.
- B. Philippe and L. Reichel, “On the generation of Krylov subspace bases,” Applied Numerical Mathematics, vol. 62, no. 9, pp. 1171–1186, 2012.
- A. Greenbaum, Iterative Methods for Solving Linear Systems, vol. 17 of Frontiers in Applied Mathematics, SIAM, Philadelphia, Pa, USA, 1997.
- Y. Saad, Iterative Methods for Sparse Linear Systems, SIAM, Philadelphia, Pa, USA, 2nd edition, 2003.
- Y.-F. Jing, T.-Z. Huang, Y. Zhang et al., “Lanczos-type variants of the COCR method for complex nonsymmetric linear systems,” Journal of Computational Physics, vol. 228, no. 17, pp. 6376–6394, 2009.
- Y.-F. Jing, B. Carpentieri, and T.-Z. Huang, “Experiments with Lanczos biconjugate -orthonormalization methods for MoM discretizations of Maxwell’s equations,” Progress in Electromagnetics Research, vol. 99, pp. 427–451, 2009.
- Y.-F. Jing, T.-Z. Huang, Y. Duan, and B. Carpentieri, “A comparative study of iterative solutions to linear systems arising in quantum mechanics,” Journal of Computational Physics, vol. 229, no. 22, pp. 8511–8520, 2010.
- B. Carpentieri, Y.-F. Jing, and T.-Z. Huang, “The BICOR and CORS iterative algorithms for solving nonsymmetric linear systems,” SIAM Journal on Scientific Computing, vol. 33, no. 5, pp. 3020–3036, 2011.
- R. Fletcher, “Conjugate gradient methods for indefinite systems,” in Numerical Analysis (Proc 6th Biennial Dundee Conf., Univ. Dundee, Dundee, 1975), vol. 506 of Lecture Notes in Mathematics, pp. 73–89, Springer, Berlin, Germany, 1976.
- W. Joubert, Generalized conjugate gradient and Lanczos methods for the solution of nonsymmetric systems of linear equations [Ph.D. thesis], University of Texas, Austin, Tex, USA, 1990.
- W. Joubert, “Lanczos methods for the solution of nonsymmetric systems of linear equations,” SIAM Journal on Matrix Analysis and Applications, vol. 13, no. 3, pp. 926–943, 1992.
- M. H. Gutknecht, “A completed theory of the unsymmetric Lanczos process and related algorithms. I,” SIAM Journal on Matrix Analysis and Applications, vol. 13, no. 2, pp. 594–639, 1992.
- M. H. Gutknecht, “A completed theory of the unsymmetric Lanczos process and related algorithms. II,” SIAM Journal on Matrix Analysis and Applications, vol. 15, no. 1, pp. 15–58, 1994.
- M. H. Gutknecht, “Block Krylov space methods for linear systems with multiple right-hand sides: an introduction,” in Modern Mathematical Models, Methods and Algorithms for Real World Systems, A. H. Siddiqi, I. S. Duff, and O. Christensen, Eds., Anamaya Publishers, New Delhi, India, 2006.
- D. G. Luenberger, “Hyperbolic pairs in the method of conjugate gradients,” SIAM Journal on Applied Mathematics, vol. 17, pp. 1263–1267, 1969.
- R. E. Bank and T. F. Chan, “An analysis of the composite step biconjugate gradient method,” Numerische Mathematik, vol. 66, no. 3, pp. 295–319, 1993.
- R. E. Bank and T. F. Chan, “A composite step bi-conjugate gradient algorithm for nonsymmetric linear systems,” Numerical Algorithms, vol. 7, no. 1, pp. 1–16, 1994.
- R. W. Freund and N. M. Nachtigal, “QMR: a quasi-minimal residual method for non-Hermitian linear systems,” Numerische Mathematik, vol. 60, no. 3, pp. 315–339, 1991.
- M. H. Gutknecht, “The unsymmetric Lanczos algorithms and their relations to Padé approximation, continued fractions and the QD algorithm,” in Proc. Copper Mountain Conference on Iterative Methods, Book 2, Breckenridge, Colo, USA, 1990.
- B. N. Parlett, D. R. Taylor, and Z. A. Liu, “A look-ahead Lánczos algorithm for unsymmetric matrices,” Mathematics of Computation, vol. 44, no. 169, pp. 105–124, 1985.
- B. N. Parlett, “Reduction to tridiagonal form and minimal realizations,” SIAM Journal on Matrix Analysis and Applications, vol. 13, no. 2, pp. 567–593, 1992.
- C. Brezinski, M. Redivo Zaglia, and H. Sadok, “Avoiding breakdown and near-breakdown in Lanczos type algorithms,” Numerical Algorithms, vol. 1, no. 3, pp. 261–284, 1991.
- C. Brezinski, M. Redivo Zaglia, and H. Sadok, “A breakdown-free Lanczos type algorithm for solving linear systems,” Numerische Mathematik, vol. 63, no. 1, pp. 29–38, 1992.
- C. Brezinski, M. Redivo-Zaglia, and H. Sadok, “Breakdowns in the implementation of the Lánczos method for solving linear systems,” Computers & Mathematics with Applications, vol. 33, no. 1-2, pp. 31–44, 1997.
- C. Brezinski, M. R. Zaglia, and H. Sadok, “New look-ahead Lanczos-type algorithms for linear systems,” Numerische Mathematik, vol. 83, no. 1, pp. 53–85, 1999.
- N. M. Nachtigal, A look-ahead variant of the Lanczos algorithm and its application to the quasi-minimal residual method for non-Hermitian linear systems [Ph.D. thesis], Massachusetts Institute of Technology, Cambridge, Mass, USA, 1991.
- R. W. Freund, M. H. Gutknecht, and N. M. Nachtigal, “An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices,” SIAM Journal on Scientific Computing, vol. 14, no. 1, pp. 137–158, 1993.
- M. H. Gutknecht, “Lanczos-type solvers for nonsymmetric linear systems of equations,” in Acta Numerica, 1997, vol. 6 of Acta Numerica, pp. 271–397, Cambridge University Press, Cambridge, UK, 1997.
- T. Sogabe, M. Sugihara, and S.-L. Zhang, “An extension of the conjugate residual method to nonsymmetric linear systems,” Journal of Computational and Applied Mathematics, vol. 226, no. 1, pp. 103–113, 2009.
- P. Concus, G. H. Golub, and D. P. O'Leary, “A generalized conjugate gradient method for the numerical solution of elliptic partial differential equations,” in Sparse Matrix Computations (Proc. Sympos., Argonne Nat. Lab., Lemont, Ill., 1975), pp. 309–332, Academic Press, New York, NY, USA, 1976.
- A. B. J. Kuijlaars, “Convergence analysis of Krylov subspace iterations with methods from potential theory,” SIAM Review, vol. 48, no. 1, pp. 3–40, 2006.
- T. Davis, “The University of Florida sparse matrix collection,” NA Digest, vol. 97, no. 23, 1997.
- T. Huckle, “Approximate sparsity patterns for the inverse of a matrix and preconditioning,” Applied Numerical Mathematics, vol. 30, no. 2-3, pp. 291–303, 1999.
- G. A. Gravvanis, “Explicit approximate inverse preconditioning techniques,” Archives of Computational Methods in Engineering, vol. 9, no. 4, pp. 371–402, 2002.
- B. Carpentieri, I. S. Duff, L. Giraud, and G. Sylvand, “Combining fast multipole techniques and an approximate inverse preconditioner for large electromagnetism calculations,” SIAM Journal on Scientific Computing, vol. 27, no. 3, pp. 774–792, 2005.
- M. Benzi, “Preconditioning techniques for large linear systems: a survey,” Journal of Computational Physics, vol. 182, no. 2, pp. 418–477, 2002.