N. Akhondi, F. Toutounian, "Accelerated Circulant and Skew Circulant Splitting Methods for Hermitian Positive Definite Toeplitz Systems", Advances in Numerical Analysis, vol. 2012, Article ID 973407, 17 pages, 2012. https://doi.org/10.1155/2012/973407

Accelerated Circulant and Skew Circulant Splitting Methods for Hermitian Positive Definite Toeplitz Systems

Academic Editor: Ivan Ganchev Ivanov
Received: 05 Aug 2011
Revised: 26 Oct 2011
Accepted: 26 Oct 2011
Published: 11 Dec 2011

Abstract

We study the CSCS method for large Hermitian positive definite Toeplitz linear systems, which first appeared in Ng's paper (Ng, 2003); CSCS stands for circulant and skew circulant splitting of the coefficient matrix. In this paper, we present a new iteration method for the numerical solution of Hermitian positive definite Toeplitz systems of linear equations. The method is a two-parameter generalization of the CSCS method such that, when the two parameters involved are equal, it coincides with the CSCS method. We discuss the convergence properties and optimal parameters of this method. Finally, we extend our method to BTTB matrices. Numerical experiments are presented to show the effectiveness of our new method.

1. Introduction

We consider the iterative solution of the large system of linear equations T x = b, where T is a Hermitian positive definite Toeplitz matrix and b is a given vector. An n-by-n matrix T is said to be Toeplitz if its entries depend only on the difference of the row and column indices; that is, T is constant along its diagonals. Toeplitz systems arise in a variety of applications, especially in signal processing and control theory. Many direct methods have been proposed for solving Toeplitz linear systems. A straightforward application of Gaussian elimination leads to an algorithm with O(n^3) complexity. There are a number of fast Toeplitz solvers that decrease the complexity to O(n^2) operations; see, for instance, [1–3]. Around 1980, superfast direct Toeplitz solvers of complexity O(n log^2 n), such as the one by Ammar and Gragg [4], were also developed. Recent research on using the preconditioned conjugate gradient method as an iterative method for solving Toeplitz systems has attracted much attention. One of the most important results of this methodology is that the PCG method has a computational complexity proportional to n log n for a large class of problems [5] and is therefore competitive with any direct method.
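The building block behind all of these fast methods is the FFT: an n-by-n Toeplitz matrix can be applied to a vector in O(n log n) operations by embedding it in a 2n-by-2n circulant. The following is a minimal sketch of this standard technique, with made-up data; it is not code from the paper.

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(col, row, x):
    """Multiply an n-by-n Toeplitz matrix (first column `col`, first row `row`,
    with col[0] == row[0]) by a vector in O(n log n) time, by embedding the
    matrix in a 2n-by-2n circulant and using FFTs."""
    n = len(x)
    # First column of the circulant embedding: [t_0 .. t_{n-1}, 0, t_{1-n} .. t_{-1}].
    c = np.concatenate([col, [0.0], row[:0:-1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real  # real data in this illustration

# Made-up example: a 4-by-4 symmetric positive definite Toeplitz matrix.
col = np.array([4.0, 1.0, 0.5, 0.25])
x = np.ones(4)
assert np.allclose(toeplitz_matvec(col, col, x), toeplitz(col) @ x)
```

The same circulant-embedding trick is what makes each half-step of the splitting iterations below cost O(n log n).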

In [6], an iterative method based on circulant and skew circulant splitting (CSCS) of the Toeplitz matrix was given. The authors derived an upper bound on the contraction factor of the CSCS iteration which depends solely on the spectra of the circulant and the skew circulant matrices involved.

In [7], the authors studied the HSS iteration method for large sparse non-Hermitian positive definite Toeplitz linear systems, which first appeared in [8]. They used the HSS iteration method based on a special case of the HSS splitting, where the symmetric part is a centrosymmetric matrix and the skew-symmetric part is a skew-centrosymmetric matrix for a given Toeplitz matrix, and discussed the computational complexity of the HSS and IHSS methods.

In this paper, we present an efficient iterative method for the numerical solution of Hermitian positive definite Toeplitz systems of linear equations. The method is a two-parameter generalization of the CSCS method such that, when the two parameters involved are equal, it coincides with the CSCS method. We discuss the convergence properties and optimal parameters of this method. Then we extend our method to block-Toeplitz-Toeplitz-block (BTTB) matrices.

For convenience, some of the terminology used in this paper will be given. We work with matrices in C^{n×n}, the set of all n-by-n complex matrices. For A in C^{n×n}, we write A > 0 (A ≥ 0) if A is Hermitian positive (semi-)definite. If A and B are both Hermitian, we write A > B if and only if A − B > 0. For a Hermitian positive definite matrix M, we define the M-norm of a vector x as ‖x‖_M = (x^H M x)^{1/2}; the induced M-norm of a matrix A is then ‖A‖_M = max over x ≠ 0 of ‖Ax‖_M / ‖x‖_M.

The organization of this paper is as follows. In Section 2, we present accelerated circulant and skew circulant splitting (ACSCS) method for Toeplitz systems. In Section 3, we study the convergence properties and analyze the convergence rate of ACSCS iteration and derive optimal parameters. The convergence results of ACSCS method for BTTB matrices are given in Section 4. Numerical experiments are presented in Section 5 to show the effectiveness of our new method. Finally some conclusions are given in Section 6.

2. Accelerated Circulant and Skew Circulant Splitting Method

Let us begin by supposing that the entries of the n-by-n Toeplitz matrix T are the Fourier coefficients t_k of a real generating function. Since the generating function is real-valued, t_{−k} = conj(t_k) for all integers k, and T is a Hermitian matrix. We note that a Hermitian Toeplitz matrix can always be split as T = C + S, where C is the circulant part and S is the skew circulant part of T.

Clearly, C is a Hermitian circulant matrix and S is a Hermitian skew circulant matrix. The positive definiteness of C and S is given in the following theorem. Its proof is similar to that of Theorem 2 in [9].
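The splitting can be made concrete with a small sketch that builds the first columns of C and S from the first column t of a Hermitian Toeplitz matrix and verifies T = C + S. The column formulas c_0 = s_0 = t_0/2, c_j = (t_j + conj(t_{n−j}))/2, s_j = (t_j − conj(t_{n−j}))/2 follow Ng's circulant and skew circulant splitting; the test matrix is made up.

```python
import numpy as np
from scipy.linalg import toeplitz

def cscs_split(t):
    """First columns of C (circulant) and S (skew circulant) with T = C + S,
    for a Hermitian Toeplitz T with first column t (so t[0] is real), using
    t_{j-n} = conj(t_{n-j}) for the entries above the diagonal."""
    n = len(t)
    c = np.empty(n, dtype=complex)
    s = np.empty(n, dtype=complex)
    c[0] = s[0] = t[0] / 2
    j = np.arange(1, n)
    c[j] = (t[j] + np.conj(t[n - j])) / 2
    s[j] = (t[j] - np.conj(t[n - j])) / 2
    return c, s

def circulant(c):
    n = len(c)
    return np.array([[c[(j - k) % n] for k in range(n)] for j in range(n)])

def skew_circulant(s):
    n = len(s)
    return np.array([[s[j - k] if j >= k else -s[j - k + n] for k in range(n)]
                     for j in range(n)])

t = np.array([4.0, 1.0, 0.5, 0.25])       # made-up symmetric Toeplitz data
c, s = cscs_split(t)
C, S = circulant(c), skew_circulant(s)
assert np.allclose(C + S, toeplitz(t))    # the splitting T = C + S
assert np.allclose(C, C.conj().T) and np.allclose(S, S.conj().T)  # both Hermitian
```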

Theorem 2.1. Let the generating function be a real-valued positive function in the Wiener class. Then the circulant matrix C and the skew circulant matrix S, defined by the splitting T = C + S, are uniformly positive definite and bounded for sufficiently large n.

The subscript n of the matrices is omitted hereafter whenever there is no confusion.

Based on the splitting (2.2), Ng [6] presented the CSCS iteration method: given an initial guess x^(0), for k = 0, 1, 2, ..., until {x^(k)} converges, compute

    (αI + C) x^(k+1/2) = (αI − S) x^(k) + b,
    (αI + S) x^(k+1) = (αI − C) x^(k+1/2) + b,

where α is a given positive constant. He also proved that if the circulant and the skew circulant splitting matrices are positive definite, then the CSCS method converges to the unique solution of the system of linear equations. Moreover, he derived an upper bound on the contraction factor of the CSCS iteration which depends solely on the spectra of the circulant and the skew circulant matrices C and S, respectively.

In this paper, based on the CSCS splitting, we present a different approach to solving (1.1) with a Hermitian positive definite coefficient matrix, called the accelerated circulant and skew circulant splitting method, shortened to the ACSCS iteration. Let us describe it as follows.

The ACSCS iteration method: given an initial guess x^(0), for k = 0, 1, 2, ..., until {x^(k)} converges, compute

    (αI + C) x^(k+1/2) = (αI − S) x^(k) + b,
    (βI + S) x^(k+1) = (βI − C) x^(k+1/2) + b,

where α is a given nonnegative constant and β is a given positive constant.

The ACSCS iteration alternates between the circulant matrix C and the skew circulant matrix S. Theoretical analysis shows that if the coefficient matrix is Hermitian positive definite, the ACSCS iteration (2.6) converges to the unique solution of the linear system (1.1) for any given nonnegative α, provided β is restricted to an appropriate region. The upper bound on the contraction factor of the ACSCS iteration depends on the choice of α and β and on the spectra of the circulant matrix C and the skew circulant matrix S. The two steps of each ACSCS iterate require exact solutions with the matrices αI + C and βI + S. Since circulant matrices can be diagonalized by the discrete Fourier matrix, and skew circulant matrices can be diagonalized by the discrete Fourier matrix times a unitary diagonal matrix [10], with the resulting diagonal factors holding the eigenvalues of C and S, respectively, the exact solutions with circulant and skew circulant matrices can be obtained by using fast Fourier transforms (FFTs). In particular, the number of operations required for each step of the ACSCS iteration method is O(n log n).
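The FFT-based half-steps can be sketched as follows. This is a minimal illustration, not the authors' MATLAB code: it assumes the circulant part C and the skew circulant part S are given by their first columns c and s, and the example data and the parameter choice α = β = 1 are made up.

```python
import numpy as np

def acscs_solve(c, s, b, alpha, beta, tol=1e-12, maxit=1000):
    """ACSCS iteration for T x = b with T = C + S:
        (alpha*I + C) x_half = (alpha*I - S) x + b
        (beta*I  + S) x_new  = (beta*I  - C) x_half + b
    C is the circulant with first column c, S the skew circulant with first
    column s.  Every half-step costs O(n log n) via FFT diagonalization."""
    n = len(b)
    d = np.exp(-1j * np.pi * np.arange(n) / n)   # D = diag(d): D S D^{-1} is circulant
    lamC = np.fft.fft(c)                          # eigenvalues of C
    lamS = np.fft.fft(d * s)                      # eigenvalues of D S D^{-1} (= eig of S)
    Cmul = lambda x: np.fft.ifft(lamC * np.fft.fft(x))
    Smul = lambda x: np.fft.ifft(lamS * np.fft.fft(d * x)) / d
    x = np.zeros(n, dtype=complex)
    for _ in range(maxit):
        xh = np.fft.ifft(np.fft.fft(alpha * x - Smul(x) + b) / (alpha + lamC))
        r = beta * xh - Cmul(xh) + b
        x = np.fft.ifft(np.fft.fft(d * r) / (beta + lamS)) / d
        if np.linalg.norm(Cmul(x) + Smul(x) - b) <= tol * np.linalg.norm(b):
            break
    return x

# Made-up example: the splitting columns of T = toeplitz([4, 1, 0.5, 0.25]).
c = np.array([2.0, 0.625, 0.5, 0.625])   # circulant part, first column
s = np.array([2.0, 0.375, 0.0, -0.375])  # skew circulant part, first column
b = np.ones(4)
x = acscs_solve(c, s, b, alpha=1.0, beta=1.0)
```

With α = β, this reduces to Ng's CSCS iteration.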

Noting that the roles of the matrices C and S in (2.6) can be reversed, we can first solve the system of linear equations involving the skew circulant matrix S and then solve the system involving the circulant matrix C.

3. Convergence Analysis of the ACSCS Iteration

In this section, we study the convergence rate of the ACSCS iteration, supposing that the entries of T are the Fourier coefficients of a real generating function that satisfies the conditions of Theorem 2.1. Thus, for sufficiently large n, the matrices T, C, and S are Hermitian positive definite. Let us denote the eigenvalues of C and S by λ_j(C) and λ_j(S), j = 1, ..., n, and the minimum and maximum eigenvalues of C and S by λ_min(C), λ_max(C) and λ_min(S), λ_max(S), respectively. Therefore, from Theorem 2.1, for sufficiently large n we have λ_min(C) > 0 and λ_min(S) > 0.

We first note that the ACSCS iteration method can be generalized to the two-step splitting iteration framework, and the following lemma describes a general convergence criterion for a two-step splitting iteration.

Lemma 3.1. Let T = M_1 − N_1 = M_2 − N_2 be two splittings of the matrix T, and let x^(0) be a given initial vector. If {x^(k)} is a two-step iteration sequence defined by

    M_1 x^(k+1/2) = N_1 x^(k) + b,
    M_2 x^(k+1) = N_2 x^(k+1/2) + b,

then x^(k+1) = M_2^{-1} N_2 M_1^{-1} N_1 x^(k) + M_2^{-1} (I + N_2 M_1^{-1}) b for k = 0, 1, 2, .... Moreover, if the spectral radius ρ(M_2^{-1} N_2 M_1^{-1} N_1) < 1, then the iterative sequence {x^(k)} converges to the unique solution of the system of linear equations (1.1) for all initial vectors x^(0).
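This lemma can be checked numerically for the circulant/skew circulant splitting on a small made-up example: build C and S densely, form the two-step iteration matrix, and verify that its spectral radius is below one. The matrix data and the parameters are illustrative only, not from the paper's experiments.

```python
import numpy as np
from scipy.linalg import toeplitz

# Made-up Hermitian positive definite Toeplitz matrix and its splitting T = C + S.
t = np.array([4.0, 1.0, 0.5, 0.25])
n = len(t)
T = toeplitz(t)
c = np.concatenate([[t[0] / 2], (t[1:] + t[:0:-1]) / 2])   # circulant column
s = np.concatenate([[t[0] / 2], (t[1:] - t[:0:-1]) / 2])   # skew circulant column
C = np.array([[c[(j - k) % n] for k in range(n)] for j in range(n)])
S = np.array([[s[j - k] if j >= k else -s[j - k + n] for k in range(n)]
              for j in range(n)])
assert np.allclose(C + S, T)

# Two-step splitting:  M1 = alpha*I + C, N1 = alpha*I - S,
#                      M2 = beta*I + S,  N2 = beta*I - C.
alpha, beta = 1.0, 1.0
I = np.eye(n)
M = np.linalg.solve(beta * I + S, beta * I - C) @ \
    np.linalg.solve(alpha * I + C, alpha * I - S)
rho = max(abs(np.linalg.eigvals(M)))
assert rho < 1    # the two-step iteration converges for this example
```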

Applying this lemma to the ACSCS iteration, we obtain the following convergence property.

Theorem 3.2. Let T be a Hermitian positive definite Toeplitz matrix, let C and S be its Hermitian positive definite circulant and skew circulant parts, and let α be a nonnegative constant and β a positive constant. Then the iteration matrix of the ACSCS method is

    M(α, β) = (βI + S)^{-1} (βI − C) (αI + C)^{-1} (αI − S),

and its spectral radius is bounded by

    σ(α, β) = max_j |β − λ_j(C)| / (α + λ_j(C)) · max_j |α − λ_j(S)| / (β + λ_j(S)),

where λ_j(C) and λ_j(S) are the eigenvalues of C and S, respectively. Moreover, for any given nonnegative parameter α, if β satisfies

    max{0, α − 2 λ_min(S)} < β < α + 2 λ_min(C),

then σ(α, β) < 1, that is, the ACSCS iteration converges, where λ_min(C) and λ_min(S) are the minimum eigenvalues of C and S, respectively.

Proof. Set M_1 = αI + C, N_1 = αI − S, M_2 = βI + S, and N_2 = βI − C in Lemma 3.1. Since αI + C and βI + S are nonsingular for any nonnegative constant α and positive constant β, we get (3.3).
By a similarity transformation, and since C and S are normal, the bound for ρ(M(α, β)) is given by (3.4).
Since λ_j(C) ≥ λ_min(C) > 0 and λ_j(S) ≥ λ_min(S) > 0, and β satisfies the relation (3.5), each of the two factors in σ(α, β) is less than 1, so σ(α, β) < 1.

Theorem 3.2 mainly discusses the values of β available for a convergent ACSCS iteration for any given nonnegative α. It also shows that the choice of β depends on the minimum eigenvalues of the circulant matrix C and the skew circulant matrix S and on the choice of α. We remark that for any α ≥ 0 an admissible β exists, and if λ_min(C) and λ_min(S) are large, then the restriction put on β is loose. The bound σ(α, β) on the convergence rate depends on the spectra of C and S and on the choice of α and β. Moreover, σ(α, β) is also an upper bound on the contraction factor of the ACSCS iteration.

Moreover, from the proof of Theorem 3.2, the bound σ(α, β) can be simplified as in (3.10).

In the following lemma, we list some useful relations between the minimum and maximum eigenvalues of the matrices C and S, which are essential for obtaining the optimal parameters α and β and for describing their properties.

Lemma 3.3. Let λ_min(C), λ_max(C) and λ_min(S), λ_max(S) be the extreme eigenvalues of C and S defined above. Then the following relations hold:

Proof. The equalities follow from straightforward computation.

Theorem 3.4. Let C, S, and T be the matrices defined in Theorem 3.2, and let the extreme eigenvalues be defined as in Lemma 3.3. Then the optimal parameters α* and β* are given by (3.16) and (3.17), and they satisfy the relations (3.18)–(3.20). The optimal bound is σ(α*, β*).

Proof. From Theorem 3.2 and (3.8), the two maxima in σ(α, β) are attained at certain eigenvalues. In order to minimize the bound in (3.10), the equalities (3.11)–(3.15), rewritten in terms of λ_min(C), λ_max(C), λ_min(S), and λ_max(S), must hold. These relations imply that the parameters α* and β* are the roots of a quadratic polynomial. Solving this equation, we get the parameters α* and β* given by (3.16) and (3.17), respectively. These parameters can be considered optimal if they satisfy the relations (3.18)–(3.20).
From (3.12), (3.15) and (3.11), (3.15), we obtain two inequalities, respectively. From these inequalities, by the definition of α* and a simple computation, we get (3.18). By a similar computation, we can also show (3.19). So, the parameters α* and β* satisfy the relations (3.18) and (3.19).
Moreover, for the optimal parameters α* and β*, a direct computation, together with a similar one for β*, shows that the relation (3.20) holds.
Finally, by substituting α* and β* in (3.10) and using the relations (3.11)–(3.15), we obtain the optimal bound σ(α*, β*).

Remark 3.5. We remark that if the eigenvalues of the matrices C and S are contained in a common interval, and we estimate λ_min and λ_max, as in [6], by the endpoints of that interval, then by taking α = β we obtain the same optimal parameter and optimal bound as those given in [6] for a Hermitian positive definite matrix T.

4. ACSCS Iteration Method for the BTTB Matrices

In this section, we extend our method to block-Toeplitz-Toeplitz-block (BTTB) matrices of the form given in (4.1). Similar to the Toeplitz case, a BTTB matrix possesses a splitting into four parts [11]: a block-circulant-circulant-block (BCCB) matrix, a block-circulant-skew-circulant-block (BCSB) matrix, a block-skew-circulant-circulant-block (BSCB) matrix, and a block-skew-circulant-skew-circulant-block (BSSB) matrix. We note that these four matrices can be diagonalized by tensor products of the discrete Fourier matrix and its diagonally scaled variant, respectively. Therefore, the systems of linear equations whose coefficient matrices are these four parts shifted by positive constants can be solved efficiently using FFTs. The total number of operations required for each step of the method is O(mn log mn), where mn is the size of the BTTB matrix. Based on the splitting of the BTTB matrix given in (4.3), the ACSCS iteration is as follows.

The ACSCS iteration method for a BTTB matrix: given an initial guess x^(0), for k = 0, 1, 2, ..., until {x^(k)} converges, compute the four fractional steps, solving in turn the shifted BCCB, BCSB, BSCB, and BSSB systems, where the four shift parameters are given positive constants.
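The BCCB part is the simplest to illustrate: a BCCB matrix is diagonalized by the two-dimensional Fourier transform, so each shifted BCCB solve costs a pair of 2D FFTs. Below is a minimal self-checking sketch with made-up data (the other three block types additionally need a diagonal scaling in one or both directions, as in the 1D skew circulant case).

```python
import numpy as np

def bccb_solve(c2d, b2d, shift=0.0):
    """Solve (shift*I + A) x = b, where A is the mn-by-mn BCCB matrix whose
    ((i,j),(k,l)) entry is c2d[(i-k) % m, (j-l) % n]; x and b are m-by-n arrays."""
    lam = np.fft.fft2(c2d)                    # eigenvalues of the BCCB matrix
    return np.fft.ifft2(np.fft.fft2(b2d) / (shift + lam))

# Made-up example, checked against the dense BCCB matrix.
rng = np.random.default_rng(1)
m, n = 3, 4
c2d = 0.1 * rng.standard_normal((m, n))
A = np.array([[c2d[(i - k) % m, (j - l) % n]
               for k in range(m) for l in range(n)]
              for i in range(m) for j in range(n)])
b = rng.standard_normal((m, n))
x = bccb_solve(c2d, b, shift=5.0)             # the shift keeps the system nonsingular
assert np.allclose((5.0 * np.eye(m * n) + A) @ x.ravel(), b.ravel())
```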

In the sequel, we need the following definition and results.

Definition 4.1 (see [12]). A splitting T = M − N is called P-regular if the matrix M^H + N is Hermitian positive definite.

Theorem 4.2 (see [13]). Let T be Hermitian positive definite. Then T = M − N is a P-regular splitting if and only if ‖M^{-1}N‖_T < 1.

Lemma 4.3 (see [14]). Suppose A and B are two Hermitian matrices. Then λ_max(A + B) ≤ λ_max(A) + λ_max(B) and λ_min(A + B) ≥ λ_min(A) + λ_min(B), where λ_min(·) and λ_max(·) denote the minimum and the maximum eigenvalues of a matrix, respectively.

Now we give the main results as follows.

Theorem 4.4. Let T be a Hermitian positive definite BTTB matrix, let its BCCB, BCSB, BSCB, and BSSB parts be as above, and let the shift parameters be positive constants. Then the iteration matrix of the ACSCS method for BTTB matrices is given by (4.6). Moreover, if the parameters satisfy the condition (4.7), then the spectral radius of the iteration matrix is less than 1, and the ACSCS iteration converges to the unique solution of the system of linear equations (1.1) for all initial vectors.

Proof. By the definitions of the BCCB, BCSB, BSCB, and BSSB parts of T, the four part matrices are Hermitian; let us consider the Hermitian matrices formed by shifting them. Since T is Hermitian positive definite, it follows that (4.9) holds. By the assumption (4.7) and Lemma 4.3, we also have (4.10). The relations (4.9) and (4.10) imply that the shifted matrices are positive definite, hence nonsingular, and we get (4.6). In addition, the splittings are P-regular, and by Theorem 4.2 we have (4.11). Finally, by using (4.11), we obtain (4.12), which completes the proof.

5. Numerical Experiments

In this section, we compare the ACSCS method with the CSCS and CG methods for 1D and 2D Toeplitz problems. We used the vector of all ones as the right-hand side vector b. All tests are started from the zero vector, performed in MATLAB 7.6 with double precision, and terminated when the relative residual norm ‖r_k‖ / ‖r_0‖ falls below the prescribed tolerance or when the number of iterations exceeds 1000; the latter case is denoted by the symbol “−”. Here r_k is the residual vector of the system of linear equations (1.1) at the current iterate, and r_0 is the initial one.

For 1D Toeplitz problems (Examples 5.1–5.3), our comparisons report the number of iterations of the CG, CSCS, and ACSCS methods (denoted by “IT”) and the elapsed CPU time (denoted by “CPU”). All numerical results are reported for several matrix dimensions n, and are listed in Tables 1–4. In these tables, λ_min(C) and λ_min(S) represent the minimum eigenvalues of the matrices C and S, respectively. Note that the CPU time in these tables does not include the time for computing the iteration parameters. For the ACSCS method, α* and β* are computed by (3.16) and (3.17), respectively, and for the CSCS method, the parameter is computed by (3.34).


Table 1. Numerical results for Example 5.1.

    n   λ_min(C)  λ_min(S)   IT: CG  CSCS  ACSCS   CPU: CG    CSCS   ACSCS
   16    0.4183    0.5825         8    35     37     0.0077  0.0101  0.0095
   32    0.4801    0.5199        20    39     39     0.0085  0.0106  0.0100
   64    0.4951    0.5049        37    40     39     0.0098  0.0108  0.0106
  128    0.4988    0.5012        55    40     40     0.0108  0.0115  0.0128
  256    0.4997    0.5003        67    40     40     0.0138  0.0142  0.0144
  512    0.4991    0.5001        70    40     40     0.0189  0.0203  0.0205
 1024    0.5000    0.5000        71    40     40     0.0283  0.0316  0.0292


Table 2. Numerical results for Example 5.2.

    n   λ_min(C)  λ_min(S)   IT: CG  CSCS  ACSCS   CPU: CG    CSCS   ACSCS
   16    0.4478    0.4177        12     8      8     0.0074  0.0079  0.0079
   32    0.4419    0.4247        15     9      9     0.0076  0.0087  0.0084
   64    0.4377    0.4292        17    10     10     0.0079  0.0090  0.0091
  128    0.4355    0.4315        19    11     11     0.0083  0.0094  0.0093
  256    0.4344    0.4325        20    12     12     0.0095  0.0106  0.0103
  512    0.4339    0.4330        21    13     13     0.0101  0.0111  0.0111
 1024    0.4337    0.4333        22    14     14     0.0148  0.0161  0.0150


Table 3. Numerical results for Example 5.3 (first parameter choice).

    n   λ_min(C)  λ_min(S)   IT: CG  CSCS  ACSCS   CPU: CG    CSCS   ACSCS
   16    1.0621    0.1280         8    20     10     0.0072  0.0095  0.0080
   32    0.7746   −0.0251        16     −     13     0.0077       −  0.0085
   64    0.6285   −0.1005        23     −     15     0.0082       −  0.0090
  128    0.5169   −0.1379        28     −     18     0.0096       −  0.0096
  256    0.4410    0.1485        32    20     15     0.0107  0.0117  0.0100
  512    0.3841    0.1207        34    23     16     0.0128  0.0139  0.0125
 1024    0.3444    0.1067        35    24     17     0.0173  0.0245  0.0185


Table 4. Numerical results for Example 5.3 (second parameter choice).

    n   λ_min(C)  λ_min(S)   IT: CG  CSCS  ACSCS   CPU: CG    CSCS   ACSCS
   16    0.8963   −0.0771         8     −     12     0.0077       −  0.0082
   32    0.5967   −0.2367        16     −     18     0.0082       −  0.0090
   64    0.4444    0.1194        26    22     16     0.0090  0.0092  0.0093
  128    0.3282    0.0025        36     −     20     0.0102       −  0.0105
  256    0.2490    0.0473        47     −     27     0.0123       −  0.0130
  512    0.1898   −0.0012        59     −     29     0.0170       −  0.0171
 1024    0.1484    0.0125        68     −     31     0.0279       −  0.0297

Example 5.1 (see [10]). In this example, T is a symmetric positive definite Toeplitz matrix. Numerical results for this example are listed in Table 1.

Example 5.2 (see [15]). In this example, T is a Hermitian positive definite Toeplitz matrix. Numerical results for this example are presented in Table 2.

Example 5.3 (see [10]). In this example, T is a Hermitian positive definite Toeplitz matrix whose generating function involves the maximum and minimum values of a prescribed function. In Tables 3 and 4, numerical results are reported for two choices of the parameters of the generating function.

In the following, we summarize the observations from Tables 1–4. In all cases, in terms of the CPU time needed for convergence, ACSCS converges at about the same rate as the CG method; for larger n, the number of ACSCS iterations is less than the number of CG iterations required for convergence. The convergence behavior of the ACSCS method, in terms of both the number of iterations and the CPU time needed for convergence, is similar to that of the CSCS method when λ_min(C) and λ_min(S) are positive and not too small (all the cases in Tables 1 and 2). Moreover, we observe that when λ_min(C) or λ_min(S) is too small or negative (the cases marked “−” in Tables 3 and 4), the ACSCS method still converges at about the same rate as CG, but the CSCS method does not converge. These results imply that the computational efficiency of the ACSCS iteration method is similar to that of the CG method and higher than that of the CSCS iteration.

For 2D Toeplitz problems, we tested three problems of the form given in (4.1), with the diagonals of the blocks given by three generating sequences (a), (b), and (c) taken from [10].

The generating sequences (b) and (c) are absolutely summable, while (a) is not. Our comparisons report the number of iterations of the CG, CSCS, and ACSCS methods (denoted by “IT”) for several grid sizes. The corresponding numerical results are listed in Table 5; for the ACSCS and CSCS methods, fixed positive parameters were used. Table 5 shows that, in all cases, the number of ACSCS iterations required for convergence is less than that of the CSCS method and more than that of the CG method. We mention that the relations (4.7) are only sufficient conditions for convergence of the ACSCS iteration for BTTB matrices. The numerical experiments show that the convergence behavior of the ACSCS method, in terms of the number of iterations needed for convergence, is better than that of the CG and CSCS methods if one of the parameters is chosen less than the corresponding lower bound given in Theorem 4.4. Table 6 presents the results obtained for the ACSCS method with such parameter choices. Table 7 presents the results obtained for the CSCS method with the optimal parameter, found computationally by trial and error. As we observe from Tables 5–7, the number of ACSCS iterations in Table 6 required for convergence is less than that of the CG and CSCS methods.


Table 5. Number of iterations for the 2D problems.

         Sequence (a)         Sequence (b)         Sequence (c)
    n    CG  ACSCS  CSCS      CG  ACSCS  CSCS      CG  ACSCS  CSCS
    8    15     35    42      15     30    36      10     17    19
   16    28     50    57      27     43    49      16     25    28
   32    37     60    65      35     51    56      23     34    37
   64    45     64    68      41     54    58      30     41    44
  128    49     63    66      46     54    56      37     46    49


Table 6. Number of ACSCS iterations with parameters below the lower bound of Theorem 4.4.

         Sequence (a)   Sequence (b)   Sequence (c)
    n        IT             IT             IT
    8        15             14             13
   16        15             15             14
   32        17             17             15
   64        20             20             16
  128        22             22             17


Table 7. Number of CSCS iterations with the experimentally optimal parameter.

         Sequence (a)     Sequence (b)     Sequence (c)
    n    param    IT      param    IT      param    IT
    8     2.48    26       2.31    23       1.18    15
   16     3.75    35       3.53    31       1.79    20
   32     5.14    42       4.65    36       2.41    25
   64     6.33    44       5.70    39       3.17    30
  128     7.39    44       6.74    39       3.93    34


6. Conclusion

In this paper, a new iteration method for the numerical solution of Hermitian positive definite Toeplitz systems of linear equations has been described. This method, called the ACSCS method, is a two-parameter (four-parameter for 2D problems) generalization of the CSCS method of Ng and is based on the circulant and skew circulant splitting of the Toeplitz matrix. We theoretically studied the convergence properties of the ACSCS method; moreover, the contraction factor of the ACSCS iteration and its optimal parameters were derived. Theoretical considerations and numerical examples indicate that the splitting method is extremely effective when the generating function is positive. Numerical results also showed that the computational efficiency of the ACSCS iteration method is similar to that of the CG method and higher than that of the CSCS iteration method.

Acknowledgment

The authors would like to thank the referee and the editor very much for their many valuable and thoughtful suggestions for improving this paper.

References

  1. P. Delsarte and Y. V. Genin, “The split Levinson algorithm,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 34, no. 3, pp. 470–478, 1986.
  2. N. Levinson, “The Wiener RMS (root mean square) error criterion in filter design and prediction,” Journal of Mathematics and Physics, vol. 25, pp. 261–278, 1947.
  3. W. F. Trench, “An algorithm for the inversion of finite Toeplitz matrices,” Journal of the Society for Industrial and Applied Mathematics, vol. 12, pp. 515–522, 1964.
  4. G. S. Ammar and W. B. Gragg, “Superfast solution of real positive definite Toeplitz systems,” SIAM Journal on Matrix Analysis and Applications, vol. 9, no. 1, pp. 61–76, 1988.
  5. G. Strang, “Proposal for Toeplitz matrix calculations,” Studies in Applied Mathematics, vol. 74, no. 2, pp. 171–176, 1986.
  6. M. K. Ng, “Circulant and skew circulant splitting methods for Toeplitz systems,” Journal of Computational and Applied Mathematics, vol. 159, no. 1, pp. 101–108, 2003.
  7. C. Gu and Z. Tian, “On the HSS iteration methods for positive definite Toeplitz linear systems,” Journal of Computational and Applied Mathematics, vol. 224, no. 2, pp. 709–718, 2009.
  8. Z. Z. Bai, G. H. Golub, and M. K. Ng, “Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems,” SIAM Journal on Matrix Analysis and Applications, vol. 24, no. 3, pp. 603–626, 2003.
  9. T. K. Ku and C. C. J. Kuo, “Design and analysis of Toeplitz preconditioners,” IEEE Transactions on Signal Processing, vol. 40, no. 1, pp. 129–141, 1992.
  10. M. K. Ng, Iterative Methods for Toeplitz Systems, Oxford University Press, New York, NY, USA, 2004.
  11. R. H. Chan and M. K. Ng, “Conjugate gradient methods for Toeplitz systems,” SIAM Review, vol. 38, no. 3, pp. 427–482, 1996.
  12. J. M. Ortega, Numerical Analysis: A Second Course, Academic Press, New York, NY, USA, 1972.
  13. A. Frommer and D. B. Szyld, “Weighted max norms, splittings, and overlapping additive Schwarz iterations,” Numerische Mathematik, vol. 83, pp. 259–278, 1999.
  14. R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, New York, NY, USA, 1985.
  15. R. H. Chan, “Circulant preconditioners for Hermitian Toeplitz systems,” SIAM Journal on Matrix Analysis and Applications, vol. 10, no. 4, pp. 542–550, 1989.

Copyright © 2012 N. Akhondi and F. Toutounian. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

