Advances in Numerical Analysis, Volume 2012 (2012), Article ID 973407, 17 pages. http://dx.doi.org/10.1155/2012/973407
Research Article

## Accelerated Circulant and Skew Circulant Splitting Methods for Hermitian Positive Definite Toeplitz Systems

N. Akhondi and F. Toutounian

School of Mathematical Sciences, Ferdowsi University of Mashhad, P.O. Box 1159-91775, Mashhad 9177948953, Iran

Received 5 August 2011; Revised 26 October 2011; Accepted 26 October 2011

Copyright © 2012 N. Akhondi and F. Toutounian. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We study the CSCS method for large Hermitian positive definite Toeplitz linear systems, which first appeared in Ng (2003); CSCS stands for circulant and skew circulant splitting of the coefficient matrix. In this paper, we present a new iteration method for the numerical solution of Hermitian positive definite Toeplitz systems of linear equations. The method is a two-parameter generalization of the CSCS method such that when the two parameters involved are equal, it coincides with the CSCS method. We discuss the convergence properties and optimal parameters of this method. Finally, we extend our method to BTTB matrices. Numerical experiments are presented to show the effectiveness of our new method.

#### 1. Introduction

We consider the iterative solution of the large system of linear equations Ax = b, where A is a Hermitian positive definite Toeplitz matrix and b is a given right-hand side vector. An n-by-n matrix A is said to be Toeplitz if its entries satisfy a_{jk} = a_{j-k}; that is, A is constant along its diagonals. Toeplitz systems arise in a variety of applications, especially in signal processing and control theory. Many direct methods have been proposed for solving Toeplitz linear systems. A straightforward application of Gaussian elimination leads to an algorithm with O(n^3) complexity. There are a number of fast Toeplitz solvers that decrease the complexity to O(n^2) operations; see for instance [1-3]. Around 1980, superfast direct Toeplitz solvers of complexity O(n log^2 n), such as the one by Ammar and Gragg [4], were also developed. Recent research on using the preconditioned conjugate gradient (PCG) method as an iterative method for solving Toeplitz systems has attracted much attention. One of the most important results of this methodology is that the PCG method has a computational complexity proportional to n log n for a large class of problems [5] and is therefore competitive with any direct method.
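For concreteness, the constant-diagonal structure of a Hermitian Toeplitz matrix can be illustrated with a small helper. This is a NumPy sketch; the function name and the sample entries are ours, not from the paper.

```python
import numpy as np

def hermitian_toeplitz(t):
    """Build the n-by-n Hermitian Toeplitz matrix whose entry (j, k) is
    t[j-k] for j >= k and conj(t[k-j]) for j < k.
    `t` is the first column; t[0] must be real for the matrix to be Hermitian."""
    t = np.asarray(t, dtype=complex)
    n = len(t)
    T = np.empty((n, n), dtype=complex)
    for j in range(n):
        for k in range(n):
            d = j - k
            T[j, k] = t[d] if d >= 0 else np.conj(t[-d])
    return T

# Hypothetical sample data: first column of a 3-by-3 Hermitian Toeplitz matrix.
T = hermitian_toeplitz([4.0, 1.0 + 1.0j, 0.5])
```

Each diagonal of `T` carries a single value, and conjugate symmetry across the main diagonal makes the matrix Hermitian.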

In [6], an iterative method based on a circulant and skew circulant splitting (CSCS) of the Toeplitz matrix was given. The authors derived an upper bound on the contraction factor of the CSCS iteration which depends solely on the spectra of the circulant and the skew circulant matrices involved.

In [7] the authors studied the HSS iteration method, which first appeared in [8], for large sparse non-Hermitian positive definite Toeplitz linear systems. They used an HSS iteration based on a special case of the HSS splitting in which, for a given Toeplitz matrix, the symmetric part is a centrosymmetric matrix and the skew-symmetric part is a skew-centrosymmetric matrix, and they discussed the computational complexity of the HSS and IHSS methods.

In this paper we present an efficient iterative method for the numerical solution of Hermitian positive definite Toeplitz systems of linear equations. The method is a two-parameter generalization of the CSCS method such that when the two parameters involved are equal, it coincides with the CSCS method. We discuss the convergence properties and optimal parameters of this method. Then we extend our method to block-Toeplitz-Toeplitz-block (BTTB) matrices.

For convenience, some of the terminology used in this paper will be given. Let A be a complex n-by-n matrix. We write A > 0 (A >= 0) if A is Hermitian positive (semi-)definite. If A and B are both Hermitian, we write A > B if and only if A - B > 0. For a Hermitian positive definite matrix M, we define the M-norm of a vector x as ||x||_M = (x* M x)^(1/2); the induced M-norm of a matrix A is then the maximum of ||Ax||_M / ||x||_M over nonzero vectors x.

The organization of this paper is as follows. In Section 2, we present the accelerated circulant and skew circulant splitting (ACSCS) method for Toeplitz systems. In Section 3, we study the convergence properties of the ACSCS iteration, analyze its convergence rate, and derive the optimal parameters. The convergence results of the ACSCS method for BTTB matrices are given in Section 4. Numerical experiments are presented in Section 5 to show the effectiveness of our new method. Finally, some conclusions are given in Section 6.

#### 2. Accelerated Circulant and Skew Circulant Splitting Method

Let us begin by supposing that the entries of the n-by-n Toeplitz matrix T_n are the Fourier coefficients t_k of a real generating function f defined on [-pi, pi]. Since f is a real-valued function, t_{-k} = conj(t_k) for all integers k, and T_n is a Hermitian matrix. We note that a Hermitian Toeplitz matrix T_n can always be split as T_n = C_n + S_n, where C_n is the circulant matrix whose first column has entries c_0 = t_0 / 2 and c_k = (t_k + conj(t_{n-k})) / 2, k = 1, ..., n-1, and S_n is the skew circulant matrix whose first column has entries s_0 = t_0 / 2 and s_k = (t_k - conj(t_{n-k})) / 2, k = 1, ..., n-1.

Clearly C_n is a Hermitian circulant matrix and S_n is a Hermitian skew circulant matrix. The positive definiteness of C_n and S_n is given in the following theorem. Its proof is similar to that of Theorem 2 in [9].
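Assuming the standard CSCS splitting of Ng [6], in which the circulant part takes half of each wrap-around diagonal pair and the skew circulant part takes the signed half, the construction can be sketched as follows. This is a NumPy illustration with sample data of our own choosing.

```python
import numpy as np

def cscs_split(t):
    """Split the Hermitian Toeplitz matrix with first column t (t[0] real)
    into its circulant part C and skew-circulant part S, so that T = C + S.
    Entry convention: T[j, k] = t[j-k] with t[-m] = conj(t[m])."""
    t = np.asarray(t, dtype=complex)
    n = len(t)
    c = np.empty(n, dtype=complex)   # first column of the circulant part
    s = np.empty(n, dtype=complex)   # first column of the skew-circulant part
    c[0] = s[0] = t[0] / 2
    for k in range(1, n):
        # t[k-n] = conj(t[n-k]) for a Hermitian Toeplitz matrix
        c[k] = (t[k] + np.conj(t[n - k])) / 2
        s[k] = (t[k] - np.conj(t[n - k])) / 2
    C = np.empty((n, n), dtype=complex)
    S = np.empty((n, n), dtype=complex)
    for j in range(n):
        for k in range(n):
            d = (j - k) % n
            C[j, k] = c[d]                       # circulant wrap-around
            S[j, k] = s[d] if j >= k else -s[d]  # skew: negated above the diagonal
    return C, S
```

Adding the two parts reproduces the original Toeplitz matrix entry by entry, and the circulant part is itself Hermitian.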

Theorem 2.1. Let f be a real-valued function in the Wiener class satisfying min f > 0. Then the circulant matrices C_n and the skew circulant matrices S_n defined by the splitting T_n = C_n + S_n are uniformly positive definite and bounded for sufficiently large n.

The subscript of matrices is omitted hereafter whenever there is no confusion.

Based on the splitting (2.2), Ng [6] presented the CSCS iteration method: given an initial guess x^(0), for k = 0, 1, 2, ..., until {x^(k)} converges, compute

    (alpha I + C) x^(k+1/2) = (alpha I - S) x^(k) + b,
    (alpha I + S) x^(k+1) = (alpha I - C) x^(k+1/2) + b,

where alpha is a given positive constant. He also proved that if the circulant and the skew circulant splitting matrices are positive definite, then the CSCS method converges to the unique solution of the system of linear equations. Moreover, he derived an upper bound on the contraction factor of the CSCS iteration which depends solely on the spectra of the circulant matrix C and the skew circulant matrix S.

In this paper, based on the CSCS splitting, we present a different approach to solve (1.1) with the Hermitian positive definite coefficient matrix, called the Accelerated Circulant and Skew Circulant Splitting method, shortened to the ACSCS iteration. Let us describe it as follows.

The ACSCS iteration method: given an initial guess x^(0), for k = 0, 1, 2, ..., until {x^(k)} converges, compute

    (alpha I + C) x^(k+1/2) = (alpha I - S) x^(k) + b,
    (beta I + S) x^(k+1) = (beta I - C) x^(k+1/2) + b,

where alpha is a given nonnegative constant and beta is a given positive constant.
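A dense-algebra sketch of the two half-steps follows. This is a hypothetical NumPy translation of the two-step scheme: it uses `np.linalg.solve` for the shifted systems purely for clarity, whereas the method as described solves them with FFTs.

```python
import numpy as np

def acscs(C, S, b, alpha, beta, x0=None, tol=1e-10, maxit=500):
    """Two-step ACSCS-style iteration for (C + S) x = b:
        (alpha*I + C) x_half = (alpha*I - S) x      + b
        (beta *I + S) x_new  = (beta *I - C) x_half + b
    Dense solves stand in for the FFT-based solves of the paper."""
    n = len(b)
    I = np.eye(n)
    x = np.zeros(n, dtype=complex) if x0 is None else x0.astype(complex)
    A = C + S
    for k in range(maxit):
        x_half = np.linalg.solve(alpha * I + C, (alpha * I - S) @ x + b)
        x = np.linalg.solve(beta * I + S, (beta * I - C) @ x_half + b)
        # stop on a small relative residual
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, maxit
```

With alpha = beta the scheme reduces to the CSCS iteration, as noted in the introduction.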

The ACSCS iteration alternates between the circulant matrix C and the skew circulant matrix S. Theoretical analysis shows that if the coefficient matrix A is Hermitian positive definite, the ACSCS iteration (2.6) converges to the unique solution of the linear system (1.1) for any given nonnegative alpha, provided beta is restricted to an appropriate region. The upper bound on the contraction factor of the ACSCS iteration depends on the choice of alpha and beta and on the spectra of the circulant matrix C and the skew circulant matrix S. The two steps of each ACSCS iterate require exact solves with the matrices alpha I + C and beta I + S. Circulant matrices can be diagonalized by the discrete Fourier matrix F, and skew circulant matrices can be diagonalized by a unitary diagonal matrix times the discrete Fourier matrix [10]; that is, C = F* Lambda_C F and S = Omega* F* Lambda_S F Omega, where Lambda_C and Lambda_S are diagonal matrices holding the eigenvalues of C and S, respectively, and Omega is a unitary diagonal matrix. Hence the exact solves with circulant and skew circulant matrices can be carried out with fast Fourier transforms (FFTs). In particular, the number of operations required for each step of the ACSCS iteration method is O(n log n).
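The FFT-based solves mentioned above can be sketched as follows. The diagonal scaling used for the skew circulant case (conjugation by diag(eta^k) with eta = exp(i*pi/n), which turns a skew circulant into a circulant) is standard; the helper names are ours.

```python
import numpy as np

def solve_circulant(c, b):
    """Solve C x = b for C circulant with first column c, via the
    convolution theorem: fft of a circular convolution is a pointwise product."""
    return np.fft.ifft(np.fft.fft(b) / np.fft.fft(c))

def solve_skew_circulant(s, b):
    """Solve S x = b for S skew-circulant with first column s.
    With D = diag(eta**k), eta = exp(i*pi/n), the matrix D S D^{-1} is
    circulant with first column eta**k * s[k], so FFTs still apply."""
    n = len(s)
    eta = np.exp(1j * np.pi / n) ** np.arange(n)
    y = solve_circulant(eta * s, eta * b)  # solve the similar circulant system
    return y / eta
```

Both solves cost a handful of length-n FFTs, which is the source of the O(n log n) operation count per iteration.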

Noting that the roles of the matrices C and S in (2.6) can be reversed, we can first solve the shifted system involving the skew circulant matrix S and then solve the shifted system involving the circulant matrix C.

#### 3. Convergence Analysis of the ACSCS Iteration

In this section we study the convergence rate of the ACSCS iteration. We suppose that the entries of T are the Fourier coefficients of a real generating function f satisfying the conditions of Theorem 2.1, so that, for sufficiently large n, the matrices T, C, and S are Hermitian positive definite. Let us denote the eigenvalues of C and S by lambda_i and mu_i, i = 1, ..., n, the minimum and maximum eigenvalues of C by lambda_min and lambda_max, and those of S by mu_min and mu_max, respectively. Therefore, from Theorem 2.1, for sufficiently large n we have lambda_min > 0 and mu_min > 0.

We first note that the ACSCS iteration method can be generalized to the two-step splitting iteration framework, and the following lemma describes a general convergence criterion for a two-step splitting iteration.

Lemma 3.1. Let A = M_1 - N_1 = M_2 - N_2 be two splittings of the matrix A, and let x^(0) be a given initial vector. If {x^(k)} is the two-step iteration sequence defined by

    M_1 x^(k+1/2) = N_1 x^(k) + b,
    M_2 x^(k+1) = N_2 x^(k+1/2) + b,

then

    x^(k+1) = M_2^(-1) N_2 M_1^(-1) N_1 x^(k) + M_2^(-1) (I + N_2 M_1^(-1)) b.

Moreover, if the spectral radius rho(M_2^(-1) N_2 M_1^(-1) N_1) < 1, then the iterative sequence {x^(k)} converges to the unique solution of the system of linear equations (1.1) for all initial vectors x^(0).

Applying this lemma to the ACSCS iteration, we obtain the following convergence property.

Theorem 3.2. Let A be a Hermitian positive definite Toeplitz matrix, let C and S be its Hermitian positive definite circulant and skew circulant parts, let alpha be a nonnegative constant, and let beta be a positive constant. Then the iteration matrix of the ACSCS method is

    M(alpha, beta) = (beta I + S)^(-1) (beta I - C) (alpha I + C)^(-1) (alpha I - S),    (3.3)

and its spectral radius is bounded by

    sigma(alpha, beta) = max_i |beta - lambda_i| / (alpha + lambda_i) * max_j |alpha - mu_j| / (beta + mu_j),    (3.4)

where lambda_i, mu_j are the eigenvalues of C, S, respectively. Moreover, for any given nonnegative parameter alpha, if beta satisfies

    max{0, alpha - 2 mu_min} < beta < alpha + 2 lambda_min,    (3.5)

then sigma(alpha, beta) < 1, that is, the ACSCS iteration converges, where lambda_min, mu_min are the minimum eigenvalues of C and S, respectively.

Proof. Setting M_1 = alpha I + C, N_1 = alpha I - S, M_2 = beta I + S, and N_2 = beta I - C in Lemma 3.1, and noting that alpha I + C and beta I + S are nonsingular for any nonnegative constant alpha and positive constant beta, we get (3.3).
By a similarity transformation, M(alpha, beta) is similar to (beta I - C)(alpha I + C)^(-1) (alpha I - S)(beta I + S)^(-1). Since C and S are Hermitian, the 2-norms of (beta I - C)(alpha I + C)^(-1) and (alpha I - S)(beta I + S)^(-1) are given by the corresponding eigenvalue ratios, and the bound (3.4) follows.
Since lambda_min > 0, mu_min > 0, and beta satisfies the relation (3.5), the inequalities |beta - lambda_i| < alpha + lambda_i and |alpha - mu_j| < beta + mu_j hold for all i and j, so sigma(alpha, beta) < 1.

Theorem 3.2 mainly characterizes the values of beta available for a convergent ACSCS iteration for any given nonnegative alpha. It also shows that the choice of beta depends on the minimum eigenvalues of the circulant matrix C and the skew circulant matrix S and on the choice of alpha. Notice that lambda_min > 0 and mu_min > 0, so for any alpha >= 0 an admissible beta exists. And if lambda_min and mu_min are large, then the restriction put on beta is loose. The bound sigma(alpha, beta) on the convergence rate depends on the spectra of C and S and on the choice of alpha and beta. Moreover, sigma(alpha, beta) is also an upper bound on the contraction factor of the ACSCS iteration.
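As a sanity check, the spectral radius of the two-step iteration matrix (beta I + S)^(-1) (beta I - C) (alpha I + C)^(-1) (alpha I - S) can be compared numerically against the product of the two eigenvalue-ratio factors. The Hermitian positive definite stand-ins for C and S below are random matrices, not circulants; this is an illustration of the bound, not one of the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
B1 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
C = B1 @ B1.conj().T + n * np.eye(n)   # HPD stand-in for the circulant part
B2 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
S = B2 @ B2.conj().T + n * np.eye(n)   # HPD stand-in for the skew-circulant part

alpha, beta = 2.0, 3.0
I = np.eye(n)
# iteration matrix M(alpha, beta) = (bI+S)^-1 (bI-C) (aI+C)^-1 (aI-S)
M = np.linalg.solve(beta * I + S,
                    (beta * I - C) @ np.linalg.solve(alpha * I + C, alpha * I - S))
rho = max(abs(np.linalg.eigvals(M)))

lam = np.linalg.eigvalsh(C)            # eigenvalues of the Hermitian part C
mu = np.linalg.eigvalsh(S)             # eigenvalues of the Hermitian part S
bound = max(abs(beta - lam) / (alpha + lam)) * max(abs(alpha - mu) / (beta + mu))
```

For Hermitian positive definite C and S, the spectral radius never exceeds the product of the two factors.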

Moreover, from the proof of Theorem 3.2 we can simplify the bound to one that involves only the extreme eigenvalues lambda_min, lambda_max, mu_min, and mu_max of C and S.

In the following lemma, we list some useful relations related to the minimum and maximum eigenvalues of matrices and , which are essential for us to obtain the optimal parameters and and to describe their properties.

Lemma 3.3. Let lambda_min, lambda_max and mu_min, mu_max be the extreme eigenvalues of C and S as above; then the following relations hold:

Proof. The equalities follow from straightforward computation.

Theorem 3.4. Let A, C, and S be the matrices defined in Theorem 3.2 and let lambda_min, lambda_max, mu_min, mu_max be defined as in Lemma 3.3. Then the optimal parameters alpha* and beta* are given by (3.16) and (3.17), they satisfy the relations (3.18)-(3.20), and the optimal bound is given by (3.21).

Proof. From Theorem 3.2 and (3.8) there exist indices at which the two factors of the bound are attained, respectively. In order to minimize the bound in (3.10), the corresponding equalities must hold. By using the relations of Lemma 3.3, these equalities can be rewritten so as to imply that the parameters alpha and beta are the roots of a quadratic polynomial. Solving this equation we get the parameters alpha* and beta* given by (3.16) and (3.17), respectively. These parameters alpha* and beta* can be considered as optimal parameters if they satisfy the relations (3.18)-(3.20).
From (3.12), (3.15) and (3.11), (3.15), we have the corresponding inequalities, respectively. From these inequalities, by the definitions and a simple computation, we obtain the first relation; by a similar computation, we can also obtain the second. So, the parameters alpha* and beta* satisfy the relations (3.18) and (3.19).
Moreover, for the optimal parameters alpha* and beta*, we have the required inequality; by a similar computation, we obtain the remaining one. So, the parameters alpha* and beta* satisfy the relation (3.20).
Finally, by substituting alpha* and beta* in (3.10) and using the relations (3.11)-(3.15), we obtain the optimal bound.

Remark 3.5. We remark that if the eigenvalues of the matrices C and S are contained in a common interval, and we estimate the extreme eigenvalues as in [6], then by taking alpha = beta we obtain the same parameter and bound as those given in [6] for a Hermitian positive definite matrix A.

#### 4. ACSCS Iteration Method for the BTTB Matrices

In this section we extend our method to block-Toeplitz-Toeplitz-block (BTTB) matrices T of the form given in (4.1). Similar to the Toeplitz case, a BTTB matrix possesses a splitting [11]

    T = T_CC + T_CS + T_SC + T_SS,

where T_CC is a block-circulant-circulant-block (BCCB) matrix, T_CS is a block-circulant-skew-circulant-block (BCSB) matrix, T_SC is a block-skew-circulant-circulant-block (BSCB) matrix, and T_SS is a block-skew-circulant-skew-circulant-block (BSSB) matrix. The four parts can be diagonalized by the Kronecker products F x F, F x (F Omega), (F Omega) x F, and (F Omega) x (F Omega), respectively, where F is the discrete Fourier matrix and Omega is the unitary diagonal matrix of Section 2. Therefore, the systems of linear equations arising in the iteration, whose coefficient matrices are these parts shifted by positive constants, can be solved efficiently using FFTs. The total number of operations required for each step of the method is O(mn log mn), where mn is the size of the BTTB matrix T. Based on the splitting of T given in (4.3), the ACSCS iteration is as follows.

The ACSCS iteration method for a BTTB matrix: given an initial guess x^(0), for k = 0, 1, 2, ..., until {x^(k)} converges, compute (4.5), where the iteration parameters are given positive constants.

In the sequel, we need the following definition and results.

Definition 4.1 (see [12]). A splitting A = M - N is called P-regular if the matrix M* + N is Hermitian positive definite.

Theorem 4.2 (see [13]). Let A be Hermitian positive definite. Then A = M - N is a P-regular splitting if and only if ||M^(-1) N||_A < 1.

Lemma 4.3 (see [14]). Suppose A and B are two Hermitian matrices; then

    lambda_min(A) + lambda_min(B) <= lambda_min(A + B),    lambda_max(A + B) <= lambda_max(A) + lambda_max(B),

where lambda_min(.) and lambda_max(.) denote the minimum and the maximum eigenvalues of a matrix, respectively.

Now we give the main results as follows.

Theorem 4.4. Let T be a Hermitian positive definite BTTB matrix, let T_CC, T_CS, T_SC, and T_SS be its BCCB, BCSB, BSCB, and BSSB parts, and let the iteration parameters be positive constants. Then the iteration matrix of the ACSCS method for BTTB matrices is given by (4.6). If the parameters satisfy condition (4.7), then the spectral radius of the iteration matrix is less than one, and the ACSCS iteration converges to the unique solution of the system of linear equations (1.1) for all initial vectors x^(0).

Proof. By the definitions of the BCCB, BCSB, BSCB, and BSSB parts of T, the matrices T_CC, T_CS, T_SC, and T_SS are Hermitian. Consider the Hermitian matrices defined in (4.8). Since T is Hermitian positive definite, the relation (4.9) follows. By the assumptions (4.7) and Lemma 4.3, we also have (4.10). The relations (4.9) and (4.10) imply that the shifted coefficient matrices are Hermitian positive definite; hence they are nonsingular, and we get (4.6). In addition, the splittings are P-regular, and by Theorem 4.2 we have (4.11). Finally, by using (4.11), we obtain that the spectral radius of the iteration matrix is less than one, which completes the proof.

#### 5. Numerical Experiments

In this section, we compare the ACSCS method with the CSCS and CG methods for 1D and 2D Toeplitz problems. We used the vector of all ones for the right-hand side vector b. All tests are started from the zero vector, performed in MATLAB 7.6 with double precision, and terminated when the relative residual norm ||r^(k)||_2 / ||r^(0)||_2 falls below a prescribed tolerance or when the number of iterations exceeds 1000; the latter case is denoted by the symbol "-". Here r^(k) is the residual vector of the system of linear equations (1.1) at the current iterate x^(k), and r^(0) is the initial residual.

For 1D Toeplitz problems (Examples 5.1-5.3), our comparisons are based on the number of iterations of the CG, CSCS, and ACSCS methods (denoted by "IT") and the elapsed CPU time (denoted by "CPU"). All numerical results are reported for several values of n. The corresponding numerical results are listed in Tables 1-4. In these tables lambda_min and mu_min represent the minimum eigenvalues of the matrices C and S, respectively. Note that the CPU time in these tables does not include the time for computing the iteration parameters. For the ACSCS method, alpha and beta are computed by (3.16) and (3.17), respectively; for the CSCS method, alpha is computed by (3.34).

Table 1: Numerical results of Example 5.1.
Table 2: Numerical results of Example 5.2.
Table 3: Numerical results of Example 5.3, first parameter setting.
Table 4: Numerical results of Example 5.3, second parameter setting.

Example 5.1 (see [10]). In this example T is a symmetric positive definite Toeplitz matrix with a given generating function f. Numerical results for this example are listed in Table 1.

Example 5.2 (see [15]). Let T be a Hermitian positive definite Toeplitz matrix with a given generating function f. Numerical results for this example are presented in Table 2.

Example 5.3 (see [10]). In this example T is a Hermitian positive definite Toeplitz matrix whose generating function involves the maximum and minimum values of f on its domain. In Tables 3 and 4, numerical results are reported for two parameter settings, respectively.

In the following, we summarize the observations from Tables 1-4. In all cases, in terms of the CPU time needed for convergence, the ACSCS method converges at about the same rate as the CG method. However, the number of ACSCS iterations is less than the number of CG iterations required for convergence. The convergence behavior of the ACSCS method, in terms of both the number of iterations and the CPU time needed for convergence, is similar to that of the CSCS method when lambda_min and mu_min are positive and not too small (all the cases in Tables 1 and 2). Moreover, we observe that, when lambda_min and mu_min are too small or negative (the cases in Tables 3 and 4), the ACSCS method still converges at about the same rate as the CG method, but the CSCS method does not converge. These results imply that the computational efficiency of the ACSCS iteration method is similar to that of the CG method and is higher than that of the CSCS iteration.

For 2D Toeplitz problems, we tested three problems of the form given in (4.1). The diagonals of the blocks are given by three generating sequences, labeled (a), (b), and (c) (see [10]).

The generating sequences (b) and (c) are absolutely summable, while (a) is not. Our comparisons are based on the number of iterations of the CG, CSCS, and ACSCS methods (denoted by "IT"). All numerical results are reported for several problem sizes and listed in Table 5. For the ACSCS method, the parameters alpha and beta are used; for the CSCS method, we used the single parameter alpha. Table 5 shows that, in all cases, the number of ACSCS iterations required for convergence is less than that of the CSCS method and more than that of the CG method. We mention that the relations (4.7) are only sufficient conditions for the convergence of the ACSCS iteration for BTTB matrices. The numerical experiments show that the convergence behavior of the ACSCS method, in terms of the number of iterations needed for convergence, is better than that of the CG and CSCS methods if one of the parameters alpha, beta is chosen below the corresponding lower bound given in Theorem 4.4. Table 6 presents the results obtained for the ACSCS method with such parameter choices, and Table 7 presents the results obtained for the CSCS method with the optimal parameter alpha, determined computationally by trial and error. As we observe from Tables 5-7, the number of ACSCS iterations required for convergence is less than that of the CG and CSCS methods.

Table 5: Numerical results of 2D examples.
Table 6: Numerical results of 2D examples for the ACSCS method.
Table 7: Numerical results of 2D examples for the CSCS method with the optimal alpha.

These results imply that the computational efficiency of the ACSCS iteration method is similar to that of the CG method and is higher than that of the CSCS iterations.

#### 6. Conclusion

In this paper, a new iteration method for the numerical solution of Hermitian positive definite Toeplitz systems of linear equations has been described. This method, called the ACSCS method, is a two-parameter (four-parameter) generalization of the CSCS method of Ng for 1D (2D) problems and is based on the circulant and skew circulant splitting of the Toeplitz matrix. We theoretically studied the convergence properties of the ACSCS method. Moreover, the contraction factor of the ACSCS iteration and its optimal parameters were derived. Theoretical considerations and numerical examples indicate that the splitting method is extremely effective when the generating function is positive. Numerical results also showed that the computational efficiency of the ACSCS iteration method is similar to that of the CG method and is higher than that of the CSCS iteration method.

#### Acknowledgment

The authors would like to thank the referee and the editor very much for their many valuable and thoughtful suggestions for improving this paper.

#### References

1. P. Delsarte and Y. V. Genin, “The split Levinson algorithm,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 34, no. 3, pp. 470–478, 1986.
2. N. Levinson, “The Wiener RMS (root mean square) error criterion in filter design and prediction,” Journal of Mathematics and Physics, vol. 25, pp. 261–278, 1947.
3. W. F. Trench, “An algorithm for the inversion of finite Toeplitz matrices,” Journal of the Society for Industrial and Applied Mathematics, vol. 12, pp. 515–522, 1964.
4. G. S. Ammar and W. B. Gragg, “Superfast solution of real positive definite Toeplitz systems,” SIAM Journal on Matrix Analysis and Applications, vol. 9, no. 1, pp. 61–76, 1988.
5. G. Strang, “Proposal for Toeplitz matrix calculations,” Studies in Applied Mathematics, vol. 74, no. 2, pp. 171–176, 1986.
6. M. K. Ng, “Circulant and skew circulant splitting methods for Toeplitz systems,” Journal of Computational and Applied Mathematics, vol. 159, no. 1, pp. 101–108, 2003.
7. C. Gu and Z. Tian, “On the HSS iteration methods for positive definite Toeplitz linear systems,” Journal of Computational and Applied Mathematics, vol. 224, no. 2, pp. 709–718, 2009.
8. Z. Z. Bai, G. H. Golub, and M. K. Ng, “Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems,” SIAM Journal on Matrix Analysis and Applications, vol. 24, no. 3, pp. 603–626, 2003.
9. T. K. Ku and C. C. J. Kuo, “Design and analysis of Toeplitz preconditioners,” IEEE Transactions on Signal Processing, vol. 40, no. 1, pp. 129–141, 1992.
10. M. K. Ng, Iterative Methods for Toeplitz Systems, Oxford University Press, New York, NY, USA, 2004.
11. R. H. Chan and M. K. Ng, “Conjugate gradient methods for Toeplitz systems,” SIAM Review, vol. 38, no. 3, pp. 427–482, 1996.
12. J. M. Ortega, Numerical Analysis. A Second Course, Academic Press, New York, NY, USA, 1972.
13. A. Frommer and D. B. Szyld, “Weighted max norms, splittings, and overlapping additive Schwarz iterations,” Numerische Mathematik, vol. 83, pp. 259–278, 1999.
14. R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, New York, NY, USA, 1985.
15. R. H. Chan, “Circulant preconditioners for Hermitian Toeplitz systems,” SIAM Journal on Matrix Analysis and Applications, vol. 10, no. 4, pp. 542–550, 1989.