Journal of Applied Mathematics, Volume 2012 (2012), Article ID 647623, 15 pages. http://dx.doi.org/10.1155/2012/647623
Research Article

## Computing the Square Roots of a Class of Circulant Matrices

Department of Mathematics, Lishui University, Lishui 323000, China

Received 16 August 2012; Accepted 17 October 2012

Copyright © 2012 Ying Mei. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We first investigate the structures of the square roots of a class of circulant matrices and give classifications of the square roots of these circulant matrices. Then, we develop several algorithms for computing their square roots. We show that our algorithms are faster than the standard algorithm which is based on the Schur decomposition.

#### 1. Introduction

Circulant matrices and their generalizations arise in many areas of physics, electromagnetics, signal processing, statistics, and applied mathematics for the investigation of problems with periodicity properties [1–3]. Also, numerical solutions of certain types of elliptic and parabolic partial differential equations with periodic boundary conditions often involve linear systems with a circulant matrix. In recent years, their properties and applications have been extensively investigated [4–9].

A matrix $X$ is said to be a square root of $A$ if $X^2 = A$. The number of square roots varies from two (for a nonsingular Jordan block) to infinity (any involutory matrix is a square root of the identity matrix). The key roles that the square root plays in, for example, the matrix sign function, the definite generalized eigenvalue problem, the polar decomposition, and the geometric mean make it a useful theoretical and computational tool. The rich variety of methods for computing the matrix square root, with their widely differing numerical stability properties, is an interesting subject of study in its own right [10]. For these reasons, many authors have become interested in matrix square roots [11–14]. Although the theory of matrix square roots is rather complicated, simplifications occur for certain classes of matrices; consider, for example, Hamiltonian matrices [15], semidefinite matrices [16], and so forth.
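As a concrete illustration of the definition, the following sketch (using NumPy/SciPy, with a small matrix chosen only for illustration) computes a square root numerically and verifies the defining property:

```python
import numpy as np
from scipy.linalg import sqrtm

# A small illustrative matrix (not from the paper).
A = np.array([[4.0, 1.0],
              [0.0, 9.0]])

# scipy.linalg.sqrtm computes the principal square root by a Schur method.
X = sqrtm(A)

# Verify the defining property X^2 = A.
assert np.allclose(X @ X, A)
```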

This paper is organized as follows. In Section 2, we review some basic properties of a class of circulant matrices. In Section 3, we investigate the structures of the square roots, then give the classifications of all the square roots of these circulant matrices. In Section 4, we develop some algorithms to compute the primary square roots of them. Finally, we present several numerical experiments in Section 5, exhibiting the efficiency of the proposed algorithms in terms of CPU time.

#### 2. Preliminaries

Throughout this paper we denote the set of all $n \times n$ complex matrices by $\mathbb{C}^{n \times n}$ and the set of all $n \times n$ real matrices by $\mathbb{R}^{n \times n}$.

Definition 2.1 (see [4]). Let and . A -circulant matrix has the form

Another equivalent definition of a -circulant matrix is as follows: is a -circulant matrix if and only if , where .

Let be an even number. Then is a skew -circulant matrix if , and a Hermitian -circulant matrix if , where denotes the elementwise conjugate of the matrix .

If the circulant matrix is similar to a block diagonal matrix (or even a diagonal matrix), that is, if there exists an invertible matrix such that is block diagonal, then the problem of computing the square roots of a circulant matrix reduces to computing the square roots of several matrices of smaller size, which lowers the computational cost.

Lemma 2.2. If , then all the matrices in are simultaneously diagonalizable, with the eigenvectors determined completely by and a primitive th root of unity.
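Lemma 2.2 can be checked numerically in the ordinary circulant case: the unitary DFT matrix diagonalizes every circulant matrix, with eigenvalues given by the FFT of the first column. A minimal sketch (the test matrix is an arbitrary example):

```python
import numpy as np

n = 6
rng = np.random.default_rng(0)
c = rng.standard_normal(n)  # first column of an ordinary circulant matrix

# Build the circulant matrix C: column j is the first column cyclically shifted by j.
C = np.array([np.roll(c, j) for j in range(n)]).T

# The unitary DFT matrix F and the eigenvalues of C (the DFT of c).
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
lam = np.fft.fft(c)

# F diagonalizes C: F C F^{-1} = diag(lam), with F^{-1} = conj(F) here.
D = F @ C @ np.conj(F).T
assert np.allclose(D, np.diag(lam))
```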

We first review the structure and reducibility of the above matrices. All the formulas become slightly more complicated when is odd and is a complex number; for simplicity, we restrict our attention to the case of even and . Using this partition, the -circulant matrix can be described as where and are matrices.

For a skew -circulant matrix , its partition can be expressed as follows: if is odd, and if is even, the matrix is of the same form as (2.2).

Define where and is the th unit matrix.

One can easily verify that

By applying (2.2), (2.4), and (2.6), we have the following.

Lemma 2.3. If is a k-circulant matrix, then where , .

By applying (2.2)–(2.6), we have the following.

Lemma 2.4. Let A be a skew -circulant matrix. If is odd, then where and . If is even, is of the same form as (2.7).

For a Hermitian -circulant matrix , its partition can be expressed as follows: if is odd, and if is even, the matrix is of the same form as (2.2).

Define where is the imaginary unit.

One can easily verify that

By applying (2.2), (2.4), (2.6), (2.9), (2.10), and (2.11), we have the following.

Lemma 2.5. Let be a Hermitian -circulant matrix and let be defined by the relation (2.10). If is odd, then is a real matrix, where with and denoting the real and imaginary parts of the matrix , respectively. If is even, then is of the same form as (2.7).

Definition 2.6. Given a square matrix , a function is said to be defined on the spectrum of if and its derivatives of order are defined at each element of . Here, is the size of the largest Jordan block of . If is the interpolating polynomial of all of these values, the function of is defined to be .

For instance, the square roots of a matrix whose eigenvalues belong to exactly one Jordan block each are functions of . These are the primary square roots. On the other hand, choosing different branches of the square root for Jordan blocks sharing an eigenvalue leads to the so-called nonprimary square roots of .
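The distinction can be made concrete for the identity matrix: its primary square roots are ±I, while an involutory matrix such as a Householder reflection is a nonprimary square root. A small sketch:

```python
import numpy as np

n = 3
I = np.eye(n)

# The primary square roots of I are +I and -I (both are polynomials in I).
assert np.allclose(I @ I, I)
assert np.allclose((-I) @ (-I), I)

# A Householder reflection H = I - 2 v v^T / (v^T v) is involutory, so H^2 = I,
# but H is a nonprimary square root of I: every polynomial in I is a scalar
# multiple of I, and H is not.
v = np.array([[1.0], [2.0], [3.0]])
H = I - 2.0 * (v @ v.T) / float(v.T @ v)
assert np.allclose(H @ H, I)
```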

In most applications it is the primary square roots that are of interest, and for simplicity virtually all of the existing theory and available methods are for such square roots [10]; thus, we concentrate on primary square roots in this paper.

Lemma 2.7 (see [10]). Let the nonsingular matrix $A$ have the Jordan canonical form $Z^{-1}AZ = J = \operatorname{diag}(J_1, J_2, \ldots, J_p)$, and let $s \le p$ be the number of distinct eigenvalues of $A$. Let $L_k^{(j_k)} = f_{j_k}(J_k)$, $k = 1, \ldots, p$, where $j_k = 1$ or $2$ denotes the branch of $f(z) = z^{1/2}$. Then $A$ has precisely $2^s$ square roots that are primary functions of $A$, given by $X_j = Z \operatorname{diag}(L_1^{(j_1)}, L_2^{(j_2)}, \ldots, L_p^{(j_p)}) Z^{-1}$, $j = 1, \ldots, 2^s$, corresponding to all possible choices of $j_1, \ldots, j_p$, $j_k = 1$ or $2$, subject to the constraint that $j_i = j_k$ whenever $\lambda_i = \lambda_k$.
If $s < p$, $A$ has nonprimary square roots. They form parametrized families $X_j(U) = Z U \operatorname{diag}(L_1^{(j_1)}, L_2^{(j_2)}, \ldots, L_p^{(j_p)}) U^{-1} Z^{-1}$, $j = 2^s + 1, \ldots, 2^p$, where $j_k = 1$ or $2$, $U$ is an arbitrary nonsingular matrix that commutes with $J$, and for each $j$ there exist $i$ and $k$, depending on $j$, such that $\lambda_i = \lambda_k$ while $j_i \ne j_k$.

#### 3. Square Roots

In this section we present some new results which characterize the square roots of the circulant matrices.

##### 3.1. Square Roots of -Circulant Matrices

It is known that the product of two -circulant matrices is -circulant; however, does a -circulant matrix have square roots that are also -circulant? We give some answers to this question.

Theorem 3.1. Let be a nonsingular -circulant matrix and let , where are the primary functions of . Then all square roots are -circulant matrices.

Proof. By assumption, and , where , which is clearly defined on the spectrum of , including the case that the eigenvalues of are complex. By Definition 2.6, we can construct a polynomial such that . Since the sum and product of two -circulant matrices are also -circulant, the polynomial is a -circulant matrix.

By Lemma 2.7 and Theorem 3.1, we get that any nonsingular -circulant matrix always has a -circulant square root.
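This can be observed numerically for ordinary circulant matrices: the principal square root, being a primary function (a polynomial in the matrix), inherits the circulant structure. A sketch with an arbitrary well-conditioned example:

```python
import numpy as np
from scipy.linalg import sqrtm

def is_circulant(M, tol=1e-8):
    """Each column of a circulant matrix is a cyclic shift of the first."""
    return all(np.allclose(M[:, j], np.roll(M[:, 0], j), atol=tol)
               for j in range(M.shape[0]))

# A diagonally dominant circulant matrix (eigenvalues off the negative real axis).
c = np.array([10.0, 1.0, 2.0, 0.5, 1.5])
C = np.array([np.roll(c, j) for j in range(len(c))]).T
assert is_circulant(C)

X = sqrtm(C)            # the principal (primary) square root
assert np.allclose(X @ X, C)
assert is_circulant(X)  # the circulant structure is preserved
```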

In fact, if , then implies . That means -circulant matrices may have square roots which are skew -circulant. Although it is unknown whether this always holds, if has a skew -circulant square root, we have the following.

Theorem 3.2. Let be a nonsingular -circulant matrix, and assume that has a skew -circulant square root. (i) If is even, then and in (2.7) each admit a square root. (ii) If is odd, then the matrices and in (2.7) are similar.

Proof. Let be a skew -circulant matrix and .
(i) If is even, by Lemma 2.4, we have . This implies that hold simultaneously; that is, and are square roots of and , respectively.
(ii) If is odd, by Lemma 2.4 again, we have .
Note that means that and . Therefore, and are both nonsingular (due to the nonsingularity of and ), and . That is, and in (2.7) are similar.

In general, a nonsingular -circulant matrix may, besides its -circulant square roots, have other kinds of square roots (e.g., skew -circulant square roots), which are nonprimary functions of . The existence and the families of square roots depend on the spectra of and . The following theorem gives a classification of all the square roots of a nonsingular -circulant matrix.

Theorem 3.3. Let the nonsingular -circulant matrix have distinct eigenvalues , . Then has -circulant square roots that are primary functions of , given by corresponding to all possible choices of , or 2, subject to the constraint that whenever .
If , has nonprimary square roots. They form parametrized families where or 2, is an arbitrary nonsingular matrix that commutes with , and for each there exist and , depending on , such that while .

Proof. According to the hypothesis, has distinct eigenvalues. Then by Lemma 2.7, has square roots which are primary functions of and take the form (3.1). By Theorem 3.1, the square roots are -circulant matrices. By Lemma 2.7 again, we get the form (3.2).

Theorem 3.3 shows that the square roots of a nonsingular -circulant matrix fall into two classes. The first class comprises finitely many primary square roots, which are “isolated” and are -circulant matrices. The second class, which may be empty, comprises a finite number of parametrized families of matrices, each family containing infinitely many square roots sharing the same spectrum; the square roots in this class may or may not be -circulant.

##### 3.2. Square Roots of Skew -Circulant Matrices

Let be a nonsingular skew -circulant matrix, and let be an eigenpair of . From we get , which means that the eigenvalues of must appear in ± pairs, and has a Jordan decomposition of the following form: with where are matrices such that , and is a forward shift matrix.

Assume that has distinct eigenvalues. We have the following result.

Theorem 3.4. Let the nonsingular skew -circulant matrix have the Jordan decomposition (3.3), and let be the number of distinct eigenvalues of . Then has square roots, which are primary functions of , taking the following form: where is a primary square root of and is a primary square root of , respectively.
If , then has nonprimary square roots, which form parametrized families of the following form: where is an arbitrary nonsingular matrix which commutes with .

Proof. The proof follows by applying Lemma 2.7 again, together with the fact that has distinct eigenvalues and Jordan blocks.

Let us consider how to compute the primary square roots of a nonsingular skew -circulant matrix. If is even, from Lemma 2.4, we can see that skew -circulant matrices have the same deduced form (2.7). We can use Algorithm 4.2 (in Section 4) in this case. If is odd, we have that takes the form (2.8). Exploiting this form, we have the following theorem.

Theorem 3.5. Let be a nonsingular skew -circulant matrix and let be odd. Assume that and are partitioned as follows: which are conformable with the partition of in (2.3). Then is a square root of if and only if the following hold simultaneously: (A) is a square root of ; (B) is a fourth root of ; (C) is a fourth root of ; (D) is a solution of ; (E) ; where and , are defined by the relations (2.4) and (2.8), respectively.

Proof. The proof is similar to that of Theorem 3.5; see [14].

##### 3.3. Square Roots of Hermitian -Circulant Matrices

In fact, if , then implies . That means Hermitian -circulant matrices may have square roots which are still Hermitian -circulant. Although it is unknown whether this always holds, if has a Hermitian -circulant square root, we have the following.

Theorem 3.6. Let be a nonsingular Hermitian -circulant matrix, and assume that has a Hermitian -circulant square root. (i) If is even, then and in (2.7) each admit a square root. (ii) If is odd, then the reduced form of in (2.12) has a real square root.

Proof. (i) If is even, the case is similar to Theorem 3.2.
(ii) If is odd, let be a Hermitian -circulant matrix and . Then, by Lemma 2.5, we have that and are real and , where is defined in (2.10). This means that has a real square root.

In the following, we give a classification of the square roots of a nonsingular Hermitian -circulant matrix . Assume that is a nonsingular Hermitian -circulant matrix, is an eigenvalue of , and is an eigenvector corresponding to , that is, . Because , we have , which means that the complex eigenvalues of must appear in conjugate pairs, and has a Jordan decomposition of the following form: with where is the real Jordan block corresponding to the real eigenvalues for , and is the Jordan block corresponding to the complex eigenvalues .

Theorem 3.7. Let the nonsingular Hermitian -circulant matrix have the Jordan decomposition (3.9). Let be the number of distinct real eigenvalues of and let be the number of distinct complex conjugate eigenvalue pairs.
If or , then has square roots which are primary functions of .
If , then has square roots which are nonprimary functions of ; they form parameterized families.

Proof. The proof is similar to that of Theorem 3.4.

It is shown in Theorem 3.1 that all the primary square roots of a nonsingular -circulant matrix are -circulant. For nonsingular Hermitian -circulant matrices, however, this conclusion no longer holds in general. It does hold if a square root of a nonsingular Hermitian -circulant matrix is a polynomial in with real coefficients.

Theorem 3.8. Let be a nonsingular Hermitian -circulant matrix. Then all square roots of that are polynomials in with real coefficients (if they exist) are Hermitian -circulant matrices.

Proof. Using the fact that the sum and product of two Hermitian -circulant matrices are also Hermitian -circulant, we complete the proof.

#### 4. Algorithms

In this section we propose algorithms for computing the primary square roots of the circulant matrices studied in Section 3, that is, square roots which are primary functions of .

Algorithm 4.1. Computes a primary square root of a nonsingular -circulant matrix .

Step 1. Compute the eigenvalues of .

Step 2. Compute the square roots .

Step 3. Compute ,  .

Then, we obtain .
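For the ordinary circulant case these steps can be sketched as follows (the principal branch of the square root is assumed, and the eigenvalues are assumed to avoid the closed negative real axis):

```python
import numpy as np

def circulant_sqrt(c):
    """Primary square root of the circulant matrix with first column c,
    via FFT -> scalar square roots -> inverse FFT (O(n log n) flops)."""
    lam = np.fft.fft(c)                 # Step 1: eigenvalues
    mu = np.sqrt(lam.astype(complex))   # Step 2: principal scalar square roots
    x = np.fft.ifft(mu)                 # Step 3: first column of the root
    return np.array([np.roll(x, j) for j in range(len(c))]).T

c = np.array([6.0, 1.0, -1.0, 2.0])
C = np.array([np.roll(c, j) for j in range(len(c))]).T
X = circulant_sqrt(c)
assert np.allclose(X @ X, C)
```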

The cost of Step 1 is about flops via the discrete Fourier transform. The cost of Step 2 is . The cost of Step 3 is about flops via the inverse discrete Fourier transform. In total, computing a primary square root of a -circulant matrix requires about flops when the fast Fourier transform is used (see Table 1).

Table 1: Flops of Algorithm 4.1.

Algorithm 4.2. Computes a primary square root of a nonsingular skew -circulant matrix ( is even).

Step 1. Compute the reduced form in (2.7).

Step 2. Compute the Schur decompositions and , respectively, where and are two upper triangular matrices.

Step 3. Compute the upper triangular square roots and , where has distinct eigenvalues and so does , here is defined on .

Step 4. Compute and .

Step 5. Obtain .
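The triangular square roots in Step 3 can be computed by the recurrence of the Schur method (Algorithm 6.3 in [10]). A minimal sketch for a single upper triangular factor, assuming no two diagonal square roots sum to zero:

```python
import numpy as np

def tri_sqrt(T):
    """Square root of upper triangular T:
    U[i,i] = sqrt(T[i,i]),
    U[i,j] = (T[i,j] - sum_{k=i+1}^{j-1} U[i,k] U[k,j]) / (U[i,i] + U[j,j])."""
    n = T.shape[0]
    U = np.zeros_like(T, dtype=complex)
    for i in range(n):
        U[i, i] = np.sqrt(complex(T[i, i]))
    for j in range(1, n):
        for i in range(j - 1, -1, -1):
            s = U[i, i + 1:j] @ U[i + 1:j, j]   # 0.0 when the slice is empty
            U[i, j] = (T[i, j] - s) / (U[i, i] + U[j, j])
    return U

T = np.array([[4.0, 1.0, 2.0],
              [0.0, 9.0, 3.0],
              [0.0, 0.0, 16.0]])
U = tri_sqrt(T)
assert np.allclose(U @ U, T)
```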

The costs of Steps 1 and 5 in Algorithm 4.2 are about flops. The main costs arise in Steps 2–4, which require about flops (see Table 2). Computing a primary square root of directly by the Schur method (Algorithm 6.3 in [10]) needs about flops in total, which means that Algorithm 4.2 is about four times cheaper than the standard Schur method.

Table 2: Flops of Algorithm 4.2.

Algorithm 4.3. Computes a primary square root of a nonsingular skew -circulant matrix ( is odd).

Step 1. Compute the reduced form in (2.8).

Step 2. Compute the Schur decomposition , where is upper triangular.

Step 3. Compute the upper triangular fourth root , where is defined on , then compute .

Step 4. Solve the Sylvester equation .

Step 5. Compute .

Step 6. Compute and .

Step 7. Compute .

Step 8. Form according to (3.8).

Step 9. Obtain .
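Step 4 is a standard Sylvester equation, for which SciPy provides the Bartels–Stewart solver cited in [17]. A sketch with illustrative matrices (chosen so that the spectra of the two coefficient matrices do not clash):

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(1)
# Shifted random matrices: the spectra of A and -B are disjoint,
# so the Sylvester equation A X + X B = Q has a unique solution.
A = 0.5 * rng.standard_normal((4, 4)) + 4.0 * np.eye(4)
B = 0.5 * rng.standard_normal((4, 4)) + 4.0 * np.eye(4)
Q = rng.standard_normal((4, 4))

X = solve_sylvester(A, B, Q)     # Bartels-Stewart algorithm [17]
assert np.allclose(A @ X + X @ B, Q)
```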

The costs of Steps 1 and 9 in Algorithm 4.3 are about flops. The main costs lie in Steps 2–8. In Step 2, it takes about flops to compute the matrix-matrix multiplication and to compute the Schur decomposition of . In Step 3, it takes about flops to compute the upper triangular fourth root (by applying Step 3 of Algorithm 4.2 twice) and to form . The cost of Step 4 amounts to about flops; see the Bartels–Stewart algorithm in [17]. Steps 5 and 6 require four matrix-matrix multiplications, about flops. Step 7 involves a matrix-matrix multiplication and the solution of a linear system with multiple right-hand sides, which needs about flops. Thus, the total is about flops (see Table 3), which means that Algorithm 4.3 is about five times cheaper than the standard Schur method.

Table 3: Flops of Algorithm 4.3.

Let be odd (when is even, we can use Algorithm 4.2 to compute a primary square root of a nonsingular Hermitian -circulant matrix).

Algorithm 4.4. Computes a primary square root of a nonsingular Hermitian -circulant matrix .

Step 1. Compute the reduced form in (2.12).

Step 2. Compute the real Schur decomposition , where is an upper quasitriangular matrix.

Step 3. Compute (see [8] for more details), where is upper quasitriangular with distinct eigenvalues and is defined on .

Step 4. Compute .

Step 5. Compute .
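Steps 2–4 can be sketched in real arithmetic with SciPy's real Schur form (the matrix below is an illustrative stand-in for the reduced form, shifted so that its spectrum avoids the negative real axis):

```python
import numpy as np
from scipy.linalg import schur, sqrtm

rng = np.random.default_rng(2)
# Illustrative real matrix with spectrum in the right half-plane.
M = 0.5 * rng.standard_normal((6, 6)) + 3.0 * np.eye(6)

T, Q = schur(M, output='real')   # Step 2: T upper quasitriangular, Q orthogonal
S = sqrtm(T)                     # Step 3: square root of the quasitriangular factor
X = Q @ S @ Q.T                  # Step 4: transform back

assert np.allclose(X @ X, M)
```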

The costs of Steps 1 and 5 in Algorithm 4.4 are about flops. The main costs lie in Steps 2–4. In Step 2, it takes about real flops to compute the real Schur decomposition of . In Step 3, it takes about real flops to compute . The cost of Step 4 amounts to about flops to form . Thus, the total is about real flops (see Table 4). Note that a complex addition is equivalent to two real additions, and a complex multiplication is equivalent to four real multiplications plus two real additions. So Algorithm 4.4 is approximately eight times cheaper than the standard Schur method.

Table 4: Real flops of Algorithm 4.4.

#### 5. Numerical Experiments

We present numerical experiments for the comparison of the algorithms presented in this paper and the standard Schur method with respect to execution time.

All of our computations were done using MATLAB 7.6.0 (R2008a), with unit roundoff , on an Intel Pentium M 740 processor (1.73 GHz) with 1 GB of RAM.
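A minimal timing harness in the spirit of these experiments (sizes, the test matrix, and the timer are illustrative choices, not the paper's setup) compares the FFT-based Algorithm 4.1 for ordinary circulant matrices with the Schur method as implemented by scipy.linalg.sqrtm:

```python
import time
import numpy as np
from scipy.linalg import sqrtm

def circulant_sqrt(c):
    """O(n log n) primary square root of the circulant matrix with first
    column c (spectrum assumed off the closed negative real axis)."""
    return np.fft.ifft(np.sqrt(np.fft.fft(c).astype(complex)))

for n in (64, 256):
    # A symmetric positive definite circulant test matrix.
    c = np.zeros(n); c[0] = 4.0; c[1] = 1.0; c[-1] = 1.0
    C = np.array([np.roll(c, j) for j in range(n)]).T

    t0 = time.perf_counter(); x = circulant_sqrt(c); t_fft = time.perf_counter() - t0
    t0 = time.perf_counter(); X = sqrtm(C);          t_schur = time.perf_counter() - t0

    # Both compute the same primary square root (compare first columns).
    assert np.allclose(X[:, 0], x, atol=1e-8)
    print(f"n={n:4d}  fft: {t_fft:.2e}s  schur: {t_schur:.2e}s")
```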

The execution (CPU) time for square roots with respect to (order of matrix) for Algorithm 4.1 and the standard Schur method is shown in Figure 1.

Figure 1: CPU time for Algorithm 4.1 and the standard Schur method in logarithmic scale.

The execution (CPU) time for square roots with respect to for Algorithm 4.2 and the standard Schur method is shown in Figure 2.

Figure 2: CPU time with for Algorithm 4.2 and the standard Schur method in logarithmic scale.

The execution (CPU) time for square roots with respect to for Algorithm 4.3 and the standard Schur method is shown in Figure 3.

Figure 3: CPU time with for Algorithm 4.3 and the standard Schur method.

The execution (CPU) time for square roots with respect to for Algorithm 4.4 and the standard Schur method is shown in Figure 4.

Figure 4: CPU time with for Algorithm 4.4 and the standard Schur method in logarithmic scale.

As Figures 1–4 show, the proposed algorithms are clearly faster than the standard Schur method for computing square roots of the circulant matrices considered in this paper.

#### Acknowledgments

The author would like to thank the editor and the referees for their helpful comments and valuable suggestions for improving this paper. This work was supported by the Scientific Research Fund of Zhejiang Provincial Education Department (no. Y201223607).

#### References

1. P. J. Davis, Circulant Matrices, Chelsea Publishing, New York, NY, USA, 2nd edition, 1994.
2. R. M. Gray, Toeplitz and Circulant Matrices: A Review, Stanford University Press, Stanford, Calif, USA, 2000.
3. A. Mayer, A. Castiaux, and J.-P. Vigneron, “Electronic Green scattering with $n$-fold symmetry axis from block circulant matrices,” Computer Physics Communications, vol. 109, no. 1, pp. 81–89, 1998.
4. R. E. Cline, R. J. Plemmons, and G. Worm, “Generalized inverses of certain Toeplitz matrices,” Linear Algebra and its Applications, vol. 8, pp. 25–33, 1974.
5. G. R. Argiroffo and S. M. Bianchi, “On the set covering polyhedron of circulant matrices,” Discrete Optimization, vol. 6, no. 2, pp. 162–173, 2009.
6. N. L. Tsitsas, E. G. Alivizatos, and G. H. Kalogeropoulos, “A recursive algorithm for the inversion of matrices with circulant blocks,” Applied Mathematics and Computation, vol. 188, no. 1, pp. 877–894, 2007.
7. S. Shen and J. Cen, “On the bounds for the norms of $r$-circulant matrices with the Fibonacci and Lucas numbers,” Applied Mathematics and Computation, vol. 216, no. 10, pp. 2891–2897, 2010.
8. Z. L. Jiang and Z. B. Xu, “A new algorithm for computing the inverse and generalized inverse of the scaled factor circulant matrix,” Journal of Computational Mathematics, vol. 26, no. 1, pp. 112–122, 2008.
9. S. G. Zhang, Z. L. Jiang, and S. Y. Liu, “An application of the Gröbner basis in computation for the minimal polynomials and inverses of block circulant matrices,” Linear Algebra and its Applications, vol. 347, pp. 101–114, 2002.
10. N. J. Higham, Functions of Matrices: Theory and Computation, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 2008.
11. G. W. Cross and P. Lancaster, “Square roots of complex matrices,” Linear and Multilinear Algebra, vol. 1, pp. 289–293, 1974.
12. C. R. Johnson, K. Okubo, and R. Reams, “Uniqueness of matrix square roots and an application,” Linear Algebra and its Applications, vol. 323, no. 1–3, pp. 51–60, 2001.
13. M. A. Hasan, “A power method for computing square roots of complex matrices,” Journal of Mathematical Analysis and Applications, vol. 213, no. 2, pp. 393–405, 1997.
14. C. B. Lu and C. Q. Gu, “The computation of the square roots of circulant matrices,” Applied Mathematics and Computation, vol. 217, no. 16, pp. 6819–6829, 2011.
15. K. D. Ikramov, “Hamiltonian square roots of skew-Hamiltonian matrices revisited,” Linear Algebra and its Applications, vol. 325, no. 1–3, pp. 101–107, 2001.
16. R. Reams, “Hadamard inverses, square roots and products of almost semidefinite matrices,” Linear Algebra and its Applications, vol. 288, no. 1–3, pp. 35–43, 1999.
17. G. H. Golub and C. F. van Loan, Matrix Computations, Johns Hopkins University Press, Baltimore, Md, USA, 3rd edition, 1996.