Abstract

The computation of the roots of positive definite matrices arises in nuclear magnetic resonance, control theory, lattice quantum chromodynamics (QCD), and several other areas of application. The Cauchy integral theorem from complex analysis can be used for computing f(A), in particular the roots of A, where A is a square matrix. The Cauchy integral can be approximated by the trapezoid rule. In this paper, we give a brief overview of the computation of roots of positive definite matrices via this integral representation. Some numerical experiments are given to illustrate the theoretical results.

1. Introduction

It is well known that contour integrals, which form a component of the Cauchy integral theorem, play an important role in complex analysis. The trapezoid rule is popular for the approximation of such integrals due to its exponential accuracy when certain conditions, in particular periodicity and analyticity of the integrand, are satisfied. It has been established in [1] that the trapezoid rule can be applied to contour integrals to give a powerful algorithm for the computation of matrix functions.

Kellems [1] has studied the computation of the matrix square root and the matrix exponential by utilizing the trapezoid rule. In particular, he focused on the matrix exponential and its use in the heat equation; only a few trapezoid rule points were required for very high accuracy. Davies and Higham [2] have investigated the computation of the matrix-vector product f(A)b without explicitly computing f(A). Their proposed methods were specific to the logarithm and to fractional matrix powers, and were based on quadrature and on the solution of an ordinary differential equation initial value problem, respectively. Hale et al. [3] have presented new methods for the numerical computation of f(A) and f(A)b, where f is a function such as z^{1/2} or log z with singularities in (−∞, 0] and A is a matrix with eigenvalues on or near the positive real axis. The methods in [3] were based on combining contour integrals evaluated using the periodic trapezoid rule with conformal maps involving elliptic functions.

In this paper, we investigate the computation of the pth roots of positive definite matrices by utilizing an integral representation. Our approach is based on the work of Kellems [1], who computed the square root of a positive definite matrix using the trapezoid rule. The integral identity will be evaluated by the trapezoid rule. We also study some matrix factorization procedures and their application to computing matrix roots with the trapezoid rule. The outline of this paper is as follows. In Section 2 we introduce some basic definitions, the integral representation of matrix functions, and the trapezoid rule. In Section 3 we obtain formulas for the integral representation of the matrix pth root. Numerical experiments are discussed in Section 4, and the conclusions are presented in Section 5.

2. Approximation of the Matrix pth Roots

The Cauchy integral theorem states that the value of f(z0) can be evaluated by an integral representation as follows:

f(z0) = (1/2πi) ∮_Γ f(z)/(z − z0) dz, (2.1)

where Γ is a contour in ℂ that encloses z0 and f is analytic on and inside Γ [1]. The generalization of this formula to the matrix case can be presented as

f(A) = (1/2πi) ∮_Γ f(z) (zI − A)^{-1} dz, (2.2)

and can be defined element by element as follows:

[f(A)]_{jk} = (1/2πi) ∮_Γ f(z) [(zI − A)^{-1}]_{jk} dz, (2.3)

where the entries of (zI − A)^{-1} are analytic on Γ and f is an analytic function in a neighborhood of the spectrum of A [4]. Cauchy's integral formula can be simplified by considering Γ to be a circle of radius r centered at some point z0, defined by z(t) = z0 + re^{it}, t ∈ [0, 2π]. This substitution gives us the following identity [1]:

f(A) = (r/2π) ∫_0^{2π} f(z(t)) e^{it} (z(t)I − A)^{-1} dt. (2.4)

Writing f(z) = z^{1/p} and substituting into (2.4) gives

A^{1/p} = (r/2π) ∫_0^{2π} (z(t))^{1/p} e^{it} (z(t)I − A)^{-1} dt. (2.5)

A primary pth root of a square matrix A, with p a positive integer, is a solution X of the matrix equation X^p = A that can be written as a polynomial in A. If A has s distinct eigenvalues, none of which is zero, then A has exactly p^s primary pth roots. By (2.2) this result is obtained as

A^{1/p} = (1/2πi) ∮_Γ z^{1/p} (zI − A)^{-1} dz,

where z^{1/p} is any of the analytic branches defined on the spectrum of A, such that (z^{1/p})^p = z, and Γ is a closed contour which encloses the spectrum of A. If A has no nonpositive real eigenvalues, then there exists only one primary pth root whose eigenvalues lie in the sector −π/p < arg z < π/p, the principal pth root [5]. In this paper, we demonstrate the method for the principal pth root.
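The scalar identity (2.1) can be checked numerically with the standard library alone; the point z0 = 2.5, the contour center 2, the radius 1, and the point count below are illustrative choices of ours, picked so that the circle encloses z0 and avoids the branch cut of z^{1/p}:

```python
import cmath

def cauchy_root(z0, p, center=2.0, radius=1.0, N=64):
    """Approximate z0**(1/p) via the scalar Cauchy integral formula
    f(z0) = 1/(2*pi*i) * contour integral of f(z)/(z - z0) dz,
    discretized with the N-point periodic trapezoid rule on a circle."""
    total = 0.0 + 0.0j
    for k in range(N):
        theta = 2.0 * cmath.pi * k / N
        z = center + radius * cmath.exp(1j * theta)  # point on the contour
        # dz = i*r*e^{i*theta} d(theta); the i cancels the 1/(2*pi*i) factor
        total += (z ** (1.0 / p)) * radius * cmath.exp(1j * theta) / (z - z0)
    return total / N

# z0 = 2.5 lies inside the circle |z - 2| = 1; f(z) = z^{1/3} is analytic there
approx = cauchy_root(2.5, 3)
exact = 2.5 ** (1.0 / 3.0)
print(abs(approx - exact))
```

Because the integrand is periodic and analytic in a strip around the real t-axis, the error decays exponentially in N, which is the behavior exploited throughout this paper.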

In order to calculate the integral (2.2) accurately, we first split the interval of integration into smaller uniform subintervals and then apply the trapezoid rule on each of them. The composite trapezoid rule is as follows [6]:

∫_a^b g(t) dt ≈ h [ g(t_0)/2 + g(t_1) + ⋯ + g(t_{N−1}) + g(t_N)/2 ], (2.6)

where h = (b − a)/N and t_k = a + kh. Since the integrand is 2π-periodic, g(a) = g(b), and (2.6) can be reformulated as

∫_0^{2π} g(t) dt ≈ (2π/N) Σ_{k=1}^{N} g(t_k). (2.7)

For (2.5), let the integrand be the function g(t) = (z(t))^{1/p} e^{it} (z(t)I − A)^{-1}. If we take N equally spaced points t_k = 2πk/N on [0, 2π] and set z_k = z0 + re^{it_k}, then (2.7) can be written as

f(A) ≈ (r/N) Σ_{k=1}^{N} f(z_k) e^{it_k} (z_k I − A)^{-1}, (2.8)

which is the general formula for the computation of a matrix function when Γ is a circle [1].
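Formula (2.8) can be sketched in a few lines of NumPy. The test matrix and the contour parameters z0 = 2, r = 1 below are our own assumptions, chosen so that the circle encloses the spectrum and stays away from the branch cut:

```python
import numpy as np

def trapezoid_root(A, p, center, radius, N=64):
    """Approximate A**(1/p) by the N-point trapezoid rule (2.8):
    f(A) ~ (r/N) * sum_k z_k^(1/p) e^{i t_k} (z_k I - A)^{-1},
    with z_k = z0 + r e^{i t_k} on a circle enclosing the spectrum of A."""
    n = A.shape[0]
    I = np.eye(n)
    F = np.zeros((n, n), dtype=complex)
    for k in range(N):
        t = 2.0 * np.pi * k / N
        z = center + radius * np.exp(1j * t)
        F += (z ** (1.0 / p)) * np.exp(1j * t) * np.linalg.inv(z * I - A)
    return (radius / N) * F

# Illustrative SPD matrix with eigenvalues 1.5 and 2.5 inside |z - 2| = 1
A = np.array([[2.0, 0.5], [0.5, 2.0]])
X = trapezoid_root(A, 3, center=2.0, radius=1.0, N=64)
print(np.linalg.norm(np.linalg.matrix_power(X.real, 3) - A))
```

For a real matrix with real positive spectrum the imaginary part of the computed root is pure rounding noise, so taking the real part is harmless.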

The function f(z) = z^{1/p} is analytic everywhere in ℂ except on the branch cut (−∞, 0]. Consider a matrix A whose eigenvalues lie in the unit disk centered at z0. The contour Γ is taken as a circle of radius r around z0, parameterized as z(t) = z0 + re^{it}, with z0 − r > 0 so that Γ does not cross the branch cut, and from (2.5) we have

A^{1/p} ≈ (r/N) Σ_{k=1}^{N} z_k^{1/p} e^{it_k} (z_k I − A)^{-1}, z_k = z0 + re^{it_k}. (2.9)

An important property of the trapezoid rule approximation is that it can attain better accuracy than the standard matrix pth root algorithms [1].

Now we consider a random matrix and apply the trapezoid rule. Figure 1 shows the convergence of this method for matrices of dimension 4, 8, 16, 32, and 64. In each case exponential accuracy was found. Since the eigenvalues are well clustered, only a few points need to be used. For matrices with a larger spectral radius or more scattered eigenvalues, the convergence will be slower. This is consistent with the findings in [1] for the case of the square root (p = 2).

3. Employing Matrix Decompositions

Matrix factorizations can be utilized to compute the pth root of a matrix with the trapezoid rule. Given a square matrix A, we are interested in the simplest form B to which A can be reduced under a unitary similarity transform B = Q*AQ or a general similarity transform B = S^{-1}AS. The matrix B reveals information about A, because many features and structures of matrices are invariant under similarity transforms. In this part, three factorizations, Schur, eigenvalue, and Hessenberg, are investigated in relation to the use of the trapezoid rule.

3.1. Schur Decomposition

One of the most widely used matrix factorizations is the Schur decomposition, which is presented in the following theorem [4].

Theorem 3.1 (Schur decomposition theorem). Let A ∈ ℂ^{n×n}; then there exists a unitary Q ∈ ℂ^{n×n} such that Q*AQ = T = D + N, where D = diag(λ_1, …, λ_n) and N is strictly upper triangular. Further, Q = [q_1, …, q_n] is a column partitioning of the unitary matrix Q, where the q_i are referred to as Schur vectors, and from Q*AQ = T the Schur vectors satisfy Aq_k = λ_k q_k + Σ_{i=1}^{k−1} n_{ik} q_i, k = 1, …, n.

One of the most famous algorithms to compute matrix roots is Smith's algorithm proposed in [7]. Generally, this algorithm is presented as follows.
(i) Compute the Schur factorization A = QTQ*.
(ii) If the upper triangular matrix T is diagonal, set U = T^{1/p}.
(iii) Else, operate column-by-column on T to produce the upper triangular pth root matrix U, with U^p = T.

This algorithm requires O(n^3) flops in total for fixed p. The matrix pth root is given as A^{1/p} = QUQ*. It can be verified that f(A) = Q f(T) Q*. This can be used to speed up the trapezoid rule method: implement a preliminary factorization of A, operate on the factored matrix, and then combine the factors at the end of the computation [1]. Using the Schur factorization and the unitarity of Q, we then have

(zI − A)^{-1} = Q (zI − T)^{-1} Q*.

Using this in (2.9) gives

A^{1/p} ≈ (r/N) Q [ Σ_{k=1}^{N} z_k^{1/p} e^{it_k} (z_k I − T)^{-1} ] Q*.
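A sketch of this Schur-accelerated variant, assuming SciPy is available; each quadrature point then requires only a triangular solve with zI − T rather than a full inverse. The test matrix and contour are illustrative choices:

```python
import numpy as np
from scipy.linalg import schur, solve_triangular

def trapezoid_root_schur(A, p, center, radius, N=64):
    """pth root via (2.9) after a complex Schur factorization A = Q T Q*.
    Since (zI - A)^{-1} = Q (zI - T)^{-1} Q*, each quadrature point
    needs only a triangular solve."""
    T, Q = schur(A.astype(complex), output='complex')
    n = A.shape[0]
    I = np.eye(n)
    S = np.zeros((n, n), dtype=complex)
    for k in range(N):
        t = 2.0 * np.pi * k / N
        z = center + radius * np.exp(1j * t)
        # (zI - T) is upper triangular, so its inverse is a triangular solve
        S += (z ** (1.0 / p)) * np.exp(1j * t) * solve_triangular(z * I - T, I)
    return (radius / N) * (Q @ S @ Q.conj().T)

A = np.array([[2.0, 0.5], [0.5, 2.0]])  # eigenvalues 1.5 and 2.5
X = trapezoid_root_schur(A, 3, center=2.0, radius=1.0)
```

The O(n^3) factorization is done once; each of the N quadrature points then costs O(n^2) for the triangular solve per right-hand side instead of a fresh O(n^3) inversion.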

3.2. Eigenvalue Decomposition

This factorization is also called the spectral decomposition and is presented as follows [4].

Theorem 3.2 (Eigenvalue decomposition theorem). Let A ∈ ℂ^{n×n}; there exists a nonsingular X ∈ ℂ^{n×n} such that X^{-1}AX = Λ = diag(λ_1, …, λ_n) if and only if the geometric multiplicities of all eigenvalues are equal to their algebraic multiplicities.
Utilizing the property A = XΛX^{-1}, one has

(zI − A)^{-1} = X (zI − Λ)^{-1} X^{-1}.

Replacing this into (2.9) yields

A^{1/p} ≈ (r/N) X [ Σ_{k=1}^{N} z_k^{1/p} e^{it_k} (z_k I − Λ)^{-1} ] X^{-1}.
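Because (zI − Λ)^{-1} is diagonal, the quadrature collapses to one scalar sum per eigenvalue; a minimal NumPy sketch (the test matrix and contour parameters are our own illustrative assumptions):

```python
import numpy as np

def trapezoid_root_eig(A, p, center, radius, N=64):
    """pth root via (2.9) after an eigendecomposition A = X diag(lam) X^{-1}.
    The resolvent is diagonal in the eigenbasis, so the matrix quadrature
    reduces to one scalar trapezoid sum per eigenvalue."""
    lam, X = np.linalg.eig(A)
    t = 2.0 * np.pi * np.arange(N) / N
    z = center + radius * np.exp(1j * t)                 # quadrature nodes z_k
    w = (radius / N) * z ** (1.0 / p) * np.exp(1j * t)   # weights (r/N) z_k^{1/p} e^{i t_k}
    # d[j] approximates lam[j] ** (1/p) via the scalar sum sum_k w_k / (z_k - lam_j)
    d = (w[:, None] / (z[:, None] - lam[None, :])).sum(axis=0)
    return X @ np.diag(d) @ np.linalg.inv(X)

A = np.array([[2.0, 0.5], [0.5, 2.0]])  # eigenvalues 1.5 and 2.5
X3 = trapezoid_root_eig(A, 3, center=2.0, radius=1.0)
```

Of course, once the eigendecomposition is known one could form A^{1/p} = X Λ^{1/p} X^{-1} directly; the sketch only illustrates how the factorization interacts with the quadrature.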

3.3. Hessenberg Decomposition

This factorization reduces a matrix to almost-triangular form by an orthogonal similarity and is presented as follows [4].

Theorem 3.3 (Hessenberg decomposition theorem). Let A ∈ ℝ^{n×n}; then there exists an orthogonal matrix Q ∈ ℝ^{n×n} such that Q^T A Q = H, where H is an upper Hessenberg matrix, which means that the elements below the subdiagonal are zero.
Applying the Hessenberg factorization and the orthogonality of Q, one can write

(zI − A)^{-1} = Q (zI − H)^{-1} Q^T.

Substituting this into (2.9) will give

A^{1/p} ≈ (r/N) Q [ Σ_{k=1}^{N} z_k^{1/p} e^{it_k} (z_k I − H)^{-1} ] Q^T.
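A corresponding sketch for the Hessenberg variant, assuming SciPy's hessenberg reduction. A generic solver is used below for simplicity; the point of the factorization is that each shifted matrix zI − H is again Hessenberg, so a structure-aware solver could handle it in O(n^2) flops per shift:

```python
import numpy as np
from scipy.linalg import hessenberg

def trapezoid_root_hess(A, p, center, radius, N=64):
    """pth root via (2.9) after a Hessenberg reduction A = Q H Q^T.
    Each zI - H is Hessenberg; np.linalg.solve does not exploit that
    structure here, but a Hessenberg solver would cost O(n^2) per shift."""
    H, Q = hessenberg(A, calc_q=True)
    n = A.shape[0]
    I = np.eye(n)
    S = np.zeros((n, n), dtype=complex)
    for k in range(N):
        t = 2.0 * np.pi * k / N
        z = center + radius * np.exp(1j * t)
        S += (z ** (1.0 / p)) * np.exp(1j * t) * np.linalg.solve(z * I - H, I)
    return (radius / N) * (Q @ S @ Q.T)

A = np.array([[2.0, 0.5], [0.5, 2.0]])  # eigenvalues 1.5 and 2.5
X = trapezoid_root_hess(A, 3, center=2.0, radius=1.0)
```

Unlike the Schur route, the Hessenberg reduction is cheaper to compute and works entirely in real arithmetic before the quadrature is applied.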

4. Numerical Experiments

In this section we present some numerical experiments to illustrate the theory developed above. All the computations have been carried out using MATLAB 7.10 (R2010a). We consider positive definite matrices, whose eigenvalues are real and positive. These matrices, which are given in the MATLAB gallery, are used to compute matrix roots. Recall that if X̃ is the approximate value of A^{1/p} obtained by one of the methods, then the absolute error and the residual error can be considered as

E_abs = ‖A^{1/p} − X̃‖_F, E_res = ‖X̃^p − A‖_F,

respectively, where ‖·‖_F is the Frobenius norm.
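The two error measures can be sketched as follows. The reference root is computed here from an eigendecomposition, which is an assumption of this illustration rather than the paper's procedure:

```python
import numpy as np

def error_measures(A, X_hat, p):
    """Absolute and residual errors in the Frobenius norm.
    The reference root A**(1/p) is formed as V diag(lam^{1/p}) V^{-1},
    purely for illustration."""
    lam, V = np.linalg.eig(A)
    X_ref = (V * lam ** (1.0 / p)) @ np.linalg.inv(V)
    e_abs = np.linalg.norm(X_ref - X_hat, 'fro')
    e_res = np.linalg.norm(np.linalg.matrix_power(X_hat, p) - A, 'fro')
    return e_abs, e_res

# e.g. the exact square root of diag(4, 9) is diag(2, 3), so both errors vanish
ea, er = error_measures(np.diag([4.0, 9.0]), np.diag([2.0, 3.0]), 2)
```

The residual E_res is computable without knowing the exact root, which is why it is the measure reported for the larger test problems.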

Test 1
For the first experiment, consider a random matrix with positive eigenvalues. We estimate the pth root of this matrix for different values of p using the trapezoid rule. Furthermore, the Schur, eigenvalue, and Hessenberg factorizations are utilized to speed up the computations. Moreover, the residual error and the time used by the proposed method for the computation of the matrix pth root are compared for several values of p, the largest being 2012. The results are reported in Tables 1 and 2. It should be mentioned that the number of points in the trapezoid rule is kept fixed. As can be seen in the results, the Schur factorization has almost exactly the same error as MATLAB's algorithm. The Hessenberg factorization gives the best accuracy among the three methods; in fact, it is quicker than some algorithms in MATLAB. The trapezoid rule for the computation of the matrix pth root may be more effective than several other algorithms, but this depends on the spectrum of the matrix. This is consistent with the findings in [1] for the case of the square root.

Test 2
In this example we consider Random, Hilbert, and Lehmer matrices. We first fix the value of p and then use the trapezoid rule to compute the matrix pth root. The relation between the number of points in the trapezoid rule and the resulting absolute errors is investigated. In the implementation, p is fixed and the number of points is successively increased. A comparison of the errors for the various matrices is given in Table 3. It can be seen that the errors are reduced as the number of points in the trapezoid rule increases.

Test 3
In the last test, Random, Lehmer, and Hilbert matrices are considered. We have computed the 5th, 17th, 64th, and 128th roots of these matrices using the trapezoid rule and Smith's algorithm. Furthermore, in this example the absolute errors are estimated. As shown in the figures, the accuracy is measured in either the Frobenius norm or the 2-norm for the different matrices. The relations between the dimension of the matrix and the absolute errors, and also the time in seconds, are demonstrated in these figures. Figures 2–7 show the comparison of the residual errors and timings for these methods.
According to Figure 2, which has four parts, the absolute errors in all cases (p = 5, 17, 64, and 128) for the trapezoid rule are smaller than for Smith's method. In addition, Figure 3 illustrates that, except in one case, the computation of the pth root using Smith's method takes longer than the trapezoid rule. Figures 4 and 5 give the difference in error and in time between the trapezoid rule and Smith's method for the Hilbert matrix: the residual error for the trapezoid rule is large while that for Smith's algorithm is small, and, except in one case, Smith's method is more time consuming than the trapezoid rule. Finally, Figures 6 and 7 show the computation of the pth root of the Lehmer matrix, which in most cases reveals that the trapezoid rule is more expensive in both time and error than Smith's method for the number of points used. It must be mentioned that by increasing the number of points, considerably more accurate solutions can be obtained; for example, using 220 points in the trapezoid rule achieves a much smaller error in the last experiment.

5. Conclusion

In this paper, we have studied the use of the trapezoid rule in conjunction with the Cauchy integral theorem to compute the pth roots of matrices. It was found that the technique is feasible and accurate.

Acknowledgments

The authors would like to thank Dr. Anthony Kellems and the anonymous referees for their helpful comments which improved the presentation.