International Journal of Mathematics and Mathematical Sciences

Volume 2014, Article ID 613840, 8 pages

http://dx.doi.org/10.1155/2014/613840

## Inversion Free Algorithms for Computing the Principal Square Root of a Matrix

Nicholas Assimakis^{1} and Maria Adam^{2}

^{1}Department of Electronic Engineering, Technological Educational Institute of Central Greece, 3rd km Old National Road Lamia-Athens, 35100 Lamia, Greece

^{2}Department of Computer Science and Biomedical Informatics, University of Thessaly, 2-4 Papasiopoulou Street, 35100 Lamia, Greece

Received 13 February 2014; Revised 13 May 2014; Accepted 13 May 2014; Published 18 June 2014

Academic Editor: Ram U. Verma

Copyright © 2014 Nicholas Assimakis and Maria Adam. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

New algorithms are presented for computing the principal square root of an $n \times n$ matrix $A$. In particular, all the classical iterative algorithms require matrix inversion at every iteration. The proposed inversion free iterative algorithms are based on the Schulz iteration or on the Bernoulli substitution applied to a special case of the continuous time Riccati equation. It is shown that the proposed algorithms are equivalent to the classical Newton method. An inversion free algebraic method, which is based on applying the Bernoulli substitution to a special case of the continuous time Riccati equation, is also proposed.

#### 1. Introduction

Let $A$ be a real or complex $n \times n$ matrix with no eigenvalues on $\mathbb{R}^{-}$ (the closed negative real axis). Then there exists a unique matrix $X$ such that $X^{2} = A$ and the eigenvalues of $X$ lie in the open right half-plane. We refer to $X$ as the *principal square root* of $A$ and write $X = A^{1/2}$.

The computation of the principal square root of a matrix is a problem of great interest and practical importance with numerous applications in mathematical and engineering problems. Due to the importance of the problem, many iterative algorithms have been proposed and successfully employed for calculating the principal square root of a matrix [1–7] without seeking the eigenvalues and eigenvectors of the matrix; these algorithms require matrix inversion at every iteration. Blocked Schur algorithms for computing the matrix square root are proposed in [8], where the matrix is reduced to upper triangular form and a recurrence relation enables the square root of the triangular matrix to be computed a column or superdiagonal at a time. In this paper new inversion free iterative algorithms for computing the principal square root of a matrix are proposed.

The paper is organized as follows: in Section 2 a survey of classical iterative algorithms is presented; all the algorithms use matrix inversion in every iteration. In Section 3, the inversion free iterative algorithms are developed; the algorithms are derived by the Schulz iteration for computing the inversion of a matrix or by eliminating matrix inversion from the convergence criteria of the algorithms. In Section 4, an algebraic method for computing the principal square root of a matrix is presented; the method is associated with the solution of a special case of the continuous time Riccati equation, which arises in linear estimation [9] and requires only one matrix inversion (this is not an iterative method). Simulation results are presented in Section 5. Section 6 summarizes the conclusions.

#### 2. Inversion Use Iterative Algorithms

In this section, a survey of classical iterative algorithms, which use matrix inversion in every iteration, is presented. Among the algorithms which have been proposed for computing the principal square root of a matrix, the Newton methods have been studied for many years. Although convergence is proved, numerical instability has been observed [4]; hence several Newton variants have been derived. Newton methods have also been used for computing the $p$th root of a matrix [1, 2]. The fact that, for any positive integer $p$, a nonsingular and diagonalizable matrix has a $p$th root is stated in [10].

*Algorithm 1(a).* This algorithm is the classical Newton method [6]: $X_{k+1} = \frac{1}{2}\left(X_{k} + X_{k}^{-1} A\right)$, with initial condition $X_{0} = A$.
Note that we are also able to use the initial condition $X_{0} = I$.

Note that all the algorithms are applicable for $k = 0, 1, 2, \ldots$ and that convergence is achieved when $\| X_{k+1} - X_{k} \|_{2} < \varepsilon$, where $\{ X_{k} \}$ is the sequence that converges to the principal square root of $A$, $\varepsilon$ is a small positive number, and $\| \cdot \|_{2}$ denotes the spectral norm of the matrix (i.e., its largest singular value).
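As an illustration, the classical Newton iteration with the spectral norm stopping rule above can be sketched as follows; this is a minimal sketch, and the function name, tolerance, and test matrix are illustrative choices, not taken from the paper:

```python
import numpy as np

def newton_sqrtm(A, eps=1e-12, max_iter=100):
    """Classical Newton iteration X_{k+1} = (X_k + X_k^{-1} A) / 2, X_0 = A."""
    A = np.array(A, dtype=float)
    X = A.copy()
    for _ in range(max_iter):
        # X_k^{-1} A computed via a linear solve, without forming the inverse
        X_next = 0.5 * (X + np.linalg.solve(X, A))
        # spectral-norm stopping rule ||X_{k+1} - X_k||_2 < eps
        if np.linalg.norm(X_next - X, 2) < eps:
            return X_next
        X = X_next
    return X
```

For example, for $A = \begin{bmatrix} 33 & 24 \\ 48 & 57 \end{bmatrix}$ the iteration converges to $\begin{bmatrix} 5 & 2 \\ 4 & 7 \end{bmatrix}$, whose square is $A$ and whose eigenvalues (9 and 3) are positive.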

*Algorithm 2(a)*. This algorithm is a variant of the classical Newton method [7]
Note that we are also able to use the initial condition .

*Algorithm 3(a)*. This algorithm is a variant of the classical Newton method, where the principal square root of $A^{-1}$ is computed as well [6]

*Algorithm 4(a)*. This algorithm is proposed in [7]

*Algorithm 5(a)*. This algorithm is proposed in [5]

*Algorithm 6(a)*. It will be shown that the problem of computing the principal square root of a matrix is equivalent to the problem of solving a related Riccati equation. This algorithm takes advantage of this relation and is derived via the solution of the related Riccati equation.

The problem of computing the principal square root of a matrix is associated with a well-known problem of estimation theory, namely, the problem of solving a continuous time Riccati equation. The Riccati equation arises in linear estimation [3], namely, in the implementation of the Kalman-Bucy filter, and it is formulated as $\dot{P}(t) = F P(t) + P(t) F^{T} + Q - P(t) H^{T} R^{-1} H P(t)$, where $P(t)$ is the filtering error covariance matrix, $F$ and $H$ are the system dynamics and output matrices, respectively, and $Q$ and $R$ are the plant and measurement noise covariance matrices, respectively.

In the special case of the time invariant model where $F = 0$, $H = I$, $Q = A$, and $R = I$, and using the initial condition $P(0) = 0$, the Riccati equation of interest takes the following form: $\dot{P}(t) = A - P^{2}(t)$. It is well known [11, 12] that there exists a steady state solution $P$ of the Riccati equation.

The following statement is now obviously true: “The problem of computing the principal square root of a matrix $A$ is equivalent to the problem of solving the Riccati equation (7), the steady state solution of which is equal to the principal square root of the matrix $A$”.
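To make the equivalence concrete, here is a small numerical sketch: forward Euler integration of the special case $\dot{P} = A - P^{2}$, $P(0) = 0$, driven to steady state. The step size, tolerance, and function name are illustrative assumptions, not the paper's method:

```python
import numpy as np

def riccati_sqrtm(A, dt=0.01, eps=1e-10, max_steps=200_000):
    """Integrate dP/dt = A - P^2 with P(0) = 0 by forward Euler until the
    right-hand side vanishes; the steady state satisfies P^2 = A."""
    A = np.array(A, dtype=float)
    P = np.zeros_like(A)
    for _ in range(max_steps):
        dP = A - P @ P               # right-hand side of the Riccati ODE
        if np.linalg.norm(dP, 2) < eps:
            return P                 # steady state reached: P = A^{1/2}
        P = P + dt * dP              # forward Euler step
    return P
```

This is only a demonstration of the steady state claim; the iterative algorithms in the paper replace this slow numerical integration.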

The idea is to apply the Bernoulli substitution [11] to the special case of the continuous time Riccati equation (7): an integration free solution of (7) can be obtained using the Bernoulli substitution [11] $P(t) = Y(t) Z^{-1}(t)$. Substituting (8) into (7), we obtain the linear system $\dot{Y}(t) = A Z(t)$, $\dot{Z}(t) = Y(t)$. Then, by discretising the above equations, we have $Y_{k+1} = Y_{k} + A Z_{k}$ and $Z_{k+1} = Z_{k} + Y_{k}$, with $Y_{0} = 0$ and $Z_{0} = I$. Thus, by applying the Bernoulli substitution [11] to a special case of the continuous time Riccati equation, the following algorithm is derived: $P_{k} = Y_{k} Z_{k}^{-1}$. Note that the following variant uses iterations only for the first quantity involved.

*Algorithm 7(a)*. This algorithm is derived via the solution of the related Riccati equation using the doubling principle [12].

It is obvious that (10) and (11) can easily be written in a state space form as follows: $\begin{bmatrix} Y_{k+1} \\ Z_{k+1} \end{bmatrix} = \Phi \begin{bmatrix} Y_{k} \\ Z_{k} \end{bmatrix}$. Let us denote $\Phi = \begin{bmatrix} I & A \\ I & I \end{bmatrix}$. Then, using the doubling principle [12], that is, calculating the power $\Phi^{2^{k}}$ by repeated squaring, it is easy to prove that the following equations hold. Due to the form of the matrix $\Phi$ given in (16), it is obvious that the blocks of $\Phi^{2^{k}}$ are polynomials in $A$ with scalar coefficients, whose degrees are functions of the number of iterations and need not be specified; in particular, since all the blocks commute with each other and with $A$, $\Phi^{2^{k}}$ retains the block structure $\begin{bmatrix} B_{k} & A C_{k} \\ C_{k} & B_{k} \end{bmatrix}$, with $B_{k+1} = B_{k}^{2} + A C_{k}^{2}$ and $C_{k+1} = B_{k} C_{k} + C_{k} B_{k}$. Furthermore, it is easily seen from (15) and (16) that the state space equation $\begin{bmatrix} Y_{2^{k}} \\ Z_{2^{k}} \end{bmatrix} = \Phi^{2^{k}} \begin{bmatrix} Y_{0} \\ Z_{0} \end{bmatrix}$ holds; then it is obvious that $P_{2^{k}} = Y_{2^{k}} Z_{2^{k}}^{-1} = A C_{k} B_{k}^{-1}$. Thus, by using the doubling principle [12], the following algorithm is derived.

*Algorithm 8(a)*. In the following, using (22) and (26)–(28), we derive
with initial condition
Note that we are obliged to use the initial condition in (31), which is derived from (26)–(28), instead of using the initial condition in (28), due to the fact that (30) requires a matrix inversion and the matrix involved is singular at the first step.

At this point, we are able to use (30) with the appropriate initial condition, resulting in the same sequence as in (26)–(28).

Then, it is obvious that the algorithm in (26)–(28) can be written in the following equivalent form:

*Remark 1.* All the above algorithms are equivalent to each other. In fact, the following relations hold between the quantities involved in the above algorithms:
The proof is derived by induction and is trivial.

The relationship between the Riccati equation and the principal square root algorithms is thus established: it is shown that the algorithm derived by solving a special case of the continuous time Riccati equation is equivalent to the classical Newton method.

#### 3. Inversion Free Iterative Algorithms

In this section, the inversion free iterative algorithms are developed by using the Schulz iteration for computing the inversion of a matrix or by eliminating matrix inversion from the convergence criteria of the algorithms.

*Algorithm 1(b)*. This algorithm is an inversion free variant of the Newton method in Algorithm 1(a); the basic idea is to replace the computation of $X_{k}^{-1}$ in Algorithm 1(a) by the Schulz iteration for the matrix inverse as described in [13]: $Y_{j+1} = Y_{j}\left(2 I - X_{k} Y_{j}\right)$, with $Y_{0} = X_{k}^{T} / \left(\| X_{k} \|_{1} \| X_{k} \|_{\infty}\right)$,
where $\| \cdot \|_{\infty}$ denotes the infinity norm of the matrix.
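The Schulz iteration itself can be sketched as follows; this is a minimal sketch with illustrative names, using the standard initialization $Y_{0} = M^{T} / (\| M \|_{1} \| M \|_{\infty})$, which guarantees convergence:

```python
import numpy as np

def schulz_inverse(M, eps=1e-12, max_iter=100):
    """Schulz iteration Y_{j+1} = Y_j (2I - M Y_j), converging to M^{-1}
    using only matrix additions and multiplications."""
    M = np.array(M, dtype=float)
    n = M.shape[0]
    I = np.eye(n)
    # standard starting guess: Y_0 = M^T / (||M||_1 * ||M||_inf)
    Y = M.T / (np.linalg.norm(M, 1) * np.linalg.norm(M, np.inf))
    for _ in range(max_iter):
        Y_next = Y @ (2 * I - M @ Y)
        if np.linalg.norm(Y_next - Y, np.inf) < eps:
            return Y_next
        Y = Y_next
    return Y
```

Each step costs two matrix multiplications, so substituting the inner inversion of any of Algorithms 1(a)–5(a) by this loop removes explicit inversion at the price of extra multiplications.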

*Algorithm 2(b)*. This algorithm is an inversion free variant of Algorithm 2(a), derived using the Schulz iteration idea

*Algorithm 3(b)*. This algorithm is an inversion free variant of Algorithm 3(a), derived using the Schulz iteration idea

*Algorithm 4(b)*. This algorithm is an inversion free variant of Algorithm 4(a), derived using the Schulz iteration idea

*Algorithm 5(b)*. This algorithm is an inversion free variant of Algorithm 5(a), derived using the Schulz iteration idea

*Algorithm 6(b)*. This algorithm is an inversion free variant of Algorithm 6(a), derived using another approach to eliminate the matrix inversion requirement
The convergence criterion depends on the quantity
It is easy to prove that
Hence, if , then ; thus there exists that can be calculated off-line.

Finally, we observe that, if all the eigenvalues of $A$ lie inside the unit circle, then the quantity tends to zero as $k$ tends to infinity. Furthermore, if the eigenvalues of $A$ lie outside the unit circle, then we are able to use the algorithm to calculate the principal square root of $A^{-1}$, from which we are able to find the principal square root of $A$ (we need only two matrix inversions).

*Algorithm 7(b)*. This algorithm is an inversion free variant of Algorithm 7(a), derived using another approach to eliminate the matrix inversion requirement
Note that the following variant uses another convergence criterion
The convergence criterion depends on the quantity
It is easy to prove that
Hence, if , then ; thus there exists that can be calculated off-line.

Finally, we observe that, if all the eigenvalues of $A$ lie inside the unit circle, then the quantity tends to zero as $k$ tends to infinity. Furthermore, if the eigenvalues of $A$ lie outside the unit circle, then we are able to use the algorithm to calculate the principal square root of $A^{-1}$, from which we are able to find the principal square root of $A$ (we need only two matrix inversions).

*Algorithm 8(b)*. This algorithm is an inversion free variant of Algorithm 8(a), derived using the Schulz iteration idea:
All the inversion use and inversion free iterative algorithms are summarized in Table 1.

*Remark 2. * All the algorithms presented in Section 3 are inversion free algorithms; they do not require matrix inversion at every iteration. The elimination of the matrix inversion requirement can be achieved using the Schulz iteration.

Algorithms 6(b) and 7(b) use another technique to eliminate the matrix inversion requirement. These two algorithms require only one matrix inversion, after the convergence of the algorithms in order to compute the final result. It is obvious that this matrix inversion can be substituted by the Schulz iteration.

Algorithm 8(b) uses only one matrix inversion. It is obvious that this matrix inversion can be substituted by the Schulz iteration.

Note that all the proposed inversion free iterative algorithms compute the principal square root of an $n \times n$ matrix with complexity of the order $O(n^{3})$ per iteration, due to the fact that the proposed inversion free algorithms require only matrix additions and matrix multiplications. The algorithms avoid possible problems in matrix inversion (at every iteration).

#### 4. Algebraic Method

In this section an algebraic method for computing the principal square root of a matrix is presented. The method is associated with the solution of a special case of the continuous time Riccati equation which arises in linear estimation [9] and requires only one matrix inversion (this is not an iterative method).

We are able to solve the Riccati equation (7) using the algebraic method proposed in [14–17]. In fact, the following matrix is formed: $\Phi = \begin{bmatrix} 0 & A \\ I & 0 \end{bmatrix}$. Then, $\Phi$ can be written as $\Phi = W D W^{-1}$, where $D$ is a block-diagonal matrix containing the eigenvalues of $\Phi$, with the diagonal block $D_{1}$ holding all the eigenvalues of $\Phi$ lying in the right half-plane, and $W$ is the matrix containing the corresponding eigenvectors of $\Phi$.

Then, the solution of the Riccati equation is given in terms of the eigenvectors of the matrix $\Phi$: partitioning the eigenvectors corresponding to the right half-plane eigenvalues as $\begin{bmatrix} W_{1} \\ W_{2} \end{bmatrix}$, the steady state solution is formulated as $P = W_{1} W_{2}^{-1}$. The method requires only one matrix inversion in order to compute the final result (this is not an iterative method). It is obvious that this matrix inversion can be substituted by the Schulz iteration.
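A sketch of this algebraic method, under the assumptions that the block matrix is $\begin{bmatrix} 0 & A \\ I & 0 \end{bmatrix}$ (its eigenvalues are the square roots of the eigenvalues of $A$) and that $A$ is diagonalizable; the function name is illustrative:

```python
import numpy as np

def algebraic_sqrtm(A):
    """Eigendecompose Phi = [[0, A], [I, 0]]: if Phi [u; v] = lam [u; v],
    then u = lam v and A v = lam^2 v. Keeping the eigenvectors whose
    eigenvalues lie in the right half-plane and stacking them as [U; V]
    gives A^{1/2} = U V^{-1}."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    Phi = np.block([[np.zeros((n, n)), A],
                    [np.eye(n), np.zeros((n, n))]])
    w, vecs = np.linalg.eig(Phi)           # eigenvalues come in +/- pairs
    idx = np.where(w.real > 0)[0]          # right half-plane eigenvalues
    U, V = vecs[:n, idx], vecs[n:, idx]    # stacked eigenvector blocks
    X = U @ np.linalg.inv(V)               # the single matrix inversion
    return X.real if np.allclose(X.imag, 0) else X
```

The single inversion of $V$ is the one inversion mentioned above; for a real $A$ with complex eigenvalue pairs the result is made real again once the imaginary part is negligible.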

We are also able to compute all the square roots (not only the principal square root) of a matrix, using the ideas in [14–16].

#### 5. Simulation Results

Simulation results are given to illustrate the efficiency of the proposed methods. The proposed algorithms compute accurate solutions, as verified through the following simulation examples.

All the algorithms are considered inversion free iterative algorithms when the Moore-Penrose pseudoinverse is applied.

*Example 3. *Consider the 2 × 2 symmetric and positive definite matrix
All the inversion use and inversion free iterative algorithms have been applied with convergence criterion and compute the same principal square root of :
The proposed algebraic method has also been applied and computes the accurate principal square root of .

*Example 4. *Consider the matrix
All the iterative algorithms have been applied and compute the same principal square root of :
Algorithm 7(b) does not converge to the principal square root of $A$. However, the algorithm has been used to calculate the principal square root of $A^{-1}$, from which we are able to find the principal square root of $A$ (we need only two matrix inversions).

The proposed algebraic method has also been applied and computes the accurate principal square root of .

*Example 5. *This example is taken from [2]. Consider the matrix
All the iterative algorithms have been applied and compute the same principal square root of :
Algorithm is not stable: it converges to the correct value of the principal square root after 7 iterations, but after 144 iterations it begins to diverge and then, after 161 iterations, it converges to a value that is not the principal square root.

The proposed algebraic method has also been applied and computes the accurate principal square root of .

*Example 6. *Consider the matrix
All the iterative algorithms have been applied and compute the same principal square root of :
Algorithms 6(a), 6(b), 7(a), and 7(b) are not stable.

The proposed algebraic method has also been applied and computes the accurate principal square root of .

*Example 7. *Consider the matrix
All the iterative algorithms have been applied and compute the same principal square root of $A$:
Algorithm does not converge to the principal square root of $A$. However, it has been used to calculate the principal square root of $A^{-1}$, from which we are able to find the principal square root of $A$ (we need only two matrix inversions).

The proposed algebraic method has also been applied and computes the accurate principal square root of $A$.

We are able to conclude that Algorithms 6 and 7 are not always stable. Note that if any algorithm is not stable, we are able to try to compute the principal square root of $A^{-1}$, from which we are able to find the principal square root of $A$ (we need only two matrix inversions). Finally, the proposed algebraic method has also been applied and computes the accurate principal square root of $A$.
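The fallback described above can be sketched as a small wrapper; `sqrtm_via_inverse` and its `sqrtm` argument are illustrative names, where `sqrtm` stands for any of the square root algorithms, applied here to $A^{-1}$:

```python
import numpy as np

def sqrtm_via_inverse(A, sqrtm):
    """If an algorithm fails on A but converges for inv(A), compute the
    principal square root of inv(A) and invert the result: only two matrix
    inversions are needed, since (A^{-1})^{1/2} = (A^{1/2})^{-1}."""
    A = np.array(A, dtype=float)
    return np.linalg.inv(sqrtm(np.linalg.inv(A)))
```

The identity $(A^{-1})^{1/2} = (A^{1/2})^{-1}$ holds because both sides square to $A^{-1}$ and have their eigenvalues in the open right half-plane.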

#### 6. Conclusions

In this paper, a survey of classical iterative algorithms for computing the principal square root of a matrix is presented; these algorithms require matrix inversion at every iteration. New algorithms for computing the principal square root of a matrix are proposed. The novelty of this work is the derivation of inversion free algorithms. The elimination of the matrix inversion requirement can be achieved by using the Schulz iteration. The proposed algorithms are derived by using the Newton method or by applying the Bernoulli substitution (in a special case of the continuous time Riccati equation) or by using the doubling principle. An inversion free algebraic algorithm is also derived.

Note that, among the proposed algorithms, the per step iterative algorithm, the doubling iterative algorithm, and the algebraic algorithm are associated with the solution of a special case of the continuous time Riccati equation, which arises in linear estimation. The relations between these algorithms and the classical Newton method are established.

All the proposed inversion free iterative algorithms compute the principal square root of an $n \times n$ matrix achieving $O(n^{3})$ complexity per iteration, due to the fact that the proposed inversion free algorithms require only matrix additions and matrix multiplications. The algorithms avoid matrix inversion. This means that they avoid possible problems in matrix inversion (at every iteration) and that they are easily programmable.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### References

- [1] D. A. Bini, N. J. Higham, and B. Meini, “Algorithms for the matrix $p$th root,” *Numerical Algorithms*, vol. 39, no. 4, pp. 349–378, 2005.
- [2] C.-H. Guo and N. J. Higham, “A Schur-Newton method for the matrix $p$th root and its inverse,” *SIAM Journal on Matrix Analysis and Applications*, vol. 28, no. 3, pp. 788–804, 2006.
- [3] N. J. Higham, “Newton's method for the matrix square root,” *Mathematics of Computation*, vol. 46, no. 174, pp. 537–549, 1986.
- [4] P. Laasonen, “On the iterative solution of the matrix equation $A{X}^{2}-I=0$,” *Mathematical Tables and Other Aids to Computation*, vol. 12, pp. 109–116, 1958.
- [5] D. G. Lainiotis, N. D. Assimakis, and S. K. Katsikas, “Fast and stable algorithm for computing the principal square root of a complex matrix,” *Neural, Parallel & Scientific Computations*, vol. 1, no. 4, pp. 467–476, 1993.
- [6] L. S. Shieh, S. R. Lian, and B. C. McInnis, “Fast and stable algorithms for computing the principal square root of a complex matrix,” *IEEE Transactions on Automatic Control*, vol. 32, no. 9, pp. 820–822, 1987.
- [7] J. S. H. Tsai, L. S. Shieh, and R. E. Yates, “Fast and stable algorithms for computing the principal $n$th root of a complex matrix and the matrix sector function,” *Computers & Mathematics with Applications*, vol. 15, no. 11, pp. 903–913, 1988.
- [8] E. Deadman, N. J. Higham, and R. Ralha, “Blocked Schur algorithms for computing the matrix square root,” in *Proceedings of the 11th International Conference on Applied Parallel and Scientific Computing (PARA '12)*, vol. 7782 of *Lecture Notes in Computer Science*, pp. 171–182, Springer, 2013.
- [9] R. E. Kalman and R. S. Bucy, “New results in linear filtering and prediction theory,” *Journal of Basic Engineering, Transactions of the ASME D*, vol. 83, pp. 95–107, 1961.
- [10] B. Yuttanan and C. Nilrat, “Roots of matrices,” *Songklanakarin Journal of Science and Technology*, vol. 27, no. 3, pp. 659–665, 2005.
- [11] C. S. Kenney and R. B. Leipnik, “Numerical integration of the differential matrix Riccati equation,” *IEEE Transactions on Automatic Control*, vol. 30, no. 10, pp. 962–970, 1985.
- [12] D. G. Lainiotis, “Partitioned Riccati solutions and integration-free doubling algorithms,” *IEEE Transactions on Automatic Control*, vol. 21, no. 5, pp. 677–689, 1976.
- [13] X. Zhan, “Computing the extremal positive definite solutions of a matrix equation,” *SIAM Journal on Scientific Computing*, vol. 17, no. 5, pp. 1167–1174, 1996.
- [14] M. Adam and N. Assimakis, *Matrix Equations Solutions Using Riccati Equation Theory and Applications*, LAMBERT Academic Publishing, 2012.
- [15] M. Adam, N. Assimakis, G. Tziallas, and F. Sanida, “Riccati equation solution method for the computation of the solutions of $X+{A}^{T}{X}^{-1}A=Q$ and $X-{A}^{T}{X}^{-1}A=Q$,” *The Open Applied Informatics Journal*, vol. 3, pp. 22–33, 2009.
- [16] N. Assimakis and M. Adam, “Iterative and algebraic algorithms for the computation of the steady state Kalman filter gain,” *ISRN Applied Mathematics*, vol. 2014, Article ID 417623, 10 pages, 2014.
- [17] D. R. Vaughan, “A nonrecursive algebraic solution for the discrete time Riccati equation,” *IEEE Transactions on Automatic Control*, vol. 15, no. 5, pp. 597–599, 1970.