Special Issue: Numerical Methods of Complex Valued Linear Algebraic System

Research Article | Open Access


Chun-Mei Li, Shu-Qian Shen, "Newton’s Method for the Matrix Nonsingular Square Root", Journal of Applied Mathematics, vol. 2014, Article ID 267042, 7 pages, 2014. https://doi.org/10.1155/2014/267042

# Newton’s Method for the Matrix Nonsingular Square Root

Accepted: 10 Aug 2014
Published: 31 Aug 2014

#### Abstract

Two new algorithms are proposed to compute the nonsingular square root of a matrix $A$. Convergence theorems and a stability analysis for these new algorithms are given. Numerical results show that the new algorithms are feasible and effective.

#### 1. Introduction

Consider the following nonlinear matrix equation:
$$X^2 = A, \tag{1}$$
where $A$ is an $n \times n$ nonsingular complex matrix. A solution $X$ of (1) is called a square root of $A$. Matrix square roots have many applications, for example, in boundary value problems [1] and in the computation of the matrix logarithm [2, 3].

In the last few years there has been a constantly increasing interest in developing the theory and numerical methods for matrix square roots. Results on the existence and uniqueness of the matrix square root can be found in [4–6]. Here, it is worthwhile to point out that any nonsingular matrix has a square root, and the square root is also nonsingular [4]. A number of methods have been proposed for computing the square root of a matrix [5, 7–16]. These computational methods can be broadly separated into two classes. The first class consists of the so-called direct methods, for example, the Schur algorithm developed by Björck and Hammarling [7]. The second class consists of iterative methods. Matrix iterations $X_{k+1} = g(X_k)$, where $g$ is a polynomial or a rational function, are attractive alternatives for computing square roots [9, 11–13, 15, 17]. A well-known iterative method for computing the matrix square root is Newton's method, which has nice numerical behavior, for example, quadratic convergence. Newton's method for solving (1) was proposed in [18]. Later, some simplified Newton methods were developed in [11, 19, 20]. Unfortunately, these simplified Newton methods have poor numerical stability.

In this paper, we propose two new algorithms to compute the nonsingular square root of a matrix, both of which have good numerical stability. In particular, we apply the Samanskii technique proposed in [21] to the computation of the matrix square root. Convergence theorems and stability analysis for the new algorithms are given in Sections 3 and 4. In Section 5, we use numerical examples to show that the new algorithms are more effective than the known ones in some aspects. Final conclusions are given in Section 6.

#### 2. Two New Algorithms

In order to compute the square root of a matrix $A$, a natural approach is to apply Newton's method to (1), which can be stated as follows.

Algorithm 1 (Newton's method for (1); see [11, 19]). We consider the following.
Step  0. Given $X_0$ and $\varepsilon > 0$, set $k = 0$.
Step  1. Let $R_k = A - X_k^2$. If $\|R_k\| < \varepsilon$, stop.
Step  2. Solve for $H_k$ in the Sylvester equation:
$$X_k H_k + H_k X_k = A - X_k^2. \tag{2}$$
Step  3. Update $X_{k+1} = X_k + H_k$, $k = k + 1$, and go to Step 1.
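The steps of Algorithm 1 can be sketched in Python/NumPy as follows. The Sylvester correction equation is the one in Step 2; the test matrix, tolerance, and the choice $X_0 = A$ are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def newton_sqrt(A, X0, eps=1e-12, maxit=50):
    """Newton's method for X^2 = A (a sketch of Algorithm 1).

    Each step solves the Sylvester equation
        X_k H_k + H_k X_k = A - X_k^2
    for the correction H_k.
    """
    X = X0.copy()
    for _ in range(maxit):
        R = A - X @ X                      # residual of X^2 = A
        if np.linalg.norm(R) < eps * np.linalg.norm(A):
            break
        H = solve_sylvester(X, X, R)       # solves X H + H X = R
        X = X + H
    return X

A = np.array([[4.0, 1.0], [1.0, 3.0]])
X = newton_sqrt(A, X0=A)                   # X0 = A: a common starting choice
print(np.linalg.norm(X @ X - A))           # small residual
```

Since $X_0 = A$ commutes with $A$ here, the full Newton iteration stays in the commutative algebra generated by $A$ and converges to the principal square root for this positive definite example.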

Applying the standard local convergence theorem [19, p. 148] to Algorithm 1, we deduce that the sequence $\{X_k\}$ generated by Algorithm 1 converges quadratically to a square root $X_*$ of $A$ if the starting matrix $X_0$ is sufficiently close to $X_*$.

In this paper, we propose two new algorithms to compute the nonsingular square root of the matrix $A$. Our idea can be stated as follows. If (1) has a nonsingular solution $X$, then we can transform (1) into the equivalent nonlinear matrix equation
$$F(X) := X - A X^{-1} = 0. \tag{3}$$
Then we apply Newton's method to (3) to compute the nonsingular square root of $A$.

By the definition of F-differentiability and some simple calculations, we obtain that if the matrix $X$ is nonsingular, then the mapping $F$ is F-differentiable at $X$ with
$$F'(X)H = H + A X^{-1} H X^{-1}. \tag{4}$$
Thus Newton's method for (3) can be written as
$$X_{k+1} = X_k - \left[F'(X_k)\right]^{-1} F(X_k), \quad k = 0, 1, 2, \ldots. \tag{5}$$
Combining (4), the iteration (5) is equivalent to the following.

Algorithm 2 (Newton's method for (3)). We consider the following.
Step  0. Given $X_0$ and $\varepsilon > 0$, set $k = 0$.
Step  1. Let $F(X_k) = X_k - A X_k^{-1}$. If $\|F(X_k)\| < \varepsilon$, stop.
Step  2. Solve for $H_k$ in the generalized Sylvester equation:
$$H_k + A X_k^{-1} H_k X_k^{-1} = A X_k^{-1} - X_k. \tag{6}$$
Step  3. Update $X_{k+1} = X_k + H_k$, $k = k + 1$, and go to Step 1.
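A minimal sketch of Algorithm 2, taking the Newton correction equation for $F(X) = X - AX^{-1}$ to be $H_k + A X_k^{-1} H_k X_k^{-1} = A X_k^{-1} - X_k$. The generalized Sylvester equation is solved here by explicit Kronecker vectorization, which is only an illustration for tiny matrices; it is not the solver the paper recommends (the algorithms of [22] would be used in practice). The test matrix and starting value are placeholder choices.

```python
import numpy as np

def newton_sqrt_inv(A, X0, eps=1e-12, maxit=50):
    """Sketch of Algorithm 2: Newton's method for F(X) = X - A X^{-1} = 0.

    The correction equation  H + A X^{-1} H X^{-1} = A X^{-1} - X  is solved
    via vec(P H Q) = (Q^T kron P) vec(H) with column-major (Fortran) stacking.
    Cost is O(n^6) per step, so this is for illustration only.
    """
    n = A.shape[0]
    X = X0.copy()
    for _ in range(maxit):
        Xinv = np.linalg.inv(X)
        Fx = X - A @ Xinv
        if np.linalg.norm(Fx) < eps * np.linalg.norm(A):
            break
        P = A @ Xinv                               # equation: H + P H Xinv = -F(X)
        M = np.eye(n * n) + np.kron(Xinv.T, P)     # matrix of the linear map
        h = np.linalg.solve(M, (-Fx).flatten(order="F"))
        X = X + h.reshape((n, n), order="F")
    return X

A = np.array([[4.0, 1.0], [1.0, 3.0]])
X = newton_sqrt_inv(A, X0=A)
print(np.linalg.norm(X @ X - A))
```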

By applying the Samanskii technique [21] to Newton's method (5), we obtain the following algorithm.

Algorithm 3 (Newton's method for (3) with the Samanskii technique). We consider the following.
Step  0. Given $X_0$, $\varepsilon > 0$, and an integer $m \ge 0$, set $k = 0$.
Step  1. Let $F(X_k) = X_k - A X_k^{-1}$. If $\|F(X_k)\| < \varepsilon$, stop.
Step  2. Let $Y_{k,0} = X_k$, $i = 0$.
Step  3. If $i = m + 1$, go to Step 6.
Step  4. Solve for $H_{k,i}$ in the generalized Sylvester equation:
$$H_{k,i} + A X_k^{-1} H_{k,i} X_k^{-1} = A Y_{k,i}^{-1} - Y_{k,i}.$$
Step  5. Update $Y_{k,i+1} = Y_{k,i} + H_{k,i}$, $i = i + 1$, and go to Step 3.
Step  6. Update $X_{k+1} = Y_{k,m+1}$, $k = k + 1$, and go to Step 1.
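A sketch of this idea, assuming the standard form of the Samanskii technique: the Jacobian of $F(X) = X - AX^{-1}$ is frozen at $X_k$ and reused for $m$ extra inner corrections, so each extra step costs only one additional linear solve with the same coefficient matrix. The Kronecker solve, test matrix, and starting value are illustrative assumptions.

```python
import numpy as np

def newton_samanskii_sqrt(A, X0, m=1, eps=1e-12, maxit=50):
    """Sketch of Algorithm 3: Newton step plus m Samanskii corrections.

    The coefficient matrix M (the Jacobian of F(X) = X - A X^{-1} at X_k,
    in Kronecker form) is built once per outer step and reused for every
    inner correction.
    """
    n = A.shape[0]
    X = X0.copy()
    I = np.eye(n * n)
    for _ in range(maxit):
        Xinv = np.linalg.inv(X)
        if np.linalg.norm(X - A @ Xinv) < eps * np.linalg.norm(A):
            break
        M = I + np.kron(Xinv.T, A @ Xinv)          # Jacobian frozen at X_k
        Y = X
        for _ in range(m + 1):                     # Newton step + m corrections
            Fy = Y - A @ np.linalg.inv(Y)          # F evaluated at inner iterate
            h = np.linalg.solve(M, (-Fy).flatten(order="F"))
            Y = Y + h.reshape((n, n), order="F")
        X = Y
    return X

A = np.array([[4.0, 1.0], [1.0, 3.0]])
X = newton_samanskii_sqrt(A, X0=A, m=1)
print(np.linalg.norm(X @ X - A))
```

With $m = 1$ each outer step performs two solves with one Jacobian, which is what yields the cubic convergence rate claimed below at roughly the cost of one extra triangular solve per step.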

Remark 4. In this paper, we only consider the case that $m = 1$. If $m = 0$, then Algorithm 3 reduces to Algorithm 2.

Remark 5. Iteration (5) is more suitable for theoretical analysis, such as the convergence theorems and stability analysis in Sections 3 and 4, while Algorithms 2 and 3 are more convenient for the numerical computations in Section 5. In actual computations, the generalized Sylvester equation may be solved by the algorithms developed in [22].

Although Algorithms 2 and 3 are also Newton methods, they are more effective than Algorithm 1. In particular, Algorithm 3 with $m = 1$ has a cubic convergence rate.

#### 3. Convergence Theorems

In this section, we establish local convergence theorems for Algorithms 2 and 3. We begin with some lemmas.

Lemma 6 (see [23, p. 21]). Let $G$ be a (nonlinear) operator from a Banach space $E$ into itself and let $x^* \in E$ be a solution of $x = G(x)$. If $G$ is Fréchet differentiable at $x^*$ with $\rho(G'(x^*)) < 1$, then the iteration $x_{k+1} = G(x_k)$, $k = 0, 1, \ldots$, converges to $x^*$, provided that $x_0$ is sufficiently close to $x^*$.

Lemma 7 (see [17, p. 45]). Let $A, B \in \mathbb{C}^{n \times n}$ and assume that $A$ is invertible with $\|A^{-1}\| \le \alpha$. If $\|A - B\| \le \beta$ and $\alpha\beta < 1$, then $B$ is also invertible, and
$$\|B^{-1}\| \le \frac{\alpha}{1 - \alpha\beta}.$$
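The bound of Lemma 7, in the standard Banach-perturbation form stated above, can be checked numerically; the matrices below are arbitrary illustrative choices.

```python
import numpy as np

# Numerical illustration of Lemma 7: if A is invertible with ||A^{-1}|| <= alpha,
# ||A - B|| <= beta, and alpha*beta < 1, then B is invertible and
# ||B^{-1}|| <= alpha / (1 - alpha*beta).  Norms are spectral (2-)norms.
A = 2.0 * np.eye(2)                        # ||A^{-1}||_2 = 0.5
B = np.array([[2.0, 0.4], [0.0, 2.0]])     # ||A - B||_2 = 0.4

alpha = np.linalg.norm(np.linalg.inv(A), 2)
beta = np.linalg.norm(A - B, 2)
assert alpha * beta < 1                     # hypothesis of the lemma

bound = alpha / (1 - alpha * beta)
actual = np.linalg.norm(np.linalg.inv(B), 2)
print(actual, "<=", bound)
```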

Lemma 8. If the matrix $X_*$ is nonsingular, then there exist $\delta > 0$ and $L > 0$ such that, for all $X, Y \in \bar{B}(X_*, \delta)$, it holds that
$$\|F'(X) - F'(Y)\| \le L \|X - Y\|,$$
where $F'(X)$ and $F'(Y)$ are the F-derivatives, given by (4), of the mapping $F$ at $X$ and $Y$.

Proof. Let , and select .
From Lemma 7 it follows that is nonsingular and for each . Then is well defined, and so is , where . According to (4), we have where .
Hence, we have

Theorem 9. If (3) has a nonsingular solution $X_*$ and the mapping $F'(X_*)$ is invertible, then there exists a closed ball $\bar{B}(X_*, r)$ such that, for all $X_0 \in \bar{B}(X_*, r)$, the sequence $\{X_k\}$ generated by Algorithm 2 converges at least quadratically to the solution $X_*$.

Proof. Let . By the Taylor formula in Banach spaces [24, p. 67], we have
Hence, the F-derivative of at is 0. By Lemma 6, we deduce that the sequence generated by the iteration (5) converges to . It follows that the sequence generated by Algorithm 2 converges to .
Let ; according to and Lemma 7, for large enough we have By Lemma 8, we have
By making use of the Taylor formula once again, for all , we have Hence, Combining (13)–(16), we conclude that which implies that the sequence generated by Algorithm 2 converges at least quadratically to the solution .

Theorem 10. If (3) has a nonsingular solution $X_*$ and the mapping $F'(X_*)$ is invertible, then there exists a closed ball $\bar{B}(X_*, r)$ such that, for all $X_0 \in \bar{B}(X_*, r)$, the sequence $\{X_k\}$ generated by Algorithm 3 converges at least cubically to the solution $X_*$.

Proof. Let . By the Taylor formula in Banach spaces [24, p. 67], we have
Hence, the F-derivative of at is 0. By Lemma 6, we deduce that the sequence generated by iteration (5) converges to . It follows that the sequence generated by Algorithm 3 converges to .
Let ; according to and Lemma 7, for large enough we have By Lemma 8, we have
By making use of the Taylor formula once again, for all , we have Hence, Combining (19)–(22) and Theorem 9, we obtain where . Therefore, the sequence generated by Algorithm 3 converges at least cubically to the solution .

#### 4. Stability Analysis

In accordance with [2], we define an iteration $X_{k+1} = g(X_k)$ to be stable in a neighborhood of a solution $X_*$ if the error matrices $E_k = X_k - X_*$ satisfy
$$E_{k+1} = \mathcal{L}(E_k) + O(\|E_k\|^2), \tag{24}$$
where $\mathcal{L}$ is a linear operator with bounded powers; that is, there exists a constant $c > 0$ such that, for all $N > 0$ and arbitrary $E$ of unit norm, $\|\mathcal{L}^N(E)\| \le c$. This means that a small perturbation introduced in a certain step will not be amplified in the subsequent iterations.

Note that this definition of stability is an asymptotic property and is different from the usual concept of numerical stability, which concerns the global error propagation, aiming to bound the minimum relative error over the computed iterates.

Now we consider the iteration (5) and define the error matrix $E_k = X_k - X_*$; that is,
$$X_k = X_* + E_k. \tag{25}$$
For the sake of simplicity, we perform a first order error analysis; that is, we omit all terms that are quadratic in the errors. Equality up to second order terms is denoted by the symbol $\doteq$.

Substituting (25) into (5), combining (4), omitting all terms that are quadratic in the errors, and using $X_*^2 = A$, we obtain
$$E_{k+1} \doteq 0,$$
which means that iteration (5) is self-correcting; that is to say, the error in the $k$th iteration does not propagate to the $(k+1)$st iteration. In particular, when $X_*$ and $-X_*$ have no eigenvalue in common, the corresponding matrix equation has a unique solution [17, p. 194]. Therefore, under this condition, the iteration (5) has optimal stability; that is, the operator $\mathcal{L}$ defined in (24) coincides with the null operator.
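The self-correcting behavior can be observed numerically: after converging to a square root, inject a small perturbation and watch a single further step of iteration (5) shrink the error to second order. The Kronecker-based step, test matrix, and perturbation below are illustrative assumptions.

```python
import numpy as np

def newton_step(A, X):
    """One step of iteration (5) for F(X) = X - A X^{-1} (Kronecker-solve sketch)."""
    n = A.shape[0]
    Xinv = np.linalg.inv(X)
    M = np.eye(n * n) + np.kron(Xinv.T, A @ Xinv)
    h = np.linalg.solve(M, (A @ Xinv - X).flatten(order="F"))
    return X + h.reshape((n, n), order="F")

A = np.array([[4.0, 1.0], [1.0, 3.0]])
X = A.copy()
for _ in range(30):                        # converge to a square root X*
    X = newton_step(A, X)

E = 1e-4 * np.array([[1.0, -2.0], [0.5, 1.0]])
Xp = X + E                                 # inject a perturbation of size ~1e-4
err_before = np.linalg.norm(Xp - X)
err_after = np.linalg.norm(newton_step(A, Xp) - X)
print(err_before, err_after)               # the error shrinks to O(err_before^2)
```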

#### 5. Numerical Examples

In this section, we compare our algorithms with the following.

Algorithm 11 (the Denman–Beavers iteration [9]). Consider
$$Y_0 = A, \quad Z_0 = I,$$
$$Y_{k+1} = \frac{1}{2}\left(Y_k + Z_k^{-1}\right), \quad Z_{k+1} = \frac{1}{2}\left(Z_k + Y_k^{-1}\right), \quad k = 0, 1, 2, \ldots,$$
where $Y_k \to A^{1/2}$ and $Z_k \to A^{-1/2}$.
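The coupled Denman–Beavers iteration of [9] can be sketched as follows; the test matrix and tolerance are illustrative choices.

```python
import numpy as np

def denman_beavers(A, eps=1e-12, maxit=50):
    """Denman-Beavers iteration:
        Y_0 = A, Z_0 = I,
        Y_{k+1} = (Y_k + Z_k^{-1}) / 2,
        Z_{k+1} = (Z_k + Y_k^{-1}) / 2,
    with Y_k -> A^{1/2} and Z_k -> A^{-1/2}.
    """
    Y, Z = A.copy(), np.eye(A.shape[0])
    for _ in range(maxit):
        # tuple assignment keeps the update simultaneous: inv(Y) uses the old Y
        Y, Z = (Y + np.linalg.inv(Z)) / 2, (Z + np.linalg.inv(Y)) / 2
        if np.linalg.norm(Y @ Y - A) < eps * np.linalg.norm(A):
            break
    return Y

A = np.array([[4.0, 1.0], [1.0, 3.0]])
Y = denman_beavers(A)
print(np.linalg.norm(Y @ Y - A))
```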

Algorithm 12 (the scaled Denman–Beavers iteration [13]). Consider
$$Y_{k+1} = \frac{1}{2}\left(\gamma_k Y_k + \gamma_k^{-1} Z_k^{-1}\right), \quad Z_{k+1} = \frac{1}{2}\left(\gamma_k Z_k + \gamma_k^{-1} Y_k^{-1}\right),$$
with $Y_0 = A$, $Z_0 = I$, and the scaling factor $\gamma_k = |\det(Y_k)\det(Z_k)|^{-1/(2n)}$.
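A sketch of the scaled variant, assuming the determinantal scaling $\gamma_k = |\det(Y_k)\det(Z_k)|^{-1/(2n)}$ associated with the scaled iteration of [13]; the test matrix is again an illustrative choice.

```python
import numpy as np

def scaled_denman_beavers(A, eps=1e-12, maxit=50):
    """Denman-Beavers iteration with determinantal scaling (a sketch):
        Y_{k+1} = (g_k Y_k + (g_k Z_k)^{-1}) / 2,
        Z_{k+1} = (g_k Z_k + (g_k Y_k)^{-1}) / 2,
    where g_k = |det(Y_k) det(Z_k)|^{-1/(2n)}.
    """
    n = A.shape[0]
    Y, Z = A.copy(), np.eye(n)
    for _ in range(maxit):
        g = abs(np.linalg.det(Y) * np.linalg.det(Z)) ** (-1.0 / (2 * n))
        # tuple assignment keeps the update simultaneous
        Y, Z = (g * Y + np.linalg.inv(g * Z)) / 2, (g * Z + np.linalg.inv(g * Y)) / 2
        if np.linalg.norm(Y @ Y - A) < eps * np.linalg.norm(A):
            break
    return Y

A = np.array([[4.0, 1.0], [1.0, 3.0]])
Y = scaled_denman_beavers(A)
print(np.linalg.norm(Y @ Y - A))
```

As the iterates converge, $\det(Y_k)\det(Z_k) \to 1$, so $\gamma_k \to 1$ and the scaled iteration reduces to the unscaled one; the scaling only accelerates the initial phase.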

Algorithm 13 (the Padé iteration [13]). Consider the coupled Padé iteration of [13], where $r$ is a chosen integer.

Algorithm 14 (the scaled Padé iteration [13]). Consider the scaled version of the Padé iteration of [13].

All tests are performed using MATLAB 7.1 on a personal computer (Pentium IV/2.4 GHz), with machine precision $\varepsilon \approx 2.22 \times 10^{-16}$. The stopping criterion for these algorithms is the relative residual error
$$\mathrm{ERR} = \frac{\|X_k^2 - A\|}{\|A\|},$$
where $X_k$ is the current, say the $k$th, iteration value.

Example 1. Consider the matrix . We use Algorithms 1, 2, and 3 with and Algorithms 11–14 to compute the nonsingular square root of . We list the iteration steps (denoted by “IT”), CPU time (denoted by “CPU”), and relative residual error (denoted by “ERR”) in Table 1.

Table 1

| Method | IT | CPU | ERR |
| --- | --- | --- | --- |
| Algorithm 1 | 7 | 0.0086 | |
| Algorithm 2 | 7 | 0.0080 | |
| Algorithm 3 | 5 | 0.0103 | |
| Algorithm 11 | 9 | 0.0172 | |
| Algorithm 12 | 6 | 0.0136 | |
| Algorithm 13 with | 11 | 0.0094 | |
| Algorithm 13 with | 9 | 0.0101 | |
| Algorithm 14 with | 9 | 0.0127 | |
| Algorithm 14 with | 6 | 0.0108 | |

Example 2. Consider the matrix . We use Algorithms 1, 2, and 3 with the starting matrix and Algorithms 11–14 to compute the nonsingular square root of . We list the numerical results in Table 2.

Table 2

| Method | IT | CPU | ERR |
| --- | --- | --- | --- |
| Algorithm 1 | 6 | 7.6310 | |
| Algorithm 2 | 6 | 8.7200 | |
| Algorithm 3 | 4 | 9.0258 | |
| Algorithm 11 | 8 | 13.2301 | |
| Algorithm 12 | 7 | 11.6758 | |
| Algorithm 13 with | 10 | 8.8936 | |
| Algorithm 13 with | 6 | 9.4387 | |
| Algorithm 14 with | 9 | 10.3571 | |
| Algorithm 14 with | 5 | 8.1043 | |

From Tables 1 and 2, we can see that Algorithms 2 and 3 outperform Algorithms 1, 11, 12, and 13 in both iteration steps and approximation accuracy, and that Algorithm 3 outperforms Algorithms 1, 2, and 11–14 in both iteration steps and approximation accuracy. Therefore, our algorithms are more effective than the known ones in some aspects.

#### 6. Conclusion

In this paper, we propose two new algorithms for computing the nonsingular square root of a matrix by applying Newton's method to the nonlinear matrix equation $X - A X^{-1} = 0$. Convergence theorems and a stability analysis for these new algorithms are given. Numerical examples show that our methods are more effective than the known ones in some aspects.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

The authors wish to thank the editor and anonymous referees for providing very useful suggestions as well as Professor Xuefeng Duan for his insightful and beneficial discussion and suggestions. This work was supported by National Natural Science Fund of China (nos. 11101100, 11301107, and 11261014) and Guangxi Provincial Natural Science Foundation (nos. 2012GXNSFBA053006, 2013GXNSFBA019009).

#### References

1. B. A. Schmitt, “On algebraic approximation for the matrix exponential in singularly perturbed boundary value problems,” SIAM Journal on Numerical Analysis, vol. 57, pp. 51–66, 2010.
2. S. H. Cheng, N. J. Higham, C. S. Kenney, and A. J. Laub, “Approximating the logarithm of a matrix to specified accuracy,” SIAM Journal on Matrix Analysis and Applications, vol. 22, no. 4, pp. 1112–1125, 2001.
3. L. Dieci, B. Morini, and A. Papini, “Computational techniques for real logarithms of matrices,” SIAM Journal on Matrix Analysis and Applications, vol. 17, no. 3, pp. 570–593, 1996.
4. G. W. Cross and P. Lancaster, “Square roots of complex matrices,” Linear and Multilinear Algebra, vol. 1, pp. 289–293, 1974.
5. W. D. Hoskins and D. J. Walton, “A faster method of computing square roots of a matrix,” IEEE Transactions on Automatic Control, vol. 23, no. 3, pp. 494–495, 1978.
6. C. R. Johnson and K. Okubo, “Uniqueness of matrix square roots under a numerical range condition,” Linear Algebra and Its Applications, vol. 341, no. 1–3, pp. 195–199, 2002.
7. Å. Björck and S. Hammarling, “A Schur method for the square root of a matrix,” Linear Algebra and Its Applications, vol. 52-53, pp. 127–140, 1983.
8. S. G. Chen and P. Y. Hsieh, “Fast computation of the nth root,” Computers & Mathematics with Applications, vol. 17, no. 10, pp. 1423–1427, 1989.
9. E. D. Denman and A. N. Beavers Jr., “The matrix sign function and computations in systems,” Applied Mathematics and Computation, vol. 2, no. 1, pp. 63–94, 1976.
10. L. P. Franca, “An algorithm to compute the square root of a $3\times 3$ positive definite matrix,” Computers & Mathematics with Applications, vol. 18, no. 5, pp. 459–466, 1989.
11. N. J. Higham, “Newton’s method for the matrix square root,” Mathematics of Computation, vol. 46, no. 174, pp. 537–549, 1986.
12. M. A. Hasan, “A power method for computing square roots of complex matrices,” Journal of Mathematical Analysis and Applications, vol. 213, no. 2, pp. 393–405, 1997.
13. N. J. Higham, “Stable iterations for the matrix square root,” Numerical Algorithms, vol. 15, no. 2, pp. 227–242, 1997.
14. Z. Liu, Y. Zhang, and R. Ralha, “Computing the square roots of matrices with central symmetry,” Applied Mathematics and Computation, vol. 186, no. 1, pp. 715–726, 2007.
15. Y. N. Zhang, Y. W. Yang, B. H. Cai, and D. S. Guo, “Zhang neural network and its application to Newton iteration for matrix square root estimation,” Neural Computing and Applications, vol. 21, no. 3, pp. 453–460, 2012.
16. Z. Liu, H. Chen, and H. Cao, “The computation of the principal square roots of centrosymmetric H-matrices,” Applied Mathematics and Computation, vol. 175, no. 1, pp. 319–329, 2006.
17. J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, SIAM, Philadelphia, Pa, USA, 2008.
18. P. Laasonen, “On the iterative solution of the matrix equation $AX^2 - I = 0$,” Mathematical Tables and Other Aids to Computation, vol. 12, pp. 109–116, 1958.
19. J. M. Ortega, Numerical Analysis, Academic Press, New York, NY, USA, 2nd edition, 1972.
20. B. Meini, “The matrix square root from a new functional perspective: theoretical results and computational issues,” SIAM Journal on Matrix Analysis and Applications, vol. 26, no. 2, pp. 362–376, 2004.
21. V. Samanskii, “On a modification of Newton’s method,” Ukrainian Mathematical Journal, vol. 19, pp. 133–138, 1967.
22. J. D. Gardiner, A. J. Laub, J. J. Amato, and C. B. Moler, “Solution of the Sylvester matrix equation $AXB^T + CXD^T = E$,” ACM Transactions on Mathematical Software, vol. 18, no. 2, pp. 223–231, 1992.
23. M. A. Krasnoselskii, G. M. Vainikko, P. P. Zabreiko, Y. B. Rutitskii, and V. Y. Stetsenko, Approximate Solution of Operator Equations, Wolters-Noordhoff Publishing, Groningen, The Netherlands, 1972.
24. D. J. Guo, Nonlinear Functional Analysis, Shandong Science Press, Shandong, China, 2009.

Copyright © 2014 Chun-Mei Li and Shu-Qian Shen. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.