Abstract

Two new algorithms are proposed to compute the nonsingular square root of a matrix A. Convergence theorems and stability analysis for these new algorithms are given. Numerical results show that these new algorithms are feasible and effective.

1. Introduction

Consider the following nonlinear matrix equation:
$X^2 = A,$  (1)
where $A$ is an $n \times n$ nonsingular complex matrix. A solution $X$ of (1) is called a square root of $A$. Matrix square roots have many applications in boundary value problems [1] and in the computation of the matrix logarithm [2, 3].

In the last few years there has been a constantly increasing interest in developing the theory and numerical methods for matrix square roots. Results on the existence and uniqueness of the matrix square root can be found in [4–6]. Here, it is worthwhile to point out that any nonsingular matrix has a square root, and such a square root is also nonsingular [4]. A number of methods have been proposed for computing the square root of a matrix [5, 7–16]. Computational methods for the matrix square root can generally be separated into two classes. The first class consists of the so-called direct methods, for example, the Schur algorithm developed by Björck and Hammarling [7]. The second class consists of the iterative methods. Matrix iterations $X_{k+1} = g(X_k)$, where $g$ is a polynomial or a rational function, are attractive alternatives for computing square roots [9, 11–13, 15, 17]. A well-known iterative method for computing the matrix square root is Newton’s method. It has nice numerical behavior, for example, quadratic convergence. Newton’s method for solving (1) was proposed in [18]. Later, some simplified Newton’s methods were developed in [11, 19, 20]. Unfortunately, these simplified Newton’s methods have poor numerical stability.

In this paper, we propose two new algorithms to compute the nonsingular square root of a matrix, both of which have good numerical stability. In particular, we apply the Samanskii technique proposed in [21] to the computation of the matrix square root. Convergence theorems and stability analysis for these new algorithms are given in Sections 3 and 4. In Section 5, we use some numerical examples to show that these new algorithms are more effective than the known ones in some aspects. The final conclusions are given in Section 6.

2. Two New Algorithms

In order to compute the square root of the matrix $A$, a natural approach is to apply Newton’s method to (1), which can be stated as follows.

Algorithm 1 (Newton’s method for (1); see [11, 19]). We consider the following.
Step 0. Given a starting matrix $X_0$ and a tolerance $\varepsilon > 0$, set $k = 0$.
Step 1. Let $R_k = A - X_k^2$. If $\|R_k\| \le \varepsilon$, stop.
Step 2. Solve for $H_k$ in the Sylvester equation $X_k H_k + H_k X_k = R_k$.
Step 3. Update $X_{k+1} = X_k + H_k$, set $k = k + 1$, and go to Step 1.

Applying the standard local convergence theorem [19, P. 148] to Algorithm 1, we deduce that the sequence $\{X_k\}$ generated by Algorithm 1 converges quadratically to a square root $X_*$ of $A$, provided that the starting matrix $X_0$ is sufficiently close to $X_*$.
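For readers who wish to experiment with Algorithm 1, the following is a minimal sketch in Python/NumPy rather than the authors' MATLAB implementation; the function name, the relative tolerance, and the iteration cap are illustrative choices, and the Sylvester equation is solved with an off-the-shelf routine.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def newton_sqrt(A, X0, tol=1e-12, maxit=50):
    """Algorithm 1: full Newton iteration for X^2 = A."""
    X = X0.astype(float)
    for k in range(maxit):
        R = A - X @ X                                   # residual R_k = A - X_k^2
        if np.linalg.norm(R, 'fro') <= tol * np.linalg.norm(A, 'fro'):
            return X, k
        H = solve_sylvester(X, X, R)                    # Sylvester equation X_k H + H X_k = R_k
        X = X + H                                       # X_{k+1} = X_k + H_k
    return X, maxit

# Example: A = [[4, 1], [0, 9]] has the square root [[2, 0.2], [0, 3]].
A = np.array([[4.0, 1.0], [0.0, 9.0]])
Xs, its = newton_sqrt(A, A)
print(its, np.linalg.norm(Xs @ Xs - A))
```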

In this paper, we propose two new algorithms to compute the nonsingular square root of the matrix $A$. Our idea can be stated as follows. If (1) has a nonsingular solution $X$, then we can transform (1) into an equivalent nonlinear matrix equation, denoted by $F(X) = 0$ in (3). Then we apply Newton’s method to (3) for computing the nonsingular square root of $A$.

By the definition of the F-derivative and some simple calculations, we obtain that if the matrix $X$ is nonsingular, then the mapping $F$ is F-differentiable at $X$, with F-derivative $F'(X)$ given by (4). Thus Newton’s method for (3) can be written as the iteration (5), $X_{k+1} = X_k - F'(X_k)^{-1}F(X_k)$, $k = 0, 1, 2, \ldots$. Combining (4), the iteration (5) is equivalent to the following algorithm.

Algorithm 2 (Newton’s method for (3)). We consider the following.
Step 0. Given a starting matrix $X_0$ and a tolerance $\varepsilon > 0$, set $k = 0$.
Step 1. Let $R_k = F(X_k)$. If $\|R_k\| \le \varepsilon$, stop.
Step 2. Solve for $H_k$ in the generalized Sylvester equation $F'(X_k)H_k = -R_k$, where $F'(X_k)$ is given by (4).
Step 3. Update $X_{k+1} = X_k + H_k$, set $k = k + 1$, and go to Step 1.
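The following is a minimal numerical sketch in the spirit of Algorithm 2, again in Python/NumPy. Since the displayed equation (3) is not reproduced above, the sketch assumes the common reformulation $F(X) = X - AX^{-1} = 0$, whose Newton correction $H_k$ solves the generalized Sylvester equation $H_k + AX_k^{-1}H_kX_k^{-1} = AX_k^{-1} - X_k$; this assumed reformulation, the dense Kronecker-product solve, and all names and tolerances are illustrative and are not taken from the paper.

```python
import numpy as np

def newton_sqrt_inv_form(A, X0, tol=1e-12, maxit=50):
    """Newton's method for the assumed reformulation F(X) = X - A X^{-1} = 0."""
    n = A.shape[0]
    I = np.eye(n * n)
    X = X0.astype(float)
    for k in range(maxit):
        Xinv = np.linalg.inv(X)
        F = X - A @ Xinv                                # residual of the reformulated equation
        if np.linalg.norm(F, 'fro') <= tol:
            return X, k
        # Generalized Sylvester equation H + (A Xinv) H Xinv = -F, solved by
        # vectorization: (I + kron(Xinv^T, A Xinv)) vec(H) = vec(-F).
        K = I + np.kron(Xinv.T, A @ Xinv)
        h = np.linalg.solve(K, (-F).flatten(order='F'))
        X = X + h.reshape(n, n, order='F')              # X_{k+1} = X_k + H_k
    return X, maxit

A = np.array([[4.0, 1.0], [0.0, 9.0]])
Xs, its = newton_sqrt_inv_form(A, A)
print(its, np.linalg.norm(Xs @ Xs - A))
```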

By applying the Samanskii technique [21] to Newton’s method (5), we obtain the following algorithm.

Algorithm 3 (Newton’s method for (3) with the Samanskii technique). We consider the following.
Step 0. Given a starting matrix $X_0$, an integer $m \ge 1$, and a tolerance $\varepsilon > 0$, set $k = 0$.
Step 1. Let $R_k = F(X_k)$. If $\|R_k\| \le \varepsilon$, stop.
Step 2. Let $Y_{k,0} = X_k$ and $j = 0$.
Step 3. If $j = m$, go to Step 6.
Step 4. Solve for $H_{k,j}$ in the generalized Sylvester equation $F'(X_k)H_{k,j} = -F(Y_{k,j})$, where the derivative $F'(X_k)$, given by (4), is kept fixed at $X_k$.
Step 5. Update $Y_{k,j+1} = Y_{k,j} + H_{k,j}$, set $j = j + 1$, and go to Step 3.
Step 6. Update $X_{k+1} = Y_{k,m}$, set $k = k + 1$, and go to Step 1.
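A hedged sketch in the spirit of Algorithm 3 follows: the derivative is formed once per outer step at $X_k$ and reused for $m$ inner corrections, which is the essence of the Samanskii technique. The reformulation $F(X) = X - AX^{-1} = 0$ and all names are again assumptions made only for illustration; with $m = 1$ the sketch reduces to the previous one.

```python
import numpy as np

def newton_samanskii_sqrt(A, X0, m=2, tol=1e-12, maxit=50):
    """Newton's method with the Samanskii technique: reuse the derivative for m inner steps."""
    n = A.shape[0]
    I = np.eye(n * n)
    X = X0.astype(float)
    for k in range(maxit):
        if np.linalg.norm(X - A @ np.linalg.inv(X), 'fro') <= tol:
            return X, k
        Xinv = np.linalg.inv(X)
        K = I + np.kron(Xinv.T, A @ Xinv)               # derivative frozen at the outer iterate X_k
        Y = X
        for _ in range(m):                              # m chord-like inner corrections
            F = Y - A @ np.linalg.inv(Y)                # residual at the current inner iterate
            h = np.linalg.solve(K, (-F).flatten(order='F'))
            Y = Y + h.reshape(n, n, order='F')
        X = Y                                           # X_{k+1} is the result of the m corrections
    return X, maxit

A = np.array([[4.0, 1.0], [0.0, 9.0]])
Xs, its = newton_samanskii_sqrt(A, A, m=2)
print(its, np.linalg.norm(Xs @ Xs - A))
```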

Remark 4. In this paper, we only consider the case $m \ge 2$. If $m = 1$, then Algorithm 3 reduces to Algorithm 2.

Remark 5. Iteration (5) is more suitable for theoretical analysis such as the convergence theorems and stability analysis in Sections 3 and 4, while Algorithms 2 and 3 are more convenient for numerical computation in Section 5. In actual computations, the Sylvester equation may be solved by the algorithms developed in [22].
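To illustrate the computational remark above, a standard Sylvester equation $PH + HQ = C$ can be solved with an off-the-shelf Bartels-Stewart-type routine, as in the following generic example; this stands in for, and does not reproduce, the algorithms of [22].

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Solve P H + H Q = C; the solution is unique because P and -Q share no eigenvalue.
P = np.array([[4.0, 1.0], [0.0, 3.0]])
Q = np.array([[2.0, 0.5], [0.0, 1.0]])
C = np.eye(2)

H = solve_sylvester(P, Q, C)
print(np.linalg.norm(P @ H + H @ Q - C))   # residual should be near machine precision
```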

Although Algorithms 2 and 3 are also Newton’s methods, they are more effective than Algorithm 1. In particular, Algorithm 3 with $m \ge 2$ has a cubic convergence rate.

3. Convergence Theorems

In this section, we establish local convergence theorems for Algorithms 2 and 3. We begin with some lemmas.

Lemma 6 (see [23, P. 21]). Let $G$ be a (nonlinear) operator from a Banach space $E$ into itself and let $x^* \in E$ be a solution of $x = G(x)$. If $G$ is Fréchet differentiable at $x^*$ with spectral radius $\rho(G'(x^*)) < 1$, then the iteration $x_{k+1} = G(x_k)$, $k = 0, 1, \ldots$, converges to $x^*$, provided that $x_0$ is sufficiently close to $x^*$.

Lemma 7 (see [17, P. 45]). Let $S, T \in \mathbb{C}^{n \times n}$ and assume that $S$ is invertible with $\|S^{-1}\| \le \alpha$. If $\|S - T\| \le \beta$ and $\alpha\beta < 1$, then $T$ is also invertible, and $\|T^{-1}\| \le \alpha/(1 - \alpha\beta)$.

Lemma 8. If the matrix $X_*$ is nonsingular, then there exist $\delta > 0$ and $L > 0$ such that, for all $X, Y$ with $\|X - X_*\| \le \delta$ and $\|Y - X_*\| \le \delta$, it holds that $\|F'(X) - F'(Y)\| \le L\|X - Y\|$, where $F'(X)$ and $F'(Y)$ are the F-derivatives of the mapping $F$, defined by (4), at $X$ and $Y$.

Proof. Let , and we select .
From Lemma 7 it follows that is nonsingular and for each . Then is well defined, and so is , where . According to (4), we have where .
Hence, we have

Theorem 9. If (3) has a nonsingular solution $X_*$ and the mapping $F'(X_*)$ is invertible, then there exists a closed ball $B(X_*, r)$ such that, for all $X_0 \in B(X_*, r)$, the sequence $\{X_k\}$ generated by Algorithm 2 converges at least quadratically to the solution $X_*$.

Proof. Let $G(X) = X - F'(X)^{-1}F(X)$ denote the iteration mapping of (5). By the Taylor formula in Banach space [24, P. 67], expanding $G$ about $X_*$ and using $F(X_*) = 0$, we find that the F-derivative of $G$ at $X_*$ is 0. By Lemma 6, we derive that the sequence generated by the iteration (5) converges to $X_*$; consequently, the sequence $\{X_k\}$ generated by Algorithm 2 converges to $X_*$.
Since $X_k \to X_*$, it follows from Lemma 7 that, for $k$ large enough, $F'(X_k)$ is invertible with $\|F'(X_k)^{-1}\|$ uniformly bounded. By Lemma 8, we have $\|F'(X_k) - F'(X_*)\| \le L\|X_k - X_*\|$.
By making use of the Taylor formula once again, for all $k$ large enough, $\|F(X_k) - F'(X_k)(X_k - X_*)\|$ is bounded by a constant multiple of $\|X_k - X_*\|^2$. Hence, combining (13)–(16), we have $\|X_{k+1} - X_*\| \le c\|X_k - X_*\|^2$ for some constant $c > 0$, which implies that the sequence $\{X_k\}$ generated by Algorithm 2 converges at least quadratically to the solution $X_*$.

Theorem 10. If (1) has a nonsingular solution $X_*$ and the mapping $F'(X_*)$ is invertible, then there exists a closed ball $B(X_*, r)$ such that, for all $X_0 \in B(X_*, r)$, the sequence $\{X_k\}$ generated by Algorithm 3 converges at least cubically to the solution $X_*$.

Proof. Let $G(X) = X - F'(X)^{-1}F(X)$ denote the iteration mapping of (5). By the Taylor formula in Banach space [24, P. 67], expanding $G$ about $X_*$ and using $F(X_*) = 0$, we find that the F-derivative of $G$ at $X_*$ is 0. By Lemma 6, we derive that the sequence generated by iteration (5) converges to $X_*$; it follows that the sequence $\{X_k\}$ generated by Algorithm 3 also converges to $X_*$.
Since $X_k \to X_*$, it follows from Lemma 7 that, for $k$ large enough, $F'(X_k)$ is invertible with $\|F'(X_k)^{-1}\|$ uniformly bounded. By Lemma 8, we have $\|F'(X_k) - F'(X_*)\| \le L\|X_k - X_*\|$.
By making use of the Taylor formula once again, each of the $m$ inner corrections in Algorithm 3, carried out with the derivative frozen at $X_k$, raises the order of the error by at least one. Combining (19)–(22) and Theorem 9, we have $\|X_{k+1} - X_*\| \le c\|X_k - X_*\|^3$, where $c > 0$ is a constant. Therefore, the sequence $\{X_k\}$ generated by Algorithm 3 converges at least cubically to the solution $X_*$.

4. Stability Analysis

In accordance with [2], we define an iteration $X_{k+1} = g(X_k)$ to be stable in a neighborhood of a solution $X_*$ if the error matrices $E_k = X_k - X_*$ satisfy
$E_{k+1} = \mathcal{L}(E_k) + O(\|E_k\|^2),$  (24)
where $\mathcal{L}$ is a linear operator that has bounded powers; that is, there exists a constant $c$ such that, for all $k > 0$ and arbitrary $E$ of unit norm, $\|\mathcal{L}^k(E)\| \le c$. This means that a small perturbation introduced at a certain step will not be amplified in the subsequent iterations.

Note that this definition of stability is an asymptotic property and is different from the usual concept of numerical stability, which concerns the global error propagation, aiming to bound the minimum relative error over the computed iterates.

Now we consider the iteration (5) and define the error matrix $E_k = X_k - X_*$; that is,
$X_k = X_* + E_k.$  (25)
For the sake of simplicity, we perform a first order error analysis; that is, we omit all the terms that are quadratic in the errors. Equality up to second order terms is denoted by the symbol $\doteq$.

Substituting (25) into (5), combining (4), omitting all terms that are quadratic in the errors, and using $X_*^2 = A$, we obtain $E_{k+1} \doteq 0$, which means that iteration (5) is self-adaptive; that is to say, the error in the $k$th iteration does not propagate to the $(k+1)$st iteration. In particular, when $X_*$ and $-X_*$ have no eigenvalue in common, the corresponding Sylvester equation has a unique solution [17, P. 194]. Therefore, under the condition that $X_*$ and $-X_*$ have no eigenvalue in common, the iteration (5) has optimal stability; that is, the operator $\mathcal{L}$ defined in (24) coincides with the null operator.

5. Numerical Examples

In this section, we compare our algorithms with the following.

Algorithm 11 (the Denman-Beavers iteration [9]). Consider
$Y_0 = A$, $Z_0 = I$, $Y_{k+1} = \frac{1}{2}(Y_k + Z_k^{-1})$, $Z_{k+1} = \frac{1}{2}(Z_k + Y_k^{-1})$, $k = 0, 1, \ldots$,
where $Y_k \to A^{1/2}$ and $Z_k \to A^{-1/2}$.
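For reference, the unscaled Denman-Beavers coupled iteration can be sketched as follows (Python/NumPy; the stopping test and iteration cap are illustrative choices).

```python
import numpy as np

def denman_beavers_sqrt(A, tol=1e-12, maxit=100):
    """Unscaled Denman-Beavers iteration: Y_k -> A^{1/2}, Z_k -> A^{-1/2}."""
    Y = A.astype(float)
    Z = np.eye(A.shape[0])
    for k in range(maxit):
        Ynew = 0.5 * (Y + np.linalg.inv(Z))
        Z = 0.5 * (Z + np.linalg.inv(Y))                # uses the old Y, as required
        Y = Ynew
        if np.linalg.norm(Y @ Y - A, 'fro') <= tol * np.linalg.norm(A, 'fro'):
            return Y, k + 1
    return Y, maxit

A = np.array([[4.0, 1.0], [0.0, 9.0]])
Xs, its = denman_beavers_sqrt(A)
print(its, np.linalg.norm(Xs @ Xs - A))
```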

Algorithm 12 (the scaled Denman-Beavers iteration [13]). Consider

Algorithm 13 (the Padé iteration [13]). Consider the following iteration, where the order of the Padé approximant is a chosen integer:

Algorithm 14 (the scaled Padé iteration [13]). Consider

All tests are performed using MATLAB 7.1 on a personal computer (Pentium IV/2.4 GHz), with machine precision approximately $2.2 \times 10^{-16}$. The stopping criterion for these algorithms is that the relative residual error $\mathrm{ERR}_k = \|X_k^2 - A\| / \|A\|$ falls below a prescribed tolerance, where $X_k$ is the current, say the $k$th, iterate.
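For concreteness, the relative residual can be evaluated as in the short sketch below; the Frobenius norm and the sample tolerance are assumptions, since the paper's exact norm and threshold are not reproduced here.

```python
import numpy as np

def relative_residual(Xk, A):
    """ERR_k = ||X_k^2 - A||_F / ||A||_F (Frobenius norm assumed)."""
    return np.linalg.norm(Xk @ Xk - A, 'fro') / np.linalg.norm(A, 'fro')

# A typical stopping test inside any of the iterations above:
# if relative_residual(Xk, A) <= 1e-10: stop
```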

Example 1. Consider the matrix $A$. We use Algorithms 1, 2, and 3 (with a chosen starting matrix $X_0$) and Algorithms 11–14 to compute the nonsingular square root of $A$. We list the iteration steps (denoted by “IT”), the CPU time (denoted by “CPU”), and the relative residual error (denoted by “ERR”) in Table 1.

Example 2. Consider the matrix $A$. We use Algorithms 1, 2, and 3 (with a chosen starting matrix $X_0$) and Algorithms 11–14 to compute the nonsingular square root of $A$. We list the numerical results in Table 2.

From Tables 1 and 2, we can see that Algorithms 2 and 3 outperform Algorithms 1, 11, 12, and 13 in both iteration steps and approximation accuracy, and that Algorithm 3 outperforms Algorithms 1, 2, and 11–14 in both respects. Therefore, our algorithms are more effective than the known ones in some aspects.

6. Conclusion

In this paper, we propose two new algorithms for computing the nonsingular square root of a matrix by applying Newton’s method to the equivalent nonlinear matrix equation (3). Convergence theorems and stability analysis for these new algorithms are given. Numerical examples show that our methods are more effective than the known ones in some aspects.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors wish to thank the editor and anonymous referees for providing very useful suggestions as well as Professor Xuefeng Duan for his insightful and beneficial discussion and suggestions. This work was supported by National Natural Science Fund of China (nos. 11101100, 11301107, and 11261014) and Guangxi Provincial Natural Science Foundation (nos. 2012GXNSFBA053006, 2013GXNSFBA019009).