Abstract
We give a sufficient condition (the solvability of two standard equations) for the invertibility of a Sylvester matrix by using the displacement structure of the Sylvester matrix, and, based on this condition, we derive a new fast algorithm for the inverse of a Sylvester matrix, which can be expressed as a sum of products of two triangular Toeplitz matrices. The stability of the inversion formula is also considered: the formula is numerically forward stable whenever the Sylvester matrix is nonsingular and well conditioned.
1. Introduction
Let $\mathbb{R}[x]$ be the space of polynomials over the real numbers. Given univariate polynomials $f(x), g(x) \in \mathbb{R}[x]$, where
$$f(x) = a_0 x^m + a_1 x^{m-1} + \cdots + a_m, \qquad g(x) = b_0 x^n + b_1 x^{n-1} + \cdots + b_n, \qquad a_0 b_0 \neq 0.$$
Let $S$ denote the $(m+n)\times(m+n)$ Sylvester matrix of $f(x)$ and $g(x)$:
$$S = \begin{pmatrix}
a_0 & a_1 & \cdots & a_m & & \\
 & \ddots & \ddots & & \ddots & \\
 & & a_0 & a_1 & \cdots & a_m \\
b_0 & b_1 & \cdots & b_n & & \\
 & \ddots & \ddots & & \ddots & \\
 & & b_0 & b_1 & \cdots & b_n
\end{pmatrix},$$
where the block of shifted $f$-rows has $n$ rows and the block of shifted $g$-rows has $m$ rows.
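The coefficient layout above is easiest to see in code. The following is a minimal sketch of the construction, assuming $f$ and $g$ are given as coefficient lists in descending degree order; `sylvester` is an illustrative helper name, not notation from the paper:

```python
def sylvester(f, g):
    """Build the (m+n) x (m+n) Sylvester matrix of f and g.

    f and g are coefficient lists in descending degree order,
    e.g. f = [1, -3, 2] encodes x^2 - 3x + 2.
    """
    m, n = len(f) - 1, len(g) - 1
    size = m + n
    rows = []
    for i in range(n):  # n shifted copies of the coefficients of f
        rows.append([0] * i + f + [0] * (size - m - 1 - i))
    for i in range(m):  # m shifted copies of the coefficients of g
        rows.append([0] * i + g + [0] * (size - n - 1 - i))
    return rows

# f = x^2 - 3x + 2 = (x - 1)(x - 2) and g = x - 1 share the root x = 1,
# so det(S) (the resultant of f and g) vanishes.
S = sylvester([1, -3, 2], [1, -1])
```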
The Sylvester matrix is applied in many fields of science and technology. The solutions of Sylvester matrix equations and matrix inequalities play an important role in the analysis and design of control systems. The Sylvester matrix is also central to determining the greatest common divisor of two polynomials, and the magnitude of its inverse is important in determining the distance to the closest pair of polynomials which have a common root. Assuming that all leading principal submatrices of the matrix are nonsingular, Jing Yang et al. [1] gave a fast algorithm for the inverse of a Sylvester matrix by using its displacement structure.
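The remark about the magnitude of the inverse can be illustrated numerically: when $f$ and $g$ have nearly, but not exactly, a common root, the Sylvester matrix is close to singular and its inverse is large. A NumPy sketch, with polynomials chosen purely for illustration:

```python
import numpy as np

# Sylvester matrix of f = (x - 1)(x - 2) = x^2 - 3x + 2 and g = x - 1.001;
# g's root is very close to a root of f, so S is nearly singular.
S = np.array([[1.0, -3.0,   2.0],
              [1.0, -1.001, 0.0],
              [0.0,  1.0,  -1.001]])

inv_norm = np.linalg.norm(np.linalg.inv(S), 2)  # spectral norm of S^{-1}
cond = np.linalg.cond(S)
# Both quantities blow up as the root of g approaches a root of f.
```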
By using the displacement structure of the Sylvester matrix, in this paper we give a sufficient condition (the solvability of two standard equations) for the invertibility of a Sylvester matrix, and, based on this condition, we derive a new fast algorithm for its inverse, which can be expressed as a sum of products of two triangular Toeplitz matrices. Finally, the stability of the inversion formula is considered: the formula is numerically forward stable whenever the Sylvester matrix is nonsingular and well conditioned.
By using the displacement structure of the Sylvester matrix, in this paper we give a sufficient condition (the solvability of two standard equations) for the invertibility of a Sylvester matrix, and, based on this condition, we derive a new fast algorithm for its inverse, which can be expressed as a sum of products of two triangular Toeplitz matrices. Finally, the stability of the inversion formula is considered: the formula is numerically forward stable whenever the Sylvester matrix is nonsingular and well conditioned.
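As a quick reminder of how the two norms compare, here is a NumPy sketch; for any matrix, the spectral norm is bounded above by the Frobenius norm:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
spec = np.linalg.norm(A, 2)      # spectral norm: largest singular value
frob = np.linalg.norm(A, 'fro')  # Frobenius norm: sqrt of sum of squared entries

# spec <= frob <= sqrt(rank(A)) * spec; here rank(A) = 2.
assert spec <= frob <= np.sqrt(2) * spec
```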
2. Preliminary Notes
In this section, we present a lemma that is important to our main results.
Lemma 2.1 (see [2, Section ]). Let $A$ and $B$ be matrices of compatible dimensions. Then, for any floating-point arithmetic with machine precision $\varepsilon$, one has that the computed product satisfies the standard rounding-error bound. As usual, one neglects errors of second and higher order in $\varepsilon$.
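The machine precision $\varepsilon$ in Lemma 2.1 is about $2.22 \times 10^{-16}$ for IEEE double precision. A minimal pure-Python illustration of a single rounded operation:

```python
import sys

eps = sys.float_info.epsilon  # machine precision, about 2.22e-16

# A single rounded floating-point operation incurs a relative error
# of at most eps: 0.1 + 0.2 is not exactly 0.3 in double precision.
x = 0.1 + 0.2
assert x != 0.3
assert abs(x - 0.3) <= 2 * eps * 0.3  # error is on the order of eps
```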
3. Sylvester Inversion Formula
In this section, we present our main results.
Theorem 3.1. Let the matrix be a Sylvester matrix; then it satisfies the displacement formula, where $Z$ is the lower shift matrix (ones on the first subdiagonal, zeros elsewhere),
Proof. We have that So .
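The displacement structure in Theorem 3.1 can be checked numerically. With $Z$ the lower shift matrix, the displacement $S - ZSZ^{T}$ of a Sylvester matrix has rank at most 2: every row of $S$ except the first $f$-row and the first $g$-row equals the previous row shifted one place to the right, so those differences cancel. A NumPy sketch with an illustrative example:

```python
import numpy as np

# Sylvester matrix of f = x^2 - 3x + 2 and g = x^2 + x + 1 (m = n = 2).
S = np.array([[1.0, -3.0,  2.0, 0.0],
              [0.0,  1.0, -3.0, 2.0],
              [1.0,  1.0,  1.0, 0.0],
              [0.0,  1.0,  1.0, 1.0]])

Z = np.eye(4, k=-1)  # lower shift matrix: ones on the first subdiagonal
D = S - Z @ S @ Z.T  # displacement of S

# Only the first f-row and the first g-row survive in D.
assert np.linalg.matrix_rank(D) <= 2
```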
Theorem 3.2. Let the matrix be a Sylvester matrix, and let , , , and be the solutions of the systems of equations , , , and , respectively, where and are both vectors; then (a) is invertible, and the column vectors of satisfy the recurrence relation ; (b)
Proof. By Theorem 3.1 and , we have that
so
Hence, we have that
Let
It is easy to see that , and by (3.8)
From , we have that . Let ; then
so the matrix is invertible and the inverse of is the matrix .
From (3.1), we have that
and thus
So
Since , we have that
and hence
For (b), by (3.4)
So
This completes the proof.
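Theorem 3.2 expresses the inverse through triangular Toeplitz matrices built from the solution vectors of a few linear systems, in the spirit of Gohberg-Semencul-type formulas. The basic building block is the lower triangular Toeplitz matrix whose first column is a given vector; a pure-Python sketch, where `lower_toeplitz` is an illustrative helper name:

```python
def lower_toeplitz(x):
    """Lower triangular Toeplitz matrix whose first column is x."""
    n = len(x)
    return [[x[i - j] if i >= j else 0 for j in range(n)]
            for i in range(n)]

L = lower_toeplitz([1, 2, 3])
# [[1, 0, 0],
#  [2, 1, 0],
#  [3, 2, 1]]
```

Products of such matrices can be formed in $O(n^2)$ operations directly, or in $O(n \log n)$ via the FFT, which is the source of the "fast" in the algorithm.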
4. Stability Analysis
In this section, we show that evaluation of the Sylvester inversion formula presented in this paper is forward stable.
If, for all well conditioned problems, the computed solution is close to the true solution , in the sense that the relative error is small, then we call the algorithm forward stable (the author of [3] called this weak stability). Round-off errors occur in every matrix computation.
Theorem 4.1. Let the matrix be a nonsingular, well conditioned Sylvester matrix; then the formula in Theorem 3.2 is forward stable.
Proof. Assume that we have computed the solutions , , , and of the systems , , , and in Theorem 3.2, and that they are perturbed by normwise relative errors bounded by ,
Thus, we have that
The inversion formula in Theorem 3.2, using the perturbed solutions , , , and , can be expressed as
Here and in the sequel, denotes the matrix containing the error that results from computing the matrix products, and contains the error from subtracting the matrices. For the error matrices , , , and , we have that
By Lemma 2.1, we have the following bounds on and :
Consequently, adding all these error bounds, by (4.3), we have that
From the equations , , and in Theorem 3.2, we have that
Thus, the relative error is
Since is well conditioned, is finite, and it is easy to see that the remaining factors are finite as well. Therefore, the formula presented in Theorem 3.2 is forward stable.
This completes the proof.
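This conclusion can be sanity-checked numerically: for a well conditioned Sylvester matrix, a double-precision inverse agrees with the exact inverse to roughly the condition number times the machine precision. A NumPy sketch, with an illustrative polynomial pair and tolerance:

```python
import numpy as np

# Sylvester matrix of f = x^2 + 2 and g = x + 3:
# no common root, det(S) = 11, so S is well conditioned.
S = np.array([[1.0, 0.0, 2.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 3.0]])

S_inv = np.linalg.inv(S)
eps = np.finfo(float).eps
residual = np.linalg.norm(S @ S_inv - np.eye(3), 2)

# The forward error is governed by cond(S) * eps when S is well conditioned.
assert residual <= 100 * np.linalg.cond(S) * eps
```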
5. Numerical Example
This section gives an example to illustrate our results. All the following computations were performed in MATLAB 7.0.
Example 5.1. Given and , that is, the coefficients , , …, and .
So
Therefore, .
By the condition of Theorem 3.2, we can get
Obviously, is invertible, and it is easy to see that ,
Acknowledgments
This work was supported financially by the Youth Foundation of Shandong Province, China (ZR2010EQ014), Shandong Province Natural Science Foundation (ZR2010AQ026), and National Natural Science Foundation of China (11026047).