Abstract

We study low-degree solutions of the Sylvester matrix equation A(λ)X(λ) + Y(λ)B(λ) = C(λ), where the degree-1 matrix polynomials A(λ) and B(λ) are regular. Using a substitution of the parameter variable, we may assume that the leading coefficient matrices A_1 and B_1 are invertible. We then prove that if the equation is solvable, it has a low-degree solution (X(λ), Y(λ)) whose degrees are bounded in terms of deg C(λ) and the index of an associated matrix.

1. Introduction

Problems of output regulation in control theory lead to the generalized Sylvester matrix equation
A(λ)X(λ) + Y(λ)B(λ) = C(λ), (1)
where the matrices involved are matrix polynomials (e.g., [1–7]). The generalized Sylvester matrix equation has been studied and applied extensively (e.g., [8–11]). This work investigates degree bounds for low-degree solutions of equation (1) when A(λ) and B(λ) are matrix polynomials of degree 1.

We adopt the following terminology [12]. Let A(λ) = A_lλ^l + A_{l−1}λ^{l−1} + · · · + A_0 with A_l ≠ 0; we denote the degree of the matrix polynomial A(λ) by deg A(λ) = l. If A(λ) = 0, we set deg A(λ) = −∞. A matrix polynomial A(λ) is called regular if det A(λ) is not identically zero and monic if its leading coefficient A_l is an identity matrix.
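These notions can be made concrete in code. The sketch below (illustrative, not from the paper) stores a matrix polynomial as a low-to-high list of coefficient matrices and tests regularity by sampling: det A(λ) is a scalar polynomial of degree at most nl, so it vanishes identically if and only if it vanishes at nl + 1 distinct points.

```python
from fractions import Fraction as F

def det(M):
    """Determinant over the rationals via Gaussian elimination."""
    A = [[F(x) for x in row] for row in M]
    n = len(A)
    d = F(1)
    for c in range(n):
        piv = next((i for i in range(c, n) if A[i][c] != 0), None)
        if piv is None:
            return F(0)
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            d = -d
        d *= A[c][c]
        for i in range(c + 1, n):
            f = A[i][c] / A[c][c]
            A[i] = [a - f * b for a, b in zip(A[i], A[c])]
    return d

def eval_mp(P, t):
    """Evaluate a matrix polynomial (low-to-high coefficient matrices) at the scalar t."""
    n = len(P[0])
    out = [[F(0)] * n for _ in range(n)]
    for k, coeff in enumerate(P):
        for i in range(n):
            for j in range(n):
                out[i][j] += F(coeff[i][j]) * t ** k
    return out

def is_regular(P):
    """det P(lambda) has degree at most n*l, so it is identically zero
    iff it vanishes at n*l + 1 distinct sample points."""
    n, l = len(P[0]), len(P) - 1
    return any(det(eval_mp(P, F(t))) != 0 for t in range(n * l + 1))

# A(lambda) = I + lambda*diag(1, 0): the leading coefficient is singular
# (so A is not monic), but det A(lambda) = 1 + lambda, so A(lambda) is regular.
A = [[[1, 0], [0, 1]], [[1, 0], [0, 0]]]
print(is_regular(A))   # True

B = [[[0, 0], [0, 0]], [[0, 1], [0, 0]]]   # B(lambda) = lambda*N, det identically 0
print(is_regular(B))   # False
```

This illustrates why regularity is strictly weaker than monicity: a regular pencil may have a singular leading coefficient, which is exactly the situation studied in this paper.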

Wimmer [13] used Jordan chains of matrix polynomials to characterize the solvability of the generalized Sylvester equation. His solvability condition extends results of Kučera [14] and of Gohberg and Lerer [15]. In [16], Barnett studied the existence and uniqueness of low-degree solutions: for monic A(λ), B(λ), and C(λ), he proved that equation (1) has a unique solution satisfying deg X(λ) < deg B(λ) and deg Y(λ) < deg A(λ) if and only if the determinants of A(λ) and B(λ) are coprime. Feinstein and Bar-Ness [17] extended this result to the case where only A(λ) or B(λ) (not necessarily both) is regular. The solvability of matrix equation (1) was also studied in [18, 19].

For scalar polynomials a(λ) and b(λ), it is well known that there are polynomials u(λ) and v(λ) such that a(λ)u(λ) + b(λ)v(λ) = d(λ), with deg u(λ) < deg b(λ) − deg d(λ) and deg v(λ) < deg a(λ) − deg d(λ), where d(λ) is the monic greatest common factor of a(λ) and b(λ). For the matrix case of Sylvester matrix polynomial equation (1), it is shown in [20] that for monic matrix polynomials A(λ) and B(λ), if equation (1) has solutions, then it has a solution of correspondingly low degree. However, a remark in [20] shows that this proposition fails when A(λ) and B(λ) are not monic. Since degree bounds for equation (1) with regular matrix polynomials A(λ) and B(λ) have not been fully developed, we investigate low-degree solutions of A(λ)X(λ) + Y(λ)B(λ) = C(λ), where A(λ) and B(λ) are regular matrix polynomials of degree 1. We use the index of a matrix to characterize the bound on the low-degree solution.
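For the scalar identity just stated, u(λ) and v(λ) can be produced by the extended Euclidean algorithm. A self-contained Python sketch over the rationals (the polynomials a and b below are illustrative, not from the paper):

```python
from fractions import Fraction as F

def trim(p):
    """Drop trailing zero coefficients (lists are low-to-high degree)."""
    while p and p[-1] == 0:
        p = p[:-1]
    return p

def add(p, q):
    n = max(len(p), len(q))
    return trim([(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0) for i in range(n)])

def neg(p):
    return [-c for c in p]

def mul(p, q):
    if not p or not q:
        return []
    out = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return trim(out)

def divmod_poly(a, b):
    """Polynomial long division: a = q*b + r with deg r < deg b."""
    a, b = list(a), trim(b)
    q = [F(0)] * max(len(a) - len(b) + 1, 1)
    while trim(a) and len(trim(a)) >= len(b):
        a = trim(a)
        c = a[-1] / b[-1]
        k = len(a) - len(b)
        q[k] = c
        a = add(a, neg(mul([F(0)] * k + [c], b)))
    return trim(q), trim(a)

def bezout(a, b):
    """Extended Euclid: returns (d, u, v) with a*u + b*v = d, d the monic gcd."""
    r0, r1 = trim(a), trim(b)
    u0, u1 = [F(1)], []
    v0, v1 = [], [F(1)]
    while r1:
        q, r = divmod_poly(r0, r1)
        r0, r1 = r1, r
        u0, u1 = u1, add(u0, neg(mul(q, u1)))
        v0, v1 = v1, add(v0, neg(mul(q, v1)))
    inv = [F(1) / r0[-1]]             # normalize the gcd to be monic
    return mul(r0, inv), mul(u0, inv), mul(v0, inv)

# a(lambda) = (lambda-1)(lambda-2),  b(lambda) = (lambda-1)(lambda-3)
a = [F(2), F(-3), F(1)]
b = [F(3), F(-4), F(1)]
d, u, v = bezout(a, b)
print(d)                                  # coefficients of the monic gcd lambda - 1
print(add(mul(a, u), mul(b, v)) == d)     # True: the Bezout identity holds
```

Here deg u(λ) = 0 < deg b(λ) − deg d(λ) = 1 and deg v(λ) = 0 < deg a(λ) − deg d(λ) = 1, matching the degree bounds stated above.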

2. Preliminaries

To prove Theorem 2, we first recall the division of matrix polynomials [12]. We restrict ourselves to the case in which the dividend is a general matrix polynomial A(λ) = A_lλ^l + A_{l−1}λ^{l−1} + · · · + A_0 and the divisor is a monic matrix polynomial B(λ) = Iλ^m + B_{m−1}λ^{m−1} + · · · + B_0.

In this case, we have the representation A(λ) = Q_r(λ)B(λ) + R_r(λ), where Q_r(λ) is a matrix polynomial called the right quotient and R_r(λ) is a matrix polynomial satisfying deg R_r(λ) < m; R_r(λ) is called the right remainder on division of A(λ) by B(λ). Similarly, we have A(λ) = B(λ)Q_l(λ) + R_l(λ) with deg R_l(λ) < m, where Q_l(λ) is the left quotient and R_l(λ) is the left remainder.
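Right division by a monic divisor proceeds exactly like scalar long division, peeling off the leading coefficient of the dividend at each step; monicity of the divisor means no matrix inversion is ever needed. A minimal Python sketch with illustrative 2×2 integer coefficients (not from the paper):

```python
def zeros(n):
    return [[0] * n for _ in range(n)]

def eye(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def mat_add(X, Y):
    n = len(X)
    return [[X[i][j] + Y[i][j] for j in range(n)] for i in range(n)]

def mat_sub(X, Y):
    n = len(X)
    return [[X[i][j] - Y[i][j] for j in range(n)] for i in range(n)]

def mp_deg(P):
    """Degree of a matrix polynomial stored as a low-to-high list of coefficient matrices."""
    d = len(P) - 1
    while d >= 0 and all(c == 0 for row in P[d] for c in row):
        d -= 1
    return d

def right_divmod(A, B):
    """Right division A = Q*B + R with deg R < deg B; requires B monic."""
    n = len(B[0])
    m = mp_deg(B)
    A = [[row[:] for row in coeff] for coeff in A]          # working copy
    Q = [zeros(n) for _ in range(max(mp_deg(A) - m + 1, 1))]
    while mp_deg(A) >= m:
        l = mp_deg(A)
        Qc = A[l]                 # leading coefficient of B is I, so Q_{l-m} = A_l
        Q[l - m] = Qc
        for j in range(m + 1):    # subtract Qc * lambda^{l-m} * B(lambda)
            A[l - m + j] = mat_sub(A[l - m + j], mat_mul(Qc, B[j]))
    return Q, A[:m]

def mp_mul(P, S):
    n = len(P[0])
    out = [zeros(n) for _ in range(len(P) + len(S) - 1)]
    for i, Pi in enumerate(P):
        for j, Sj in enumerate(S):
            out[i + j] = mat_add(out[i + j], mat_mul(Pi, Sj))
    return out

def mp_add(P, S):
    n = len((P or S)[0])
    return [mat_add(P[i] if i < len(P) else zeros(n),
                    S[i] if i < len(S) else zeros(n))
            for i in range(max(len(P), len(S)))]

# Illustrative data: B(lambda) = I*lambda + B0 (monic, degree 1), A(lambda) of degree 2.
B = [[[0, 1], [0, 0]], eye(2)]
A = [[[2, 0], [1, 1]], [[1, 1], [0, 1]], [[1, 0], [0, 2]]]
Q, R = right_divmod(A, B)
recon = mp_add(mp_mul(Q, B), R)
print(recon == A)                 # True: Q(lambda)B(lambda) + R(lambda) reproduces A(lambda)
print(mp_deg(R) < mp_deg(B))      # True: remainder degree is below the divisor degree
```

Note the order of multiplication inside the loop: the quotient coefficient multiplies B(λ) from the left, which is exactly what makes this the *right* division Q_r(λ)B(λ) + R_r(λ).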

Lemma 1 (Lemma 3.1 in [20]). Suppose that the matrix polynomials A(λ) and B(λ) are monic and deg C(λ) < deg A(λ) + deg B(λ). If the matrix polynomial equation A(λ)X(λ) + Y(λ)B(λ) = C(λ) is solvable, then it has a solution satisfying deg X(λ) < deg B(λ) and deg Y(λ) < deg A(λ).
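The mechanism behind such degree-reduction statements is the division of matrix polynomials recalled above. Given any solution (X(λ), Y(λ)) and a monic B(λ), write X(λ) = Q(λ)B(λ) + R(λ) with deg R(λ) < deg B(λ); then

```latex
\begin{aligned}
C(\lambda) &= A(\lambda)X(\lambda) + Y(\lambda)B(\lambda) \\
           &= A(\lambda)\bigl(Q(\lambda)B(\lambda) + R(\lambda)\bigr) + Y(\lambda)B(\lambda) \\
           &= A(\lambda)R(\lambda) + \bigl(Y(\lambda) + A(\lambda)Q(\lambda)\bigr)B(\lambda),
\end{aligned}
```

so (R(λ), Y(λ) + A(λ)Q(λ)) is again a solution with deg R(λ) < deg B(λ). When A(λ) is also monic and deg C(λ) < deg A(λ) + deg B(λ), a symmetric division of the second component by A(λ) brings its degree below deg A(λ) without raising the degree of the first component back to deg B(λ). This argument uses monicity in an essential way, which is why the regular (non-monic) case requires the different tools developed below.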

The following definitions about the Drazin inverse can be found in [21].

Definition 1. The smallest positive integer k for which rank(A^k) = rank(A^{k+1}) holds is called the index of A and is denoted by Ind(A).

Definition 2. Let A be an n × n matrix with Ind(A) = k. If X satisfies the equations A^{k+1}X = A^k, XAX = X, and AX = XA, then X is called the Drazin inverse of A and is denoted by A^D.
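Both notions can be checked numerically. The sketch below (pure Python; the 4×4 matrix A and the candidate D are illustrative examples, not taken from the paper) finds the index by comparing ranks of successive powers and then verifies the three defining equations of the Drazin inverse.

```python
from fractions import Fraction as F

def mat_mul(X, Y):
    n, m, p = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(p)] for i in range(n)]

def rank(M):
    """Row-reduce over the rationals and count pivots."""
    A = [[F(x) for x in row] for row in M]
    r = 0
    for c in range(len(A[0])):
        piv = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(len(A)):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

def mat_pow(M, k):
    P = M
    for _ in range(k - 1):
        P = mat_mul(P, M)
    return P

def index(M):
    """Smallest positive k with rank(M^k) == rank(M^{k+1})."""
    k = 1
    while rank(mat_pow(M, k)) != rank(mat_pow(M, k + 1)):
        k += 1
    return k

# Block-diagonal example: an idempotent 2x2 block and a nilpotent 2x2 block.
A = [[1, 1, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0]]
k = index(A)
print(k)  # 2 (the nilpotent block forces the index up)

# Candidate Drazin inverse, read off blockwise:
# (idempotent block)^D is the block itself, (nilpotent block)^D = 0.
D = [[1, 1, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0]]
print(mat_mul(mat_pow(A, k + 1), D) == mat_pow(A, k))   # A^{k+1} D = A^k -> True
print(mat_mul(mat_mul(D, A), D) == D)                   # D A D = D      -> True
print(mat_mul(A, D) == mat_mul(D, A))                   # A D = D A      -> True
```

The nilpotent block is the reason the index can exceed 1 even for small matrices; this is exactly the quantity that enters the degree bounds in Section 3.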

3. Main Result

We start with the following theorem about a special Sylvester matrix equation.

Theorem 1. Suppose A ∈ ℂ^{n×n} with Ind(A) = k. Then equation (11) has a solution if and only if equation (12) has a solution.

Proof. Suppose that equation (12) holds. Multiplying equation (12) on the right by a suitable matrix, we obtain (13), which means that equation (11) holds.
On the other hand, by the properties of the Drazin inverse, there exists a matrix satisfying (14). Multiplying equation (11) on the right, we obtain (15). By equation (14), we then have (16). Thus, there exists a matrix that satisfies (17). The proof is complete.

With the help of Lemma 1 and Theorem 1, we can now prove the main result in this study.

Theorem 2. Suppose that A_1 and B_1 are invertible matrices. If the Sylvester matrix equation
A(λ)X(λ) + Y(λ)B(λ) = C(λ), with A(λ) = A_0 + A_1λ and B(λ) = B_0 + B_1λ, (18)
has solutions, then it has a low-degree solution satisfying the degree bounds (19), which are expressed in terms of deg C(λ) and the index of a matrix built from the coefficients.

Proof. Suppose that the Sylvester matrix equation (20) has a solution (X(λ), Y(λ)) written coefficient-wise as in (21). Then (22) holds. Without loss of generality, we normalize the equation as follows: multiplying both sides of equation (22), the left-hand side by A_1^{−1} and the right-hand side by B_1^{−1}, we obtain (23), where the transformed coefficient matrices are given by (24). By the division of matrix polynomials, we obtain a representation of the form (25), in which the remainder has degree less than that of the divisor.
Substituting expression (25) into equation (23), the equation can be rewritten as (26). By Lemma 1, the solvability of equation (26) is equivalent to the existence of a solution satisfying (27), i.e., (28). Comparing the coefficients of the highest power of λ on both sides of equation (28) yields a relation among the unknown coefficients; using it, equation (27) reduces to (29). By Theorem 1, there exists a matrix satisfying (30).
Let the candidate solution be defined by (31). Then (32) holds. Similarly, equation (18) has a solution satisfying (33). This completes the proof of Theorem 2.

We apply the above theorem to an example. The example also shows that the degree bound in Theorem 2 is the lowest possible.

Example 1. Consider equation (34), with the coefficient matrices given there.
By Theorem 2 and (35), we take deg X(λ) = 1 and deg Y(λ) = 1, i.e., the ansatz (36). Plugging (36) into (34) and solving the resulting equations, we obtain the solution (37). In fact, there is no matrix polynomial of degree 0 satisfying equation (34), so the bound of Theorem 2 is attained.

Data Availability

No data, models, or codes were generated or used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the Natural Science Foundation of Shandong Province, China (ZR2018PA002).