Abstract

For a selected class of third-order polynomial matrices with a single characteristic root, special triangular forms are established with respect to the transformation of semiscalar equivalence. Uniqueness theorems for these forms are proved, which gives reason to regard them as canonical.

1. Introduction

In [1], it is proved that a polynomial matrix of full rank can be reduced, by means of the transformation (1), where and , to a lower triangular form with the invariant factors on the principal diagonal. The subdiagonal elements of a matrix in this form are not uniquely determined. Matrices related by the transformation (1) are called semiscalarly equivalent [1]. In [2], this triangular form is somewhat simplified for polynomial matrices with a single characteristic root; the resulting matrix of the simplified triangular form is called a reduced matrix. In [2], the invariants of the reduced matrix are established; in particular, the invariance of the positions of the zero subdiagonal elements is proved. In [3], a reduced matrix that has some zero elements below its principal diagonal is brought, by transformations of the form (1) (i.e., by semiscalarly equivalent transformations), to a uniquely determined matrix. This gives grounds to regard the matrices so obtained as canonical for the selected class of matrices. The present article introduces canonical forms for reduced matrices all of whose subdiagonal elements are nonzero.
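Since the displayed formula for the transformation (1) did not survive extraction, the following is a plausible reconstruction based on the standard definition of semiscalar equivalence over the complex field; the exact notation of [1] may differ:

```latex
% Hedged reconstruction of transformation (1): semiscalar equivalence.
% A(x), B(x) are n-by-n polynomial matrices of full rank.
B(x) = P\,A(x)\,Q(x), \qquad
P \in \mathrm{GL}_n(\mathbb{C}), \quad
Q(x) \in \mathrm{GL}_n\!\big(\mathbb{C}[x]\big),
```

that is, the left transformation matrix is a nonsingular numerical matrix, while the right transformation matrix is an invertible (unimodular) polynomial matrix.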

2. Previous Information

We first recall some definitions and notation from [2, 3] that will be used throughout this article: the younger degree, the younger term, and the younger coefficient of a polynomial, as well as its -monomial and -coefficient, among others. For example, the monomial and its degree are, respectively, the younger term and the younger degree of the polynomial , and is the younger coefficient of this polynomial. The monomial and its coefficient are, respectively, the -monomial and the -coefficient of the polynomial .
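As an illustration of this terminology (with a polynomial of our own choosing, not one taken from the paper):

```latex
% For f(x) = 3x^2 + 5x^4 + x^6:
%   younger term:        3x^2   (the nonzero term of least degree)
%   younger degree:      2
%   younger coefficient: 3
% The x^4-monomial of f is 5x^4, and its x^4-coefficient is 5.
```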

Let all roots of the characteristic polynomial (the characteristic roots) of the matrix be equal to each other; that is, let the matrix have only one characteristic root (disregarding multiplicity). Without loss of generality, we assume that this only characteristic root is zero and that the first invariant factor of the matrix is equal to one. Under these assumptions, it is proved in [2] that, by means of semiscalarly equivalent transformations, the matrix can be reduced to a matrix of the form (2) satisfying the following conditions:
(1) , , , .
(2) , , if .
(3) and the -monomial in is absent if .
(4) The younger coefficients in and are equal to one.
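The displayed form (2) was lost in extraction. A hedged sketch of what such a reduced form plausibly looks like for a 3-by-3 matrix with the single characteristic root 0 and first invariant factor 1 (the symbols here are our own illustration, consistent with the lower triangular form and the diagonal of invariant factors described above):

```latex
A(x) \sim
\begin{pmatrix}
1 & 0 & 0\\
a_{21}(x) & x^{k_2} & 0\\
a_{31}(x) & a_{32}(x) & x^{k_3}
\end{pmatrix},
\qquad 0 \le k_2 \le k_3 ,
```

with the subdiagonal polynomials subject to the degree and coefficient conditions (1)–(4).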

A matrix of the form (2) with conditions (1)–(4) is called in [2] a reduced matrix. In what follows, we consider the situation where the last two invariant factors of the matrix do not coincide, that is, . The case was considered in [4]. The notation means that the matrices and are semiscalarly equivalent. It should be noted that the classification problem with respect to semiscalar equivalence for matrices of the second order is solved in [5]; thus, this article treats situations different from those of [4, 5]. In [2], it is proved that, in the case , the left transformation matrix in the transition from to the reduced matrix (3) can be chosen in lower triangular form. We shall apply semiscalarly equivalent transformations to the matrix in order to obtain a reduced matrix of the form (3) with prescribed properties. Let us show that, for a given reduced matrix of the form (2) and a given matrix (4), we can find the matrix and the right transformation matrix such that . Using the method of undetermined coefficients for the given elements and of the matrices and , respectively, from the congruence (5) we find , . We denote these elements by :

Here . We form the matrix and consider the congruence (7) with the unknown . Since the constant term of the matrix polynomial is the identity matrix, the method of undetermined coefficients can be used to solve this congruence and to find , . One can check that . In addition to the above, we also denote

From the above and from the congruences (5) and (7), we construct and a matrix of the form (3), respectively. One can verify the equality . This means that is invertible, and its inverse, together with the matrix , reduces to . If the matrix (4) in the transition from to has one of the following forms, then we say that transformations of type I, type II, or type III, respectively, are applied to the matrix . We shall use the following notation for matrices of the form (2) and of the form (3):
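The method of undetermined coefficients invoked above amounts to solving a polynomial congruence modulo a power of x degree by degree. A minimal sketch in Python (our own illustration under the invertible-constant-term assumption stated in the text, not the paper's actual computation) for finding u with u·a ≡ b (mod x^n):

```python
def solve_congruence(b, a, n):
    """Coefficients u[0..n-1] with u(x)*a(x) ≡ b(x) (mod x^n).

    a, b are coefficient lists (index = degree); requires a[0] != 0,
    mirroring the invertible-constant-term assumption in the text.
    Each coefficient of u is found by comparing like powers of x.
    """
    u = []
    for k in range(n):
        s = b[k] if k < len(b) else 0
        # subtract contributions of the already-determined coefficients of u
        for j in range(1, k + 1):
            s -= (a[j] if j < len(a) else 0) * u[k - j]
        u.append(s / a[0])
    return u

# Example: with a(x) = 1 + 2x and u(x) = 3 + x, u*a = 3 + 7x + 2x^2,
# so recovering u modulo x^2 from b = [3, 7, 2] gives [3.0, 1.0].
print(solve_congruence([3, 7, 2], [1, 2], 2))
```

Because the constant term a[0] is invertible, each new coefficient of u is determined uniquely, which is why the congruences in the text are solvable step by step.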

3. The Main Results

Theorem 1. Suppose that, in the reduced matrix of the form (2), we have , , , , , , , . Then, the matrix is semiscalarly equivalent to a reduced matrix of the form (3) whose elements satisfy one of the following conditions:
(1) The -monomial is absent in if and .
(2) The - and -monomials are absent in if and .
(3) If and , then the -monomial is absent in the first of the polynomials , that satisfies the condition , and the -monomial is absent in the first of these polynomials that satisfies the condition .

The matrix is uniquely defined.

Proof. Existence. (1) If , then is the desired matrix. Otherwise, we denote by and , respectively, the younger coefficient and the -coefficient of the polynomial and apply to a transformation of type III. In the left transformation matrix (see (9)), we put . The elements , , of the matrix obtained in this way satisfy the congruences: First, we obtain from (11) and (12) that the younger terms in coincide with the younger terms in , respectively. Further, note that the younger terms in and coincide, the younger degrees of the last two summands on the left-hand side of (13) exceed , and the inequality holds. Therefore, comparing the -coefficients on both sides of (13), we obtain zero for the corresponding -coefficient in . Thus, the desired matrix exists. (2) If the matrix satisfies , then everything is proved: this matrix is the desired one. Otherwise, we apply to it the transformation described in item 1. To show the absence of the -monomial in , one must take into account that (see (11)). Therefore, in (13), we have . The remaining considerations are the same as in item 1. In order not to introduce new notation, we further assume that there is no -monomial in the element of the original matrix . If , then everything is proved: the matrix is the desired one. Otherwise, we apply to a transformation of type II. In the left transformation matrix (see (9)), we put , where and are, respectively, the younger coefficient and the -coefficient of the polynomial . The elements , , of the reduced matrix obtained in this way satisfy the congruences: It can be seen from (14) and (15) that the younger terms in coincide with the younger terms in , respectively (their coefficients are equal to one). Let us write (16) as follows: From (14), we have . Because and , then, comparing the -coefficients on both sides of (17), we find that contains no -monomial.
And because , there is no -monomial in , just as in . (3) Suppose that the conditions and are satisfied in the matrix . (1) If and , then all is proved: the matrix is the desired one. (2) Let and . Since , then (as well as ) is invariant (see Proposition 6 of [2]). We apply to the transformation specified in item 1. As a result, we obtain a matrix of the form (3). Its elements satisfy the congruences (11)–(13). Since , from (11) we have . It can be seen from (12) that . Now we can represent (13) as follows: From the last congruence, we have . Therefore, is a reduced matrix. Taking into account , and comparing the -coefficients on both sides of (19), we conclude that there is no -monomial in . If , then everything is proved: the matrix is the desired one. Otherwise, we take another step. In order not to introduce new notation, we assume that the element of the matrix does not contain the -monomial. We apply to a transformation of type I. In the left transformation matrix (see (9)), we put , where and are, respectively, the younger coefficient and the -coefficient of the polynomial . The elements , of the resulting matrix satisfy the congruences: From (20), we have , and from (21), it follows that . Then, from (22), we get where we can take . Comparing the coefficients on both sides of (23), we conclude that there is no monomial of degree in the polynomial . At the same time, there is no -monomial in , just as in . (3) Now suppose that, in the matrix , we have and . We apply to a transformation of type I. In the left transformation matrix (see (9)), we put , where and are, respectively, the younger coefficient of the polynomial and the -coefficient of the polynomial . As a result, we obtain a matrix of the form (3) whose elements , satisfy the congruences (20)–(22). From (20), we see that and that the -monomial in is absent. From (21), , and from (22), we have . It also follows from (20)–(22) that the younger coefficients in and coincide. That is, is a reduced matrix. If , then everything is already proved.
Then, is the desired matrix. Otherwise, in order not to introduce new notation, we consider the -coefficient in the element of the matrix to be zero. Denote by and , respectively, the younger coefficient of the polynomial and the -coefficient of the polynomial . We perform over the matrix a transformation of type III. For this, we put in the left transformation matrix (see (9)). The elements of the resulting matrix satisfy the congruences (11)–(13). From (11), we obtain that , that the younger coefficient of the polynomial is , and that its - and -coefficients are zero. From (12), it is seen that the younger coefficient in , as in , is equal to . Therefore, the matrix has the required properties. (4) Let and . We may assume that the -coefficient in of the matrix is zero. If this is not the case, then we apply to the transformation of type I described in item 3. If , then we apply to the transformation of type III described in item 3. Then, in the resulting matrix, the -coefficient will be zero, and the -coefficient of the polynomial in position will remain zero. If , then from the matrix , by means of the transformation of type III referred to in item 1, we pass to a reduced matrix in which the -monomial of the polynomial is absent. Then the -coefficient in will also remain zero. This proves the first part of the theorem (existence).

3.1. Uniqueness of the Matrix in Theorem 1

(1) Suppose that, for the reduced matrices , of the forms (2) and (3), condition 1 of the theorem holds and that, in addition, we have . Then the left transformation matrix in the equality can be chosen in the form (9) (see Corollary 1 and Remark 1 of [2]), and the elements and , , of these matrices satisfy the congruence (24). We have . If , then follows from (24). Otherwise, in (24) we have , since the -monomials in and are absent. In either case, . (2) Suppose that the reduced matrices , of the forms (2) and (3) satisfy condition 2 of the theorem and . Then, in the left transformation matrix (4) of the transition from to , we have (see Corollary 1 and Remark 1 of [2]), and the elements and of the matrices and satisfy the congruences: From (25), we can write (26). From (25), we also have . It is easy to see that (27) holds. If , then from (26) we have ; hence . Since , from (26) follows , whence . From (25), taking into account , we get , from which . So we have . If , then from (25) we get . If , then, taking into account and , from (25) and (26) we have , , and . Therefore, and coincide. If , then from (26) we obtain . Hence, in this case, the matrices also coincide. (3) Suppose that the reduced matrices of the forms (2) and (3) satisfy condition 3 of the theorem and . Then, for the elements of these matrices, we can write the congruences:

If , then , and from (28), we get . Then, (28) will take the form

Obviously, . If , then (29) implies since .

Then, from (28), we get since

If , then (29) implies . If, moreover, , then from (29) it follows that , and all is proved. If , then, all the same, from (28) and (29) we have and , respectively.

If , then from (28) we get . If, in addition, , then from (28) it also follows that , and all is proved. If , then , and again from (28) we pass to (29). It follows that if . And if , then immediately from (28) and (29) we have and , respectively. The theorem is proved.

Suppose that, in the reduced matrices of the forms (2) and (3), we have . Let us keep the notation given in the theorem:

We define polynomials:

From the coefficients of each of the polynomials , , and , we form, respectively, columns , , and of height . In the first place in these columns we put the -coefficients, and below, in order of increasing degree, we place the rest of their coefficients, up to degree inclusive. We denote by , , and the columns of height constructed from the coefficients of the polynomials , , and , respectively. In the first place in each of these columns we put the -coefficients; below, we place the rest of their coefficients (including zeros), up to degree . Similarly, from the coefficients of the polynomials , , , and , we form columns , , , and of height . Here, too, we put the -coefficients in the first place and then, in order of increasing degree, all the other coefficients; in the last places there will be the -coefficients. For , from the columns so formed, we construct matrices of the following form:
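The construction of such a coefficient column can be sketched as follows (illustrative Python with our own naming; the paper's actual index ranges and symbols were lost in extraction):

```python
def coeff_column(p, lo, hi):
    """Column of coefficients of the polynomial p (list, index = degree)
    for the degrees lo..hi inclusive, padding absent degrees with zero.

    The first entry is the lo-coefficient and the entries follow in
    order of increasing degree, as in the construction in the text.
    """
    return [p[d] if d < len(p) else 0 for d in range(lo, hi + 1)]

# p(x) = 1 + 3x^2: the column for degrees 1..3 is [0, 3, 0].
print(coeff_column([1, 0, 3], 1, 3))
```

Stacking such columns side by side, row by row, yields matrices in which each row consists of coefficients of monomials of the same degree, as noted below.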

In complete analogy for , we construct matrices of the following form:

Obviously, in these matrices, each row consists of monomial coefficients of the same degrees.

Theorem 2. Suppose that, in the reduced matrix of the form (2), we have , , and , . Then , where, in the reduced matrix of the form (3), all elements are nonzero, the polynomial does not contain the -monomial if , and the polynomial does not contain the -monomial if .

In addition, one of the following conditions holds:
(1) The -monomial is absent in if and .
(2) The - and -monomials are absent in if and .
(3) The - and -monomials are absent in if and .
(4) In the first column of the matrix (35), the coefficients of the polynomials that correspond to the maximal system of first linearly independent rows of the submatrix are zero, if and .

The matrix is uniquely defined.

Proof. Existence. Let .
We apply to a transformation of type II with the left transformation matrix of the form (9). At the same time, we put , where is the younger coefficient and is the -coefficient in . The elements , , of the reduced matrix thus obtained satisfy the congruences (14)–(16). We write (16) in the form where . Comparing the -coefficients on both sides of the last congruence, we see that does not contain the -monomial. We further assume that the element of the matrix does not contain the -monomial (if ).
Let . Denote by and , respectively, the younger coefficient and the -coefficient of the polynomial . We apply to a transformation of type I with the left transformation matrix of the form (9), putting .
The elements , , of the reduced matrix thus obtained satisfy the congruences (20)–(22) (with the specified here). From (21) and (22), we obtain: Comparing the -coefficients on both sides of the last congruence, we conclude that does not contain the -monomial. (1) Suppose that the element of the matrix contains no monomial of degree and the polynomial contains no monomial of degree . Denote by and , respectively, the younger coefficient in and the -coefficient in . With the help of a transformation of type III, we pass from to the reduced matrix . In the left transformation matrix (see (9)), we put . The elements of the resulting matrix satisfy the congruences (11)–(13) (with the specified here). From (11), we find that the -monomial is missing in the element . We write (13) in the form where . Since , then, as the last congruence shows, contains no -monomial, just as . Also, contains no -monomial, just as ; this is evident from the congruence which is obtained from (12) and (13) since . This proves the existence of a matrix with condition (1) of the theorem. (2) Suppose that the conditions , , are satisfied in the matrix and the -monomial is absent in the polynomial . We denote by and , respectively, the younger coefficient in and the -coefficient in . We perform over the matrix a transformation of type II, putting in the left transformation matrix (see (9)). We obtain a reduced matrix whose elements satisfy congruences of the form (14)–(16) (with the specified here). Since the younger coefficients in and coincide, from (15) we find that the -monomial is absent in . From (15) and (16), we have . It follows that contains no monomial of degree , just as . Next, we secure the absence of the -monomial in the element of the matrix . Denote by and , respectively, the younger coefficient in and the -coefficient in . Over the matrix , we carry out a transformation of type III, putting in the left transformation matrix (see (9)).
The elements of the reduced matrix thus obtained satisfy congruences of the form (11)–(13) (with the specified here). It can be seen from (12) that the -monomial is absent in . Also, the -coefficient in remains zero, since . As can be seen from (40), the -monomial is absent in , just as in , since . The existence of the required matrix with condition (2) is proved. (3) Let for , and let the monomial of degree be absent in . In the first step, we apply to the matrix a transformation of type I with the left transformation matrix (see (9)), in which , where and are, respectively, the younger coefficient in and the -coefficient in . As a result, we obtain a reduced matrix of the form (3) whose elements satisfy conditions of the form (20)–(22) (with the selected here). From (20), it is seen that the -monomial is absent in . From (20) and (22), we can write where . The last congruence shows that the -monomial is absent in , just as in . Suppose now that already contains no -monomial. Denote by and , respectively, the younger coefficient in and the -coefficient in , and let . In the second step, with the help of a transformation of type III with the specified in the left transformation matrix (see (9)), we pass from to some reduced matrix of the form (3). For the elements of the matrix , conditions (11)–(13) (with the specified here) are satisfied. From (11), it follows that there is no -monomial in . In addition, does not contain the -monomial. On the basis of (11) and (13), we can write a congruence of the form (41) in which . It shows that the zero coefficient of the -monomial in is preserved in comparison with . This proves the existence, for the matrix , of a semiscalarly equivalent reduced matrix with condition 3. (4) Suppose that the conditions , , are satisfied in the reduced matrix . If in (33), then the desired matrix is and everything is already proved. Otherwise, in the first step, we fix in the matrix the first nonzero row and the corresponding row in .
Let consist of -coefficients and be the -th row in . We find an arbitrary solution of the equation. We apply to a semiscalarly equivalent transformation with the left transformation matrix of the form (4). At the same time, in , we put , , and . The elements , , of the obtained reduced matrix of the form (3) satisfy the congruences: Depending on which of the matrices , , or (see (34)) the row belongs to, we consider the corresponding congruence in (43). Comparing the -coefficients on both sides of that congruence, we conclude that the -th element of the first column of the matrix (35) is zero. In addition, all rows of that precede the -th coincide with the corresponding rows of the matrix .
If , then everything is already proved: the matrix is the desired one. Otherwise, we assume that the -th element of the first column of the matrix is zero. In the second step, we fix in the first row linearly independent of , as well as the corresponding row in and the degree of the monomials whose coefficients form these rows. Also, let be the -th row in .
We find some solution of the equation. We apply to a semiscalarly equivalent transformation with the left transformation matrix of the form (4), putting , , and . We obtain a reduced matrix of the form (3).
Again, as in the previous step, we consider one of the congruences (43), depending on which of the matrices , , or (see (34)) contains the row . Comparing the coefficients of the -monomials on both sides of this congruence, we conclude that the -th element of the first column of the matrix (35) is equal to zero. Also, from this congruence and the previous ones, we get that every row preceding the -th in coincides with the corresponding row in . If , then everything is already proved: the matrix is the desired one. Otherwise, in order not to introduce new notation, we assume that the first column of the matrix has zero -th and -th elements. In the matrix , we fix the -th row, which is the first linearly independent of , (). Let this be the row . To it corresponds . Also, let be the exponent corresponding to these rows. We find the (unique) solution of the equation. We apply to a semiscalarly equivalent transformation with the left transformation matrix of the form (4), putting , , and . We obtain the matrix . The above considerations show that is the desired matrix.

3.2. Uniqueness of the Matrix in Theorem 2

Suppose that, for the reduced matrices , of the forms (2) and (3), we have . Suppose also that the elements , of these matrices do not contain -monomials if , and that there are no -monomials in the polynomials , , if . Let us first show that the matrix of the transition from to can be chosen in the form (46) if , or in the form (47) if .

Indeed, the elements of the matrices , satisfy the congruence

Comparing the coefficients of the monomials of degree on both sides of this congruence, we get . Also, from the equivalence , it is easy to obtain the congruence

Comparing the coefficients of the monomials of degree on both sides of the last congruence, we arrive at : (1) In the case , , the transition matrix from to has the forms (46) and (47) simultaneously. Therefore, we have . The elements , in , satisfy (11). From this, we get . For this reason, the matrices and coincide. (2) Since , the matrix of the transition from to has the form (47), and the elements , in , satisfy (25). In , , there are no - and -monomials, so from (25) we get . So, . (3) If , then the matrix of the transition from to has the form (46). The elements , in , satisfy the congruence (28). From it, we have , since there are no - and -monomials in , just as in . Therefore, in this case, , coincide. (4) Suppose that the matrix satisfies condition 4, that is, in , the elements of the first column corresponding to the maximal system of first linearly independent rows of the submatrix are zero. Suppose that the matrix also has this property and that, in addition, the condition holds. Then the elements , , , of these matrices satisfy the congruences (43). If in we have , then

Therefore, as can be seen from (43), , .

If in we have , and is the number of the first nonzero row in , then the first elements in the first column of the matrix coincide with the corresponding elements in the matrix ; moreover, the -th elements are zero. Therefore, the first rows in coincide with the corresponding rows of the matrix . In addition, from the congruences (43), we have: If the row following in (or in ) is linearly dependent on , then

Then, from (43), we obtain that the first elements in the first column of the matrix coincide with the corresponding elements in . If and are linearly independent, then (51) is still satisfied, since in this case the -th elements in the first columns of the matrices and are zero. Then the -th row in coincides with the corresponding row of the matrix . We treat this row in the same way as row above. Let be the first row linearly independent of , and let be its number in . Then this row coincides with the -th row in , and the first elements of the first column in coincide with the corresponding elements in , the -th elements being zero. Then, from (43), we have . If is the -th row in , then the corresponding -th row in is also . If is linearly dependent on the system , , then and the -th elements in the first columns of the matrices , coincide. Otherwise, these elements also coincide because they are zero. Continuing in this way, we show that the first columns in , coincide, or at some step we get . In each case, . The theorem is proved.

Example 1. The matrices , , and are semiscalarly equivalent. Here, is a reduced matrix, and is a canonical matrix for .

4. Conclusion

The matrices whose existence is established in Theorems 1 and 2 can be considered canonical in the class of semiscalarly equivalent matrices. The method of their construction follows from the proofs of the first parts of these theorems. This completes the study of the semiscalar equivalence of third-order polynomial matrices with one characteristic root, begun in the author's previous works.

The results obtained in this article, as well as the results of the works cited here, are applicable to the study of the simultaneous similarity of sets of numerical matrices; in this context, the works [6–9] should be noted. These results are also useful for solving Sylvester-type matrix equations over polynomial rings, which often arise in applied problems.

Data Availability

Data from previous studies were used to support this study. They are cited at relevant places within the text as references.

Conflicts of Interest

The author declares that there are no conflicts of interest.