Abstract

A canonical form is constructed for a reduced matrix of order 3 with one characteristic root and with some zero subdiagonal elements. This solves the problem of classifying a selected set of polynomial matrices with respect to semiscalar equivalence.

1. Introduction

Let a matrix have a unit first invariant factor and only one characteristic root (multiplicity not taken into account). Without loss of generality, we assume that this unique characteristic root is zero. Consider the transformation , where , . In accordance with [1] (see also [2]), the matrices , are called semiscalarly equivalent (abbreviation: ss.e.; notation: ). In [3], the author proved that in the class , where , there exists a matrix
which has the following properties:
(i) , , , (see Proposition 1 [3]);
(ii) , if (see Propositions 4 and 5 [3]);
(iii) and the monomial of degree is absent in , if (see Propositions 6 and 7 [3]).
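
For the reader's convenience, the transformation just mentioned can be written out in display form. The following is a sketch based on the description of the transforming matrices in the Conclusions (a left nonsingular numerical matrix and a right invertible polynomial matrix); the symbols A(x), B(x), S, Q(x) and the base field are our illustrative notation, not the paper's:

```latex
% Semiscalar equivalence, in illustrative notation: B(x) is obtained from
% A(x) by a nonsingular numerical left factor and an invertible polynomial
% right factor (det Q(x) a nonzero constant).
\[
  B(x) = S\,A(x)\,Q(x), \qquad
  \det S \neq 0, \qquad
  \det Q(x) \in \mathbb{C}\setminus\{0\}.
\]
```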

Here denotes the junior degree of a polynomial. The junior degree of a polynomial , , is the least degree of a monomial (with nonzero coefficient) of this polynomial; notation . The monomial of the junior degree and its coefficient are called the junior term and the junior coefficient, respectively. Denote by the symbol the junior degree of the polynomial . If both elements , of the matrix are nonzero, then we may take their junior coefficients to be equal to one. In the opposite case, we may take the junior coefficients of the nonzero subdiagonal elements of the matrix to be one. Such a matrix is called reduced in [3]. The purpose of this paper is to construct the canonical form of the matrix in the class of ss.e. matrices; for this purpose we base our work on the reduced matrix of the form (1). This article is a continuation of the work [3]. The case is trivial: then the Smith form is canonical for the matrix in the class . If , then . This case is considered in the author's paper [4]. For this reason, in the sequel we take . In this paper we consider the case when some of the elements , , of the matrix are equal to zero and at least one of them is different from zero. Recall that the vanishing of particular subdiagonal elements of the matrix is an invariant of ss.e. (see Proposition 2 [3]). The case in which all subdiagonal elements of the matrix are nonzero will be the subject of a separate study. It should be noted that the problem of ss.e. for matrices of the second order is solved in [5]. Some aspects of this problem for matrices of arbitrary order are considered in [6–9]. We also note that the works [10, 11] are close to [1, 2].
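
As a small illustration of this notion, the junior degree and junior coefficient can be read off from a list of coefficients. The helper below is a sketch; the function name and the representation of polynomials are our own choices:

```python
def junior_term(coeffs):
    """Return (junior degree, junior coefficient) of a polynomial given as
    [c0, c1, c2, ...] for c0 + c1*x + c2*x**2 + ...; the zero polynomial
    has no junior term."""
    for degree, c in enumerate(coeffs):
        if c != 0:
            return degree, c
    raise ValueError("the zero polynomial has no junior term")

# Example: x**2 + 5*x**3 has junior degree 2 and junior coefficient 1.
assert junior_term([0, 0, 1, 5]) == (2, 1)
```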

2. The Canonical Form of a Reduced Matrix with Two Zero Subdiagonal Elements

In the sequel, to abbreviate the expressions “monomial of degree ” and “coefficient of the monomial of degree ” of some polynomial, we will write “-monomial” and “-coefficient,” respectively. Further, by the symbol we often denote, besides the zero element of the field , a rectangular zero matrix, when it is clear from the context which is meant. By we denote a zero column of the required height.

Theorem 1. Let the conditions , for some index from the set , and , for the rest, be fulfilled in the reduced matrix of the form (1). Then , where in the reduced matrix
the element does not contain a -monomial, , . The matrix is uniquely determined.

Proof. Existence. Let the conditions of the theorem hold true. If
then the -monomial is absent in , since
Suppose that
Denote the -coefficient of the polynomial by . We will apply to the matrix a semiscalarly equivalent transformation (ss.e.t.) with a left transforming matrix of one of the forms:
if ;
if ; or
if . In doing so, every time instead of , , or in the transforming matrix we put , that is,
In [3], for the given matrices and of the forms (1) and (7), (8), or (9), a method is given for finding the matrix of the form (2) such that ; in doing so, the right transforming matrix can also be found. Depending on the form (7), (8), or (9) of the matrix , we shall say that ss.e.t.-I, ss.e.t.-II, or ss.e.t.-III, respectively, is applied to the matrix .
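
The printed forms (7), (8), and (9) are not reproduced here; as the proof indicates, each contains a single parameter that is specialized to a junior coefficient. Purely as an illustration of how a one-parameter numerical left factor acts (the position of the parameter below is our assumption, not a reproduction of (7)–(9)):

```latex
% Illustration only: a one-parameter lower unitriangular left factor adds a
% multiple of one row to another, the row operation by which a single
% junior coefficient can be cancelled.
\[
  S(\alpha) =
  \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ \alpha & 0 & 1 \end{pmatrix},
  \qquad
  S(\alpha)\,A(x):\ \text{row}_3 \mapsto \text{row}_3 + \alpha\,\text{row}_1.
\]
```
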
If and ss.e.t.-I is applied to the matrix , then the elements and of the matrices and satisfy the following congruences:
where . From (12) and (13) we have that . From (11) we obtain that and that the -monomial is absent in .
If , then the matrix is obtained by means of ss.e.t.-II. In this case the elements of and fulfill the following congruences:
where . From (14) and (16) it follows at once that , and . From (15) we have that and, moreover, that does not contain the -monomial.
Let , and let us pass from to with the help of ss.e.t.-III. From the ss.e. of the matrices and it follows that their elements satisfy the congruences:
where . From (17) and (18) we obtain , . From (19) we have that and that the -monomial is absent in . Evidently, in each case under consideration, the obtained matrix is reduced to . The first part of the theorem is proved.
Observe that in each case the equality , , follows from Corollaries 2 and 3 [3].
Uniqueness. It suffices to carry out the proof for one of the cases , , or ; the proof in the two other cases is analogous. Assume that in the reduced matrices and of the forms (1) and (2) we have , , and that the -monomial is absent in . If , then from we obtain the congruence (19), where . Comparing the -coefficients on both sides of this congruence, we obtain . From this it follows that , because . The theorem is proved completely.
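
The coefficient-comparison step used here recurs throughout the uniqueness arguments below. Schematically, with symbols of our own choosing, it runs as follows:

```latex
% If b and \tilde{b} satisfy the congruence below with d < m, and neither
% contains an x^{d}-monomial, then comparing the x^{d}-coefficients on both
% sides forces \alpha = 0, whence the polynomials agree modulo x^{m}.
\[
  b(x) \equiv \tilde b(x) + \alpha x^{d} \pmod{x^{m}}
  \;\Longrightarrow\; \alpha = 0
  \;\Longrightarrow\; b(x) \equiv \tilde b(x) \pmod{x^{m}}.
\]
```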

3. The Canonical Form of a Reduced Matrix with One Zero Subdiagonal Element

Let us now consider ss.e.t. with a left transforming matrix of one of the following forms:
Such transformations are called ss.e.t.-I-II, ss.e.t.-I-III, and ss.e.t.-II-III, respectively.
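
By analogy with the one-parameter factors of Section 2, each of the forms (20)–(22) carries two parameters, which the proofs below specialize to a pair of values , . Purely as an illustration (the positions of the parameters are our assumption, not a reproduction of (20)–(22)):

```latex
% Illustration only: a two-parameter numerical left factor combining two
% elementary row operations in a single transformation.
\[
  S(\alpha,\beta) =
  \begin{pmatrix} 1 & 0 & 0 \\ \alpha & 1 & 0 \\ \beta & 0 & 1 \end{pmatrix}.
\]
```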

Let , , , in the reduced matrices and of the forms (1) and (2). We define the polynomials and as follows:
Next, we form the columns , , and from the coefficients of the polynomials , , and , respectively. In doing so, we place in the first positions of these columns the coefficients of the monomials of degree . Below them we arrange every succeeding coefficient in order of increasing degree of the monomials, up to the -coefficient; zero coefficients are not omitted. Take into consideration the polynomials
With the coefficients of the polynomials , , , and we form the columns , , and , respectively, of height . Here we place in the first positions the coefficients of the monomials of degree , and further we put all coefficients of the monomials of higher degrees, up to degree . Let us build for the matrices:
Evidently, each column in these matrices is composed of the coefficients of monomials of the same degrees. By complete analogy, for we build the matrices:
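
The bookkeeping behind these columns can be made explicit. The sketch below (with hypothetical degree bounds lo and hi) extracts such a coefficient column, keeping zero coefficients and ordering by increasing degree:

```python
def coeff_column(coeffs, lo, hi):
    """Column of the coefficients of the monomials of degrees lo..hi
    (inclusive) of a polynomial given as coeffs[k] = coefficient of x**k;
    degrees beyond the list count as 0, and zero coefficients are kept."""
    return [coeffs[k] if k < len(coeffs) else 0 for k in range(lo, hi + 1)]

# Placing several such columns side by side aligns, in each row, the
# coefficients of monomials of one and the same degree.
p1 = [0, 0, 1, 4, 0, 7]   # x**2 + 4*x**3 + 7*x**5
p2 = [0, 1, 0, 2]         # x + 2*x**3
rows = list(zip(coeff_column(p1, 2, 5), coeff_column(p2, 2, 5)))
```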

Theorem 2. Let be the reduced matrix of the form (1), in which , , , and , . Then , where in the reduced matrix of the form (2) and the elements satisfy one of the following conditions:
(1) the -monomial is absent in , if , ;
(2) the - and -monomials are absent in , if , ;
(3) in and , where , respectively, the - and -monomials are absent, if , ;
(4) in the first column of the matrix (26), formed of the coefficients of the polynomials , the elements which correspond to the maximal system of the first linearly independent rows of the submatrix are zero, if , .
The matrix is uniquely determined.

Proof. Existence. (1) Let , , and denote by the -coefficient of the polynomial in the matrix . Let us apply ss.e.t.-III to the matrix . In so doing, we set in the left transforming matrix (see (9)). In the obtained reduced matrix the element satisfies the congruence (19), where . From (19) we deduce that for condition (1) of the theorem is fulfilled.
(2) Let , . We may assume that the element of the matrix does not contain a -monomial; in the opposite case we apply ss.e.t.-III to the matrix , as has been shown above in part (1). Let us denote the -coefficient of the polynomial by and apply ss.e.t.-II to the matrix . Here we set in the left transforming matrix (see (8)). For the elements , of the obtained reduced matrix the congruence (16) holds true, where . For this reason it may be written in the form
From the above it follows that the -coefficient in is zero. Since , then . Therefore, from the last congruence we see that the -monomial is absent in , as in .
(3) Let , . We may assume that the -monomial is absent in ; we can always achieve this, as has been shown above in part (1). Let us denote by the -coefficient in . We subject to the action of ss.e.t.-I. In this case we set in the left transforming matrix (see (7)). As a result we obtain the reduced matrix of the form (2), in which , and the congruence
holds true. The last congruence implies that the -monomial is absent in . Then is the desired matrix.
(4) Let , . If in the matrix (see (25)), then everything is proved; that is, is the desired matrix. If the matrix is nonzero but its block is zero, then the element in has the required property. In this case the second column of the matrix is zero. Let the first nonzero row of the matrix and the corresponding row , , of the matrix be formed of -coefficients. Let us find some solution of the equation and apply ss.e.t.-I-II to . Here we set , in the left transforming matrix of the form (20). The divisor of the element , , of the obtained reduced matrix satisfies the congruence
From the latter it follows that the -monomial is absent in and .
If , then everything is proved; that is, is the desired matrix. In the opposite case we may assume that the polynomial already does not contain a -monomial. Let the row of the matrix be the first row linearly independent of , and let be the corresponding row of the matrix . Let these rows be formed of the -coefficients. We find the (unique) solution of the equation
By ss.e.t.-I-II we pass from to the reduced matrix . Here, in the matrix (see (20)), we set , . The divisor of the element in , as is seen from the congruence (29), contains neither the - nor the -monomial.
If the matrix is nonzero, then the row of -coefficients is its first nonzero row. At the first step we apply ss.e.t.-II to . In addition, in the left transforming matrix of the form (8), in place of we put the coefficient of the monomial of degree in the polynomial (see part (2)). In the obtained reduced matrix the polynomial in the position does not contain the -monomial. If , then we consider the row of the form , formed of the -coefficients, in the matrix analogous to . This row is the first row linearly independent of . We do the second step: this is ss.e.t.-III of the matrix obtained after the first step. For this purpose, in the left transforming matrix of the form (9), instead of we put the -coefficient of the polynomial in the position of the matrix obtained after the first step (see part (1)). In the matrix obtained after the second step, the polynomial in the position contains neither the - nor the -monomial. In order not to introduce new notations, we will assume that the element of the matrix already possesses this property. If , then in the submatrix of the matrix the row of the -coefficients is the first row linearly independent of the collection , . Then we make the third step: it is ss.e.t.-I of the matrix . For this purpose, in the left transforming matrix of the form (7), instead of we put the -coefficient in (see part (3)). After this step we obtain a matrix in which the element in the position is unchanged and the polynomial in the position does not contain the -monomial. We obtain the required matrix.
If after the first step it turns out that , that is, but , then we immediately take the third step. Next, if , then the row of the -coefficients is the first row linearly independent of the collection , in the submatrix analogous to . Then we take the fourth step: it is ss.e.t.-III with a left transforming matrix (see (9)), in which instead of we put the -coefficient of the polynomial in the position of the matrix obtained after the third step. After that, the element in the position of the resulting matrix does not change, and the desired element stands in the position . The obtained matrix is the required one.
Uniqueness. (1) Let the matrices , of the forms (1), (2) satisfy condition (1) and . Then the congruence (19), where , and the congruence
are fulfilled (see Corollary 1 and Remark [3]). If , then , and from (19) we have , while from (31) we obtain . If , then, comparing the -coefficients on both sides of (19), we get . Therefore, in each case and coincide.
(2) Let condition (2) be fulfilled for the matrices and of the forms (1) and (2). Then, on the basis of Corollary 1 and Remark [3], for we can write the congruences:
It should be noted that . If , then . Then from (32) and (33) we have and , respectively. In the case of , comparing the -coefficients on both sides of (33), we arrive at . If , then from (32) and (33), as before, we have and , respectively. If , then, comparing the -coefficients on both sides of (33), we have . Thus, in this case too.
(3) For the reduced matrices , of the forms (1), (2) such that , in the case of condition (3) we have the congruence (19), where , and the congruence
First let us note that . If , then . Then from (34) and (19) we have and , respectively. If , then, comparing the -coefficients in (19), we have and . If, further, , then follows immediately from (34). In the case we compare the -coefficients on both sides of (34) and arrive at . So .
(4) Suppose that for the reduced matrices , of the forms (1), (2) which satisfy condition (4) we have . For their elements, taking into account Remark [3], we can write the congruence (33) and the congruence
If and , then and . Then the matrices , are zero and, as can be seen from (33) and (35), we have and . Let but . Then, in , the submatrices , and the second columns are zero, and from (33), (35) we have , , respectively. Thus, in the columns , of the matrices , the corresponding first elements coincide, and the first rows in the matrices , are zero. The first nonzero rows in these matrices are their -th rows; they coincide. We will denote them by . The corresponding elements in the columns , are zero (see (8)). Therefore, we have . This means that the following -th rows in , coincide. We will denote them by . From (35) it is clear that . If , are linearly independent, then the corresponding elements in , are zero. On this basis, from (35) we have . Therefore, , and everything is proved. If , are linearly dependent, then , and this, as can be seen from (35), means that the -th elements in the columns , coincide. Therefore, the following -th rows in the matrices , coincide. Let us denote them by and consider the cases where the rows , are linearly independent and where they are linearly dependent. In the first case, as above, we get , and everything is proved. In the second case we obtain that the -th elements in , coincide; therefore, the -th rows in , coincide. Continuing in the same way, at some step we will either get , which on the basis of (35) means , that is, , or we will come to . In either case we will have .
Let . Then in the columns , , as is seen from (33), the first elements coincide, and in the matrices , the first rows are zero. The first nonzero row in each of the matrices , is their -th row, which has the form . Since the -th elements in the columns , are zero, then, as is clear from (33), . This means that the corresponding first , , elements in the columns , coincide. If , then the first row linearly independent of the row in each of the matrices , is the same -th row . Since the -th elements in , are zero, from (33) we have . This means that . As can be seen from (35), in the columns , the corresponding first elements coincide. Then in the case we have ; that is, everything has already been proved. If , then in each of the matrices , the first rows are linearly dependent on the collection , , and their -th row is the first one linearly independent of , in these matrices. Since in the columns , the -th elements are zero, from (35) we obtain . Then , and everything is proved.
If , then from (33) we have . Recall that in (33), (35) . Also , if . Therefore, if , then everything has been proved. Otherwise, if , then in the columns , the corresponding first elements coincide, and in each of the matrices , the -th row is the first one linearly independent of the collection , . Since the -th elements in , are zero, from (35) we get . That is, in every case . The theorem is proved.

Let , in the reduced matrices of the forms (1), (2). Define the polynomials and as follows:
We construct the columns , , and of height from the coefficients of the polynomials , , and , respectively. In these columns we first put the coefficients of the monomials of degree . Then, in order of increasing degrees of the monomials, we place the rest of the coefficients, including zero ones, up to the -coefficients. We create the columns , , and of height from the coefficients of the polynomials , , and from the coefficients of the polynomials:
respectively. Here we first put the coefficients of the monomials of degree , and then, in order of increasing degrees of the monomials, we place the remaining coefficients, up to the monomials of degree inclusive. For we construct matrices of the form
Obviously, each row in these matrices consists of the coefficients of monomials of the same degree.

Quite similarly for we construct matrices:

Theorem 3. Let , and , hold in the reduced matrix of the form (1). Then , where in the reduced matrix of the form (2) and the elements satisfy one of the following conditions:
(1) the -monomial is absent in , if , ;
(2) the - and the -monomials are absent in , if , ;
(3) the -monomial is absent in , and the -monomial is absent in the first of the polynomials , for which , if , ;
(4) in the first column of the matrix (39), formed of the coefficients of the polynomials , the elements corresponding to the maximal system of the first linearly independent rows of the submatrix are zero, if , .
The matrix is uniquely determined.

Proof. Existence. (1) The proof is completely analogous to the proof of condition (1) in Theorem 2.
(2) We can assume that the element of the matrix does not contain a -monomial; otherwise, we act as in part (1) of Theorem 2. If , then everything is proved. Otherwise, we denote the (nonzero) coefficient of the -monomial in by and apply ss.e.t.-I to . In this case, in the left transforming matrix (see (7)) we put . The elements , of the resulting reduced matrix satisfy the congruence
from which it is seen that there is no -monomial in , . Since , there is no -monomial in , as in .
(3) Let and . If , then there is no -monomial in , since . Otherwise, we denote the (nonzero) coefficient of the -monomial in by and apply ss.e.t.-II to . Thus, in the left transforming matrix of the form (8) we put . In the resulting reduced matrix the element satisfies the congruence (14), and . From (14) it is clear that there is no -monomial in . In order not to introduce new notation, we assume that the polynomial in already does not contain a -monomial. If , then everything has already been proved, and the matrix is the desired one. Otherwise, we find the first of the two values , , for which . Denote by the coefficient of the -monomial in the polynomial for this (now fixed) value . Apply ss.e.t.-III to . Here, in the left transforming matrix of the form (9), we put . As a result, in the reduced matrix the element (for the above-defined index ) satisfies the congruence
From this congruence it follows that there is no -monomial in . If , then from the same congruence it is also evident that there is no -monomial in , as in . The same is shown by the congruence
if .
(4) If , then everything has already been proved, and the matrix is the desired one. If but , then the element in satisfies condition (4). This element does not change under all subsequent ss.e.t.; then . This means that the second column in is zero. Let be the first nonzero row of the matrix , and let be the corresponding row of the matrix . Let these rows be composed of the coefficients of the monomials of degree . We find some solution of the equation and apply ss.e.t.-I-II to . In this case, we put , in the left transforming matrix (see (20)). The element of the resulting reduced matrix coincides with , and the element satisfies the congruence
It follows from this that there is no -monomial in and . If , then all subsequent rows in are linearly dependent on , and the matrix is the desired one. In order not to introduce new notations, we assume that the element in already does not contain a -monomial. Let be the first row of the matrix linearly independent of the row , and let be the corresponding row of the matrix . Let these rows be composed of the coefficients of the monomials of degree . From the equation
we find its (unique) solution . Using ss.e.t.-I-II, we pass from to the reduced matrix . Here, in the left transforming matrix of the form (20), we set , . The element in , as is seen from (43), contains neither the - nor the -monomial, and . Therefore, is the desired matrix.
Next, let us consider the situation when . Then , and the row of the -coefficients is the first nonzero row of the matrix . We do the first step: it is ss.e.t.-I of the matrix , in which we put the -coefficient of the polynomial instead of in the left transforming matrix (see (7)). In the obtained reduced matrix , the element does not contain a -monomial (see part (2)). If , then the first row linearly independent of the row in is a row of -coefficients, . We do the second step: this is ss.e.t.-III of the matrix , in which, in the left transforming matrix of the form (9), we put the -coefficient of the polynomial instead of . In the matrix obtained after the second step, the element in the position contains neither the - nor the -monomial. In order not to introduce new notations for the matrices resulting from the transformations, we assume that the element in has this property. If , then in the matrix the row of the -coefficients is the first row linearly independent of the collection , . To , we apply ss.e.t.-II; this will be the third step. In this case, in place of in the left transforming matrix (see (8)), we put the -coefficient of the polynomial , taken with the opposite sign. In the resulting matrix, the element in the position does not contain a -monomial. Since the element in the position does not change under this transformation, the resulting matrix is the desired one.
If after the first step it turns out that but , then we do the third step. Further, if , then the row of the -coefficients in the matrix analogous to is the first row linearly independent of the collection , . We do the fourth step: it is ss.e.t.-III of the matrix obtained after the third step. In this case, in the left transforming matrix (see (9)), instead of we put the -coefficient of the polynomial in the position . As a result, the element in the position does not change, and the element in the position is freed from the - and -monomials. The resulting matrix is the required one.
Uniqueness. (1) Let , and let the elements of the reduced matrices , of the forms (1), (2) satisfy condition (1) of the theorem. Then, taking into account Corollary 1 and Remark [3], we can write the congruences:
Since , then . If , then . Then from (45) and (46) we have and , respectively. Otherwise, when , comparing the -coefficients on both sides of the congruence (46), we arrive at . Therefore, in each case .
(2) Let condition (2) of the theorem be satisfied for the reduced matrices , of the forms (1), (2), and let . Then we can write (45) and (46). If , then and . Then from (45) and (46) we have and , respectively. Otherwise, when , from (46) we obtain , since and there are no -monomials in , . If , then again from (45) and (46) we obtain the coincidences , from , , respectively. If , then, comparing the -coefficients on both sides of (46), we obtain . This means .
(3) Let condition (3) of the theorem be satisfied in the reduced ss.e. matrices , of the forms (1), (2). Then for the elements of these matrices we have the congruence (46), where , and the congruence
If , then from (47) we have . In any case, the second term in (47) can be omitted. Then (46) (where ) and (47) can be presented in the form
If for each , then we immediately have . Otherwise, comparing the -coefficients on both sides of (48) for the first of the two values of the index such that , we obtain . In any case .
(4) Let the reduced matrices of the forms (1), (2) be ss.e. and satisfy condition (4) of the theorem. Then for their elements it is possible to write the congruence (46) and the congruence
If and , then in the matrices we have and, as can be seen from (46), (49), we obtain , .
If and , then in the matrices we have , and the second columns in are zero. In this case, from (46) and (49) we have and , respectively. Therefore, in the subcolumns of the matrices the first elements coincide, and the corresponding rows in the matrices are zero. In these matrices their -th rows are the first nonzero rows; they coincide. We will denote them by . The elements in corresponding to these rows are zero (see (8)). So we really have . This means that the -th rows in the matrices coincide. We will denote them by . From (49) it is clear that . If are linearly independent, then in there are zero elements corresponding to . Therefore, on the basis of (49), we have . Hence , and everything is proved. If are linearly dependent, then , and this, on the basis of (49), means that the -th elements in coincide. Therefore, the following -th rows in coincide. Denote them by and again consider two situations: when are linearly independent and when they are linearly dependent. In the first case we will have ; then the proof is finished. In the second case we have that the -th elements in coincide. This means that the -th rows in coincide. Continuing in the same way, at some step we will obtain , or eventually we will have . In either case .
Let us now consider . Then in the columns , as can be seen from (46), their first corresponding elements coincide, and in the matrices the first rows are zero. The first nonzero row in these matrices is the row . Since the -th elements in are zero (see (8)), then follows from (46). This means that , where . If , then in the first corresponding elements coincide. Therefore, in the first corresponding rows coincide, and each of them is linearly dependent on . The first row linearly independent of in each of the matrices is their -th row . Since the -th corresponding elements in are zero, it follows from (46) that and . If , then , and everything is proved. In the other case, as can be seen from (49), in , the first corresponding elements coincide, and in each of the matrices each of the first rows is linearly dependent on the collection , . The first row linearly independent of the collection , in each of the matrices is one and the same -th row . Since the -th elements in are zero, then follows from (49). Then .
Let , and, as before, . Then from (46), where , we have at once . If , then follows from (49). If , then in the corresponding first elements, and in the corresponding first rows, coincide. Each of these rows is linearly dependent on the collection , . In each of the matrices their -th rows are the first rows linearly independent of the collection , . Since the -th elements in are zero, from (49), where , we have . In any case . The theorem is proved.

Theorem 4. Let the elements of the reduced matrix of the form (1) satisfy the conditions , , , and , . Then , where the elements of the reduced matrix of the form (2) satisfy the conditions , , , and there are no - and -monomials in . The matrix is uniquely determined.

Proof. Existence. If , then everything has already been proved; then . Otherwise, we apply ss.e.t.-II to . In this case, instead of in the left transforming matrix (see (8)), we put the -coefficient of . Then we obtain a matrix of the form (2), which is also reduced, and its elements , , , satisfy the congruence (15). From the latter it can be seen that there is no -monomial in and . If in the obtained matrix we have , then everything has already been proved, and the matrix is the desired one. Otherwise, we apply ss.e.t.-III to . To do this, in the left transforming matrix (see (9)), instead of we put the -coefficient of , taken with the opposite sign. The resulting matrix is the sought one.
Uniqueness. Let , where , are reduced matrices of the forms (1), (2), in which , , and there are no - and -monomials in either of , . For the elements of the matrices , the congruence (11) and the congruences
are fulfilled. We recall that, according to the definition of the reduced matrix, there are no -monomials in . If , then from (11) we obtain . The same follows from (50) if but . Then . If , then from (51) we have . Similarly, if , then from (51) we obtain . Then . Note that follows from . Therefore, regardless of whether or , the coincidences and are obtained from (51). The theorem is proved.

4. Conclusions

The matrix , established by each of Theorems 1–4, can be considered canonical for the class of ss.e. matrices. It can be applied to the classification of sets of numerical matrices (over the field ) with respect to simultaneous similarity; in this context, the works [12–15] should be noted. From the proofs of Theorems 1–4 we can construct an algorithm for finding the transforming matrices (a left nonsingular numerical one and a right invertible polynomial one) that reduce to the canonical matrix .
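
A minimal sketch of the verification step of such an algorithm, assuming the transforming matrices have already been computed. Only the definition of ss.e. (a left nonsingular numerical factor and a right invertible polynomial factor) is used; the function name, interface, and the sample matrix are our own:

```python
import sympy as sp

x = sp.symbols("x")

def is_semiscalar_pair(A, B, S, Q):
    """Check that B(x) = S*A(x)*Q(x), where S is a nonsingular numerical
    matrix and Q(x) is invertible over the polynomial ring, i.e. det Q(x)
    is a nonzero constant. Illustrative sketch only."""
    s_numerical = all(entry.is_number for entry in S)
    det_q = sp.expand(Q.det())
    q_invertible = bool(det_q.is_number and det_q != 0)
    residual = (S * A * Q - B).expand()
    return s_numerical and S.det() != 0 and q_invertible and residual.is_zero_matrix

# Trivial sanity check: every matrix is ss.e. to itself.
A = sp.Matrix([[1, 0, 0], [x**2, x**3, 0], [x, x**2, x**4]])  # hypothetical example
assert is_semiscalar_pair(A, A, sp.eye(3), sp.eye(3))
```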

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The author declares that there are no conflicts of interest regarding the publication of this paper.