Abstract

We consider the problem of determining whether two polynomial matrices can be transformed into one another by multiplying on the left by some nonsingular numerical matrix and on the right by some invertible polynomial matrix. This yields an equivalence relation, known as semiscalar equivalence. Serious difficulties arise in this problem already for 2-by-2 matrices. In this paper the semiscalar equivalence of polynomial matrices of second order is investigated. In particular, necessary and sufficient conditions are found for two matrices of second order to be semiscalarly equivalent. The main result is stated in terms of determinants of Toeplitz matrices.

1. Introduction

Let $\mathbb{C}$ be the field of complex numbers and $\mathbb{C}[x]$ the ring of polynomials in an indeterminate $x$ over $\mathbb{C}$. Let $M_n(\mathbb{C})$ and $M_n(\mathbb{C}[x])$ denote the algebras of $n \times n$ matrices over $\mathbb{C}$ and $\mathbb{C}[x]$, respectively, and let $GL_n(\mathbb{C})$ and $GL_n(\mathbb{C}[x])$ be their corresponding groups of units. Given two matrices $A(x), B(x) \in M_n(\mathbb{C}[x])$, the question of determining whether there exist matrices $S \in GL_n(\mathbb{C})$ and $R(x) \in GL_n(\mathbb{C}[x])$ such that
$$B(x) = S A(x) R(x) \qquad (1)$$
has attracted much attention for many years. This proved to be a harder problem than originally anticipated; serious difficulties arise already for elements of $M_2(\mathbb{C}[x])$. The matrices $A(x)$ and $B(x)$ are called semiscalarly equivalent if the equality (1) is satisfied for some nonsingular matrix $S \in GL_n(\mathbb{C})$ and some invertible matrix $R(x) \in GL_n(\mathbb{C}[x])$ [1] (see also [2]). For this reason, the problem of finding conditions under which two matrices are semiscalarly equivalent is of current interest. In this paper this problem is solved for matrices of second order. Toeplitz matrices play an important role in the conditions under which two matrices of second order can be transformed into one another by a semiscalar equivalence transformation. Although Toeplitz matrices form a special matrix class, many classical problems related to Laurent series, the moment problem, orthogonal polynomials, and others reduce to them. The deeper interest in Toeplitz matrices is explained, to a large extent, by the following: every matrix is connected with Toeplitz matrices in the sense that it can be represented as a sum of products of Toeplitz matrices. Many applied problems in electrodynamics, geophysics, acoustics, and automatic control require the investigation of Toeplitz matrices. There is also a correspondence between complex functions and Fourier series, and the latter are closely related to certain sequences of Toeplitz matrices. The monographs [3–5] present a wealth of material on Toeplitz and Hankel matrices, and the articles [6–8] address modern problems concerning these matrices. The results of this paper may be applied to solving matrix equations that arise in many engineering problems.
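Since the results below are stated in terms of determinants of Toeplitz matrices, a small computational illustration may help. The following Python sketch (ours, not from the paper; the data are hypothetical) assembles a Toeplitz matrix from its first column and first row and evaluates its determinant:

```python
import numpy as np
from scipy.linalg import toeplitz

# A Toeplitz matrix is constant along each of its diagonals, so it is
# fully determined by its first column and its first row.
first_col = np.array([1.0, 2.0, 3.0, 4.0])  # main diagonal and below
first_row = np.array([1.0, 0.0, 0.0, 0.0])  # main diagonal and above
T = toeplitz(first_col, first_row)

print(T)
print("det T =", np.linalg.det(T))
```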

2. Preliminaries

Let $A(x), B(x) \in M_2(\mathbb{C}[x])$. If the matrices $A(x)$ and $B(x)$ are semiscalarly equivalent, then it is necessary that their determinants, and hence their invariant multipliers, coincide up to nonzero constant factors, since $\det S$ and $\det R(x)$ in (1) are nonzero constants. If the matrices $A(x)$ and $B(x)$ are of full rank, then according to [1] (see also [2]) they are semiscalarly equivalent to lower triangular matrices with their invariant multipliers on the main diagonal. Similar results are published in [9, 10]. We may assume, without loss of generality, that the first invariant multipliers of the matrices $A(x)$ and $B(x)$ are identities. Therefore these matrices can be considered in the form
$$A(x) = \begin{pmatrix} 1 & 0 \\ a(x) & \Delta(x) \end{pmatrix}, \qquad B(x) = \begin{pmatrix} 1 & 0 \\ b(x) & \Delta(x) \end{pmatrix}. \qquad (2)$$
Denote by $a^{(i)}(\alpha)$ and $b^{(i)}(\alpha)$ the values at $x = \alpha$ of the $i$-th derivatives of the entries $a(x)$ and $b(x)$, respectively, of the matrices $A(x)$ and $B(x)$. The determinant $\Delta(x)$ is called the characteristic polynomial, and its roots are called the characteristic roots of the matrix $A(x)$ (resp., of the matrix $B(x)$). Let us denote by $\Omega$ the set of characteristic roots of a matrix of the form (2). Now consider a partition
$$\Omega = \Omega_1 \cup \Omega_2 \cup \dots \cup \Omega_s \qquad (3)$$
of the set $\Omega$ into subsets such that two roots $\alpha$ and $\beta$ belong to the same subset if $a(\alpha) = a(\beta)$. Evidently, any two different subsets of the partition (3) are disjoint.
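As a computational aside (ours, with a hypothetical example matrix), the invariant multipliers of a 2-by-2 polynomial matrix can be obtained as the gcd of its entries and the quotient of its determinant by that gcd, and the characteristic roots with their multiplicities can be read off from the determinant:

```python
from functools import reduce
import sympy as sp

x = sp.symbols('x')

# Hypothetical matrix already in the triangular form (2):
# diagonal (1, Delta(x)), subdiagonal entry a(x).
A = sp.Matrix([[1, 0],
               [x**2 + x, (x - 1)**2 * (x + 2)]])

# Invariant multipliers of a 2x2 polynomial matrix (up to constants):
# d1 = gcd of all entries, d2 = det / d1.
d1 = reduce(sp.gcd, list(A))
d2 = sp.cancel(A.det() / d1)
print("invariant multipliers:", d1, "and", sp.factor(d2))

# Characteristic roots of Delta(x) = det A(x), with multiplicities.
print("characteristic roots:", sp.roots(sp.Poly(A.det(), x)))
```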

The following two assertions are valid.

Proposition 1. The partition (3) of the set of characteristic roots of a matrix of the form (2) is an invariant of the class of semiscalarly equivalent matrices.

Proof. Let the matrices $A(x)$, $B(x)$ of the form (2) be semiscalarly equivalent; that is, the equality (4) holds with $S$ nonsingular and $R(x)$ invertible. From (4) we deduce the relations (5)–(7). Setting $x = \alpha$ and $x = \beta$ for characteristic roots $\alpha, \beta \in \Omega$, we obtain the corresponding relations. If $a(\alpha) = a(\beta)$, then the left sides of the resulting relations are equal. Therefore, from the equality of the right sides, taking into account the nonvanishing of the coefficients involved (see (5), (6)), we have $b(\alpha) = b(\beta)$. Semiscalar equivalence is a symmetric relation; hence a similar argument yields the converse implication, and the partitions determined by $a(x)$ and $b(x)$ coincide. This completes the proof.

Proposition 2. Let $\alpha$ be a characteristic root of multiplicity $m$ of a matrix of the form (2). Let also $k$ be the lowest (nonzero) order of the nonzero derivative of the entry $a(x)$ of this matrix at $\alpha$. Then the number $k$ is an invariant of the class of semiscalarly equivalent matrices, provided $k < m$.

Proof. Let the matrices $A(x)$, $B(x)$ of the form (2) be semiscalarly equivalent, and let $k'$ be the lowest (nonzero) order of the nonzero derivative of the entry $b(x)$ of the matrix $B(x)$ at $\alpha$. Suppose that $k' < k$. From the relations (5) and (7) we obtain (9). Substituting $x = \alpha$ into (9), we find (10). Taking the $k'$-th derivative of (9) at $x = \alpha$, we obtain (11). Dividing both sides of the obtained equality by the appropriate nonzero factor and substituting into (10), we arrive at a relation that is impossible, since the matrix $S$ is nonsingular. Therefore $k' \geq k$. Inasmuch as semiscalar equivalence is a symmetric relation, we also have $k \geq k'$, and hence $k' = k$. The Proposition is proved.
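The invariant of Proposition 2 is straightforward to compute symbolically. A minimal sketch (the function name is ours; the entry and root are hypothetical):

```python
import sympy as sp

x = sp.symbols('x')

def lowest_nonzero_derivative_order(f, alpha, max_order=32):
    """Smallest i >= 1 with f^(i)(alpha) != 0, or None if not found."""
    for i in range(1, max_order + 1):
        if sp.diff(f, x, i).subs(x, alpha) != 0:
            return i
    return None

# Entry a(x) with characteristic root alpha = 1:
a = (x - 1)**3 * (x + 5)
print(lowest_nonzero_derivative_order(a, 1))  # prints 3, so k = 3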

In what follows it is assumed that the partition (3) consists of more than one subset; the simplest cases may be found in other articles of the author (see, for example, [11, 12]). Appropriate normalizations of the entries of $A(x)$ and $B(x)$ may also be assumed. By Proposition 2, for the matrices $A(x)$ and $B(x)$ to be semiscalarly equivalent it is necessary that the corresponding orders $k$ coincide whenever $k < m$.

We use the following notation: $n$ is the degree of the characteristic polynomial $\Delta(x)$ of the matrices of the form (2); $m$ is the multiplicity of a characteristic root $\alpha$ of these matrices; $k$ is the lowest (nonzero) order of the nonzero derivative of $a(x)$ at $\alpha$. If $k < m$, then by Proposition 2 the semiscalar equivalence of $A(x)$ and $B(x)$ implies that $k$ is also the lowest (nonzero) order of the nonzero derivative of the entry $b(x)$ of $B(x)$ at $\alpha$. For the entries $a(x)$, $b(x)$ and an arbitrary characteristic root $\alpha$ of these matrices we denote by (13) certain quantities built from the derivatives $a^{(i)}(\alpha)$ and $b^{(i)}(\alpha)$.

The following cases are possible:

Case 1. The basic relation between the quantities (13) holds for every root $\alpha$.

Case 2. There is a root $\alpha$ for which this relation fails, while a weaker relation holds for every root.

Case 3. There is a root $\alpha$ for which the weaker relation also fails.

Let us now consider each of them separately.

3. Case 1

Based on the notation and assumptions introduced above, we now formulate the following theorem.

Theorem 3. Let the partition of the set of characteristic roots of the matrix $A(x)$ of the form (2) be of the form (3). Suppose also that the entry $a(x)$ of the matrix $A(x)$ satisfies the indicated condition and that the Case 1 condition holds for each root $\alpha$. Then the matrix $A(x)$ in the class of semiscalarly equivalent matrices is determined up to a constant factor of its second row.

Proof. Let the entry $a(x)$ of the matrix $A(x)$ of the form (2) satisfy the indicated condition (see notation (13)). If the matrices $A(x)$ and $B(x)$ are semiscalarly equivalent, then the equality (4) implies the congruence (15). Substituting the root $\alpha$ into the congruence (15), we obtain the system of equalities (16). By eliminating the auxiliary unknowns in the system (16), we find the equality (17).
Conversely, the equality (17), after some transformations, can be written in the form (18). Introducing the notation (19) on the basis of the equalities (18), we obtain the system (16). Since the Case 1 condition holds for each root $\alpha$, every term of sufficiently high degree in the binomial decomposition of the entries of the matrices vanishes modulo the corresponding power. This, together with the system (16), means that the congruence (15) is valid, which in turn implies (20). With the appropriate choice of the matrices involved, it is easy to verify that the equality (4) holds with $S$ nonsingular (see (19)) and $R(x)$ invertible. This means that the matrices $A(x)$ and $B(x)$ are semiscalarly equivalent. The Theorem is proved.
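A congruence modulo a power of the binomial $(x - \alpha)$, such as (15), translates into a linear system whose matrix is a lower triangular Toeplitz matrix of Taylor coefficients; this is the mechanism behind systems like (16). A sketch of the translation (the function name and data are ours):

```python
import sympy as sp

x = sp.symbols('x')

def taylor_toeplitz(f, alpha, m):
    """Lower triangular Toeplitz matrix of the first m Taylor
    coefficients of f at alpha; multiplication of power series
    truncated mod (x - alpha)**m acts through such matrices."""
    c = [sp.diff(f, x, i).subs(x, alpha) / sp.factorial(i) for i in range(m)]
    return sp.Matrix(m, m, lambda i, j: c[i - j] if i >= j else 0)

a = x + 2*x**2 + 7*x**3          # hypothetical entry a(x)
print(taylor_toeplitz(a, 0, 3))  # [[0,0,0],[1,0,0],[2,1,0]]
```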

4. Case 2

In what follows we retain the previously introduced notation (see, in particular, (13)).

Theorem 4. Let the partition of the set of characteristic roots of the matrices $A(x)$, $B(x)$ of the form (2) be of the form (3). Let the entries $a(x)$, $b(x)$ of these matrices satisfy the indicated condition, and let the assumptions of Case 2 hold. The matrices $A(x)$, $B(x)$ are semiscalarly equivalent if and only if there exists a number such that the following conditions hold:

(1) the first group of equalities holds for every root $\alpha$ of the first kind;

(2) the second group of equalities holds for every root $\alpha$ of the second kind;

(3) the closing equality holds.

Proof.
Necessity. If the matrices $A(x)$ and $B(x)$ are semiscalarly equivalent, then from the congruence (15) we obtain the system of equalities (21) for every root $\alpha$ of the first kind. This yields condition (1) of the Theorem.
From the congruence (15) we can likewise write the system of equations (22) for every root $\alpha$ of the second kind. From the first equation of the system (22) we find (23). Substituting it into the second and every succeeding equality, we obtain (24). Condition (2) of the Theorem is proved.
As in the proof of Theorem 3, from the congruence (15), by virtue of the same substitution, we easily obtain the equalities (16). By elimination we arrive at condition (3) of the Theorem. The necessity of the conditions of the Theorem is proved.
Sufficiency. Suppose that the conditions of the Theorem are satisfied. With the notation introduced above, condition (1) means that the equalities (21) are satisfied for every root $\alpha$ of the first kind. From this the congruence (25), with an arbitrary constant, follows immediately for every root (not necessarily of the first kind).
From condition (3) of the Theorem the equalities (18) follow. Introducing the notation (19), from (18) we obtain the system (16). This means that the congruence (26) is valid for every root $\alpha$ of the corresponding kind.
Using the notation (19), we can pass from condition (2) of the Theorem to the system of equations (22). The latter system is equivalent to the congruence (27) for every root $\alpha$ of the second kind. Taking the congruence (26) into account, from (27) we actually obtain (28) for every root (not necessarily of the second kind). Combining (25) with (28), we obtain the congruence (15). We complete the proof of the Theorem in the same way as at the end of the proof of Theorem 3.
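The congruences used in these proofs can be tested directly: two polynomials agree modulo $(x - \alpha)^k$ exactly when the first $k$ Taylor coefficients of their difference at $\alpha$ vanish. A minimal sketch (the function name and data are ours):

```python
import sympy as sp

x = sp.symbols('x')

def congruent_mod_power(f, g, alpha, k):
    """True iff f == g (mod (x - alpha)**k)."""
    return all(sp.diff(f - g, x, i).subs(x, alpha) == 0 for i in range(k))

# These two differ by 5*(x - 1)**2, so they agree mod (x - 1)**2
# but not mod (x - 1)**3.
f, g = x**3, x**3 + 5*(x - 1)**2
print(congruent_mod_power(f, g, 1, 2), congruent_mod_power(f, g, 1, 3))
```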

5. Auxiliary Statements

In the following studies we shall need Lemmas 5 and 6, which we prove in this section.

Lemma 5. The matrix equation (29) over $\mathbb{C}$ has nonzero solutions if and only if the conditions (30) are satisfied. If the conditions of the Lemma are satisfied, then every nonzero solution of (29) has nonzero leading components.
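Lemma 5 expresses solvability through determinant identities; computationally, the existence of a nonzero solution of a homogeneous matrix equation can also be settled by a rank computation, as in the following sketch (the matrix is hypothetical):

```python
import sympy as sp

# A homogeneous equation T*c = 0 has a nonzero solution c iff
# rank(T) < number of columns; sympy returns a basis of solutions.
T = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [0, 1, 1]])

null = T.nullspace()
print("nonzero solutions exist:", len(null) > 0)
if null:
    print("one solution:", list(null[0]))
```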

Proof.
Necessity. Let (29) have a nonzero solution. Then the equalities (31) and (32) hold. Assume that one of the leading components of the solution is zero. Then from the first equalities of (31) and (32) we obtain that the whole solution is zero, which contradicts the assumption. Hence the leading components are nonzero. From the equalities (31) we obtain the conditions (30) for the initial range of indices.
We conclude from (32) that in the degenerate case the corresponding coefficients vanish, and then the equality (30) holds trivially for the respective index. For this reason we assume in what follows that the relevant coefficient is nonzero. From the first and second equalities of (32), by elimination, we obtain (33). In one case this already shows that the conditions (30) are satisfied. In the other case, multiplying both sides of (33) by the appropriate factor, we find (34). Denote by (35) the submatrices obtained from the matrices of (30) by deleting the two last columns and the corresponding pairs of rows, and denote accordingly the determinants of the matrices (35). Decompose these determinants by the minors of order two contained in the last two columns. Each summand of the expression in parentheses, possibly except the first two, differs from the corresponding summand of the other expression by a common multiplier. From this fact and from the equality (34) the equality (30) follows.
Consider the determinants on the left and right sides of the equality (30), and suppose by induction that (30) holds for all smaller indices. For the sake of definiteness we consider the first of the two possible situations; in the other case the proof is not different in principle. From the first equalities of (32), by elimination and by sufficiently evident transformations, we arrive at the system (38).
If we add the left sides of the equalities (38) and separately their right sides, we obtain (39). Gathering similar terms on both sides of the obtained equality, we obtain (40). It follows from (32) that (41) holds. From this relation it is easy to see that the equality (42) holds. From (31) and the induction hypothesis we can write (43). Comparing (40), (42), and (43), we obtain the equality (44); that is, the equality (30) holds for the next index. The necessity of the conditions of the Lemma is proved.
Sufficiency. Consider the equalities (31) and (32) as one system of equations in three indeterminates. By the conditions (30), the values (45) satisfy the first equation of the system (31). From the condition (30) with the next index it follows that the values (45) satisfy the second equation of the system (31). Next, the third and all following equalities of the system (31) can be obtained recursively from the conditions (30). Evidently, the values (45) satisfy the first equation of (32). We compute both determinants in the equality (30) with the appropriate index. After cancelling equal summands on both sides of the obtained equality, dividing by the common factor, and using simple transformations, we obtain the relation (46). This means that (45) satisfies the second equation of the system (32).
Assume by induction that (45) satisfies the first equations of the system (32), that is, (47). While proceeding we may consider the first of the two possible situations; in the opposite case the proof is completely analogous. Taking into account the conditions (30) and the induction assumption, we can write the equalities (42), (43), and (44). From these equalities we obtain the relation (40), and this relation implies the equality (39). It is evident that from the second and all following equalities of (47) the first equalities of (38) follow. The first equalities of (38), together with the relation (39), yield the last equality of (38). Dividing by the common factor and simplifying, this last equality can be written in the form (48). This means that (45) is a solution of the corresponding equation of the system (32).
We have thus proved by induction the existence of the nonzero solution (45) of the systems (31) and (32). Hence the matrix equation (29) has a nonzero solution. The Lemma is proved.

Lemma 6. The matrix equation (49) over $\mathbb{C}$ has nonzero solutions if and only if the conditions (50) are satisfied. If the conditions of the Lemma are satisfied, then every nonzero solution of the equation (49) has nonzero leading components.
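The proof below turns on the fact that the first two rows of the matrix of (49) are linearly independent and that a nonzero solution exists exactly when every further row lies in their span. That rank picture can be tested directly (the function name and matrix are ours):

```python
import sympy as sp

def rows_depend_on_first_two(M):
    """True iff every row of M lies in the span of its first two rows."""
    top = M[:2, :]
    return all(top.col_join(M[i, :]).rank() == top.rank()
               for i in range(2, M.rows))

# Hypothetical matrix: rows 3 and 4 are combinations of rows 1 and 2.
M = sp.Matrix([[1, 0, 2],
               [0, 1, 1],
               [2, 3, 7],
               [1, 1, 3]])
print(rows_depend_on_first_two(M), "rank =", M.rank())
```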

Proof.
Necessity. The equality (50) holds trivially for the initial index. Let (49) have a nonzero solution. Then the matrix of this equation has rank less than the number of unknowns. The first two rows of this matrix are linearly independent; consequently, each of the following rows of the matrix depends linearly on these first two. From this we obtain the equality (50) for the next index. From the resulting equality, after division of both sides by the common factor, we obtain the equality (50) for the subsequent index as well. Assume by induction that the equality (50) holds true for all smaller indices. Consider the corresponding determinantal equality. After calculating the determinant on its left side, we obtain an equation both sides of which can be divided by a common factor, yielding the equality (56). Let us denote accordingly the determinants on the left and right sides of the equality (50). Using the induction assumption, we can write the respective equalities for them. Decomposing these determinants along the entries of their last columns, the left and right sides of the equality (50) can be written in the forms (57) and (58), with the notation (59) and (60). It is easy to verify that the identity (61) is true. We add the left sides of the equalities (57) and (58), multiply by the left side of the equality (56), and separately perform the same operations with the right sides. Taking into account the expressions (59) and (60) and the identity (61), we obtain the equality (50) for the next index. The necessity of the conditions of the Lemma is proved.
Sufficiency. Let the conditions (50) be satisfied. Then the submatrix formed by the first rows of the matrix of (49) has rank less than the number of unknowns; in fact, this rank is equal to two, because (as has been stated above) the first two rows of this matrix are linearly independent. The equality (50) implies that the next row of the matrix of the equation (49) depends linearly on its first two rows. Our induction assumption is the following: let each of the first several rows of the matrix of (49) depend linearly on its first two rows. We now reverse the order of the arguments, as compared with the proof of the necessity, passing from the relation (50) to the relation (56). This relation implies that the minor of order three of the matrix of (49) contained in the first, the second, and the current row is equal to zero. This means that the indicated rows are linearly dependent. The above argument proves by induction that the matrix of the equation (49) has rank two. From this it follows that (49) has a nonzero solution. The rest of the Lemma will be proved by contradiction. Let there be a nonzero solution of the equation (49) in which one of the two leading components is zero. Then one of two corresponding equalities holds. Since the determinants of the matrices of these equalities are nonzero, the remaining components of the solution vanish as well; that is, the solution is zero, contrary to our assumption. Therefore in a nonzero solution of (49) the leading components are necessarily nonzero. The Lemma is proved.

6. Case 3

The notation is the same as in Cases 1 and 2 (see, in particular, (13)).

Theorem 7. Let the partition of the set of characteristic roots of the matrices $A(x)$, $B(x)$ of the form (2) be of the form (3). Let the entries $a(x)$, $b(x)$ of these matrices satisfy the indicated condition for each root, and let the Case 3 condition hold for some root. The matrices $A(x)$, $B(x)$ are semiscalarly equivalent if and only if there exists a number such that the following conditions hold:

(1) the equalities (64) hold for every root $\alpha$ of the first kind;

(2) the equality (65) holds for every pair of roots of the indicated kind;

(3) the equality (66) holds for every root of the indicated kind;

(4) the equality (67) holds for every pair of roots of the indicated kind;

(5) the equality (68) holds for every pair of roots of the indicated kind.
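Conditions (2), (4), and (5) range over pairs of characteristic roots. Verifying such conditions mechanically amounts to enumerating the pairs, as in the following sketch (the polynomial and the placeholder predicate are ours; the actual relations (65), (67), (68) would take the predicate's place):

```python
from itertools import combinations
import sympy as sp

x = sp.symbols('x')
Delta = (x - 1)**2 * (x + 2) * (x - 3)     # hypothetical Delta(x)
roots = list(sp.roots(sp.Poly(Delta, x)))  # distinct characteristic roots

def pair_condition(alpha, beta):
    # stand-in for one of the relations (65), (67), (68)
    return True

print(all(pair_condition(a, b) for a, b in combinations(roots, 2)))
```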

Proof.
Necessity. As we already know, from the semiscalar equivalence of the matrices $A(x)$, $B(x)$ it follows that the entries $a(x)$, $b(x)$ of these matrices satisfy the congruence (15). Taking into account the corresponding decomposition, we compare the coefficients of equal degrees of the binomial on both sides of the congruence (15). From the resulting equalities it follows that the equality (64) holds for all roots $\alpha$ of the first kind.
Considering the roots of the second kind and comparing the coefficients of equal degrees of the binomial on both sides of the congruence (15), we obtain a system of equalities, which can be written in matrix form. By Lemma 5 the obtained equality shows that condition (1) of the Theorem is completely satisfied.
Let $\alpha$, $\beta$ be an arbitrary pair of roots of the indicated kind. For the coefficients of the decompositions of types (69) and (70) of the entries $a(x)$, $b(x)$ in powers of the binomials, from the congruence (15) we can write the corresponding relations. By eliminating the auxiliary unknowns from these equalities, we obtain the equality (65). This proves condition (2) of the Theorem.
From the congruence (15), for the coefficients of the decompositions of types (69) and (70) of the entries $a(x)$, $b(x)$ in powers of the binomial, for a root of the corresponding kind a system of equalities can be written. Eliminating the auxiliary unknowns from this system, we obtain a relation that implies the equality (66) for the roots of the indicated kind.
In the remaining case, from the congruence (15) a system of equalities can be obtained, which can be written in matrix form with the matrices defined in (78) and (79). By Lemma 6 the obtained equality yields (80). Here the equalities (76) also hold. Now we multiply the left side of the equality (80) by the left side of the equality (76) and carry out the analogous operation with the right sides. As a result we obtain the equality (66). This proves condition (3) of the Theorem entirely.
For an arbitrary pair of roots of the indicated kind, the congruence (15) implies the corresponding relations. Eliminating the auxiliary unknowns, we obtain the relation (67). Condition (4) of the Theorem is proved.
Let $\alpha$, $\beta$ be a pair of roots of the last kind. For the coefficients of the decompositions (69) and (70) and the corresponding values, from the congruence (15) a system of equalities can be obtained. From these equalities we eliminate the auxiliary unknowns and obtain the relation (68). The necessity of the conditions of the Theorem is completely proved.
Sufficiency. Let conditions (1)–(5) of the Theorem be satisfied. Consider the matrix equations (83) and (84) for an indeterminate vector-column. They are written using the coefficients of the decompositions (69) and (70) for some root of the corresponding kind. In the first case, condition (1) implies that (83) has a nonzero solution, and moreover this solution does not depend on the choice of the root. In the second case, by Lemma 5 the equations (83) and (84) have a common nonzero solution. Evidently, the first rows of the matrices of these equations are linearly independent, and every other row of them is a linear combination of these rows. In view of (85), this means that (86) is the common nonzero solution of (83) and (84).
From condition (2) we deduce the relation (87) for an arbitrary pair of roots of the indicated kind. This means that the solution (86) does not depend on the choice of the root satisfying the corresponding condition. Hence the congruence (88) can be written, which holds for every root. Clearly, the roots of the degenerate kind (if such exist) are also included here.
Let the equality (66) hold for some root of the corresponding kind. The fulfillment of these equalities for the initial indices means that the associated coefficients vanish. Note that the relations (80) follow from the equality (66). Therefore by Lemma 6 the equation (90), in which the matrices are defined as in (78) and (79), has a nonzero solution. Since the first two rows of the matrix of (90) are linearly independent and the corresponding equality holds, the column (92) is a solution of (90).
From condition (4) it follows that the relation (93) holds for every pair of roots of the indicated kind. For this reason the solution (92) does not depend on the choice of the root. This means that the congruence (94) holds for every root, including the roots of the degenerate kind, if such exist.
From condition (5) of the Theorem one easily obtains the corresponding relation for every pair of roots of the last kind. This means that the coefficients in the congruences (88) and (94) coincide. This makes it possible to write the congruence (15).
The conclusion of the proof of the Theorem can be carried out in the same way as for Theorem 3. The Theorem is proved.

Conflicts of Interest

The author declares that there are no conflicts of interest regarding the publication of this paper.