International Journal of Analysis

Volume 2017, Article ID 6701078, 14 pages

https://doi.org/10.1155/2017/6701078

## Toeplitz Matrices in the Problem of Semiscalar Equivalence of Second-Order Polynomial Matrices

Department of Algebra, Pidstryhach Institute for Applied Problems of Mechanics and Mathematics, National Academy of Sciences of Ukraine, Lviv 79060, Ukraine

Correspondence should be addressed to B. Z. Shavarovskii; bshavarovskii@gmail.com

Received 21 June 2017; Accepted 24 September 2017; Published 26 October 2017

Academic Editor: Shwetabh Srivastava

Copyright © 2017 B. Z. Shavarovskii. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We consider the problem of determining whether two polynomial matrices can be transformed into one another by multiplying on the left by some nonsingular numerical matrix and on the right by some invertible polynomial matrix. An equivalence relation thus arises. This equivalence relation is known as semiscalar equivalence. Considerable difficulties in this problem appear already for 2-by-2 matrices. In this paper the semiscalar equivalence of second-order polynomial matrices is investigated. In particular, necessary and sufficient conditions are found for two second-order matrices to be semiscalarly equivalent. The main result is stated in terms of determinants of Toeplitz matrices.

#### 1. Introduction

Let $\mathbb{C}$ be the field of complex numbers and $\mathbb{C}[x]$ the ring of polynomials in an indeterminate $x$ over $\mathbb{C}$. Let $M_n(\mathbb{C})$ and $M_n(\mathbb{C}[x])$ denote the algebras of $n \times n$ matrices over $\mathbb{C}$ and $\mathbb{C}[x]$, respectively, and $GL_n(\mathbb{C})$, $GL_n(\mathbb{C}[x])$ their corresponding groups of units. Given two matrices $A(x), B(x) \in M_n(\mathbb{C}[x])$, the question of determining whether there exist matrices $S \in GL_n(\mathbb{C})$ and $Q(x) \in GL_n(\mathbb{C}[x])$ such that $B(x) = S A(x) Q(x)$ (1) has attracted much attention for many years. This proved to be a bigger problem than originally anticipated. Considerable difficulties arise already for elements of $M_2(\mathbb{C}[x])$. The matrices $A(x)$ and $B(x)$ are called semiscalarly equivalent if the equality (1) is satisfied for some nonsingular matrix $S$ and some invertible matrix $Q(x)$ [1] (see also [2]). For this reason the problem of finding conditions under which two matrices are semiscalarly equivalent is of current interest. In this paper the indicated problem is solved for matrices of second order. Toeplitz matrices play an important role in the conditions under which two second-order matrices can be transformed into one another by a semiscalar equivalence transformation. Although Toeplitz matrices form a special matrix class, many classical problems related to Laurent series, the moment problem, orthogonal polynomials, and others reduce to them. The deeper interest in Toeplitz matrices is, to a large extent, explained as follows. Every matrix is connected with Toeplitz matrices in the sense that it can be represented as a sum of products of Toeplitz matrices. Many applied problems of electrodynamics, geophysics, acoustics, and automatic control require the investigation of Toeplitz matrices. Also, there is a correspondence between complex functions and Fourier series, and the latter are closely related to certain sequences of Toeplitz matrices. The monographs [3–5] present plenty of material on the theory of Toeplitz and Hankel matrices. The articles [6–8] address modern problems concerning these matrices.
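To fix notation, a Toeplitz matrix is one whose entries are constant along each diagonal, so it is determined by its first column and first row. A minimal pure-Python illustration (the helper `toeplitz` and the sample data are our own, not from the paper):

```python
def toeplitz(col, row):
    """Build a Toeplitz matrix T with T[i][j] = col[i-j] for i >= j
    and T[i][j] = row[j-i] for i < j; col[0] must equal row[0]."""
    assert col[0] == row[0]
    return [[col[i - j] if i >= j else row[j - i] for j in range(len(row))]
            for i in range(len(col))]

T = toeplitz([1, 4, 5], [1, 2, 3])
# Each diagonal is constant: T[i][j] depends only on i - j.
print(T)  # [[1, 2, 3], [4, 1, 2], [5, 4, 1]]
```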
The results of this paper may be applied to solving matrix equations, which arise in many engineering problems.

#### 2. Preliminaries

Let $A(x), B(x) \in M_2(\mathbb{C}[x])$. If the matrices $A(x)$ and $B(x)$ are semiscalarly equivalent, then it is necessary for them to satisfy the following condition: , , . If the matrices $A(x)$ and $B(x)$ are of full rank, then, according to [1] (see also [2]), they are semiscalarly equivalent to lower triangular matrices and , respectively, with the invariant multipliers on the main diagonal. Similar results are published in [9, 10]. We may assume, without loss of generality, that the first invariant multipliers of the matrices $A(x)$ and $B(x)$ are identities. Therefore, these matrices can be considered in the form (2). Denote by and the values at of the -th derivatives of and , respectively, in the matrices and . The determinant is called the characteristic polynomial, and its roots are called the characteristic roots of the matrix (resp., of the matrix ). Let us denote by the set of characteristic roots of a matrix of the form (2). Now consider a partition (3) of this set into subsets such that if . Evidently, any two different subsets of the partition (3) are disjoint.
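The defining equality (1) is easy to experiment with on small examples. The following sketch, assuming the `sympy` library, builds a hypothetical pair of semiscalarly equivalent 2-by-2 matrices: $S$ is a nonsingular numerical matrix and $Q(x)$ is unimodular (its determinant is a nonzero constant), so the transformation changes the determinant only by a constant factor. The matrices `A`, `S`, `Q` are illustrative choices, not taken from the paper:

```python
import sympy as sp

x = sp.symbols('x')

# Hypothetical lower triangular polynomial matrix A(x), as in form (2):
A = sp.Matrix([[1, 0], [x, x**2 * (x - 1)]])
S = sp.Matrix([[2, 0], [3, 1]])   # numerical, det S = 2 != 0
Q = sp.Matrix([[1, x], [0, 1]])   # polynomial, det Q = 1: a unit of M_2(C[x])

# B(x) = S A(x) Q(x) is semiscalarly equivalent to A(x):
B = (S * A * Q).applyfunc(sp.expand)

# The determinants differ only by the constant det S * det Q:
ratio = sp.simplify(B.det() / A.det())
print(ratio)  # 2
```

In particular, semiscalarly equivalent matrices have the same characteristic roots, which is the starting point of Proposition 1.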

The following two assertions are valid.

Proposition 1. *The partition (3) of the set of characteristic roots of matrix of the form (2) is invariant for the class of semiscalarly equivalent matrices.*

*Proof. *Let the matrices , of the form (2) be semiscalarly equivalent; that is, let the equality (4), where , , hold. We deduce from (4) the relations (5) and (6). Setting and , , , we obtain the relations (7). If , then the left sides of the resulting relations are equal. Therefore, from the equality of the right sides, taking into account that (see (5), (6)), we have . The notion of semiscalar equivalence is a symmetric relation. Then a similar argument yields . This completes the proof.

Proposition 2. *Let be a characteristic root of multiplicity of a matrix of the form (2). Let also be the lowest (nonzero) order of the nonzero derivative of the entry of this matrix at . Then the number (as ) is an invariant of the class of semiscalarly equivalent matrices, if .*

*Proof. *Let the matrices , of the form (2) be semiscalarly equivalent, and let be the lowest (nonzero) order of the nonzero derivative of the entry of the matrix at . Suppose that . From relations (5) and (7), we obtain (9). Substituting into (9), we find (10). Taking the -th derivative of (9) at , we obtain a further equality. Dividing both sides of this equality by and substituting into (10) yields . But this is impossible, since the matrix is nonsingular. Therefore, . Inasmuch as semiscalar equivalence is a symmetric relation, we have . The Proposition is proved.

In what follows it is assumed that in the partition (3). The simplest cases, when or , may be found in other articles by the author (see, for example, [11, 12]). We may take , for the entries of , if . By Proposition 2, for the matrices and to be semiscalarly equivalent it is necessary that whenever , .

We use the following notation: is the degree of the characteristic polynomial of the matrices of the form (2); is the multiplicity of a characteristic root of these matrices; is the lowest (nonzero) order of the nonzero derivative of . If , then by Proposition 2 the semiscalar equivalence of and implies that , , , for the entry of . For the entries of and for an arbitrary characteristic root of theirs we denote by the following quantities: .
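The quantity "lowest (nonzero) order of the nonzero derivative at a root" can be computed mechanically. A sketch assuming `sympy`; the function name and the sample polynomial are our own illustration, not notation from the paper:

```python
import sympy as sp

x = sp.symbols('x')

def lowest_nonzero_derivative_order(f, x0):
    """Smallest s >= 1 such that the s-th derivative of the
    polynomial f is nonzero at x0 (None if no such order exists)."""
    deg = sp.Poly(f, x).degree()
    for s in range(1, deg + 1):
        if sp.diff(f, x, s).subs(x, x0) != 0:
            return s
    return None

# Hypothetical entry a(x) with characteristic root x0 = 0:
a = x**3 + 2*x**5
print(lowest_nonzero_derivative_order(a, 0))  # 3, since a'(0) = a''(0) = 0
```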

The following cases are possible:

*The Case 1.* for every root .

*The Case 2.* There is a root such that and for every root .

*The Case 3.* There is a root such that .

Let us now consider each of them separately.

#### 3. The Case 1

Based on the notation and assumptions defined above, we formulate the following theorem.

Theorem 3. *Let the partition of the set of characteristic roots of the matrix (2) be of the form (3) and , , . Also suppose the entry of the matrix satisfies the condition and for each root . Then, within the class of semiscalarly equivalent matrices, the matrix is determined up to a constant factor of the row .*

*Proof. *Let the entry of the matrix of the form (2) satisfy the condition (see notation (13)). If the matrices and are semiscalarly equivalent, then the equality (4) implies (14) and (15), where . Substituting into the congruence (15), we obtain the system of equalities (16), where . By excluding in the system (16), we find (17), where .

Conversely, the equality (17), after some transformations, can be written in the form (18). Introducing the notations (19) based on the equalities (18), we obtain the system (16). Since for each root , every term of degree in the binomial decomposition of the entries of the matrices vanishes in the powers of . This and the system (16) mean that the congruence (15) is valid. This implies (20). Denoting and , , , it is easy to verify that the equality (4) holds true, where , (see (19)), and . This means that the matrices and are semiscalarly equivalent. The Theorem is proved.

#### 4. The Case 2

In what follows we retain the notation introduced earlier (see, in particular, (13)).

Theorem 4. *Let the partition of the set of characteristic roots of the matrices , of the form (2) be of the form (3) and , , . Let the entries , of these matrices satisfy the condition . Let also for every root and for some roots . The matrices , are semiscalarly equivalent if and only if there exists a number such that the following conditions hold:*(1)* for every root such that ;*(2)* for every root such that and ;*(3)*.*

*Proof. * *Necessity*. If the matrices and are semiscalarly equivalent, then from the congruence (15) we obtain the system of equalities (21) for every root such that . The condition implies condition (1) of the Theorem, for .

From the congruence (15) we can write the system of equations (22) for every root such that and . It is understood that . From the first equation of the system (22) we find (23). Substituting it into the second and every succeeding equality, we obtain (24). Condition (2) of the Theorem is proved.

As in the proof of Theorem 3, from the congruence (15), by virtue of the substitution , , we can easily obtain the equalities (16). By excluding , we arrive at condition (3) of the Theorem. The necessity of conditions (1)–(3) of the Theorem is proved.

*Sufficiency.* Suppose that the conditions of the Theorem are satisfied. If we introduce the notations , then condition (1) means that the equalities (21) are satisfied for every root such that . From this the congruence (25), where is an arbitrary number, follows immediately for every root (but not necessarily ).

The equalities (18) follow from condition (3) of the Theorem. If we introduce the notations (19), then from (18) we can obtain the system (16). This means that the congruence (26) is valid for every root such that .

Using the notations (19), we can pass from condition (2) of the Theorem to the system of equations (22). The latter system is equivalent to the congruence (27) for every root such that and . Taking the congruence (26) into account, from (27) we actually have (28) for every root (but not necessarily ). Combining (25) with (28), we obtain the congruence (15). We complete the proof of the Theorem in a way analogous to the end of the proof of Theorem 3.

#### 5. Auxiliary Statements

In the following studies we need Lemmas 5 and 6, which we prove in the current section.

Lemma 5. *The matrix equation (29) over , where , has nonzero solutions if and only if the conditions (30), where , , are satisfied. If the conditions of the Lemma are satisfied, then every nonzero solution of (29) has .*

*Proof. * *Necessity.* Let (29) have a nonzero solution . Then the equalities (31) and (32) hold. We assume that or . Then, from the first equalities of (31) and (32), we obtain . This contradicts the assumption that the solution is nonzero. So we have . From the equalities (31) we obtain the conditions (30), where , for .

We conclude from (32) that if , then and for . From this it follows that equality (30) holds true for . For this reason, in what follows we assume that , where . From the first and second equalities of (32), by excluding , we obtain (33). If , then . This means that conditions (30) are satisfied for . If , then , and by multiplying both sides of (33) by we find (34). Denote by , the submatrices (35) obtained, respectively, from the matrices by deleting the last two columns and the -th and -th rows. Also denote by , the determinants of the matrices (35), respectively. Expand them by the minors of order two contained in the last two columns. Because (for ), we have (36). Each summand of the expression in parentheses for , possibly except the first two, differs from the corresponding summand for by the multiplier . From this fact and from the equality (34) for , equality (30) follows.

Denote by and the determinants on the left and right sides of equality (30), respectively. Suppose by induction that for all such that . For definiteness we assume that . In the case the proof is not different in principle. From the first equalities of (32), by excluding and by sufficiently evident transformations, we arrive at the system (38), where .

If we add the left sides of the equalities (38) and, separately, their right sides, we obtain (39). Gathering similar terms on both sides of the obtained equality, we obtain (40). It follows from (32) that (41). From this relation it is easy to see that the following equality holds: (42). From (31) and the induction hypothesis, we can write (43). Comparing (40), (42), and (43), we obtain the equality (44); that is, , where . The necessity of the conditions of the Lemma is proved.

*Sufficiency*. Consider the equalities (31) and (32) as one system of equations in three indeterminates . In the conditions (30), . This means that , satisfy the first equation of the system (31). From condition (30) with it follows that , satisfy the second equation of the system (31). Next, the third and all following equalities of the system (31), for , can be obtained recurrently from the conditions (30) with . Evidently, the values (45) satisfy the first equation of (32). We compute both determinants in the equality (30) with . After cancelling equal summands on both sides of the obtained equality and dividing by , simple transformations lead to the relation (46). This means that (45) satisfies the second equation of the system (32).

Assume by induction that (45) satisfies the first equations of the system (32), that is, (47). In what follows we may assume . In the opposite case the proof is completely analogous. Taking into account the conditions (30) and the induction assumption, we can write the equalities (42), (43), and (44). From these equalities we obtain the relation (40). This relation implies the equality (39). It is evident that from the second and all following equalities of (47) we find that the first equalities of (38) are valid. The first equalities of (38), along with the relation (39), yield the last equality of (38). Dividing this equality by and performing some simplifications, we can write it in the form (48). This means that (45) is a solution of the -th equation of the system (32).

We have inductively proved the existence of the nonzero solution (45) of the systems (31) and (32). Thus, the matrix equation (29) has a nonzero solution. The Lemma is proved.
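The general principle behind Lemma 5 — a homogeneous matrix equation has a nonzero solution exactly when the coefficient matrix is singular, which determinant conditions such as (30) detect — can be illustrated on a small hypothetical Toeplitz matrix, assuming `sympy` (the matrix below is our own example, not the matrix of equation (29)):

```python
import sympy as sp

# A hypothetical 3x3 Toeplitz matrix (entries constant along each diagonal);
# its first two rows coincide, so it is singular:
T = sp.Matrix([[1, 1, 1],
               [1, 1, 1],
               [2, 1, 1]])

assert T.det() == 0            # the determinant condition: T is singular
v = T.nullspace()[0]           # hence T v = 0 has a nonzero solution
assert T * v == sp.zeros(3, 1)
assert v != sp.zeros(3, 1)
```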

Lemma 6. *The matrix equation (49) over , where , , , has nonzero solutions if and only if the conditions (50) are satisfied. If the conditions of the Lemma are satisfied, then every nonzero solution of the equation (49) has .*

*Proof. * *Necessity.* The equality (50) for holds true trivially. Let (49) have a nonzero solution. Then the matrix of this equation has rank less than . The first two rows of the matrix are linearly independent. Hence each of the following rows of the matrix depends linearly on these first two rows. Because , we have . This implies the equality (50) for . From the equality we have , where, after division of both sides by , we obtain equality (50) for . Assume by induction that equality (50) holds true for all . Consider the equality