Reduced Matrices Oriented by Characteristic Roots in the Class of Semiscalar Equivalence
A set of polynomial matrices of simple structure is singled out, for which a so-called reduced matrix, oriented by certain characteristic roots, is established in the class of semiscalar equivalence. The invariants of such reduced matrices and the conditions for their semiscalar equivalence are indicated. The obtained results are also applied to the problem of similarity of sets of numerical matrices.
By Theorem 1 (see Section 4 in [1, 2]), a matrix of full rank is reduced by transformation (1) to the lower triangular form with its invariant factors on the main diagonal. We consider the case when the matrix is of simple structure and its first invariant factor is equal to one.
The latter condition does not limit generality, in the sense that everything stated in this article can be extended to matrices which, after reduction (i.e., division of all their elements) by the first invariant factor, become matrices of simple structure. By the definition of a matrix of simple structure, all its elementary divisors are linear. This concept is introduced for polynomial matrices of arbitrary degree by analogy with the concept of numerical matrices (over a field) of simple structure. Note that numerical matrices of simple structure are called simple by some authors and nondefective by others. In what follows, we use the terminology from [1, 2]; namely, transformation (1) and the matrices related by it will be called semiscalarly equivalent (abbreviation: ssk.e.). Our task is to construct in the class a lower triangular matrix with the invariant factors on the main diagonal and some predefined properties, to find a complete system of its invariants, and to establish on this basis the conditions of ssk.e. for two such matrices. This problem has been solved by the author in the general case and, for matrices with one characteristic root, in an earlier paper.
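The two notions this section relies on, invariant factors and simple structure (all elementary divisors linear, i.e., every invariant factor squarefree), can be computed directly. A minimal sympy sketch, using determinantal divisors; the matrix below is an arbitrary illustration, not one from the paper:

```python
# Invariant factors e_k(x) via determinantal divisors:
# d_k(x) = gcd of all k-by-k minors, e_k(x) = d_k(x) / d_(k-1)(x).
# The matrix A is an arbitrary illustration, not taken from the paper.
from itertools import combinations
from sympy import Matrix, symbols, gcd, cancel, factor, diff

x = symbols('x')

def invariant_factors(A):
    n = min(A.rows, A.cols)
    d = []  # determinantal divisors d_1, ..., d_n
    for k in range(1, n + 1):
        g = 0
        for r in combinations(range(A.rows), k):
            for c in combinations(range(A.cols), k):
                g = gcd(g, A[list(r), list(c)].det())
        d.append(g)
    e = [d[0]] + [cancel(d[k] / d[k - 1]) for k in range(1, n)]
    return [factor(f) for f in e]

def is_simple_structure(A):
    # simple structure <=> every elementary divisor is linear
    # <=> every invariant factor is squarefree (gcd(e, e') is constant)
    return all(gcd(e, diff(e, x)).is_constant() for e in invariant_factors(A))

A = Matrix([[1, 0, 0],
            [0, x, 0],
            [1, 0, x*(x - 1)]])
print(invariant_factors(A))  # for this A they coincide with the diagonal
print(is_simple_structure(A))
```

For this triangular example the invariant factors 1, x, x(x − 1) already stand on the main diagonal, the situation the reduction in this paper aims for.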
2. Preliminary Information
In the given class, there is a matrix of form (2), where , , and divides and . Let us denote and . We assume that . Otherwise, a matrix in the class can be chosen so that or ; in these cases, the task of finding the conditions of ssk.e. of the matrices is greatly simplified. Hereinafter, the notation and will mean the sets of roots of the polynomials and , respectively. Recall that such an element is called a characteristic root of the matrix and, for the class, is defined unambiguously (i.e., it does not depend on the choice of the matrix). Obviously, a matrix in the class can be chosen so that, for some , condition (3) is satisfied. In what follows, this condition is mandatory for all triangular matrices of the class. Hereinafter, the notation will mean the greatest common divisor of the polynomials and ; we use the corresponding notation for the greatest common divisor of three polynomials.
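The objects just introduced, the root sets of the diagonal entries and their greatest common divisors, are straightforward to compute. A short sympy sketch with placeholder polynomials (the actual invariant factors are elided in the text):

```python
# Root sets of the diagonal invariant factors and their gcd, with
# placeholder polynomials standing in for the elided entries.
from sympy import symbols, roots, gcd, rem

x = symbols('x')
a = (x - 1)*(x - 2)           # hypothetical second invariant factor
b = (x - 1)*(x - 2)*(x - 3)   # hypothetical third invariant factor

assert rem(b, a, x) == 0      # a divides b, as required of invariant factors

N_a = set(roots(a, x))        # the root set of a(x)
N_b = set(roots(b, x))        # the root set of b(x)
print(N_a, N_b, gcd(a, b))
```

Here every root of the second invariant factor is also a characteristic root of the third, which mirrors the divisibility assumption made above.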
Proposition 1. The greatest common divisor for class does not depend on the choice of the matrix with a fixed characteristic root .
Proof. Let the matrix (4), where , , and , belong to the given class.
Then, for some matrices and , equality (5) holds. From the latter, we can write (6)–(8). If we put in (7) and (8), we get and , respectively. Since divides and , then . Therefore, from (7) and (8), we have that divides and . Given that ssk.e. is symmetric (moreover, ssk.e. is an equivalence relation), we can write that divides and . Therefore, . The proposition is proved.
In this study, we limit ourselves to the case when . Then, and divides . The opposite case, when , is the subject of another study.
3. Invariants of Triangular Matrices
Proposition 2. If , then for class does not depend on the choice of matrix .
Proof. Let (4) be some other matrix of the class for which a condition similar to (3) holds. Since , then, from (6) and (7), we have and . Then, (8), where , can be written as (9). Since , the latter shows that divides . Therefore, given the symmetry of the relation ssk.e., we have that divides and, finally, . The proposition is proved.
We divide the set of roots of the polynomial into subsets according to the following principle: we assign elements to the same subset if the condition holds for them. Thus, we obtain partition (10) of the set :
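The mechanics of such a partition can be sketched in code. The actual grouping condition in (10) is elided in the extracted text; as a placeholder, the sketch below puts two roots in the same subset when a given polynomial f takes equal values at them:

```python
# Sketch of partitioning a root set as in (10). The polynomials delta and f
# are hypothetical placeholders; the paper's defining condition is elided.
from collections import defaultdict
from sympy import symbols, roots

x = symbols('x')
delta = (x - 1)*(x - 2)*(x - 3)  # hypothetical polynomial with simple roots
f = (x - 1)*(x - 2)              # hypothetical polynomial defining the condition

groups = defaultdict(list)
for r in roots(delta, x):
    groups[f.subs(x, r)].append(r)
print(list(groups.values()))
```

With these placeholders, the roots 1 and 2 fall into one subset and the root 3 into another, which is the kind of decomposition the following propositions show to be independent of the choice of matrix.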
Proposition 3. Partition (10) of the set for class does not depend on the choice of matrix if .
Proof. Suppose that, in addition to , the matrix (4) with a condition similar to (3) also belongs to the class. Then, for some and , equality (5) holds, from which we obtain (9) and (11), where . Let belong to one of the subsets of (10), for example, to . If and are put alternately in (9) and the obtained equations are subtracted, then we get .
Similarly, from (11), we obtain .
The last two equations are written in matrix form .
It is easy to see that .
Since in , for , we have , then . Therefore, . The proposition is proved.
If, in (10), , then the elements and in are determined up to a scalar multiplier, and then can be chosen so that . In this case, can be considered canonical in the class. Therefore, we consider the case when .
4. Reduction to a Special Triangular Form
Proof. We can assume that . Otherwise, this is achieved by adding the second row of the matrix, multiplied by some number, to its third row. If and , then, in the first step, from we pass to a matrix (4) in the class such that . To do this, choose so that for every . Then, from the congruence , we find , . Obviously, and . Next, from the congruence , we find , . It is easy to see that and , since . Based on the above selected and the obtained and , we construct the matrices and (4), where , , and . We verify that the constructed matrices , , together with the original matrix , satisfy equality (5). In order not to introduce new notation, we assume that the conditions and are satisfied for . In the second step, from we pass in the class to a matrix with the properties specified in the proposition. To do this, first, from the equation , we find , and then, from the congruence , we find , . From the found , , we construct the matrices and (4), where , , , and .
For the original matrix and the constructed , , and , we verify equality (5). Since , it follows that is the required matrix in the class. The proposition is proved.
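The reduction steps above repeatedly determine an unknown polynomial from a congruence of the form a(x)·u(x) ≡ c(x) (mod b(x)). When gcd(a, b) = 1, such a congruence is solved with the extended Euclidean algorithm; a sympy sketch with placeholder polynomials (the paper's actual entries are elided):

```python
# Solving a(x)*u(x) ≡ c(x) (mod b(x)) via the extended Euclidean algorithm.
# The polynomials a, b, c are hypothetical placeholders.
from sympy import symbols, gcdex, rem, expand

x = symbols('x')
a = x + 2            # hypothetical coefficient polynomial
b = (x - 1)*(x - 3)  # hypothetical modulus
c = x**2 + 1         # hypothetical right-hand side

s, t, h = gcdex(a, b, x)    # s*a + t*b = h = gcd(a, b)
assert h == 1               # the congruence is solvable: a and b are coprime
u = rem(expand(s*c), b, x)  # the solution, reduced modulo b
assert rem(expand(a*u - c), b, x) == 0
print(u)
```

Since s·a ≡ 1 (mod b), the choice u = s·c mod b satisfies the congruence, and reducing modulo b pins down a representative of the unique residue class, the same uniqueness the proof relies on when it "finds" each polynomial from its congruence.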
Proof. If we put and in (11), we have , whence we immediately get , because . Then, (11) takes the form (12). From (12), it is seen that , since . The proposition is proved.
Denote by and the subsets of the set , for elements and of which we have and , respectively. Propositions 2 and 5 imply the following.
Corollary 1. Subsets and for class for fixed do not depend on the choice of the matrix , which satisfies the conditions of Proposition 5.
Since we have imposed the condition that, in (10), , then is not an empty set. If is empty, then . In this case, the problem of ssk.e. for -matrices is reduced to a similar problem for -matrices. The latter, as mentioned in Section 1, is resolved in . Therefore, in the future, we assume that is not an empty set.
Let satisfy the conditions of Proposition 5. It is clear that one of the values and is nonzero. Let . We can also assume that, in , one of the following conditions is fulfilled: for some , if the intersection is not empty, or for some otherwise. Each of these conditions for is achieved by multiplying the first two rows and the first two columns by the corresponding numerical factors. Such a matrix is called an oriented by characteristic roots reduced matrix.
Proof. According to Proposition 5, the matrix in question is upper triangular, and . Equations (9) and (12) hold for the elements of the matrices and . From them, we can obtain congruence (13). If we put in (13), we immediately get . The proposition is proved.
5. Conditions of Semiscalar Equivalence of Reduced Matrices
Theorem 1. Let (2) and (4) be oriented by the same characteristic roots reduced matrices and , .
If, for the sets and defined in the corollary, the intersection is not empty, then the matrices and are ssk.e. if and only if the following conditions are met:
(i) for each .
(ii) The values , are simultaneously equal to zero or are nonzero for each .
(iii) If for some or for some , then or for each and .
If the intersection is empty, then the matrices and are ssk.e. if and only if the following conditions are met:
(1) The values and , as well as the values and , are simultaneously equal to zero or are nonzero for each and .
(2) If for some , then for every , and if for some , then for each .
Proof (necessity). Let . Then, for the elements of the matrices and , equations (9) and (12) hold, in which, according to Proposition 6, in the absence of intersection of the sets and , we have . Therefore, from (13), we immediately have (i). If, in equations (9) and (12), we put , , , and , where and denote arbitrary elements of and , respectively, the result can be written in matrix form as follows: . Hence, we have . From the last equality, (ii) follows, since . If for some , then, from the equality,
For arbitrary and , we obtain each of the equalities in (14), since . If for some , then, for any and , we similarly obtain each of the equalities in (15).
Now, let the sets and not intersect, and let and . If we put , and , , respectively, in (9) and (12), we obtain equations that can be written in matrix form. Hence, we have
Since , then, from the last equality, we obtain (1). If for some , then, from the equality, we get (16) for each . Similarly, if for some , then, from the equality, we obtain (17) for every . The necessity is proved.
(Sufficiency.) Let the sets and intersect. Consider (19) as an equation with the unknown . Here, and take all values from the sets and , respectively; therefore, the matrix on the left side of this equation may have more than four rows. Under the conditions of the theorem, the rank of this matrix is equal to if for every and . Otherwise, when for some or for some , the rank of the matrix is . In each case, this equation has a solution whose first two components are nonzero. If we take into account (i), then from (19) we can pass to the equivalent equation (18). Since, as already mentioned, and take all values from and , respectively, based on the solution of equation (18) we can write the following system of congruences: , where and . The first of the congruences holds modulo because and divides and . Similarly, after multiplying the second congruence by , we can pass to the modulus . So, we obtain .
From the elements of solution , we construct a matrix , where, as before, , and the matrix , where , , , , , , , , and .
For the constructed matrices and and for the original and , we verify equality (5). This means that .
Now, suppose that the sets and do not intersect. Consider (21) and (22) together as a system of two equations with the unknown . Note that, in this system, and take all values from and , respectively. Under conditions (1) and (2) of the theorem, the rank of the matrix of this system does not exceed . It is also easy to see that this system has a solution with all three of its components nonzero, i.e., . If we consider (20) as an equation with the unknown and substitute into it instead of , we can unambiguously find and . This yields a solution of equation (20) whose first three components are nonzero. Based on this solution, we can write the system of congruences (23). Then, by the same reasoning as in the previous case, we conclude that . The theorem is proved.
In this paper, a special form is established for the selected set of matrices of simple structure, which is achieved with the help of ssk.e. transformations and is called the oriented by characteristic roots reduced matrix (Proposition 4). The matrix of this special form is lower triangular and has the invariant factors on the main diagonal together with some predefined properties. Propositions 1–3 and 5 and Corollary 1 indicate the invariants of the reduced matrix. Propositions 5 and 6 establish the form of the left transforming matrix which, together with some right transforming (polynomial) matrix, translates one reduced matrix into another within the class. This matrix is upper triangular with a zero element in position ; in some cases, its first two diagonal elements coincide. Theorem 1 specifies the necessary and sufficient conditions under which the reduced matrices are ssk.e. Thus, Propositions 1–3 and 5, Corollary 1, and Theorem 1 indicate a complete system of invariants of the reduced matrix with respect to ssk.e. It is clear that Theorem 1 can be applied to establish ssk.e. for arbitrary (not reduced) matrices and if they belong to the set selected in this paper. Namely, and are ssk.e. if and only if their reduced matrices and , oriented by the same characteristic roots, satisfy the conditions of Theorem 1. Also, as already mentioned in Section 1, everything described in this article can be extended to matrices which, after division by the first invariant factor, become matrices of simple structure. The proved theorem can be applied to the problem of classification of finite sets of numerical matrices up to similarity. In particular, let the sets and of numerical matrices (over a field) be given. On the matrices of these sets, as coefficients, we construct matrix polynomials and with identity leading coefficients.
Suppose that, after division by their first invariant factors, the polynomial matrices and obtained in this way yield quotients and of simple structure. Then, for the similarity of the sets and , it is necessary and sufficient that the conditions of Theorem 1 hold for the reduced matrices and such that and .
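The construction behind this application can be made concrete: from a set of numerical matrices one forms a monic matrix polynomial with those matrices as coefficients, and a common similarity of the sets conjugates the whole polynomial. A small sympy sketch with arbitrary illustrative matrices (not taken from the paper):

```python
# From a set {A1, A2} build the monic matrix polynomial
# A(x) = I*x**2 + A1*x + A2; a common similarity S of the coefficient sets
# conjugates the whole polynomial: B(x) = S^(-1) A(x) S.
from sympy import Matrix, eye, zeros, symbols, expand

x = symbols('x')
A1 = Matrix([[1, 2], [0, 3]])   # arbitrary illustrative coefficients
A2 = Matrix([[0, 1], [1, 0]])
S = Matrix([[1, 1], [0, 1]])    # hypothetical common similarity transform

A = eye(2)*x**2 + A1*x + A2

# the similar set {B1, B2} and its matrix polynomial
B1, B2 = S.inv()*A1*S, S.inv()*A2*S
B = eye(2)*x**2 + B1*x + B2

# the same S conjugates the whole matrix polynomial
assert (B - S.inv()*A*S).applyfunc(expand) == zeros(2, 2)
print(B)
```

This is the bridge from simultaneous similarity of numerical sets to ssk.e. of matrix polynomials used in the closing paragraph; the converse direction is what Theorem 1 supplies under the paper's hypotheses.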
Data Availability
The data used to support the findings of this study are included within the article as references.
Conflicts of Interest
The author declares that there are no conflicts of interest.
References
P. S. Kazimirskii, Factorization of Matrix Polynomials, Naukova Dumka, Kyiv, Ukraine, 1981.
P. S. Kazimirskii and V. M. Petrychkovych, “On the equivalence of polynomial matrices,” in Theoretical and Applied Problems in Algebra and Differential Equations, Naukova Dumka, Kyiv, Ukraine, 1977.
F. R. Gantmacher, The Theory of Matrices, Chelsea Publishing Company, New York, USA, 1959.
P. Lancaster, The Theory of Matrices, Academic Press, New York, USA, 1969.
R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, UK, 2012.