Abstract

Let be a class of Hankel matrices whose entries, depending on a given matrix , are linear forms in variables with coefficients in a finite field . For every matrix in , it is shown that the varieties specified by the leading minors of orders from 1 to have the same number of points in . Further properties are derived, which show that sets of varieties, tied to a given Hankel matrix, resemble a set of hyperplanes as regards the number of points of their intersections.

1. Introduction

The representation of hypersurfaces of small degree as determinants is a classical subject. For instance, Hesse [1] discussed the representation of the plane quartic by symmetric determinants, and many different problems have been tackled over the years; see, for example, [2, 3]. An important question, when hypersurfaces are defined over finite fields, is the computation of the number of points. In general this is very difficult (see, for example, [4]), and most frequently only bounds are given. This paper considers hypersurfaces over finite fields that are defined by determinants of Hankel matrices whose entries are linear forms in the variables. These Hankel matrices are encountered in the proof of certain properties of finite state automata whose state change is governed by tridiagonal matrices [5, 6]. They also occur in the study of some decoding algorithms for error-correcting codes [7, 8].

It is remarkable that, for these determinantal varieties, the exact number of points can in many instances be explicitly found, in terms of the size of the field and the number of variables.

Let be an irreducible polynomial of degree over with root , which is thus an eigenvalue of the companion matrix , which is assumed to have the coefficients of in the last column, all 1s on the first subdiagonal, and 0s elsewhere [9].
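As a concrete illustration of this convention, the companion matrix can be built and checked against the Cayley-Hamilton identity f(A) = 0. The sketch below uses an assumed sample, p = 2 and f(x) = x^3 + x + 1, which is not taken from the text.

```python
# Minimal sketch (assumed sample data): companion matrix of a monic
# irreducible polynomial over a small prime field F_p, with 1s on the
# first subdiagonal and the (negated) coefficients in the last column.
p = 2
f = [1, 1, 0, 1]            # coefficients a_0..a_3 of f(x) = x^3 + x + 1
m = len(f) - 1

def companion(f, p):
    m = len(f) - 1
    A = [[0] * m for _ in range(m)]
    for i in range(1, m):
        A[i][i - 1] = 1                 # first subdiagonal
    for i in range(m):
        A[i][m - 1] = (-f[i]) % p       # last column: -a_0, ..., -a_{m-1}
    return A

def matmul(X, Y, p):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) % p
             for j in range(n)] for i in range(n)]

A = companion(f, p)

# Cayley-Hamilton check: f(A) must be the zero matrix over F_p.
acc = [[0] * m for _ in range(m)]
P = [[int(i == j) for j in range(m)] for i in range(m)]   # A^0 = I
for c in f:
    acc = [[(acc[i][j] + c * P[i][j]) % p for j in range(m)] for i in range(m)]
    P = matmul(P, A, p)
assert acc == [[0] * m for _ in range(m)]
print("f(A) = 0 over F_%d" % p)
```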

The definition of Hankel matrices that we are dealing with uses the Krylov matrices where is a row vector of independent variables and is a column vector of independent variables. Every Krylov matrix is nonsingular unless and are all-zero vectors, as will be proved later.
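The nonsingularity claim can be spot-checked by brute force. The sketch below assumes Krylov rows of the form x, xA, ..., xA^{m-1} and reuses the assumed sample f(x) = x^3 + x + 1 over F_2; both choices are illustrative assumptions, not data from the text.

```python
from itertools import product

p, m = 2, 3
A = [[0, 0, 1], [1, 0, 1], [0, 1, 0]]   # companion matrix of x^3 + x + 1 over F_2

def det_mod(M, p):
    # Laplace expansion along the first row; fine for tiny matrices
    n = len(M)
    if n == 1:
        return M[0][0] % p
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det_mod(minor, p)
    return total % p

def krylov(x, A, p):
    # rows: x, xA, xA^2, ..., xA^{m-1}
    rows, r = [], x[:]
    for _ in range(len(A)):
        rows.append(r)
        r = [sum(r[k] * A[k][j] for k in range(len(A))) % p
             for j in range(len(A))]
    return rows

# det K(x) = 0 exactly when x is the all-zero vector
for x in product(range(p), repeat=m):
    d = det_mod(krylov(list(x), A, p), p)
    assert (d == 0) == (x == (0,) * m)
print("every Krylov matrix with nonzero x is nonsingular")
```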

Definition 1. The class consists of matrices defined as These are Hankel matrices, because the entries are clearly the same whenever the index sum is constant. When the vector is a fixed element of , the corresponding subclass of is denoted by .
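A minimal sketch of the Hankel structure, under the assumption (suggested by the Krylov construction) that the entries are the bilinear forms x A^{i+j-2} y^T; the sample A, x, and y below are illustrative assumptions.

```python
p, m = 2, 3
A = [[0, 0, 1], [1, 0, 1], [0, 1, 0]]   # companion matrix of x^3 + x + 1 over F_2
x, y = [1, 0, 1], [0, 1, 1]             # arbitrary nonzero sample vectors

def vec_mat(v, A, p):
    return [sum(v[k] * A[k][j] for k in range(len(A))) % p for j in range(len(A))]

def mat_vec(A, v, p):
    return [sum(A[i][k] * v[k] for k in range(len(A))) % p for i in range(len(A))]

# left[i] = x A^i (row vector), right[j] = A^j y (column vector)
left, right = [x[:]], [y[:]]
for _ in range(m - 1):
    left.append(vec_mat(left[-1], A, p))
    right.append(mat_vec(A, right[-1], p))

# B[i][j] = (x A^i) . (A^j y) = x A^(i+j) y^T  (0-indexed)
B = [[sum(left[i][k] * right[j][k] for k in range(m)) % p
      for j in range(m)] for i in range(m)]

# Hankel check: entries agree whenever the index sum i + j is constant
for i in range(m):
    for j in range(m):
        for k in range(m):
            for l in range(m):
                if i + j == k + l:
                    assert B[i][j] == B[k][l]
print(B)
```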

Given a polynomial in the ring , the variety is defined as the set of points in the affine space that annihilate ; that is,

More generally, given polynomials the variety is the set of solutions of the system

Note that is the intersection and is the union of the varieties .

The entries in are bilinear forms of the entries in and . Let denote the leading minor of order of a given Hankel matrix obtained by fixing , and define the determinantal varieties as . Then, we prove that every polynomial is irreducible over (Proposition 10) and obtain the following general result.

Theorem 2. We have if and .
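The theorem can be spot-checked by exhaustive enumeration in a small assumed instance: f(x) = x^3 + x + 1 over F_2, a fixed nonzero y, and entries assumed to be the bilinear forms b_{ij} = x A^{i+j-2} y^T. The zero counts of the leading minors of orders below m agree, while the full determinant vanishes only at the origin.

```python
from itertools import product

p, m = 2, 3
A = [[0, 0, 1], [1, 0, 1], [0, 1, 0]]   # companion matrix of x^3 + x + 1 over F_2
y = [1, 0, 0]                            # fixed nonzero vector (assumed sample)

def det_mod(M, p):
    n = len(M)
    if n == 1:
        return M[0][0] % p
    return sum((-1) ** j * M[0][j]
               * det_mod([row[:j] + row[j + 1:] for row in M[1:]], p)
               for j in range(n)) % p

def leading_minors(x):
    # s_k = x A^k y^T; the Hankel matrix is B[i][j] = s_{i+j} (0-indexed)
    s, v = [], y[:]
    for _ in range(2 * m - 1):
        s.append(sum(x[i] * v[i] for i in range(m)) % p)
        v = [sum(A[i][k] * v[k] for k in range(m)) % p for i in range(m)]
    B = [[s[i + j] for j in range(m)] for i in range(m)]
    return [det_mod([row[:r] for row in B[:r]], p) for r in range(1, m + 1)]

counts = [0] * m
for x in product(range(p), repeat=m):
    d = leading_minors(list(x))
    for r in range(m):
        counts[r] += (d[r] == 0)
print(counts)                   # zero counts of d_1, ..., d_m over F_2^3
assert counts[0] == counts[1]   # leading minors of orders 1..m-1 agree
assert counts[m - 1] == 1       # the determinant vanishes only at x = 0
```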

In the course of proving this theorem, the cardinalities of certain subsets are also computed. The sets are the zero loci of all ’s with (Theorem 18). That is, every is specified by equations of degree higher than ; nevertheless, its cardinality is the same as in the case of the intersections in of distinct hyperplanes. In the next section, preliminary notions, properties, and useful lemmas are collected, while the main results are proved in Section 3.

2. Preliminaries

It is straightforward to check that is a row eigenvector of , associated with the eigenvalue ; that is, .

Let denote the -Frobenius; that is, set for all . The action of is extended to vectors and matrices component-wise. Since , because the entries of this matrix are in , we have ; that is, all eigenvectors of are conjugate vectors under . Hence the matrix reduces to diagonal form over ; that is, , where is the diagonal matrix of the eigenvalues of .
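The conjugacy of the roots under the Frobenius can be illustrated computationally. The sketch below represents the extension field as F_2[t]/(f) for the assumed sample f(x) = x^3 + x + 1 and verifies that the images of the root under repeated Frobenius maps are again roots of f, and that the m conjugates are distinct.

```python
p = 2
f = [1, 1, 0, 1]        # x^3 + x + 1, irreducible over F_2 (assumed sample)
m = len(f) - 1

def pmulmod(a, b, f, p):
    # product of field elements (coefficient lists, low degree first) mod f
    m = len(f) - 1
    res = [0] * (2 * m)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % p
    for i in range(2 * m - 1, m - 1, -1):   # reduce with x^m = -(a_0 + ...)
        c = res[i]
        if c:
            res[i] = 0
            for j in range(m):
                res[i - m + j] = (res[i - m + j] - c * f[j]) % p
    return res[:m]

def f_of(beta):
    # evaluate f at a field element beta
    acc, power = [0] * m, [1] + [0] * (m - 1)
    for c in f:
        acc = [(acc[k] + c * power[k]) % p for k in range(m)]
        power = pmulmod(power, beta, f, p)
    return acc

alpha = [0, 1, 0]                     # the root t of f in F_2[t]/(f)
conjugates, beta = [], alpha
for _ in range(m):
    assert f_of(beta) == [0] * m      # each Frobenius image is again a root
    conjugates.append(tuple(beta))
    frob = beta
    for _ in range(p - 1):            # beta -> beta^p
        frob = pmulmod(frob, beta, f, p)
    beta = frob
assert len(set(conjugates)) == m      # the m conjugates are distinct
print(conjugates)
```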

Observe that, writing (8) as , the columns of are column eigenvectors of . Thus there is a column vector that allows us to write in the form

The following lemma is useful to show that every matrix similar to gives the same class . Let denote the general linear group of × nonsingular matrices with entries in .

Lemma 3. Matrices of that have the same irreducible characteristic polynomial are -similar.

Proof. Let be a root of . To prove the lemma it is sufficient to show that any two matrices and of , having the same characteristic polynomial , are similar. The previous arguments indicate that there are two -matrices and of form (7) such that
Multiplying the first equation by on the left, and by on the right, we have . Thus, the lemma is proved by showing that is a -matrix. Since we may always assume that , where is a suitable column eigenvector of and is of form (7), we have , which is patently invariant under the action of the automorphism ; thus is a -matrix.

Corollary 4. and are -similar.

2.1. and

The equation defines an -linear mapping from into . Taking the vector to be the element of we have . The image is the -linear span of ; hence it is a one-dimensional -linear subspace of .

Equation (8) implies that ; then, introducing the vector it is immediate to see that for every , whenever . The linear forms are transformed into linear forms , and matrix can be written as

Definition 5. Let denote the leading minor of order of a given Hankel matrix in . When there is no ambiguity about the variables, this minor is denoted briefly by . The determinant of is , or .

Lemma 6. Let be a vector of variables, and let be a constant vector in ; then we have the following.
(1) The determinant of is zero if and only if all variables are set equal to zero.
(2) The matrix is a linear combination of nonsingular matrices, the coefficients of the linear combination being the entries of .
(3) The entries of any nontrivial linear combination of the rows of are linearly independent linear forms.

Proof. is not zero, because it is the product of two determinants that are different from zero: In particular, if and only if is the all-zero vector; the same observation holds for . This proves point (1).
Point (2) is proved by writing where the matrices have constant entries that depend on , and taking and for every , that is, . When , we have This implies that .
Point (3) is proved by noting that has only one solution, namely, , and identifies linear combinations of the rows of . It follows that the system formed by any linear combination of the rows has only the all-zero solution; therefore the entries in every row must be linearly independent, by the Rouché–Capelli theorem.

By correspondence (14), every is transformed into a polynomial in the variables s with coefficients in .

2.2. Auxiliary Results

Let be a Vandermonde determinant of order identified by the -tuple .

Definition 7. For every triple of integers , , and such that , the subset of is defined as

Definition 8. The set is defined to be the collection of subsets, each consisting of distinct integers from the set .
Every subset defines a mapping from the set into .

Lemma 9. Consider a Hankel matrix , as defined in (2) with ; the leading minors , are multivariate homogeneous polynomials of degree , which may be written over , in the form where the summation is extended to all combinations of the integers , taking at a time, and the coefficients of the monomials are squares of Vandermonde determinants.

Proof. In matrix (15), the bilinear forms have the explicit expression , where is the row index and is the column index. Each column is a linear combination of columns with coefficients such that all columns with the same coefficient are proportional. Matrix (15) can be written as a sum of the form , where is the matrix which has rank , since every row is proportional to the first row, and the same holds for the columns. The leading minor is computed by writing the determinant as a sum of determinants, each containing a single variable in every column. The determinants with repeated variables vanish, because of the previous observation that their corresponding columns are proportional; in the remaining determinants, the corresponding variable is collected from each column.
The coefficient of the monomial is obtained as follows. Let . Then the coefficient of is equal to . Collecting the common factor, the remaining summation is exactly the same determinant; thus we have , which gives , with the summation extended to every subset of . This concludes the proof.

Proposition 10. The product is not identically zero over .
Furthermore, the leading minors , , are irreducible degree- polynomials over .

Proof. As a consequence of (26), every is irreducible over . Further, since each variable occurs at degree in any , it has maximum degree in the product polynomial . Therefore is not identically zero in , because is certainly less than for any .
To prove the second statement, fix . We check that is irreducible over , using only the fact that is a homogeneous polynomial of degree which is irreducible over . Assume that is reducible over , and let be an irreducible factor of minimal degree . Let be the minimal extension of in which is defined. Since is irreducible, the polynomials , , obtained by applying the Frobenius to are nonproportional irreducible factors of . Hence , which is a contradiction.

Remark 11. The determinant is found to be where is the discriminant of , and the product involving can be seen as a norm in the field ; therefore is irreducible over .

Lemma 12. The variety has cardinality over .

Proof. Equation (17) shows that any is a polynomial of degree with coefficients in ; furthermore, every entry is a linear form with coefficients in . Hence implies ; in turn implies , given that and arguing recursively; finally, implies , while the variable is free and may assume values. The conclusion follows.
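A brute-force illustration of the counting argument, under the assumption that the linear forms in question are the first m - 1 entries x A^k y^T, k = 0, ..., m - 2, of the Hankel matrix, using the assumed sample f(x) = x^3 + x + 1 over F_2: for every nonzero y, the system leaves one free variable and has exactly q solutions.

```python
from itertools import product

p, m = 2, 3
A = [[0, 0, 1], [1, 0, 1], [0, 1, 0]]   # companion matrix of x^3 + x + 1 over F_2

def first_forms(x, y):
    # the first m-1 linear forms s_k = x A^k y^T, k = 0..m-2, evaluated at x
    s, v = [], y[:]
    for _ in range(m - 1):
        s.append(sum(x[i] * v[i] for i in range(m)) % p)
        v = [sum(A[i][k] * v[k] for k in range(m)) % p for i in range(m)]
    return s

for y in product(range(p), repeat=m):
    if y == (0,) * m:
        continue
    sols = sum(first_forms(list(x), list(y)) == [0] * (m - 1)
               for x in product(range(p), repeat=m))
    assert sols == p              # exactly q points, as in Lemma 12
print("each system of m-1 linear forms has exactly q solutions")
```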

Lemma 13. Let , , and assume . Then .

Proof. We use induction on , the case being obvious. The inductive assumption in gives . Fix with for all . Since , for all there is a unique such that . Take . This completes the proof.

3. Main Results

Proposition 14. The equality holds for every .

Proof. In the proof of Lemma 9, it was shown that with . Further, it was noted in that lemma that for every , whenever every .
The relation establishes a one-to-one correspondence between and a subspace of dimension of , and further . There thus exists a one-to-one correspondence between the zeros of in and the zeros of in the one-dimensional subspace of , which is the image of . Referring to (26), which yields the representation of , assuming , considering the change of variables and recalling that the coefficients of the monomials are squares of Vandermonde determinants , we obtain where is the discriminant of the polynomial with root . The variety is obtained by considering and the other variables as , ; thus only when every , ; further . Finally, we have the chain of bijections In conclusion, this equation shows an explicit one-to-one mapping between the zeros of and the zeros of , which implies .

In the following example, the procedure for obtaining a point of from a point of is explicitly illustrated.

Example 15. Consider the irreducible polynomial of degree over , with the transpose companion matrix . Taking , and , the Hankel matrix (17) becomes , where . The forms and , of degrees and , respectively, are . Given a point that is a zero of , a zero of is obtained as follows.
Compute the vector whose first component is , and whose remaining entries are obtained as , ; then compute and construct the vector . Finally, a zero of is obtained as , where is the matrix whose columns are the eigenvectors of in

Remark 16. Since the forms in the first row of (17) are linearly independent, by Lemma 6, a change of variables from to takes a matrix to the form where the variables s are free, and every is a linear form in the s.
Fix the integers and , and let denote the × determinant of a Hankel matrix with free variable entries , and set by definition.

Proposition 17. Let be natural numbers. Let be a Hankel matrix with first row . Let be the set of points with the same first coordinates , such that the minor , and the minors for all . Then has cardinality .

Proof. Observe that the first row of the Hankel matrix completely specifies the leading Hankel submatrix , and consequently also every minor for .
Let denote the th row of . Let be the subset of consisting of all such that is linearly dependent on , .
The case is easily settled. Consider the identity for some , and take ; it follows that for a unique because by hypothesis. Since is any element (i.e., it may assume values in , while is uniquely specified), the assertion is proved.
Now, assume , and note that row is uniquely determined up to position as a linear combination of the preceding rows up to the same position . Extend this linear combination to uniquely determine the remaining elements of the Hankel matrix.
The assertion is a consequence of the following claims.
Claim 1. One has .
Consider a vector , which belongs to if and only if there are , , such that , since ; this same condition implies that the coefficients are uniquely determined by the entries of the vector and by the entry in row .
We know that for each there is a unique such that .
Fix and hence fix , and . The values of , are uniquely specified by the linear combination condition, jointly with the Hankel matrix properties. Since the remaining , are free, the cardinality of is precisely .
Claim 2. Since the first rows of the Hankel matrix are linearly dependent, it follows that for every .
To conclude the proof, it remains to show that the th row, constructed as above, is the only possible th row that leads to a Hankel matrix satisfying the hypotheses of the proposition. This property is the third claim.
Claim 3. If , for every , then for every .
Let be the smallest integer such that the Hankel matrix , with leading minor , has an entire th row that is not a linear combination of the preceding rows: this means that the entry is different from .
Let be the coefficients of the linear combination of the first rows of yielding the row .
From every th row of the matrix , with , the linear combination of the first rows may be subtracted to get a row whose entries with index are zero for every . The counter-diagonal entries between row and the bottom row are . The determinant of and that of the modified matrix are the same; using the generalized Laplace formula for the expansion of a determinant with respect to the last rows, we get .
The contradiction forces , which concludes the proof.

Theorem 18. For all integers , , and such that we have

Proof. For all integers we have , because in each determinant , , the variables , , do not occur; hence Lemma 12 gives the case for all . We may thus assume . Induction will be applied to , the case being obvious. The inductive assumption gives for all . Notice that . Lemma 9 gives . Hence

Remark 19. Take integers , , and such that . Applying Theorem 18, first for and then for , gives .

Proof of Theorem 2. We know by Lemma 6 that , for every ; the proof is completed by showing that for every . This is true by the case of Theorem 18. The variety has only one point, because is an irreducible polynomial over .

Corollary 20. Given , if , the varieties , have cardinality .

Proof. Performing the substitution gives the equation . By hypothesis , the equation always has a solution in , since we have with mod . Thus all varieties with have the same cardinality, say , and the equation implies .
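The substitution argument rests on the standard fact that the power map t -> t^k permutes F_q exactly when gcd(k, q - 1) = 1; this can be checked directly in a sample prime field (q = 7, an assumed illustration).

```python
from math import gcd

q = 7                                          # assumed sample prime field F_7
for k in range(1, q):
    image = {pow(t, k, q) for t in range(q)}   # image of t -> t^k on F_q
    if gcd(k, q - 1) == 1:
        assert image == set(range(q))          # a bijection of F_q
    else:
        assert image != set(range(q))          # misses some values
print("power maps on F_%d behave as expected" % q)
```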

Note that, when has some factor in common with , the cardinalities of are close to but depend on . It is an interesting problem to determine how close these cardinalities are to .

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.