International Journal of Combinatorics

Volume 2011 (2011), Article ID 539030, 29 pages

http://dx.doi.org/10.1155/2011/539030

## Zeons, Permanents, the Johnson Scheme, and Generalized Derangements

Department of Mathematics, Southern Illinois University, Carbondale, IL 62901, USA

Received 20 January 2011; Accepted 1 April 2011

Academic Editor: Alois Panholzer

Copyright © 2011 Philip Feinsilver and John McSorley. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Starting with the zero-square “zeon algebra,” the connection with permanents is shown. Permanents of submatrices of a linear combination of the identity matrix and all-ones matrix lead to moment polynomials with respect to the exponential distribution. A permanent trace formula analogous to MacMahon's master theorem is presented and applied. Connections with permutation groups acting on sets and the Johnson association scheme arise. The families of numbers appearing as matrix entries turn out to be related to interesting variations on derangements. These generalized derangements are considered in detail as an illustration of the theory.

#### 1. Introduction

Functions acting on a finite set can be conveniently expressed using matrices, whereby the composition of functions corresponds to multiplication of the matrices. Essentially, one is considering the induced action on the vector space with the elements of the set acting as a basis. This action extends to tensor powers of the vector space. One can take symmetric powers, antisymmetric powers, and so forth, that yield representations of the multiplicative semigroup of functions. An especially interesting representation occurs by taking nonreflexive, symmetric powers. Identifying the underlying set of cardinality $n$ with $[n]=\{1,\dots,n\}$, the vector space has basis $e_1,\dots,e_n$. The action we are interested in may be found by saying that the elements generate a “zeon algebra,” the relations being that the $e_i$ commute, with $e_i^2=0$, $1\le i\le n$. To get a feeling for this, first we recall the action on Grassmann algebra where the matrix elements of the induced action arise as determinants. For the zeon case, permanents appear.

An interesting connection with the centralizer algebra of the action of the symmetric group comes up. For the defining action on the set $[n]$, represented as 0-1 permutation matrices, the centralizer algebra of matrices commuting with the entire group is generated by $I$, the identity matrix, and $J$, the all-ones matrix. The question was whether they would help determine the centralizer algebra for the action on subsets of a fixed size, $k$-sets, for $k>1$. It is known that the basis for the centralizer algebra is given by the adjacency matrices of the Johnson scheme. Could one find this working solely with $I$ and $J$? The result is that by computing the “zeon powers,” that is, the action of $sI+tJ$, linear combinations of $I$ and $J$, on $k$-sets, the Johnson scheme appears naturally. The coefficients are polynomials in $s$ and $t$ occurring as moments of the exponential distribution. And they turn out to count derangements and related generalized derangements. The occurrence of Laguerre polynomials in the combinatorics of derangements is well known. Here, the hypergeometric function, which is closely related to Poisson-Charlier polynomials, arises rather naturally.

Here is an outline of the paper. Section 2 introduces zeons and permanents. The trace formula is proved. Connections with the centralizer algebra of the action of the symmetric group on sets are detailed. Section 3 is a study of exponential polynomials needed for the remainder of the paper. Zeon powers of $sI+tJ$ are found in Section 4, where the spectra of the matrices are found via the Johnson scheme. Section 5 presents a combinatorial approach to the zeon powers of $sI+tJ$, including an interpretation of exponential moment polynomials by elementary subgraphs. In Section 6, generalized derangement numbers, specifically counting derangements and counting arrangements, are considered in detail. The Appendix has some derangement numbers and arrangement numbers for reference, as well as a page of exponential polynomials. An example expressing exponential polynomials in terms of elementary subgraphs is given there.

#### 2. Representations of Functions Acting on Sets

Let $V$ denote the vector space $\mathbb{R}^n$ or $\mathbb{C}^n$. We will look at the action of a linear map $X$ on $V$ extended to quotients of tensor powers $V^{\otimes k}$. We work with coordinates rather than vectors. First, recall the Grassmann case. To find the action on $\bigwedge^k V$, consider an algebra generated by variables $e_1,\dots,e_n$ satisfying $e_ie_j=-e_je_i$. In particular, $e_i^2=0$.

*Notation 2. *The standard $n$-set will be denoted $[n]=\{1,2,\dots,n\}$. Roman caps $I$, $J$, $K$, and so forth denote subsets of $[n]$. We will identify them with the corresponding ordered tuples. Generally, given an $n$-tuple $(x_1,\dots,x_n)$ and a subset $I\subseteq[n]$, we denote products
$$x_I=\prod_{i\in I}x_i,$$
where the indices are in increasing order if the variables are not assumed to commute.

As an index, we will use $[n]$ to denote the full set $\{1,\dots,n\}$.

Italic $I$ and $J$ will denote the identity matrix and all-ones matrix, respectively.

For a matrix whose labels are subsets of fixed size $k$, dictionary ordering is used. That is, convert to ordered tuples and use dictionary ordering. For example, for $n=4$, $k=2$, we have labels $12,13,14,23,24,34$ for rows one through six, respectively.

A basis for $\bigwedge^k V$ is given by products $e_I=e_{i_1}e_{i_2}\cdots e_{i_k}$, where we consider $I$ as an ordered $k$-tuple. Given a matrix $X$ acting on $V$, let
$$y_i=\sum_j X_{ij}e_j,$$
with corresponding products $y_I=y_{i_1}y_{i_2}\cdots y_{i_k}$; then the matrix $X^{\wedge k}$ has entries given by the coefficients in the expansion
$$y_I=\sum_{|J|=k}\bigl(X^{\wedge k}\bigr)_{IJ}\,e_J,$$
where the anticommutation rules are used to order the factors in $e_J$. Note that for $k=1$ the coefficient of $e_j$ in $y_i$ is $X_{ij}$ itself. And for $k=2$, the coefficient of $e_{j_1}e_{j_2}$ in $y_{i_1}y_{i_2}$ is
$$X_{i_1j_1}X_{i_2j_2}-X_{i_1j_2}X_{i_2j_1}.$$
We see that in general the $IJ$ entry of $X^{\wedge k}$ is the $k\times k$ minor of $X$ with row labels $I$ and column labels $J$. A standard term for the matrix $X^{\wedge k}$ is a *compound matrix*. Noting that $X^{\wedge n}$ is $1\times1$, in particular, yields the one-by-one matrix with entry equal to $\det X$.

In this work, we will use the algebra of *zeons*, standing for “zero-ons,” or, more specifically, “zero-square bosons.” That is, we assume that the variables satisfy the properties
$$e_ie_j=e_je_i,\qquad e_i^2=0.$$
A basis for the algebra is again given by the products $e_I$, $I\subseteq[n]$. At level $k$, the induced matrix $X^{\vee k}$ has entries according to the expansion of $y_I$,
$$y_I=\sum_{|J|=k}\bigl(X^{\vee k}\bigr)_{IJ}\,e_J,$$
similar to the Grassmann case. Since the variables commute, we see that the $IJ$ entry of $X^{\vee k}$ is the *permanent* of the submatrix of $X$ with rows $I$ and columns $J$. In particular, $X^{\vee n}=\operatorname{per}X$. We refer to the matrix $X^{\vee k}$ as the “$k$th zeon power of $X$.”
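As a concrete illustration (our sketch, not from the paper; the helper names `per` and `zeon_power` are ours), permanents and zeon powers can be computed directly from the definition:

```python
from itertools import combinations, permutations
from math import prod

def per(M):
    """Permanent of a square matrix, by brute force over permutations."""
    n = len(M)
    return sum(prod(M[i][s[i]] for i in range(n)) for s in permutations(range(n)))

def zeon_power(X, k):
    """k-th zeon power: rows/columns indexed by k-subsets in dictionary order,
    with IJ entry the permanent of the submatrix of X on rows I, columns J."""
    n = len(X)
    ksets = list(combinations(range(n), k))  # dictionary order
    return [[per([[X[i][j] for j in J] for i in I]) for J in ksets]
            for I in ksets]

X = [[1, 2], [3, 4]]
print(zeon_power(X, 1))  # level 1 recovers X itself: [[1, 2], [3, 4]]
print(zeon_power(X, 2))  # level n is the 1x1 matrix [per X]: [[10]]
```

Brute force is factorial-time, but it suffices for the small checks in this paper.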

##### 2.1. Functions on the Power Set of

Note that $X^{\vee k}$ is indexed by $k$-sets. Suppose that $X$ represents a function on $[n]$. So it is a zero-one matrix with $X_{ij}=1$, the single entry in row $i$, if the function maps $i$ to $j$. The $k$th zeon power of $X$ is the matrix of the induced map on $k$-sets. If the function maps a $k$-set to one of lower cardinality, then the corresponding row in $X^{\vee k}$ has all zero entries. Thus, the induced matrices in general correspond to “partial functions.”

However, if $X$ is a permutation matrix, then $X^{\vee k}$ is a permutation matrix for all $k$. So, given a group of permutation matrices, the map $X\mapsto X^{\vee k}$ is a representation of the group.
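A small sketch (ours, with hypothetical helper names) illustrating both claims: induced matrices of permutation matrices are again permutation matrices, and the construction respects composition:

```python
from itertools import combinations, permutations
from math import prod

def per(M):
    n = len(M)
    return sum(prod(M[i][s[i]] for i in range(n)) for s in permutations(range(n)))

def zeon_power(X, k):
    n = len(X)
    ksets = list(combinations(range(n), k))
    return [[per([[X[i][j] for j in J] for i in I]) for J in ksets]
            for I in ksets]

def perm_matrix(p):
    """0-1 matrix of the permutation i -> p[i]."""
    n = len(p)
    return [[1 if p[i] == j else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

P = perm_matrix([1, 2, 0])   # a 3-cycle on {0, 1, 2}
Q = perm_matrix([1, 0, 2])   # a transposition
P2, Q2 = zeon_power(P, 2), zeon_power(Q, 2)

# Each row of the induced matrix has a single 1, so it is a permutation matrix...
assert all(sorted(row) == [0, 0, 1] for row in P2)
# ...and the induced map respects composition.
assert zeon_power(matmul(P, Q), 2) == matmul(P2, Q2)
```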

##### 2.2. Zeon Powers of $sI+tX$

Our main theorem computes the $k$th zeon power of $sI+tX$ for an $n\times n$ matrix $X$, where $s$ and $t$ are scalar variables. Figure 1 illustrates the proof.

Theorem 2.1. *For a given $n\times n$ matrix $X$, for $1\le k\le n$, and indices $|I|=|J|=k$,
$$\bigl((sI+tX)^{\vee k}\bigr)_{IJ}=\sum_{K\subseteq I\cap J}s^{|K|}\,t^{\,k-|K|}\operatorname{per}\bigl(X_{I\setminus K,\,J\setminus K}\bigr).$$
*

*Proof. *Start with $y_i=se_i+t\xi_i$, where $\xi_i=\sum_jX_{ij}e_j$. Given $I$, we want the coefficient of $e_J$ in the expansion of the product $y_I$. Now,
$$y_I=\prod_{i\in I}\bigl(se_i+t\xi_i\bigr).$$
Choose $K\subseteq I$ with $|K|=\ell$, $0\le\ell\le k$. A typical term of the product has the form
$$s^{\ell}\,t^{\,k-\ell}\,e_K\,\xi_{I\setminus K},$$
where $e_K=\prod_{i\in K}e_i$, and $\xi_{I\setminus K}$ denotes the product of the terms $\xi_i$ with indices in $I\setminus K$. Expanding, we have
$$\xi_{I\setminus K}=\sum_{|J'|=k-\ell}\operatorname{per}\bigl(X_{I\setminus K,\,J'}\bigr)\,e_{J'}.$$
Thus, for a contribution to the coefficient of $e_J$, we have $e_Ke_{J'}=e_J$, where $K\cap J'=\varnothing$. That is, $K\subseteq I\cap J$ and $J'=J\setminus K$. So, the coefficient of $e_J$ is as stated.

##### 2.3. Trace Formula

Another main feature is the *trace formula*, which exhibits the permanent of $sI+tX$ as the generating function for the traces of the zeon powers of $X$. This is the zeon analog of the theorem of MacMahon for representations on symmetric tensors.

Theorem 2.2. *One has the formula
$$\operatorname{per}(sI+tX)=\sum_{k=0}^{n}s^{\,n-k}\,t^{\,k}\operatorname{tr}\bigl(X^{\vee k}\bigr),$$
with the $k=0$ term understood to be $s^n$.*

*Proof. *The permanent of $sI+tX$ is the single entry of $(sI+tX)^{\vee n}$. Specialize $I=J=[n]$, $k=n$ in Theorem 2.1. So $K$ is any subset of $[n]$, with $K^c$ its complement in $[n]$. Thus,
$$\operatorname{per}(sI+tX)=\sum_{K\subseteq[n]}s^{|K|}\,t^{\,n-|K|}\operatorname{per}\bigl(X_{K^c,K^c}\bigr)=\sum_{k=0}^{n}s^{\,n-k}\,t^{\,k}\operatorname{tr}\bigl(X^{\vee k}\bigr),$$
as required.
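In this normalization, the trace formula reads $\operatorname{per}(sI+tX)=\sum_{k=0}^n s^{n-k}t^k\operatorname{tr}(X^{\vee k})$; the following numerical check is our own sketch, not the paper's code:

```python
from itertools import combinations, permutations
from math import prod

def per(M):
    n = len(M)
    return sum(prod(M[i][s[i]] for i in range(n)) for s in permutations(range(n)))

def trace_level(X, k):
    """tr(X^{vee k}) = sum of permanents of principal k x k submatrices."""
    n = len(X)
    return sum(per([[X[i][j] for j in K] for i in K])
               for K in combinations(range(n), k))

def check(X, s, t):
    n = len(X)
    lhs = per([[s * (i == j) + t * X[i][j] for j in range(n)] for i in range(n)])
    rhs = sum(s ** (n - k) * t ** k * trace_level(X, k) for k in range(n + 1))
    return lhs == rhs

X = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
assert check(X, 2, 3) and check(X, -1, 1)
```

The $k=0$ term is handled automatically, since the permanent of the empty matrix is 1.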

##### 2.4. Permutation Groups

Let $P$ be an $n\times n$ permutation matrix. We can express $\operatorname{per}(sI+tP)$ in terms of the cycle decomposition of the associated permutation.

Proposition 2.3. *For a permutation matrix $P$,
$$\operatorname{per}(sI+tP)=\prod_{l\ge1}\bigl(s^l+t^l\bigr)^{c_l},$$
where $c_l$ is the number of cycles of length $l$ in the cycle decomposition of the corresponding permutation.*

*Proof. *Decomposing the permutation associated to $P$ yields a decomposition into invariant subspaces of the underlying vector space $V$. So $\operatorname{per}(sI+tP)$ will be the product of the corresponding factors as one runs through the cycles, with $P$ restricted to the invariant subspace for each cycle. So we have to check that if $P$ acts on $V$ as a cycle of length $l$, then $\operatorname{per}(sI+tP)=s^l+t^l$. For this, apply Theorem 2.2. Apart from level zero, there is only one set fixed by such a cycle, namely the full set at level $l$. So the trace of $P^{\vee k}$ is zero unless $k=0$ or $k=l$, and then it is one. The result follows.
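A hedged reading of the proposition: for a permutation matrix $P$ with $c_l$ cycles of length $l$, $\operatorname{per}(sI+tP)=\prod_l(s^l+t^l)^{c_l}$. A brute-force check (our code, not the paper's):

```python
from itertools import permutations
from math import prod

def per(M):
    n = len(M)
    return sum(prod(M[i][s[i]] for i in range(n)) for s in permutations(range(n)))

def cycle_type(p):
    """Cycle lengths of the permutation i -> p[i]."""
    seen, lengths = set(), []
    for i in range(len(p)):
        if i not in seen:
            l, j = 0, i
            while j not in seen:
                seen.add(j)
                j, l = p[j], l + 1
            lengths.append(l)
    return lengths

def check(p, s, t):
    n = len(p)
    M = [[s * (i == j) + t * (p[i] == j) for j in range(n)] for i in range(n)]
    return per(M) == prod(s ** l + t ** l for l in cycle_type(p))

assert check([1, 2, 0, 4, 3, 5], 2, 3)   # cycle type (3, 2, 1)
assert check([1, 0, 3, 2], -1, 1)        # two 2-cycles
```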

###### 2.4.1. Cycle Index: Orbits on -sets

Now, consider a group, $G$, of permutation matrices. We have the cycle index
$$Z_G=\frac{1}{|G|}\sum_{g\in G}\prod_l z_l^{c_l(g)},$$
each $z_l$ corresponding to $l$-cycles in the cycle decomposition associated to the $g$'s. From Proposition 2.3, we have an expression in terms of permanents. Combining with the trace formula, we get the following.

Theorem 2.4. *Let $G$ be a permutation group of $n\times n$ matrices, then one has
$$\frac{1}{|G|}\sum_{g\in G}\operatorname{per}(sI+tg)=\frac{1}{|G|}\sum_{g\in G}\prod_l\bigl(s^l+t^l\bigr)^{c_l(g)}=\sum_{k=0}^{n}s^{\,n-k}\,t^{\,k}\,\bigl(\#\text{ of orbits of }G\text{ on }k\text{-sets}\bigr).$$
*

*Remark 2.5. *This result brings together three essential theorems in the theory of groups acting on sets. Equality of the first and last expressions is the “permanent” analog of Molien's theorem, which is the case for a group acting on the symmetric tensor algebra. That the cycle index counts orbits on subsets is an instance of Pólya counting with two colors. The last expression follows by the Cauchy-Burnside lemma applied to the groups $\{g^{\vee k}: g\in G\}$.

###### 2.4.2. Centralizer Algebra and Johnson Scheme

Given a group, $G$, of permutation matrices, an important question is to determine the set (among all matrices) of matrices commuting with all of the matrices in $G$. This is the *centralizer algebra* of the group. For the symmetric group, the only such matrices are linear combinations of $I$ and $J$. For the action of the symmetric group on $k$-sets, a basis for the centralizer algebra is given by the incidence matrices for the Johnson distance. These are the same as the adjacency matrices for the Johnson (association) scheme. Recall that the Johnson distance between two $k$-sets $I$ and $J$ is
$$d(I,J)=k-|I\cap J|.$$
The corresponding matrices are defined by
$$(J_d)_{IJ}=\begin{cases}1,&\text{if }d(I,J)=d,\\ 0,&\text{otherwise.}\end{cases}$$
It is known [1, page 36] that a basis for the centralizer algebra is given by the orbits of the group acting on pairs; since the orbits of the symmetric group on pairs of $k$-sets are determined by the Johnson distance, the Johnson matrices are a basis for the centralizer algebra. Since the Johnson distance is symmetric, it suffices to look at unordered pairs.
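A short construction (ours) of the Johnson basis from the distance $d(I,J)=k-|I\cap J|$, confirming that the matrices sum to the all-ones matrix and commute pairwise, as befits an association scheme:

```python
from itertools import combinations

def johnson_basis(n, k):
    """Adjacency matrices J_d with (J_d)_{IJ} = 1 iff d(I, J) = k - |I ∩ J| = d."""
    ksets = list(combinations(range(n), k))
    def dist(I, J):
        return k - len(set(I) & set(J))
    return [[[1 if dist(I, J) == d else 0 for J in ksets] for I in ksets]
            for d in range(min(k, n - k) + 1)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

basis = johnson_basis(5, 2)
m = len(basis[0])
# d = 0 gives the identity; the matrices sum to the all-ones matrix.
assert basis[0] == [[1 if i == j else 0 for j in range(m)] for i in range(m)]
assert [[sum(B[i][j] for B in basis) for j in range(m)] for i in range(m)] == \
       [[1] * m for _ in range(m)]
# The basis matrices commute pairwise (commutative association scheme).
assert all(matmul(A, B) == matmul(B, A) for A in basis for B in basis)
```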

Now, we come to the question that is a starting point for this work. If $I$ and $J$ are the only matrices commuting with all elements (as matrices) of the symmetric group, then since the map $X\mapsto X^{\vee k}$ is a homomorphism, we know that $I^{\vee k}$ and $J^{\vee k}$ are in the centralizer algebra of the action on $k$-sets. The question is how to obtain the rest. The, perhaps surprising, answer is that in fact one can obtain the complete Johnson basis from $I$ and $J$ alone. This will be one of the main results, Theorem 4.1.

###### 2.4.3. Permanent of $sI+tJ$

First, let us consider $\operatorname{per}(sI+tJ)$.

Proposition 2.6. *One has the formula
$$\operatorname{per}(sI+tJ)=\sum_{k=0}^{n}\binom{n}{k}(n-k)!\,s^{\,k}\,t^{\,n-k}. \tag{2.19}$$
*

*Proof. *For $X=J$, we see directly, since all entries equal one in all submatrices, that
$$\bigl(J^{\vee k}\bigr)_{IJ}=k!$$
for all $I$ and $J$. Taking traces,
$$\operatorname{tr}\bigl(J^{\vee k}\bigr)=\binom{n}{k}k!,$$
and by the trace formula, Theorem 2.2,
$$\operatorname{per}(sI+tJ)=\sum_{k=0}^{n}s^{\,n-k}\,t^{\,k}\binom{n}{k}k!.$$
Reversing the order of summation yields the result stated.
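In the normalization used here, $\operatorname{per}(sI+tJ)=\sum_k\binom{n}{k}(n-k)!\,s^kt^{n-k}$; the following brute-force check (our sketch) also exhibits the derangement specialization $s=-1$, $t=1$:

```python
from itertools import permutations
from math import comb, factorial, prod

def per(M):
    n = len(M)
    return sum(prod(M[i][s[i]] for i in range(n)) for s in permutations(range(n)))

def per_sI_tJ(n, s, t):
    """Permanent of the n x n matrix sI + tJ, computed directly."""
    return per([[s * (i == j) + t for j in range(n)] for i in range(n)])

def closed_form(n, s, t):
    return sum(comb(n, k) * factorial(n - k) * s ** k * t ** (n - k)
               for k in range(n + 1))

for n in range(6):
    for s, t in [(2, 3), (-1, 1), (1, 1), (5, -2)]:
        assert per_sI_tJ(n, s, t) == closed_form(n, s, t)

print(closed_form(4, -1, 1))   # derangement number D_4 = 9
```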

Corollary 2.7. *For varying $n$, one will explicitly denote $p_n(s,t)=\operatorname{per}(sI+tJ)$ for the $n\times n$ case; then, with $D=d/ds$,
$$p_n(s,t)=(1-tD)^{-1}s^n.$$
*

The Corollary exhibits the operational formula $(1-tD)^{-1}=\sum_{k\ge0}t^kD^k$, where $D=d/ds$. By inspection, this agrees with (2.19) as well.

Observe that (2.19) can be rewritten as
$$\operatorname{per}(sI+tJ)=\int_0^\infty(s+tx)^n\,e^{-x}\,dx,$$
that is, these are “moment polynomials” for the exponential distribution with an additional scale parameter.
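Since $\int_0^\infty x^m e^{-x}\,dx=m!$, the permanent polynomial equals $\int_0^\infty(s+tx)^n e^{-x}\,dx$; a crude numerical check (ours, with a truncated trapezoid rule):

```python
from math import comb, exp, factorial

def poly_form(n, s, t):
    return sum(comb(n, k) * factorial(n - k) * s ** k * t ** (n - k)
               for k in range(n + 1))

def integral_form(n, s, t, upper=60.0, steps=200_000):
    """Trapezoid rule on [0, upper]; the e^{-x} tail beyond is negligible."""
    h = upper / steps
    total = 0.0
    for i in range(steps + 1):
        x = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * (s + t * x) ** n * exp(-x)
    return total * h

for n in range(5):
    exact = poly_form(n, 2, 3)
    assert abs(integral_form(n, 2.0, 3.0) - exact) < 1e-3 * max(1, abs(exact))
```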

We proceed to examine these moment polynomials in detail.

#### 3. Exponential Polynomials

For the exponential distribution, with density $e^{-x}$ on $(0,\infty)$, the *moment polynomials* are defined as
$$\phi_n(s,t)=\int_0^\infty(s+tx)^n\,e^{-x}\,dx.$$
The exponential density embeds naturally into the family of weights of the form $x^\alpha e^{-x}$ on $(0,\infty)$, as for generalized Laguerre polynomials. We define correspondingly
$$\phi^{(\alpha)}_n(s,t)=\frac{t^\alpha}{\alpha!}\int_0^\infty(s+tx)^n\,x^\alpha e^{-x}\,dx$$
for nonnegative integers $\alpha$, introducing a factor of $x^\alpha/\alpha!$ and a scale factor of $t^\alpha$. We refer to these as *exponential moment polynomials*.

Proposition 3.1. *Observe the following properties of the exponential moment polynomials. *(1)*The generating function **
for . *(2)*The operational formula **
where is the identity operator and . *(3)*The explicit form *

*Proof. *For the first formula, multiply the integral by and sum to get
which yields the stated result.

For the second, write
using the shift formula .

For the third, expand by the binomial theorem and integrate.

A variation we will encounter in the following is replacing the index for (3.9) and reversing the order of summation for the last line. And for future reference, consider the integral formula

##### 3.1. Hypergeometric Form

Generalized hypergeometric functions provide expressions for the exponential moment polynomials that are often convenient. In the present context, we will use ${}_2F_0$ functions, defined by
$${}_2F_0(a,b;\,;x)=\sum_{k\ge0}\frac{(a)_k(b)_k}{k!}\,x^k,$$
where $(a)_k=a(a+1)\cdots(a+k-1)$ is the usual Pochhammer symbol. In particular, if $a$, for example, is a negative integer, the series reduces to a polynomial. Rearranging factors in the expressions for the exponential moment polynomials, via (3) in Proposition 3.1, and (3.8), we can formulate these as hypergeometric functions.

Proposition 3.2. *One has the following expressions for exponential moment polynomials:
*

#### 4. Zeon Powers of $sI+tJ$

We want to calculate $(sI+tJ)^{\vee k}$, that is, the matrix with rows and columns labelled by $k$-subsets of $[n]$, with the $IJ$ entry equal to the permanent of the corresponding submatrix of $sI+tJ$. This is equivalent to the induced action of the original matrix on the $k$th zeon space.

Theorem 4.1. *The zeon power of $sI+tJ$ is given by
$$(sI+tJ)^{\vee k}=\sum_{d}d!\,\phi^{(d)}_{k-d}(s,t)\,J_d,$$
where the $\phi$'s are exponential moment polynomials.*

*Proof. *Choose and with . By Theorem 2.1, we have, using the fact that all of the entries of are equal to ,
Now, if , then , and there are subsets of satisfying the conditions of the sum. Hence the result.
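Whatever the precise coefficients, the theorem asserts in particular that each entry of $(sI+tJ)^{\vee k}$ depends only on the Johnson distance between its row and column labels. A brute-force check (ours) at integer values of $s$ and $t$:

```python
from itertools import combinations, permutations
from math import prod

def per(M):
    n = len(M)
    return sum(prod(M[i][s[i]] for i in range(n)) for s in permutations(range(n)))

def entries_by_distance(n, k, s, t):
    """Map Johnson distance d -> set of entries ((sI+tJ)^{vee k})_{IJ} with d(I,J)=d."""
    A = [[s * (i == j) + t for j in range(n)] for i in range(n)]
    out = {}
    for I in combinations(range(n), k):
        for J in combinations(range(n), k):
            d = k - len(set(I) & set(J))
            out.setdefault(d, set()).add(per([[A[i][j] for j in J] for i in I]))
    return out

for s, t in [(2, 3), (-1, 1), (1, 1)]:
    by_d = entries_by_distance(5, 2, s, t)
    # Within each Johnson distance class the permanent is constant.
    assert all(len(vals) == 1 for vals in by_d.values())
```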

Note that the specialization $k=n$, $I=J=[n]$, recovers (2.19).

We can write the above expansion using the hypergeometric form of the exponential moment polynomials, Proposition 3.2,

##### 4.1. Spectrum of the Johnson Matrices

Recall, for example, [2, page 220], that for given $n$ and $k$ the Johnson matrices have $k+1$ common eigenspaces; the eigenvalue on the $i$th eigenspace has multiplicity $\binom{n}{i}-\binom{n}{i-1}$, for $0\le i\le k$.

For -sets, the Johnson distance takes values from 0 to , with taking values from that same range.

##### 4.2. The Spectrum of $(sI+tJ)^{\vee k}$

Recall that as the Johnson matrices are symmetric and generate a commutative algebra, they are simultaneously diagonalizable by an orthogonal transformation of the underlying vector space. Diagonalizing the equation in Theorem 4.1, we see that the spectrum of $(sI+tJ)^{\vee k}$ is given as follows.

Proposition 4.2. *The spectrum of is given by
**
for , with respective multiplicities .*

*Proof. *In the sum over in (4.4), only the last two factors involve . We have
using the binomial theorem to sum out . Filling in the additional factors yields
Taking out a denominator factor of and multiplying by gives
which is precisely as in the third statement of Proposition 3.1.

As in Proposition 3.2, we can express the eigenvalues as follows.

Corollary 4.3. *The spectrum of consists of the eigenvalues
**
for , with corresponding multiplicities as indicated above.*

##### 4.3. Row Sums and Trace Identity

For the row sums, we know that the all-ones vector is a common eigenvector of the Johnson basis corresponding to $i=0$. The eigenvalues there are the valencies $v_d=\binom{k}{d}\binom{n-k}{d}$. For the Johnson scheme, this is standard, for example, see [2, page 219], and can be checked directly from the eigenvalue formula, (4.4), with $i$ set to zero. Setting $i=0$ in Proposition 4.2 gives the row sums of $(sI+tJ)^{\vee k}$.

###### 4.3.1. Trace Identity

Terms on the diagonal come from the coefficient of $J_0$, which is the identity matrix; the trace is $\binom{n}{k}$ times that coefficient. Cancelling factorials and reversing the order of summation yields the following formula.

Now, Proposition 4.2 gives the trace

Proposition 4.4. *Equating the above expressions for the trace yields the identity
*

*Example 4.5. *For , , we have
One can check that the entries are in agreement with Theorem 4.1. The trace is . The spectrum is
and the trace can be verified from these as well.

*Remark 4.6. *What is interesting is that these matrices have polynomial entries with all eigenvalues polynomials as well, and furthermore, the exact same set of polynomials produces the eigenvalues as well as the entries. Specializing and to integers, a similar statement holds. All of these matrices will have integer entries with integer eigenvalues, all of which belong to closely related families of numbers. We will examine interesting cases of this phenomenon later on in this paper.

#### 5. Permanents from $sI+tJ$

Here, we present a proof via recursion of the subpermanents of $sI+tJ$, thereby recovering Theorem 4.1 from a different perspective.

*Remark 5.1. *For the remainder of this paper, we will work with an $m\times m$ matrix corresponding to an $m\times m$ submatrix of the above discussion. Here, we have blown up the submatrix to full size as the object of consideration.

Let $B_k$ denote the $m\times m$ matrix with $k$ entries equal to $s+t$ on the main diagonal, and $t$'s elsewhere. Note that $B_0=tJ$ and $B_m=sI+tJ$, where $I$ and $J$ are $m\times m$. Define $p_{m,k}$ to be the permanent of $B_k$.

For $k=0$, all rows of $B_0$ are equal, giving $p_{m,0}=\operatorname{per}(tJ)=m!\,t^m$, and, recalling (2.19), $p_{m,m}=\operatorname{per}(sI+tJ)$. These agree at $m=0$, where both reduce to $1$.

Theorem 5.2. *For $m\ge1$, $1\le k\le m$, one has the recurrence
$$p_{m,k}=p_{m,k-1}+s\,p_{m-1,k-1}.$$
*

*Proof. *We have so , that is, the matrix contains at least 1 entry on its main diagonal equal to . Write the block form
with the row vector of all s, and is its transpose. Now, compute the permanent of expanding along the first row. We get
where is the contribution to involving . Now,
Thus, from (5.5),
and hence the result.

We arrange the polynomials in a triangle, with the columns labelled by $k$ and rows by $m$, starting with $p_{0,0}=1$ at the top vertex. The recurrence says that to get the $(m,k)$ entry, you combine the elements in column $k-1$ in rows $m-1$ and $m$, forming an L-shape. Thus, given the first column $p_{m,0}=m!\,t^m$, the table can be generated in full.

Now, we check that these are indeed our exponential moment polynomials. Additionally, we derive an expression for $p_{m,k}$ in terms of the initial sequence $p_{m,0}$. For clarity, we will explicitly denote the dependence on $s$ and $t$ where needed.

Theorem 5.3. *For , one has *(1)*the permanent of the matrix with entries on the diagonal equal to and all other entries equal to is*(2)*(3)** the complementary sum is *

*Proof. *The initial sequence as noted in (5.2). We check that satisfies recurrence (5.3). Starting from the integral representation for , (3.2), we have
as required, where we now identify , , and . And (3.10) gives an explicit form for .

For (2), starting with the integral representation for , we get
as required. The proof for (3) is similar, using (3.12),
and the binomial theorem for the sum.
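In the notation we use here (our own), write $B(n,j)$ for the $n\times n$ matrix with $j$ diagonal entries $s+t$ and all other entries $t$; part (1) of the theorem then amounts to $\operatorname{per}B(n,j)=\sum_i\binom{j}{i}s^it^{n-i}(n-i)!$, which the sketch below checks by brute force:

```python
from itertools import permutations
from math import comb, factorial, prod

def per(M):
    n = len(M)
    return sum(prod(M[i][s[i]] for i in range(n)) for s in permutations(range(n)))

def B(n, j, s, t):
    """n x n matrix: first j diagonal entries s+t, all other entries t."""
    return [[(s + t) if (r == c and r < j) else t for c in range(n)]
            for r in range(n)]

def formula(n, j, s, t):
    return sum(comb(j, i) * s ** i * t ** (n - i) * factorial(n - i)
               for i in range(j + 1))

for n in range(1, 6):
    for j in range(n + 1):
        for s, t in [(2, 3), (-1, 1), (1, 1)]:
            assert per(B(n, j, s, t)) == formula(n, j, s, t)
```

Which $j$ diagonal cells carry $s+t$ is immaterial, since the permanent is invariant under simultaneous row and column permutations.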

##### 5.1. $(sI+tJ)^{\vee k}$ Revisited

Now, we have an alternative proof of Theorem 4.1.

Lemma 5.4. *Let $I$ and $J$ be $k$-subsets of $[n]$ with $d(I,J)=d$, then
$$\bigl((sI+tJ)^{\vee k}\bigr)_{IJ}=p_{k,k-d}.$$
*

*Proof. *Now, $|I\cap J|=k-d$, so the submatrix is permutationally equivalent to the matrix with $k-d$ entries $s+t$ on its main diagonal and $t$'s elsewhere, that is, to the matrix $B_{k-d}$ of order $k$. Hence, by definition of $p_{k,k-d}$, (5.1), we have the result.

Thus, the expansion in the Johnson basis is

*Proof. *Let $I$ and $J$ be $k$-subsets of $[n]$ with Johnson distance $d(I,J)=d$. By definition, the $IJ$ entry of the LHS of (5.16) equals the permanent of the submatrix of $sI+tJ$ from rows $I$ and columns $J$, by Lemma 5.4 and Theorem 5.3(1). Now, on the RHS of (5.16), if $d(I,J)=d$, the only nonzero contribution comes from the $J_d$ term. This yields the result as required.

##### 5.2. Elementary Subgraphs and Permanents

There is an approach to permanents of via elementary subgraphs, based on that of Biggs [3] for determinants.

An *elementary subgraph* (see [3, page 44]) of a graph is a spanning subgraph all of whose components are 0-, 1-, or 2-regular, that is, all of whose components are isolated vertices, isolated edges, or cycles of length at least 3.

Let $K$ be a copy of the complete graph with vertex set $[m]$ in which the first $j$ vertices are *distinguished*. We may now consider the matrix $B_j$ as the *weighted* adjacency matrix of $K$ in which the weights of the distinguished vertices are $s+t$, with all undistinguished vertices and all edges assigned a weight of $t$.

Let be an elementary subgraph of , then we describe as having distinguished isolated vertices and cycles. The weight of , , is defined as a homogeneous polynomial of degree .

This leads to an interpretation/derivation of as the permanent .

Theorem 5.5. *One has the expansion in elementary subgraphs
*

*Proof. *Assign weights to the components of the elementary subgraph as follows:

each distinguished isolated vertex will have weight $s+t$;

each undistinguished isolated vertex will have weight $t$;

each isolated edge will have weight $t^2$;

and each $l$-cycle, $l\ge3$, will have weight $2t^l$.

To obtain agreement with (5.17), we form the product of these weights over all components of the elementary subgraph. The proof then follows along the lines of Proposition 7.2 of [3, page 44], slightly modified to incorporate isolated vertices and with determinant, “$\det$,” replaced by permanent, “$\operatorname{per}$,” ignoring the minus signs. Effectively, each term in the permanent expansion thus corresponds to a weighted elementary subgraph of the weighted complete graph.

See Figure 2 for an example with .

##### 5.3. Associated Polynomials and Some Asymptotics

Thinking of $s$ and $t$ as parameters, we define the *associated polynomials*
As in the proof of (3) above, using the integral formula (3.12), we have
Comparing with (5.2), we have the following.

Proposition 5.6. *Consider
*

And one has the following.

Proposition 5.7. *As , for ,
**
with the special cases
*

*Proof. *From (5.20),
from which the result follows.

#### 6. Generalized Derangement Numbers

The formula (2.19) is suggestive of the derangement numbers (see, e.g., [4, page 180]),
$$D_n=n!\sum_{k=0}^{n}\frac{(-1)^k}{k!}.$$
This leads to the following.

*Definition 6.1. *A family of numbers, depending on $n$ and $k$, arising as the values of the permanents $p_{n,k}$ when $s$ and $t$ are assigned fixed integer values, are called *generalized derangement numbers*.

We have seen that the assignment $s=-1$, $t=1$ produces the usual derangement numbers when $k=n$. In this section, we will examine in detail the cases $s=-1$, $t=1$, generalized *derangements*, and $s=t=1$, generalized *arrangements*.

*Remark 6.2. *Topics related to this material are discussed in Riordan [5]. The paper [6] is of related interest as well.

##### 6.1. Generalized Derangements of

To start, define the generalized derangement numbers by specializing $s=-1$, $t=1$. Equation (5.9) and Proposition 3.2 give the corresponding hypergeometric form. Equation (5.2) then reads as $D_n$, the number of derangements of $[n]$. So we have a combinatorial interpretation at $k=n$.

###### 6.1.1. Combinatorial Interpretation of

We now give a combinatorial interpretation of for .

When $s=-1$ and $t=1$, recurrence (5.3) gives
We say that a subset of is *deranged* by a permutation if no point of is fixed by the permutation.

Proposition 6.3. *At $k=n$, one obtains $D_n$, the number of derangements of $[n]$. In general, for $0\le k\le n$, one obtains the number of permutations of $[n]$ in which the set $[k]$ is deranged, with no restrictions on the complementary $(n-k)$-set.*

*Proof. *For , let denote the set of permutations in the statement of the proposition. Let . We claim that .

The case is immediate. We show that satisfies recurrence (6.5).

Now, let . Consider a permutation in . The point is either (1) deranged, or (2) not deranged (i.e., fixed). (1)If is deranged, then the -set is deranged. By switching in all permutations of , we obtain a permutation in . Conversely, given any permutation of , we switch to obtain a permutation in where is deranged. Hence, the number of permutations in with deranged equals . (2)Here, is fixed, so if we remove from any permutation in we obtain a permutation in . Conversely, given a permutation in , we may include as a fixed point to obtain a permutation in with fixed. Hence, the number of permutations in with fixed equals . Combining the above two paragraphs shows that satisfies recurrence (6.5).

And as a quick check: for $k=0$, there being no restrictions at all in the combinatorial interpretation, the count is $n!$, in agreement with (6.3) for $k=0$.

*Example 6.4. *When , we have corresponding to the 2 permutations of [1] in which is moved: .

Then, corresponding to the 3 permutations of [1] in which is moved: .

Then, corresponding to the 4 permutations of [1] in which is moved: .

Finally, corresponding to the 3 permutations of [1] in which is moved: .

Reversing the order of summation in (6.3) gives an alternative expression

*Remark 6.5. *Formulation (6.7) may be proved directly by inclusion-exclusion on permutations fixing given points.
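Both the combinatorial interpretation and the inclusion-exclusion formula of Remark 6.5 can be verified by brute force; here `w(n, k)` is our own name for the number of permutations of an $n$-set deranging a fixed $k$-subset:

```python
from itertools import permutations
from math import comb, factorial

def w(n, k):
    """Number of permutations of {0,...,n-1} fixing none of the first k points."""
    return sum(1 for p in permutations(range(n))
               if all(p[i] != i for i in range(k)))

def incl_excl(n, k):
    """Inclusion-exclusion on permutations fixing given points of the k-set."""
    return sum((-1) ** i * comb(k, i) * factorial(n - i) for i in range(k + 1))

for n in range(7):
    for k in range(n + 1):
        assert w(n, k) == incl_excl(n, k)

print([w(n, n) for n in range(6)])   # derangement numbers: [1, 0, 1, 2, 9, 44]
```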

*Example 6.6. *Consider

Now, from (2) of Theorem 5.3, with $s=-1$ and $t=1$, we have a corresponding identity. Here is a combinatorial explanation. To obtain a permutation of the required type, we first choose the points to be fixed. Then, every derangement of the remaining points will produce a permutation as required, and the derangement numbers count exactly these.

*Example 6.7. *Consider

###### 6.1.2. Permanents from $J-I$

Theorem 4.1 specializes accordingly at $s=-1$, $t=1$. This can be written using the hypergeometric form, with spectrum given by Corollary 4.3 and Proposition 4.2.

The entries of are from the set of numbers . For the spectrum, start with . From (6.3), we have As increases, we see that the spectrum consists of the numbers Think of moving in the derangement triangle, as in the appendix, starting from position , rescaling the values by the factorial of the column at each step, then the eigenvalues are found by successive knight's moves, up 2 rows and one column to the left, with alternating signs.

*Example 6.8. *For , , we have
with characteristic polynomial

*Remark 6.9. *Except for special small cases, the coefficients in the expansion in the Johnson basis will be distinct. Thus, the Johnson basis itself can be read off directly from the zeon power. In this sense, the centralizer algebra of the action of the symmetric group on $k$-sets is determined by knowledge of the action of $J-I$ just on $k$-sets.

##### 6.2. Generalized Arrangements of

Given $n$, $0\le k\le n$, a *$k$-arrangement* of $[n]$ is a permutation of a $k$-subset of $[n]$. The number of $k$-arrangements of $[n]$ is
$$\frac{n!}{(n-k)!}.$$
Note that there is a single 0-arrangement of $[n]$, from the empty set.

Define the generalized arrangement numbers by setting $s=t=1$ in $p_{n,k}$. So, similar to the case for derangements, (5.9) gives the corresponding hypergeometric form. Now, at $k=n$,
$$p_{n,n}(1,1)=\sum_{j=0}^{n}\frac{n!}{j!}\,,$$
which is the *total* number of $j$-arrangements of $[n]$ for $0\le j\le n$. Thus, we have a combinatorial interpretation of $p_{n,n}(1,1)$.
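The total count can be checked directly; $\sum_k n!/(n-k)!$ reproduces OEIS A000522 (1, 2, 5, 16, 65, 326, ...). A sketch (ours):

```python
from itertools import permutations
from math import factorial

def total_arrangements(n):
    """Total number of arrangements (ordered subsets) of an n-set, by formula."""
    return sum(factorial(n) // factorial(n - k) for k in range(n + 1))

def total_arrangements_brute(n):
    """The same count, enumerating k-permutations for every k."""
    return sum(1 for k in range(n + 1) for _ in permutations(range(n), k))

assert [total_arrangements(n) for n in range(6)] == [1, 2, 5, 16, 65, 326]
assert all(total_arrangements(n) == total_arrangements_brute(n) for n in range(6))
```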

###### 6.2.1. Combinatorial Interpretation of

We now give a combinatorial interpretation of for .

When $s=t=1$, recurrence (5.3) gives

Proposition 6.10. *At $k=n$, one obtains the total number of arrangements of $[n]$. In general, for $0\le k\le n$, one obtains the number of arrangements of $[n]$ which contain $[k]$.*

*Proof. *For , let denote the set of arrangements of which contain . With , we note that is the set of all arrangements. Let . We claim that .

The initial values with are immediate. We show that satisfies recurrence (6.21).

Consider . Let , so is an arrangement of containing . If , then is any arrangement. Now, either or .

If , then , and so the number of arrangements in which contain equals .

If , then by subtracting 1 from all parts of which are , we obtain an arrangement of which contains , that is, an arrangement in . Conversely, given an arrangement in , adding 1 to all parts yields an arrangement in which does not contain . Hence, the number of arrangements in which do not contain equals .

Combining the above two cases shows that the claimed recurrence holds; this is the result.

*Example 6.11. *When $n=3$, we have $16$, corresponding to the 16 arrangements of $[3]$.

Then, $11$, corresponding to the 11 arrangements of $[3]$ which contain $\{1\}$.

Then, $8$, corresponding to the 8 arrangements of $[3]$ which contain $\{1,2\}$.

Finally, $6$, corresponding to the 6 arrangements of $[3]$ which contain $\{1,2,3\}$.
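The counts in this example can be reproduced by brute force; `b(n, k)` is our own name for the number of arrangements of an $n$-set whose underlying set contains a fixed $k$-subset:

```python
from itertools import permutations

def b(n, k):
    """Arrangements (ordered subsets) of {0,...,n-1} whose support
    contains {0,...,k-1}."""
    return sum(1 for m in range(n + 1) for p in permutations(range(n), m)
               if set(range(k)) <= set(p))

# n = 3 reproduces the counts 16, 11, 8, 6 of Example 6.11.
assert [b(3, k) for k in range(4)] == [16, 11, 8, 6]
```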

Rearranging the factors in (5.9), and specializing $s=t=1$, gives (6.23). Here is a combinatorial explanation of (6.23).

For any , to obtain a -arrangement of containing , we may place the points of into these positions in ways. Then, the remaining positions in can be filled in by a -arrangement of the unused points in ways.

*Example 6.12. *Consider
Finally, from (2) of Theorem 5.3, , and , we have

*Example 6.13. *Consider

###### 6.2.2. Permanents from $I+J$

Theorem 4.1 specializes accordingly at $s=t=1$. This can be written using the hypergeometric form, with spectrum given by Corollary 4.3 and Proposition 4.2.

*Example 6.14. *For , , we have
with characteristic polynomial
As for the case of derangements, the Johnson basis can be read off directly from the matrix.

#### Appendix

*Generalized Derangement Numbers and Integer Sequences*

The first two columns of the triangle, and , give sequences A000166 and A000255 in the On-Line Encyclopedia of Integer Sequences [7]. The comments for A000255 do not contain our combinatorial interpretation.

The first two columns of the triangle give sequences A000522 and A001339. The comments contain our combinatorial interpretation. The next two columns give sequences A001340 and A001341; here, our combinatorial interpretation is not mentioned in the comments.

*Generalized Derangement Triangles*

is the leftmost column. The rows correspond to from 0 to 9. *Values of *

*Values of *

*Exponential polynomials *

Note that, as is common for matrix indexing, we have dropped the commas in the numerical subscripts.