International Journal of Combinatorics
Volume 2011 (2011), Article ID 539030, 29 pages
http://dx.doi.org/10.1155/2011/539030
Research Article

Zeons, Permanents, the Johnson Scheme, and Generalized Derangements

Philip Feinsilver and John McSorley

Department of Mathematics, Southern Illinois University, Carbondale, IL 62901, USA

Received 20 January 2011; Accepted 1 April 2011

Academic Editor: Alois Panholzer

Copyright © 2011 Philip Feinsilver and John McSorley. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Starting with the zero-square “zeon algebra,” the connection with permanents is shown. Permanents of submatrices of a linear combination of the identity matrix and all-ones matrix lead to moment polynomials with respect to the exponential distribution. A permanent trace formula analogous to MacMahon's master theorem is presented and applied. Connections with permutation groups acting on sets and the Johnson association scheme arise. The families of numbers appearing as matrix entries turn out to be related to interesting variations on derangements. These generalized derangements are considered in detail as an illustration of the theory.

1. Introduction

Functions acting on a finite set can be conveniently expressed using matrices, whereby the composition of functions corresponds to multiplication of the matrices. Essentially, one is considering the induced action on the vector space with the elements of the set acting as a basis. This action extends to tensor powers of the vector space. One can take symmetric powers, antisymmetric powers, and so forth, that yield representations of the multiplicative semigroup of functions. An especially interesting representation occurs by taking nonreflexive, symmetric powers. Identifying the underlying set of cardinality $n$ with $\{1,2,\ldots,n\}$, the vector space has basis $e_1,e_2,\ldots,e_n$. The action we are interested in may be found by saying that the elements $e_i$ generate a "zeon algebra," the relations being that the $e_i$ commute, with $e_i^2=0$, $1\le i\le n$. To get a feeling for this, first we recall the action on Grassmann algebra, where the matrix elements of the induced action arise as determinants. For the zeon case, permanents appear.

An interesting connection with the centralizer algebra of the action of the symmetric group comes up. For the defining action on the set $\{1,\ldots,n\}$, represented as 0-1 permutation matrices, the centralizer algebra of $n\times n$ matrices commuting with the entire group is generated by $I$, the identity matrix, and $J$, the all-ones matrix. The question was whether these would help determine the centralizer algebra for the action on subsets of a fixed size, $\ell$-sets, for $\ell>1$. It is known that a basis for the centralizer algebra is given by the adjacency matrices of the Johnson scheme. Could one find this working solely with $I$ and $J$? The result is that by computing the "zeon powers," that is, the action of $sI+tJ$, linear combinations of $I$ and $J$, on $\ell$-sets, the Johnson scheme appears naturally. The coefficients are polynomials in $s$ and $t$ occurring as moments of the exponential distribution. And they turn out to count derangements and related generalized derangements. The occurrence of Laguerre polynomials in the combinatorics of derangements is well known. Here, the $_2F_0$ hypergeometric function, which is closely related to Poisson-Charlier polynomials, arises rather naturally.

Here is an outline of the paper. Section 2 introduces zeons and permanents. The trace formula is proved. Connections with the centralizer algebra of the action of the symmetric group on $\ell$-sets are detailed. Section 3 is a study of the exponential polynomials needed for the remainder of the paper. Zeon powers of $sI+tJ$ are found in Section 4, where the spectra of the matrices are found via the Johnson scheme. Section 5 presents a combinatorial approach to the zeon powers of $sI+tJ$, including an interpretation of exponential moment polynomials by elementary subgraphs. In Section 6, generalized derangement numbers, specifically counting derangements and counting arrangements, are considered in detail. The Appendix has some derangement numbers and arrangement numbers for reference, as well as a page of exponential polynomials. An example expressing exponential polynomials in terms of elementary subgraphs is given there.

2. Representations of Functions Acting on Sets

Let $\mathcal{V}$ denote the vector space $\mathbb{R}^n$ or $\mathbb{C}^n$. We will look at the action of a linear map on $\mathcal{V}$ extended to quotients of tensor powers $\mathcal{V}^{\otimes\ell}$. We work with coordinates rather than vectors. First, recall the Grassmann case. To find the action on $\bigwedge^{\ell}\mathcal{V}$, consider an algebra generated by $n$ variables $e_i$ satisfying $e_ie_j=-e_je_i$. In particular, $e_i^2=0$.

Notation 2. The standard $n$-set $\{1,\ldots,n\}$ will be denoted $[n]$. Roman caps I, J, A, and so forth denote subsets of $[n]$. We will identify them with the corresponding ordered tuples. Generally, given an $n$-tuple $(x_1,\ldots,x_n)$ and a subset $\mathrm{I}\subseteq[n]$, we denote products $$x_{\mathrm{I}}=\prod_{j\in\mathrm{I}}x_j, \qquad (2.1)$$ where the indices are in increasing order if the variables are not assumed to commute.
As an index, we will use U to denote the full set $[n]$.
Italic $I$ and $J$ will denote the identity matrix and all-ones matrix, respectively.

For a matrix $X_{\mathrm{IJ}}$, say, where the labels are subsets of fixed size $\ell$, dictionary ordering is used. That is, convert to ordered tuples and use dictionary ordering. For example, for $n=4$, $\ell=2$, we have labels $12,13,14,23,24$, and $34$ for rows one through six, respectively.

A basis for $\bigwedge^{\ell}\mathcal{V}$ is given by the products $$e_{\mathrm{I}}=e_{i_1}e_{i_2}\cdots e_{i_\ell}, \qquad (2.2)$$ $\mathrm{I}\subseteq[n]$, where we consider $\mathrm{I}$ as an ordered $\ell$-tuple. Given a matrix $X$ acting on $\mathcal{V}$, let $$y_i=\sum_j X_{ij}e_j, \qquad (2.3)$$ with corresponding products $y_{\mathrm{I}}$. Then the matrix $X^{\wedge\ell}$ has entries given by the coefficients in the expansion $$y_{\mathrm{I}}=\sum_{\mathrm{J}}X^{\wedge\ell}_{\mathrm{IJ}}\,e_{\mathrm{J}}, \qquad (2.4)$$ where the anticommutation rules are used to order the factors in $e_{\mathrm{J}}$. Note that the coefficient of $e_j$ in $y_i$ is $X_{ij}$ itself. And for $n>3$, the coefficient of $e_{34}$ in $y_{12}$ is $$\det\begin{pmatrix}X_{13}&X_{14}\\X_{23}&X_{24}\end{pmatrix}. \qquad (2.5)$$ We see that in general the $\mathrm{IJ}$ entry of $X^{\wedge\ell}$ is the minor of $X$ with row labels $\mathrm{I}$ and column labels $\mathrm{J}$. A standard term for the matrix $X^{\wedge\ell}$ is a compound matrix. Noting that $X^{\wedge\ell}$ is $\binom{n}{\ell}\times\binom{n}{\ell}$, in particular, $\ell=n$ yields the one-by-one matrix with entry equal to $(X^{\wedge n})_{\mathrm{UU}}=\det X$.

In this work, we will use the algebra of zeons, standing for "zero-ons," or more specifically, "zero-square bosons." That is, we assume that the variables $e_i$ satisfy the properties $$e_ie_j=e_je_i, \qquad e_i^2=0. \qquad (2.6)$$ A basis for the algebra is again given by $e_{\mathrm{I}}$, $\mathrm{I}\subseteq[n]$. At level $\ell$, the induced matrix $X^{\vee\ell}$ has $\mathrm{IJ}$ entries according to the expansion of $y_{\mathrm{I}}$, $$y_i=\sum_j X_{ij}e_j, \qquad y_{\mathrm{I}}=\sum_{\mathrm{J}}X^{\vee\ell}_{\mathrm{IJ}}\,e_{\mathrm{J}}, \qquad (2.7)$$ similar to the Grassmann case. Since the variables commute, we see that the $\mathrm{IJ}$ entry of $X^{\vee\ell}$ is the permanent of the submatrix with rows $\mathrm{I}$ and columns $\mathrm{J}$. In particular, $(X^{\vee n})_{\mathrm{UU}}=\operatorname{per}X$. We refer to the matrix $X^{\vee\ell}$ as the "$\ell$th zeon power of $X$."
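The description above can be sketched by brute force in Python (an illustrative sketch, not part of the paper; the function names are ours): the $\ell$th zeon power is just the matrix of permanents of $\ell\times\ell$ submatrices, with labels in dictionary order.

```python
import math
from itertools import combinations, permutations

def per(M):
    """Permanent of a square matrix (list of rows), by direct expansion."""
    n = len(M)
    return sum(math.prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def zeon_power(X, l):
    """l-th zeon power of X: rows/columns indexed by l-subsets of [n] in
    dictionary order; the IJ entry is per of the submatrix with rows I,
    columns J."""
    n = len(X)
    labels = list(combinations(range(n), l))
    return [[per([[X[i][j] for j in J] for i in I]) for J in labels]
            for I in labels]

X = [[1, 2, 0],
     [0, 1, 3],
     [4, 0, 1]]
print(zeon_power(X, 3))  # [[25]] — the 1x1 matrix whose entry is per X
```

In particular, level 1 returns $X$ itself and level $n$ returns $[\operatorname{per}X]$, matching (2.7).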

2.1. Functions on the Power Set of [𝑛]

Note that $X^{\vee\ell}$ is indexed by $\ell$-sets. Suppose that $X_f$ represents a function $f:[n]\to[n]$. So it is a zero-one matrix with $(X_f)_{ij}=1$, the single nonzero entry in row $i$, if $f$ maps $i$ to $j$. The $\ell$th zeon power of $X_f$ is the matrix of the induced map on $\ell$-sets. If $f$ maps an $\ell$-set $\mathrm{I}$ to one of lower cardinality, then the corresponding row in $X_f^{\vee\ell}$ has all zero entries. Thus, the induced matrices in general correspond to "partial functions."

However, if $X$ is a permutation matrix, then $X^{\vee\ell}$ is a permutation matrix for all $0\le\ell\le n$. So, given a group of permutation matrices, the map $X\to X^{\vee\ell}$ is a representation of the group.

2.2. Zeon Powers of $sI+tX$

Our main theorem computes the $\ell$th zeon power of $sI+tX$ for an $n\times n$ matrix $X$, where $s$ and $t$ are scalar variables. Figure 1 illustrates the proof.

Figure 1: Configuration of sets: $\mathrm{B}=\mathrm{I}\setminus\mathrm{A}$, $\mathrm{C}=\mathrm{J}\setminus\mathrm{A}$.

Theorem 2.1. For a given matrix $X$, for $0\le\ell\le n$, and indices $|\mathrm{I}|=|\mathrm{J}|=\ell$, $$(sI+tX)^{\vee\ell}_{\mathrm{IJ}}=\sum_{0\le j\le\ell}s^{j}t^{\ell-j}\sum_{\substack{\mathrm{A}\subseteq\mathrm{I}\cap\mathrm{J}\\ |\mathrm{A}|=j}}X^{\vee(\ell-j)}_{\mathrm{I}\setminus\mathrm{A},\,\mathrm{J}\setminus\mathrm{A}}. \qquad (2.8)$$

Proof. Start with $y_i=se_i+t\xi_i$, where $\xi_i=\sum_j X_{ij}e_j$. Given $\mathrm{I}=(i_1,\ldots,i_\ell)$, we want the coefficient of $e_{\mathrm{J}}$ in the expansion of the product $y_{\mathrm{I}}=y_{i_1}\cdots y_{i_\ell}$. Now, $$y_{\mathrm{I}}=\left(se_{i_1}+t\xi_{i_1}\right)\cdots\left(se_{i_\ell}+t\xi_{i_\ell}\right). \qquad (2.9)$$ Choose $\mathrm{A}\subseteq\mathrm{I}$ with $|\mathrm{A}|=j$, $0\le j\le\ell$. A typical term of the product has the form $$s^{j}t^{\ell-j}e_{\mathrm{A}}\xi_{\mathrm{B}}, \qquad (2.10)$$ where $\mathrm{A}\cap\mathrm{B}=\emptyset$, $\mathrm{B}=\mathrm{I}\setminus\mathrm{A}$, and $\xi_{\mathrm{B}}$ denotes the product of the terms $\xi_i$ with indices in $\mathrm{B}$. Expanding, we have $$\xi_{\mathrm{B}}=\sum_{\mathrm{C}}X^{\vee(\ell-j)}_{\mathrm{BC}}e_{\mathrm{C}},\qquad e_{\mathrm{A}}\xi_{\mathrm{B}}=\sum_{\mathrm{C}}X^{\vee(\ell-j)}_{\mathrm{BC}}e_{\mathrm{A}}e_{\mathrm{C}}. \qquad (2.11)$$ Thus, for a contribution to the coefficient of $e_{\mathrm{J}}$, we must have $\mathrm{A}\cup\mathrm{C}=\mathrm{J}$, with $\mathrm{A}\cap\mathrm{C}=\emptyset$. That is, $\mathrm{C}=\mathrm{J}\setminus\mathrm{A}$ and $\mathrm{A}\subseteq\mathrm{I}\cap\mathrm{J}$. So, the coefficient of $s^{j}t^{\ell-j}$ is as stated.

2.3. Trace Formula

Another main feature is the trace formula, which exhibits the permanent of $sI+tX$ as the generating function for the traces of the zeon powers of $X$. This is the zeon analog of the theorem of MacMahon for representations on symmetric tensors.

Theorem 2.2. One has the formula $$\operatorname{per}(sI+tX)=\sum_{j}s^{n-j}t^{j}\operatorname{tr}X^{\vee j}. \qquad (2.12)$$

Proof. The permanent of $sI+tX$ is the $\mathrm{UU}$ entry of $(sI+tX)^{\vee n}$. Specialize $\mathrm{I}=\mathrm{J}=\mathrm{U}$ in Theorem 2.1. So $\mathrm{A}$ is any $(n-j)$-set, with $\mathrm{I}\setminus\mathrm{A}=\mathrm{J}\setminus\mathrm{A}=\mathrm{A}'$, its complement in $[n]$. Thus, $$\operatorname{per}(sI+tX)=(sI+tX)^{\vee n}_{\mathrm{UU}}=\sum_{0\le j\le n}s^{n-j}t^{j}\sum_{|\mathrm{A}|=n-j}X^{\vee j}_{\mathrm{A}',\mathrm{A}'}=\sum_{0\le j\le n}s^{n-j}t^{j}\operatorname{tr}X^{\vee j}, \qquad (2.13)$$ as required.
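The trace formula is easy to confirm numerically (an illustrative check, not from the paper; the matrix and the values $s=2$, $t=3$ are arbitrary choices):

```python
import math
from itertools import combinations, permutations

def per(M):
    n = len(M)
    return sum(math.prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def zeon_power(X, l):
    n = len(X)
    labels = list(combinations(range(n), l))
    return [[per([[X[i][j] for j in J] for i in I]) for J in labels]
            for I in labels]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

n, s, t = 4, 2, 3
X = [[1, 2, 0, 1],
     [0, 1, 3, 2],
     [4, 0, 1, 1],
     [1, 1, 0, 2]]
sI_tX = [[s * (i == j) + t * X[i][j] for j in range(n)] for i in range(n)]

lhs = per(sI_tX)
rhs = sum(s**(n - j) * t**j * trace(zeon_power(X, j)) for j in range(n + 1))
assert lhs == rhs
```

Note that the $j=0$ term works out correctly because the zeroth zeon power is the $1\times1$ identity, contributing $s^n$.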

2.4. Permutation Groups

Let $X$ be an $n\times n$ permutation matrix. We can express $\operatorname{per}(I+tX)$ in terms of the cycle decomposition of the associated permutation.

Proposition 2.3. For a permutation matrix $X$, $$\operatorname{per}(I+tX)=\prod_{1\le\ell\le n}\left(1+t^{\ell}\right)^{n_X(\ell)}, \qquad (2.14)$$ where $n_X(\ell)$ is the number of cycles of length $\ell$ in the cycle decomposition of the corresponding permutation.

Proof. Decomposing the permutation associated to $X$ into cycles yields a decomposition into invariant subspaces of the underlying vector space $\mathcal{V}$. So $\operatorname{per}(I+tX)$ will be the product of $\operatorname{per}(I+tX_c)$ as $c$ runs through the corresponding cycles, with $X_c$ the restriction of $X$ to the invariant subspace for each $c$. So we have to check that if $X$ acts on $\mathcal{V}$ as a cycle of length $\ell$, then $\operatorname{per}(I+tX)=1+t^{\ell}$. For this, apply Theorem 2.2. Apart from level zero, there is only one set fixed by any $X^{\vee j}$, namely the full $\ell$-set when $j=\ell$. So the trace of $X^{\vee j}$ is zero unless $j=\ell$, and then it is one. The result follows.
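For a concrete instance (our example, not the paper's): the permutation $(1\,2\,3)(4\,5)$ has one 3-cycle and one 2-cycle, so Proposition 2.3 predicts $\operatorname{per}(I+tX)=(1+t^3)(1+t^2)$, which we can confirm at several integer values of $t$.

```python
import math
from itertools import permutations

def per(M):
    n = len(M)
    return sum(math.prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# permutation (1 2 3)(4 5) on {1,...,5}, zero-indexed: X[i][sigma[i]] = 1
sigma = [1, 2, 0, 4, 3]
n = 5
for t in range(1, 5):
    M = [[(i == j) + t * (sigma[i] == j) for j in range(n)] for i in range(n)]
    assert per(M) == (1 + t**3) * (1 + t**2)
```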

2.4.1. Cycle Index: Orbits on $\ell$-sets

Now, consider a group, $G$, of permutation matrices. We have the cycle index $$Z_G\left(z_1,z_2,\ldots,z_n\right)=\frac{1}{|G|}\sum_{X\in G}z_1^{n_X(1)}z_2^{n_X(2)}\cdots z_n^{n_X(n)}, \qquad (2.15)$$ each $z_\ell$ corresponding to $\ell$-cycles in the cycle decomposition associated to the $X$'s. From Proposition 2.3, we have an expression in terms of permanents. Combining with the trace formula, we get the following.

Theorem 2.4. Let $G$ be a group of permutation matrices. Then one has $$\frac{1}{|G|}\sum_{X\in G}\operatorname{per}(I+tX)=Z_G\left(1+t,1+t^2,\ldots,1+t^n\right)=\sum_{\ell}t^{\ell}\,\#(\text{orbits on }\ell\text{-sets}). \qquad (2.16)$$

Remark 2.5. This result combines three essential theorems on groups acting on sets. Equality of the first and last expressions is the "permanent" analog of Molien's theorem, which is the case of a group acting on the symmetric tensor algebra. That the cycle index counts orbits on subsets is an instance of Pólya counting, with two colors. The last expression follows from the Cauchy-Burnside lemma applied to the groups $G^{\vee\ell}=\{X^{\vee\ell}\}_{X\in G}$.

2.4.2. Centralizer Algebra and Johnson Scheme

Given a group, $G$, of permutation matrices, an important question is to determine the set (among all matrices) of matrices commuting with all of the matrices in $G$. This is the centralizer algebra of the group. For the symmetric group, the only such matrices are generated by $I$ and $J$. For the action of the symmetric group on $\ell$-sets, a basis for the centralizer algebra is given by the incidence matrices for the Johnson distance. These are the same as the adjacency matrices for the Johnson (association) scheme. Recall that the Johnson distance between two $\ell$-sets $\mathrm{I}$ and $\mathrm{J}$ is $$\operatorname{dist}_{\mathrm{JS}}(\mathrm{I},\mathrm{J})=\tfrac12\,|\mathrm{I}\,\Delta\,\mathrm{J}|=|\mathrm{I}\setminus\mathrm{J}|=|\mathrm{J}\setminus\mathrm{I}|. \qquad (2.17)$$ The corresponding matrices $\mathrm{JS}^k_{\ell n}$ are defined by $$\left(\mathrm{JS}^k_{\ell n}\right)_{\mathrm{IJ}}=\begin{cases}1,&\text{if }\operatorname{dist}_{\mathrm{JS}}(\mathrm{I},\mathrm{J})=k,\\ 0,&\text{otherwise}.\end{cases} \qquad (2.18)$$ As it is known [1, page 36] that a basis for the centralizer algebra is given by the orbits of the group $G^2$, acting on pairs, the Johnson basis is a basis for the centralizer algebra. Since the Johnson distance is symmetric, it suffices to look at $G^2$ acting on unordered pairs.

Now, we come to the question that is a starting point for this work. If $I$ and $J$ are the only matrices commuting with all elements (as matrices) of the symmetric group, then, since the map $G\to G^{\vee\ell}$ is a homomorphism, we know that $I^{\vee\ell}$ and $J^{\vee\ell}$ are in the centralizer algebra of $G^{\vee\ell}$. The question is how to obtain the rest. The, perhaps surprising, answer is that in fact one can obtain the complete Johnson basis from $I$ and $J$ alone. This will be one of the main results, Theorem 4.1.

2.4.3. Permanent of $sI+tJ$

First, let us consider $\operatorname{per}(sI+tJ)$.

Proposition 2.6. One has the formula $$\operatorname{per}(sI+tJ)=n!\sum_{0\le\ell\le n}\frac{s^{\ell}t^{n-\ell}}{\ell!}. \qquad (2.19)$$

Proof. For $X=J$, we see directly, since all entries equal one in all submatrices, that $$J^{\vee\ell}_{\mathrm{IJ}}=\ell! \qquad (2.20)$$ for all $\mathrm{I}$ and $\mathrm{J}$. Taking traces, $$\operatorname{tr}J^{\vee\ell}=\binom{n}{\ell}\,\ell!, \qquad (2.21)$$ and by the trace formula, Theorem 2.2, $$\operatorname{per}(sI+tJ)=\sum_{\ell}\binom{n}{\ell}\,\ell!\,s^{n-\ell}t^{\ell}=\sum_{\ell}\frac{n!}{(n-\ell)!}\,s^{n-\ell}t^{\ell}. \qquad (2.22)$$ Reversing the order of summation yields the result stated.
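A quick brute-force check of (2.19), with arbitrarily chosen integer values $n=4$, $s=2$, $t=3$ (illustrative only):

```python
import math
from itertools import permutations

def per(M):
    n = len(M)
    return sum(math.prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

n, s, t = 4, 2, 3
M = [[s * (i == j) + t for j in range(n)] for i in range(n)]
lhs = per(M)
# n! * sum_l s^l t^(n-l) / l!  --  note n!/l! is always an integer
rhs = sum(math.factorial(n) // math.factorial(l) * s**l * t**(n - l)
          for l in range(n + 1))
assert lhs == rhs == 3784
```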

Corollary 2.7. For varying $n$, one will explicitly denote $p_n(s,t)=\operatorname{per}(sI_n+tJ_n)$. Then, with $p_0(s,t)=1$, $$\sum_{n=0}^{\infty}\frac{z^n}{n!}\,p_n(s,t)=\frac{e^{sz}}{1-tz}. \qquad (2.23)$$

The Corollary exhibits the operational formula $$p_n(s,t)=\frac{1}{1-tD_s}\,s^n, \qquad (2.24)$$ where $D_s=d/ds$. By inspection, this agrees with (2.19) as well.

Observe that (2.19) can be rewritten as $$\operatorname{per}(sI+tJ)=\int_0^{\infty}(s+ty)^n e^{-y}\,dy, \qquad (2.25)$$ that is, these are "moment polynomials" for the exponential distribution with an additional scale parameter.

We proceed to examine these moment polynomials in detail.

3. Exponential Polynomials

For the exponential distribution, with density $e^{-y}$ on $(0,\infty)$, the moment polynomials are defined as $$\Phi_n(x)=\int_0^{\infty}(x+y)^n e^{-y}\,dy. \qquad (3.1)$$ The exponential density embeds naturally into the family of weights of the form $x^m e^{-x}$ on $(0,\infty)$, as for generalized Laguerre polynomials. We define correspondingly $$\Phi_{n,m}(x,t)=\int_0^{\infty}(x+ty)^n(ty)^m e^{-y}\,dy, \qquad (3.2)$$ for nonnegative integers $n,m$, introducing a factor of $y^m$ and a scale factor of $t$. We refer to these as exponential moment polynomials.

Proposition 3.1. Observe the following properties of the exponential moment polynomials. (1) The generating function $$\frac{1}{t^m m!}\sum_{n=0}^{\infty}\frac{z^n}{n!}\,\Phi_{n,m}(x,t)=\frac{e^{zx}}{(1-tz)^{1+m}}, \qquad (3.3)$$ for $|tz|<1$. (2) The operational formula $$\frac{1}{t^m m!}\,\Phi_{n,m}(x,t)=(I-tD)^{-(m+1)}x^n, \qquad (3.4)$$ where $I$ is the identity operator and $D=d/dx$. (3) The explicit form $$\Phi_{n,m}(x,t)=\sum_{j=0}^{n}\binom{n}{j}(m+j)!\,x^{n-j}t^{m+j}. \qquad (3.5)$$

Proof. For the first formula, multiply the integral by $z^n/n!$ and sum to get $$\int_0^{\infty}y^m e^{zx+zty-y}\,dy=e^{zx}\int_0^{\infty}y^m e^{-y(1-tz)}\,dy, \qquad (3.6)$$ which yields the stated result.
For the second, write $$t^m m!\,(I-tD)^{-(m+1)}x^n=t^m\int_0^{\infty}y^m e^{-(I-tD)y}x^n\,dy=\int_0^{\infty}(ty)^m e^{-y}(x+ty)^n\,dy, \qquad (3.7)$$ using the shift formula $e^{aD}f(x)=f(x+a)$.
For the third, expand $(x+ty)^n$ by the binomial theorem and integrate.

A variation we will encounter in the following is $$\Phi_{n-m,m}(x,t)=\sum_{j=0}^{n-m}\binom{n-m}{j}(m+j)!\,x^{n-m-j}t^{m+j} \qquad (3.8)$$ $$=\sum_{j=m}^{n}\binom{n-m}{j-m}\,j!\,x^{n-j}t^{j} \qquad (3.9)$$ $$=\sum_{j=m}^{n}\binom{n-m}{n-j}\,j!\,x^{n-j}t^{j} \qquad (3.10)$$ $$=\sum_{j=0}^{n-m}\binom{n-m}{j}(n-j)!\,x^{j}t^{n-j}, \qquad (3.11)$$ replacing the index $j\to j-m$ for (3.9) and reversing the order of summation for the last line. And for future reference, consider the integral formula $$\Phi_{n-m,m}(x,t)=\int_0^{\infty}(x+ty)^{n-m}(ty)^m e^{-y}\,dy. \qquad (3.12)$$
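The equivalent forms above can be cross-checked exactly in integer arithmetic, using $\int_0^\infty y^k e^{-y}\,dy=k!$ to evaluate the defining integral term by term (an illustrative consistency check, not from the paper):

```python
import math

def phi(a, m, x, t):
    """Phi_{a,m}(x,t) from the integral (3.12): expand (x+ty)^a binomially
    and use the factorial moments of e^{-y}.  Equals form (3.8)."""
    return sum(math.comb(a, i) * math.factorial(m + i) * x**(a - i) * t**(m + i)
               for i in range(a + 1))

def phi_310(n, m, x, t):
    """Form (3.10): sum_{j=m}^{n} C(n-m, n-j) j! x^(n-j) t^j."""
    return sum(math.comb(n - m, n - j) * math.factorial(j) * x**(n - j) * t**j
               for j in range(m, n + 1))

n, m, x, t = 7, 3, 2, 5
assert phi(n - m, m, x, t) == phi_310(n, m, x, t)
```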

3.1. Hypergeometric Form

Generalized hypergeometric functions provide expressions for the exponential moment polynomials that are often convenient. In the present context, we will use $_2F_0$ functions, defined by $$\,_2F_0\!\left(\begin{matrix}a,b\\ \text{---}\end{matrix}\,\Big|\,x\right)=\sum_{j=0}^{\infty}(a)_j(b)_j\,\frac{x^j}{j!}, \qquad (3.13)$$ where $(a)_j=\Gamma(a+j)/\Gamma(a)$ is the usual Pochhammer symbol. In particular, if $a$, for example, is a negative integer, the series reduces to a polynomial. Rearranging factors in the expressions for $\Phi_{n,m}$, via (3) in Proposition 3.1, and $\Phi_{n-m,m}$, (3.8), we can formulate these as $_2F_0$ hypergeometric functions.

Proposition 3.2. One has the following expressions for exponential moment polynomials: $$\Phi_{n,m}(x,t)=x^n t^m m!\;{}_2F_0\!\left(\begin{matrix}-n,1+m\\ \text{---}\end{matrix}\,\Big|\,-\frac{t}{x}\right),\qquad \Phi_{n-m,m}(x,t)=x^{n-m}t^m m!\;{}_2F_0\!\left(\begin{matrix}m-n,1+m\\ \text{---}\end{matrix}\,\Big|\,-\frac{t}{x}\right). \qquad (3.14)$$

4. Zeon Powers of $sI+tJ$

We want to calculate $(sI+tJ)^{\vee\ell}$, that is, the $\binom{n}{\ell}\times\binom{n}{\ell}$ matrix with rows and columns labelled by $\ell$-subsets $\mathrm{I},\mathrm{J}\subseteq\{1,\ldots,n\}$, with the $\mathrm{IJ}$ entry equal to the permanent of the corresponding submatrix of $sI+tJ$. This is equivalent to the induced action of the original matrix $sI+tJ$ on the $\ell$th zeon space.

Theorem 4.1. The $\ell$th zeon power of $sI+tJ$ is given by $$(sI+tJ)^{\vee\ell}=\sum_{k}\left(\sum_{j=k}^{\ell}\binom{\ell-k}{\ell-j}\,j!\,s^{\ell-j}t^{j}\right)\mathrm{JS}^k_{\ell n}=\sum_{k}\Phi_{\ell-k,k}(s,t)\,\mathrm{JS}^k_{\ell n}, \qquad (4.1)$$ where the $\Phi$'s are exponential moment polynomials.

Proof. Choose $\mathrm{I}$ and $\mathrm{J}$ with $|\mathrm{I}|=|\mathrm{J}|=\ell$. By Theorem 2.1, we have, using the fact that all of the entries of $J^{\vee j}$ are equal to $j!$, $$(sI+tJ)^{\vee\ell}_{\mathrm{IJ}}=\sum_{0\le j\le\ell}s^{j}t^{\ell-j}\sum_{\substack{\mathrm{A}\subseteq\mathrm{I}\cap\mathrm{J}\\ |\mathrm{A}|=j}}J^{\vee(\ell-j)}_{\mathrm{I}\setminus\mathrm{A},\,\mathrm{J}\setminus\mathrm{A}}=\sum_{0\le j\le\ell}s^{j}t^{\ell-j}(\ell-j)!\sum_{\substack{\mathrm{A}\subseteq\mathrm{I}\cap\mathrm{J}\\ |\mathrm{A}|=j}}1. \qquad (4.2)$$ Now, if $\operatorname{dist}_{\mathrm{JS}}(\mathrm{I},\mathrm{J})=k$, then $|\mathrm{I}\cap\mathrm{J}|=\ell-k$, and there are $\binom{\ell-k}{j}$ subsets $\mathrm{A}$ of $\mathrm{I}\cap\mathrm{J}$ satisfying the conditions of the sum. Hence the result.

Note that the specialization $\ell=n$, $k=0$, recovers (2.19).
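Theorem 4.1 can be tested by brute force (an illustrative check, not from the paper): build $(sI+tJ)^{\vee\ell}$ entrywise from permanents of submatrices and compare each entry with $\Phi_{\ell-k,k}(s,t)$, where $k$ is the Johnson distance of the row and column labels. The values $n=5$, $\ell=2$, $s=2$, $t=3$ are arbitrary.

```python
import math
from itertools import combinations, permutations

def per(M):
    n = len(M)
    return sum(math.prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def phi(a, m, s, t):
    """Exponential moment polynomial Phi_{a,m}(s,t), explicit form (3.8)."""
    return sum(math.comb(a, j) * math.factorial(m + j) * s**(a - j) * t**(m + j)
               for j in range(a + 1))

n, l, s, t = 5, 2, 2, 3
M = [[s * (i == j) + t for j in range(n)] for i in range(n)]
labels = list(combinations(range(n), l))
for I in labels:
    for J in labels:
        k = l - len(set(I) & set(J))        # Johnson distance of I and J
        sub = [[M[i][j] for j in J] for i in I]
        assert per(sub) == phi(l - k, k, s, t)
```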

We can write the above expansion using the hypergeometric form of the exponential moment polynomials, Proposition 3.2: $$(sI+tJ)^{\vee\ell}=\sum_{k}s^{\ell-k}t^{k}k!\;{}_2F_0\!\left(\begin{matrix}k-\ell,1+k\\ \text{---}\end{matrix}\,\Big|\,-\frac{t}{s}\right)\mathrm{JS}^k_{\ell n}. \qquad (4.3)$$

4.1. Spectrum of the Johnson Matrices

Recall, for example, [2, page 220], that the spectrum of the Johnson matrices for given $n$ and $\ell$ is the set of numbers $$\Lambda^k_{\ell n}(\alpha)=\sum_{i}(-1)^{k-\ell+i}\binom{\ell-\alpha}{\ell-i}\binom{n-\alpha-i}{\ell-i}\binom{i}{k-\ell+i}, \qquad (4.4)$$ where the eigenvalue for given $\alpha$ has multiplicity $\binom{n}{\alpha}-\binom{n}{\alpha-1}$.

For $\ell$-sets, the Johnson distance takes values from $0$ to $\min(\ell,n-\ell)$, with $\alpha$ taking values in that same range.

4.2. The Spectrum of $(sI+tJ)^{\vee\ell}$

Recall that, as the Johnson matrices are symmetric and generate a commutative algebra, they are simultaneously diagonalizable by an orthogonal transformation of the underlying vector space. Diagonalizing the equation in Theorem 4.1, we see that the spectrum of $(sI+tJ)^{\vee\ell}$ is given by the numbers $$\sum_{k}\Phi_{\ell-k,k}(s,t)\,\Lambda^k_{\ell n}(\alpha). \qquad (4.5)$$

Proposition 4.2. The spectrum of $(sI+tJ)^{\vee\ell}$ is given by $$\frac{s^{\alpha}}{t^{\,n-\ell-\alpha}\,(n-\ell-\alpha)!}\,\Phi_{\ell-\alpha,\,n-\ell-\alpha}(s,t)=\sum_{i}\binom{\ell-\alpha}{i}\binom{n-\ell-\alpha+i}{i}\,i!\,s^{\ell-i}t^{i}, \qquad (4.6)$$ for $0\le\alpha\le\min(\ell,n-\ell)$, with respective multiplicities $\binom{n}{\alpha}-\binom{n}{\alpha-1}$.

Proof. In the sum over $i$ in (4.4), only the last two factors involve $k$. We have $$\sum_{k}\Phi_{\ell-k,k}(s,t)\binom{i}{k-\ell+i}(-1)^{k-\ell+i}=\int_0^{\infty}\sum_{k}\binom{i}{k-\ell+i}(-1)^{k-\ell+i}(s+ty)^{\ell-k}(ty)^{k}e^{-y}\,dy$$ $$\overset{k=\ell-i+m}{=}\int_0^{\infty}\sum_{m}\binom{i}{m}(-1)^{m}(s+ty)^{i-m}(ty)^{\ell-i+m}e^{-y}\,dy=\int_0^{\infty}(s+ty-ty)^{i}(ty)^{\ell-i}e^{-y}\,dy=s^{i}t^{\ell-i}(\ell-i)!, \qquad (4.7)$$ using the binomial theorem to sum out $m$. Filling in the additional factors and replacing $i$ by $\ell-i$ yields $$\sum_{k}\Phi_{\ell-k,k}(s,t)\,\Lambda^k_{\ell n}(\alpha)=\sum_{i}s^{\ell-i}t^{i}\,i!\,\binom{\ell-\alpha}{i}\binom{n-\ell-\alpha+i}{i}. \qquad (4.8)$$ Multiplying by $t^{\,n-\ell-\alpha}(n-\ell-\alpha)!/s^{\alpha}$ gives $$\sum_{i}s^{\ell-\alpha-i}t^{\,n-\ell-\alpha+i}\binom{\ell-\alpha}{i}(n-\ell-\alpha+i)!, \qquad (4.9)$$ which is precisely $\Phi_{\ell-\alpha,\,n-\ell-\alpha}$ as in the third statement of Proposition 3.1.

As in Proposition 3.2, we can express the eigenvalues as follows.

Corollary 4.3. The spectrum of $(sI+tJ)^{\vee\ell}$ consists of the eigenvalues $$s^{\ell}\;{}_2F_0\!\left(\begin{matrix}\alpha-\ell,1+n-\ell-\alpha\\ \text{---}\end{matrix}\,\Big|\,-\frac{t}{s}\right), \qquad (4.10)$$ for $0\le\alpha\le\min(\ell,n-\ell)$, with corresponding multiplicities as indicated above.
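Proposition 4.2 can be checked numerically with numpy (an illustrative sketch, with arbitrary values $n=5$, $\ell=2$, $s=2$, $t=3$): the eigenvalues of the $10\times10$ matrix of subpermanents should be the polynomial values for $\alpha=0,1,2$ with multiplicities $1,4,5$.

```python
import math
from itertools import combinations, permutations
import numpy as np

def per(M):
    n = len(M)
    return sum(math.prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

n, l, s, t = 5, 2, 2, 3
M = [[s * (i == j) + t for j in range(n)] for i in range(n)]
labels = list(combinations(range(n), l))
Z = np.array([[per([[M[i][j] for j in J] for i in I]) for J in labels]
              for I in labels], dtype=float)

def eig_formula(alpha):
    # Proposition 4.2: sum_i C(l-a,i) C(n-l-a+i,i) i! s^(l-i) t^i
    return sum(math.comb(l - alpha, i) * math.comb(n - l - alpha + i, i)
               * math.factorial(i) * s**(l - i) * t**i
               for i in range(l - alpha + 1))

# multiplicities: C(n,0)-0 = 1, C(n,1)-C(n,0) = 4, C(n,2)-C(n,1) = 5
expected = sorted([eig_formula(0)] + [eig_formula(1)] * 4 + [eig_formula(2)] * 5)
actual = sorted(np.linalg.eigvalsh(Z).tolist())
assert np.allclose(actual, expected)
```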

4.3. Row Sums and Trace Identity

For the row sums, we know that the all-ones vector is a common eigenvector of the Johnson basis, corresponding to $\alpha=0$. The row sums of the basis matrices are the valencies $\Lambda^k_{\ell n}(0)$. For the Johnson scheme, we have $$\Lambda^k_{\ell n}(0)=\binom{\ell}{k}\binom{n-\ell}{k}, \qquad (4.11)$$ for example, see [2, page 219], which can be checked directly from the formula for $\Lambda^k_{\ell n}(\alpha)$, (4.4), with $\alpha$ set to zero. Setting $\alpha=0$ in Proposition 4.2 gives $$\frac{1}{t^{\,n-\ell}(n-\ell)!}\,\Phi_{\ell,\,n-\ell}(s,t)=\sum_{i}\binom{\ell}{i}\binom{n-\ell+i}{i}\,i!\,s^{\ell-i}t^{i} \qquad (4.12)$$ for the row sums of $(sI+tJ)^{\vee\ell}$.

4.3.1. Trace Identity

Terms on the diagonal carry the coefficient of $\mathrm{JS}^0_{\ell n}$, which is the identity matrix. So, the trace is $$\operatorname{tr}(sI+tJ)^{\vee\ell}=\binom{n}{\ell}\,\Phi_{\ell,0}(s,t)=\binom{n}{\ell}\sum_{k}\binom{\ell}{k}\,k!\,s^{\ell-k}t^{k}. \qquad (4.13)$$ Cancelling factorials and reversing the order of summation on $k$ yields the following formula: $$\operatorname{tr}(sI+tJ)^{\vee\ell}=\frac{n!}{(n-\ell)!}\sum_{0\le k\le\ell}\frac{s^{k}t^{\ell-k}}{k!}. \qquad (4.14)$$

Now, Proposition 4.2 gives the trace $$\operatorname{tr}(sI+tJ)^{\vee\ell}=\sum_{0\le\alpha\le\min(\ell,n-\ell)}\left[\binom{n}{\alpha}-\binom{n}{\alpha-1}\right]\sum_{i}\binom{\ell-\alpha}{i}\binom{n-\ell-\alpha+i}{i}\,i!\,s^{\ell-i}t^{i}. \qquad (4.15)$$

Proposition 4.4. Equating the above expressions for the trace yields the identity $$\sum_{0\le\alpha\le\min(\ell,n-\ell)}\left[\binom{n}{\alpha}-\binom{n}{\alpha-1}\right]\sum_{i}\binom{\ell-\alpha}{i}\binom{n-\ell-\alpha+i}{i}\,i!\,s^{\ell-i}t^{i}=\frac{n!}{(n-\ell)!}\sum_{0\le j\le\ell}\frac{s^{j}t^{\ell-j}}{j!}. \qquad (4.16)$$
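The identity of Proposition 4.4 can be verified exactly for small parameters (an illustrative check; `Fraction` keeps the division by $j!$ exact):

```python
import math
from fractions import Fraction

def lhs(n, l, s, t):
    total = 0
    for a in range(min(l, n - l) + 1):
        mult = math.comb(n, a) - (math.comb(n, a - 1) if a >= 1 else 0)
        total += mult * sum(math.comb(l - a, i) * math.comb(n - l - a + i, i)
                            * math.factorial(i) * s**(l - i) * t**i
                            for i in range(l - a + 1))
    return total

def rhs(n, l, s, t):
    return Fraction(math.factorial(n), math.factorial(n - l)) * \
        sum(Fraction(s**j * t**(l - j), math.factorial(j)) for j in range(l + 1))

for n in range(1, 7):
    for l in range(n + 1):
        assert lhs(n, l, 2, 3) == rhs(n, l, 2, 3)
```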

Example 4.5. For $n=4$, $\ell=2$, with rows and columns labelled $12,13,14,23,24,34$, we have $$(sI+tJ)^{\vee2}=\begin{pmatrix} s^2+2st+2t^2 & st+2t^2 & st+2t^2 & st+2t^2 & st+2t^2 & 2t^2\\ st+2t^2 & s^2+2st+2t^2 & st+2t^2 & st+2t^2 & 2t^2 & st+2t^2\\ st+2t^2 & st+2t^2 & s^2+2st+2t^2 & 2t^2 & st+2t^2 & st+2t^2\\ st+2t^2 & st+2t^2 & 2t^2 & s^2+2st+2t^2 & st+2t^2 & st+2t^2\\ st+2t^2 & 2t^2 & st+2t^2 & st+2t^2 & s^2+2st+2t^2 & st+2t^2\\ 2t^2 & st+2t^2 & st+2t^2 & st+2t^2 & st+2t^2 & s^2+2st+2t^2 \end{pmatrix}. \qquad (4.17)$$ One can check that the entries are in agreement with Theorem 4.1. The trace is $6s^2+12st+12t^2$. The spectrum is $$\begin{aligned} &\text{eigenvalue } s^2+6st+12t^2, &&\text{with multiplicity }1,\\ &\text{eigenvalue } s^2+2st, &&\text{with multiplicity }3,\\ &\text{eigenvalue } s^2, &&\text{with multiplicity }2, \end{aligned} \qquad (4.18)$$ and the trace can be verified from these as well.

Remark 4.6. What is interesting is that these matrices have polynomial entries, with all eigenvalues polynomials as well; furthermore, the exact same family of polynomials produces the eigenvalues as well as the entries. Specializing $s$ and $t$ to integers, a similar statement holds: all of these matrices will have integer entries with integer eigenvalues, all belonging to closely related families of numbers. We will examine interesting cases of this phenomenon later in this paper.

5. Permanents from $sI+tJ$

Here, we present a proof via recursion for the subpermanents of $sI+tJ$, thereby recovering Theorem 4.1 from a different perspective.

Remark 5.1. For the remainder of this paper, we will work with an $n\times n$ matrix corresponding to an $\ell\times\ell$ submatrix of the above discussion. That is, we have blown up the submatrix to full size as the object of consideration.

Let $M_{n,\ell}$ denote the $n\times n$ matrix with $n-\ell$ entries equal to $s+t$ on the main diagonal, and $t$'s elsewhere. Note that $M_{n,0}=sI+tJ$ and $M_{n,n}=tJ$, where $I$ and $J$ are $n\times n$. Define $$P_{n,\ell}=\operatorname{per}M_{n,\ell} \qquad (5.1)$$ to be the permanent of $M_{n,\ell}$.

For $\ell=0$, define $P_{0,0}=1$, and, recalling (2.19), $$P_{n,0}=\operatorname{per}(sI+tJ)=\sum_{j=0}^{n}\frac{n!}{j!}\,s^{j}t^{n-j}=\sum_{j=0}^{n}\frac{n!}{(n-j)!}\,s^{n-j}t^{j}. \qquad (5.2)$$ We also have $P_{n,n}=\operatorname{per}(tJ)=n!\,t^{n}$ for $J$ of order $n\times n$. These agree at $P_{0,0}=1$.

Theorem 5.2. For $n\ge1$, $1\le\ell\le n$, one has the recurrence $$P_{n,\ell}=P_{n,\ell-1}-s\,P_{n-1,\ell-1}. \qquad (5.3)$$

Proof. We have $\ell\le n$, so $n-(\ell-1)=n-\ell+1\ge1$; that is, the matrix $M_{n,\ell-1}$ contains at least one entry on its main diagonal equal to $s+t$. Write the block form $$M_{n,\ell-1}=\begin{pmatrix}s+t & A\\ A^{T} & M_{n-1,\ell-1}\end{pmatrix}, \qquad (5.4)$$ with $A=[t,t,\ldots,t]$ the $1\times(n-1)$ row vector of all $t$'s, and $A^{T}$ its transpose. Now, compute the permanent of $M_{n,\ell-1}$ by expanding along the first row. We get $$P_{n,\ell-1}=\operatorname{per}M_{n,\ell-1}=(s+t)\operatorname{per}M_{n-1,\ell-1}+F\left(A,A^{T},M_{n-1,\ell-1}\right), \qquad (5.5)$$ where $F(A,A^{T},M_{n-1,\ell-1})$ is the contribution to $P_{n,\ell-1}$ involving $A$. Now, $$t\operatorname{per}M_{n-1,\ell-1}+F\left(A,A^{T},M_{n-1,\ell-1}\right)=\operatorname{per}\begin{pmatrix}t & A\\ A^{T} & M_{n-1,\ell-1}\end{pmatrix}=P_{n,\ell}. \qquad (5.6)$$ Thus, from (5.5), $$P_{n,\ell-1}=s\operatorname{per}M_{n-1,\ell-1}+t\operatorname{per}M_{n-1,\ell-1}+F\left(A,A^{T},M_{n-1,\ell-1}\right)=s\,P_{n-1,\ell-1}+P_{n,\ell}, \qquad (5.7)$$ and hence the result.

We arrange the polynomials $P_{n,\ell}$ in a triangle, with the columns labelled by $\ell\ge0$ and the rows by $n\ge0$, starting with $P_{0,0}=1$ at the top vertex: $$\begin{array}{cccccc} P_{0,0} & & & & &\\ P_{1,0} & P_{1,1} & & & &\\ P_{2,0} & P_{2,1} & P_{2,2} & & &\\ \vdots & & & \ddots & &\\ P_{n-1,0} & \cdots & & P_{n-1,n-2} & P_{n-1,n-1} &\\ P_{n,0} & P_{n,1} & \cdots & & P_{n,n-1} & P_{n,n} \end{array} \qquad (5.8)$$ The recurrence says that to get the $(n,\ell)$ entry, one combines the elements in column $\ell-1$ in rows $n$ and $n-1$, forming an L-shape. Thus, given the first column $\{P_{n,0}\}_{n\ge0}$, the table can be generated in full.
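The triangle construction can be sketched as follows (an illustrative check, not from the paper): generate $P_{n,\ell}$ for fixed integers $s,t$ from the first column via the recurrence, then cross-check every value against a direct permanent of $M_{n,\ell}$.

```python
import math
from itertools import permutations

def per(M):
    n = len(M)
    return sum(math.prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

s, t, N = 2, 3, 5
# First column (5.2): P_{n,0} = per(sI + tJ) = sum_j n!/j! s^j t^(n-j)
P = {(n, 0): sum(math.factorial(n) // math.factorial(j) * s**j * t**(n - j)
                 for j in range(n + 1)) for n in range(N + 1)}
# Fill the triangle by the L-shaped recurrence (5.3)
for n in range(1, N + 1):
    for l in range(1, n + 1):
        P[(n, l)] = P[(n, l - 1)] - s * P[(n - 1, l - 1)]

def M_nl(n, l):
    """M_{n,l}: n x n, with n-l diagonal entries s+t, all other entries t."""
    M = [[t] * n for _ in range(n)]
    for i in range(n - l):
        M[i][i] = s + t
    return M

for n in range(N + 1):
    for l in range(n + 1):
        assert P[(n, l)] == per(M_nl(n, l))
```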

Now, we check that these are indeed our exponential moment polynomials. Additionally, we derive an expression for $P_{n,\ell}$ in terms of the initial sequence $P_{n,0}$. For clarity, we will explicitly denote the dependence of $P_{n,\ell}$ on $(s,t)$.

Theorem 5.3. For $\ell\ge0$, one has the following. (1) The permanent of the $n\times n$ matrix with $n-\ell$ entries on the diagonal equal to $s+t$ and all other entries equal to $t$ is $$P_{n,\ell}(s,t)=\Phi_{n-\ell,\ell}(s,t)=\sum_{j=\ell}^{n}\binom{n-\ell}{n-j}\,j!\,s^{n-j}t^{j}. \qquad (5.9)$$ (2) $$P_{n,\ell}(s,t)=\sum_{j=0}^{\ell}\binom{\ell}{j}(-1)^{j}s^{j}\,P_{n-j,0}(s,t). \qquad (5.10)$$ (3) The complementary sum is $$s^{n}=\sum_{\ell=0}^{n}\binom{n}{\ell}(-1)^{\ell}P_{n,\ell}(s,t). \qquad (5.11)$$

Proof. The initial sequence is $P_{n,0}=\Phi_{n,0}$, as noted in (5.2). We check that $\Phi_{n-\ell,\ell}$ satisfies recurrence (5.3). Starting from the integral representation (3.2) for $\Phi_{n-\ell+1,\ell-1}$, we have $$\Phi_{n-\ell+1,\ell-1}=\int_0^{\infty}(s+ty)^{n-\ell+1}(ty)^{\ell-1}e^{-y}\,dy=\int_0^{\infty}(s+ty)(s+ty)^{n-\ell}(ty)^{\ell-1}e^{-y}\,dy=s\,\Phi_{n-\ell,\ell-1}+\Phi_{n-\ell,\ell}, \qquad (5.12)$$ as required, where we now identify $\Phi_{n-\ell+1,\ell-1}=P_{n,\ell-1}$, $\Phi_{n-\ell,\ell-1}=P_{n-1,\ell-1}$, and $\Phi_{n-\ell,\ell}=P_{n,\ell}$. And (3.10) gives the explicit form for $P_{n,\ell}$.
For (2), starting with the integral representation for $P_{n,0}=\Phi_{n,0}$, we get $$\sum_{j=0}^{\ell}\binom{\ell}{j}(-1)^{j}s^{j}\int_0^{\infty}(s+ty)^{n-j}e^{-y}\,dy=\int_0^{\infty}(s+ty)^{n-\ell}\sum_{j=0}^{\ell}\binom{\ell}{j}(-s)^{j}(s+ty)^{\ell-j}e^{-y}\,dy=\int_0^{\infty}(s+ty)^{n-\ell}(s+ty-s)^{\ell}e^{-y}\,dy=\Phi_{n-\ell,\ell}, \qquad (5.13)$$ as required. The proof for (3) is similar, using (3.12), $$P_{n,\ell}=\Phi_{n-\ell,\ell}=\int_0^{\infty}(s+ty)^{n-\ell}(ty)^{\ell}e^{-y}\,dy, \qquad (5.14)$$ and the binomial theorem for the sum.

5.1. $(sI+tJ)^{\vee\ell}$ Revisited

Now, we have an alternative proof of Theorem 4.1.

Lemma 5.4. Let $\mathrm{I}$ and $\mathrm{J}$ be $\ell$-subsets of $[n]$ with $\operatorname{dist}_{\mathrm{JS}}(\mathrm{I},\mathrm{J})=k$. Then $$\operatorname{per}(sI+tJ)_{\mathrm{IJ}}=P_{\ell,k}(s,t). \qquad (5.15)$$

Proof. Now, $|\mathrm{I}\cap\mathrm{J}|=\ell-k$, so the submatrix $(sI+tJ)_{\mathrm{IJ}}$ is permutationally equivalent to the $\ell\times\ell$ matrix with $\ell-k$ entries $s+t$ on its main diagonal and $t$'s elsewhere, that is, to the matrix $M_{\ell,k}$. Hence, by the definition of $P_{\ell,k}(s,t)$, (5.1), we have the result.

Thus, the expansion in the Johnson basis is $$(sI+tJ)^{\vee\ell}=\sum_{k}\Phi_{\ell-k,k}(s,t)\,\mathrm{JS}^k_{\ell n}. \qquad (5.16)$$

Proof. Let $\mathrm{I}$ and $\mathrm{J}$ be $\ell$-subsets of $[n]$ with Johnson distance $k$. By definition, the $\mathrm{IJ}$ entry of the left-hand side of (5.16) equals the permanent of the submatrix from rows $\mathrm{I}$ and columns $\mathrm{J}$, $\operatorname{per}(sI+tJ)_{\mathrm{IJ}}=P_{\ell,k}(s,t)=\Phi_{\ell-k,k}(s,t)$, by Lemma 5.4 and Theorem 5.3(1). Now, on the right-hand side of (5.16), if $\operatorname{dist}_{\mathrm{JS}}(\mathrm{I},\mathrm{J})=k$, the only nonzero contribution comes from the $\mathrm{JS}^k_{\ell n}$ term. This yields $\Phi_{\ell-k,k}(s,t)\times1=\Phi_{\ell-k,k}(s,t)$, as required.

5.2. Elementary Subgraphs and Permanents

There is an approach to permanents of $sI+tJ$ via elementary subgraphs, based on that of Biggs [3] for determinants.

An elementary subgraph (see [3, page 44]) of a graph $G$ is a spanning subgraph of $G$ all of whose components are 0-, 1-, or 2-regular, that is, all of whose components are isolated vertices, isolated edges, or cycles of length $j\ge3$.

Let $K_n^{(\ell)}$ be a copy of the complete graph $K_n$ with vertex set $[n]$ in which the first $n-\ell$ vertices $[n-\ell]=\{1,2,\ldots,n-\ell\}$ are distinguished. We may now consider the matrix $M_{n,\ell}$ as the weighted adjacency matrix of $K_n^{(\ell)}$, in which the weights of the distinguished vertices are $s+t$, with all undistinguished vertices and all edges assigned a weight of $t$.

Let $E$ be an elementary subgraph of $K_n^{(\ell)}$; we describe $E$ as having $d(E)$ distinguished isolated vertices and $c(E)$ cycles. The weight of $E$, $\operatorname{wt}(E)$, is defined as $$\operatorname{wt}(E)=(s+t)^{d(E)}\,t^{\,n-d(E)}, \qquad (5.17)$$ a homogeneous polynomial of degree $n$.

This leads to an interpretation/derivation of $P_{n,\ell}(s,t)$ as the permanent $\operatorname{per}(M_{n,\ell})$.

Theorem 5.5. One has the expansion in elementary subgraphs $$P_{n,\ell}(s,t)=\sum_{E}2^{c(E)}\operatorname{wt}(E). \qquad (5.18)$$

Proof. Assign weights to the components of $E$ as follows:
each distinguished isolated vertex will have weight $s+t$;
each undistinguished isolated vertex will have weight $t$;
each isolated edge will have weight $t^2$;
and each $j$-cycle, $j\ge3$, will have weight $t^j$.
To obtain $\operatorname{wt}(E)$ in agreement with (5.17), we form the product of these weights over all components of $E$. The proof then follows along the lines of Proposition 7.2 of [3, page 44], slightly modified to incorporate isolated vertices and with determinant, "det," replaced by permanent, "per," ignoring the minus signs. Effectively, each term in the permanent expansion corresponds to a weighted elementary subgraph $E$ of the weighted $K_n^{(\ell)}$.

See Figure 2 for an example with 𝑛=3.

Figure 2: $M_{3,\ell}$, $K_3^{(\ell)}$, $P_{3,\ell}$, and the 5 weighted elementary subgraphs of $K_3^{(\ell)}$ for $\ell=0,1,2,3$. Distinguished vertices are shown in bold.
5.3. Associated Polynomials and Some Asymptotics

Thinking of $s$ and $t$ as parameters, we define the associated polynomials $$Q_n(x)=\sum_{\ell=0}^{n}\binom{n}{\ell}x^{\ell}P_{n,\ell}. \qquad (5.19)$$ As in the proof of (3) above, using the integral formula (3.12), we have $$Q_n(x)=\int_0^{\infty}(s+ty+xty)^n e^{-y}\,dy=\sum_{j}\binom{n}{j}s^{j}(1+x)^{n-j}t^{n-j}(n-j)!=n!\sum_{j}\frac{s^{j}(1+x)^{n-j}t^{n-j}}{j!}. \qquad (5.20)$$ Comparing with (5.2), we have the following.

Proposition 5.6. Consider $$Q_n(x)=\sum_{\ell=0}^{n}\binom{n}{\ell}x^{\ell}P_{n,\ell}(s,t)=P_{n,0}(s,t+xt). \qquad (5.21)$$

And one has the following.

Proposition 5.7. As $n\to\infty$, for $x\ne-1$, $$Q_n(x)\sim t^{n}(1+x)^{n}\,n!\;e^{s/(t+tx)}, \qquad (5.22)$$ with the special cases $$Q_n(-1)=s^{n},\qquad Q_n(0)=P_{n,0}\sim t^{n}n!\,e^{s/t},\qquad Q_n(1)=\sum_{\ell}\binom{n}{\ell}P_{n,\ell}\sim(2t)^{n}n!\,e^{s/(2t)}. \qquad (5.23)$$

Proof. From (5.20), $$Q_n(x)=n!\sum_{j}\frac{s^{j}(1+x)^{n-j}t^{n-j}}{j!}=t^{n}(1+x)^{n}\,n!\sum_{j=0}^{n}\frac{1}{j!}\left(\frac{s/t}{1+x}\right)^{j}, \qquad (5.24)$$ from which the result follows.

6. Generalized Derangement Numbers

The formula (2.19) is suggestive of the derangement numbers (see, e.g., [4, page 180]), $$d_n=n!\sum_{j=0}^{n}\frac{(-1)^{j}}{j!}. \qquad (6.1)$$ This leads to the following.

Definition 6.1. A family of numbers, depending on $n$ and $\ell$, arising as the values of $P_{n,\ell}(s,t)$ when $s$ and $t$ are assigned fixed integer values, is called a family of generalized derangement numbers.

We have seen that the assignment $s=-1$, $t=1$ produces the usual derangement numbers when $\ell=0$. In this section, we will examine in detail the cases $s=-1$, $t=1$, generalized derangements, and $s=t=1$, generalized arrangements.

Remark 6.2. Topics related to this material are discussed in Riordan [5]. The paper [6] is of related interest as well.

6.1. Generalized Derangements of [𝑛]

To start, define $$D_{n,\ell}=P_{n,\ell}(-1,1). \qquad (6.2)$$ Equation (5.9) and Proposition 3.2 give $$D_{n,\ell}=\sum_{j=\ell}^{n}(-1)^{n-j}\binom{n-\ell}{n-j}\,j!=(-1)^{n-\ell}\,\ell!\;{}_2F_0\!\left(\begin{matrix}\ell-n,1+\ell\\ \text{---}\end{matrix}\,\Big|\,1\right). \qquad (6.3)$$ Equation (5.2) reads as $$\operatorname{per}(J-I)=D_{n,0}=d_n, \qquad (6.4)$$ the number of derangements of $[n]$. So we have a combinatorial interpretation of $D_{n,0}$.

6.1.1. Combinatorial Interpretation of $D_{n,\ell}$

We now give a combinatorial interpretation of $D_{n,\ell}$ for $\ell\ge1$.

When $\ell\ge1$, recurrence (5.3) for $P_{n,\ell}(-1,1)$ gives $$D_{n,\ell}=D_{n,\ell-1}+D_{n-1,\ell-1}. \qquad (6.5)$$ We say that a subset $\mathrm{I}$ of $[n]$ is deranged by a permutation if no point of $\mathrm{I}$ is fixed by the permutation.

Proposition 6.3. $D_{n,0}=d_n$, the number of derangements of $[n]$. In general, for $\ell\ge0$, $D_{n,\ell}$ is the number of permutations of $[n]$ in which the set $\{1,2,\ldots,n-\ell\}$ is deranged, with no restrictions on the $\ell$-set $\{n-\ell+1,\ldots,n\}$.

Proof. For $\ell\ge0$, let $\mathcal{D}_{n,\ell}$ denote the set of permutations in the statement of the proposition. Let $E_{n,\ell}=|\mathcal{D}_{n,\ell}|$. We claim that $E_{n,\ell}=D_{n,\ell}$.
The case $\ell=0$ is immediate. We show that $E_{n,\ell}$ satisfies recurrence (6.5).
Now, let $\ell>0$. Consider a permutation in $\mathcal{D}_{n,\ell}$. The point $n-\ell+1$ is either (1) deranged or (2) not deranged (i.e., fixed). (1) If $n-\ell+1$ is deranged, then the $(n-\ell+1)$-set $\{1,2,\ldots,n-\ell,n-\ell+1\}$ is deranged, so the permutation belongs to $\mathcal{D}_{n,\ell-1}$. Conversely, any permutation in $\mathcal{D}_{n,\ell-1}$ is a permutation in $\mathcal{D}_{n,\ell}$ in which $n-\ell+1$ is deranged. Hence, the number of permutations in $\mathcal{D}_{n,\ell}$ with $n-\ell+1$ deranged equals $E_{n,\ell-1}$. (2) Here, $n-\ell+1$ is fixed, so if we remove $n-\ell+1$ from a permutation in $\mathcal{D}_{n,\ell}$, relabelling the larger points down by one, we obtain a permutation in $\mathcal{D}_{n-1,\ell-1}$. Conversely, given a permutation in $\mathcal{D}_{n-1,\ell-1}$, we may include $n-\ell+1$ as a fixed point to obtain a permutation in $\mathcal{D}_{n,\ell}$ with $n-\ell+1$ fixed. Hence, the number of permutations in $\mathcal{D}_{n,\ell}$ with $n-\ell+1$ fixed equals $E_{n-1,\ell-1}$. Combining the above two cases shows that $E_{n,\ell}$ satisfies recurrence (6.5).

And a quick check: $$D_{n,n}=n!, \qquad (6.6)$$ there being no restrictions at all in the combinatorial interpretation, in agreement with (6.3) for $\ell=n$.

Example 6.4. When $n=3$, we have $d_3=D_{3,0}=2$, corresponding to the 2 permutations of $[3]$ in which $\{1,2,3\}$ is deranged: $231$, $312$.
Then, $D_{3,1}=3$, corresponding to the 3 permutations of $[3]$ in which $\{1,2\}$ is deranged: $213$, $231$, $312$.
Then, $D_{3,2}=4$, corresponding to the 4 permutations of $[3]$ in which $\{1\}$ is deranged: $213$, $231$, $312$, $321$.
Finally, $D_{3,3}=3!=6$, corresponding to the 6 permutations of $[3]$, with no derangement condition: $123$, $132$, $213$, $231$, $312$, $321$.
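Proposition 6.3 lends itself to a brute-force check (an illustrative sketch): count the permutations of $[n]$ that derange the first $n-\ell$ points and compare with the explicit sum in (6.3).

```python
import math
from itertools import permutations

def D_count(n, l):
    """Number of permutations of [n] deranging the first n-l points
    (zero-indexed here)."""
    return sum(all(p[i] != i for i in range(n - l))
               for p in permutations(range(n)))

def D_formula(n, l):
    # (6.3): sum_{j=l}^{n} (-1)^(n-j) C(n-l, n-j) j!
    return sum((-1)**(n - j) * math.comb(n - l, n - j) * math.factorial(j)
               for j in range(l, n + 1))

for n in range(7):
    for l in range(n + 1):
        assert D_count(n, l) == D_formula(n, l)
```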

Reversing the order of summation in (6.3) gives an alternative expression $$D_{n,\ell}=\sum_{j=0}^{n-\ell}(-1)^{j}\binom{n-\ell}{j}(n-j)!. \qquad (6.7)$$

Remark 6.5. Formulation (6.7) may be proved directly by inclusion-exclusion on permutations fixing given points.

Example 6.6. Consider $$D_{5,2}=\sum_{j=0}^{3}(-1)^{j}\binom{3}{j}(5-j)!=\binom{3}{0}5!-\binom{3}{1}4!+\binom{3}{2}3!-\binom{3}{3}2!=120-72+18-2=64. \qquad (6.8)$$

Now, from (2) of Theorem 5.3, with $s=-1$ and $t=1$, we have $$D_{n,\ell}=\sum_{j=0}^{\ell}\binom{\ell}{j}d_{n-j}. \qquad (6.9)$$ Here is a combinatorial explanation. To obtain a permutation in $\mathcal{D}_{n,\ell}$, we first choose $j$ points from $\{n-\ell+1,\ldots,n\}$ to be fixed. Then, every derangement of the remaining $n-j$ points will produce a permutation in $\mathcal{D}_{n,\ell}$, and there are $d_{n-j}$ such derangements.

Example 6.7. Consider $$D_{5,2}=\sum_{j=0}^{2}\binom{2}{j}d_{5-j}=\binom{2}{0}d_5+\binom{2}{1}d_4+\binom{2}{2}d_3=1\times44+2\times9+1\times2=44+18+2=64. \qquad (6.10)$$
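Identity (6.9) can be confirmed programmatically (a small illustrative check), computing $d_n$ from its defining sum (6.1) and $D_{n,\ell}$ from (6.7):

```python
import math

def d(n):
    """Derangement numbers d_n = n! sum (-1)^j / j!, with d_0 = 1."""
    return sum((-1)**j * (math.factorial(n) // math.factorial(j))
               for j in range(n + 1))

def D(n, l):
    # (6.7): sum_{j=0}^{n-l} (-1)^j C(n-l, j) (n-j)!
    return sum((-1)**j * math.comb(n - l, j) * math.factorial(n - j)
               for j in range(n - l + 1))

for n in range(8):
    for l in range(n + 1):
        assert D(n, l) == sum(math.comb(l, j) * d(n - j) for j in range(l + 1))
```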

6.1.2. Permanents from $J-I$

Theorem 4.1 specializes to $$(J-I)^{\vee\ell}=\sum_{k=0}^{\min(\ell,n-\ell)}D_{\ell,k}\,\mathrm{JS}^k_{\ell n}. \qquad (6.11)$$ This can be written using the hypergeometric form $$(J-I)^{\vee\ell}=\sum_{k=0}^{\min(\ell,n-\ell)}(-1)^{\ell-k}\,k!\;{}_2F_0\!\left(\begin{matrix}k-\ell,1+k\\ \text{---}\end{matrix}\,\Big|\,1\right)\mathrm{JS}^k_{\ell n}, \qquad (6.12)$$ with spectrum $$\text{eigenvalue }(-1)^{\ell}\;{}_2F_0\!\left(\begin{matrix}\alpha-\ell,1+n-\ell-\alpha\\ \text{---}\end{matrix}\,\Big|\,1\right),\quad\text{occurring with multiplicity }\binom{n}{\alpha}-\binom{n}{\alpha-1}, \qquad (6.13)$$ by Corollary 4.3 and Proposition 4.2.

The entries of $(J-I)^{\vee\ell}$ are from the family of numbers $D_{n,\ell}$. For the spectrum, start with $\alpha=0$. From (6.3), we have $$(-1)^{\ell}\;{}_2F_0\!\left(\begin{matrix}-\ell,1+n-\ell\\ \text{---}\end{matrix}\,\Big|\,1\right)=\frac{D_{n,n-\ell}}{(n-\ell)!}. \qquad (6.14)$$ As $\alpha$ increases, we see that the spectrum consists of the numbers $$\frac{(-1)^{\alpha}}{(n-\ell-\alpha)!}\,D_{n-2\alpha,\,n-\ell-\alpha}. \qquad (6.15)$$ Think of moving in the derangement triangle, as in the appendix, starting from position $(n,n-\ell)$ and rescaling the values by the factorial of the column at each step; then the eigenvalues are found by successive knight's moves, up 2 rows and one column to the left, with alternating signs.

Example 6.8. For $n=5$, $\ell=3$, with rows and columns labelled by the 3-sets of $[5]$ in dictionary order, we have $$(J-I)^{\vee3}=\begin{pmatrix} 2&3&3&3&3&4&3&3&4&4\\ 3&2&3&3&4&3&3&4&3&4\\ 3&3&2&4&3&3&4&3&3&4\\ 3&3&4&2&3&3&3&4&4&3\\ 3&4&3&3&2&3&4&3&4&3\\ 4&3&3&3&3&2&4&4&3&3\\ 3&3&4&3&4&4&2&3&3&3\\ 3&4&3&4&3&4&3&2&3&3\\ 4&3&3&4&4&3&3&3&2&3\\ 4&4&4&3&3&3&3&3&3&2 \end{pmatrix}, \qquad (6.16)$$ with characteristic polynomial $$\lambda^{5}(\lambda-32)(\lambda+3)^{4}. \qquad (6.17)$$
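The spectrum in this example can be reproduced numerically (an illustrative check): build the matrix from Johnson distances with entries $D_{3,k}=2,3,4$ and confirm the eigenvalues $32$, $-3$ (multiplicity 4), and $0$ (multiplicity 5), as predicted by (6.15).

```python
from itertools import combinations
import numpy as np

n, l = 5, 3
D3 = {0: 2, 1: 3, 2: 4}                  # entries D_{3,k} of (J-I)^{v3}
labels = list(combinations(range(n), l))
Z = np.array([[D3[l - len(set(I) & set(J))] for J in labels] for I in labels],
             dtype=float)
eigs = sorted(np.linalg.eigvalsh(Z))
expected = sorted([32.0] + [-3.0] * 4 + [0.0] * 5)
assert np.allclose(eigs, expected)
```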

Remark 6.9. Except for $\ell=2$, the coefficients in the expansion of $(J-I)^{\vee\ell}$ in the Johnson basis are distinct. Thus, the Johnson basis itself can be read off directly from $(J-I)^{\vee\ell}$. In this sense, the centralizer algebra of the action of the symmetric group on $\ell$-sets is determined by knowledge of the action of just $J-I$ on $\ell$-sets.

6.2. Generalized Arrangements of [𝑛]

Given $[n]$ and $0\le j\le n$, a $j$-arrangement of $[n]$ is a permutation of a $j$-subset of $[n]$. The number of $j$-arrangements of $[n]$ is $$A(n,j)=\frac{n!}{(n-j)!}. \qquad (6.18)$$ Note that there is a single 0-arrangement of $[n]$, from the empty set.

Define $A_{n,\ell}=P_{n,\ell}(1,1)$. So, similar to the case for derangements, (5.9) gives $$A_{n,\ell}=\sum_{j=\ell}^{n}\binom{n-\ell}{n-j}\,j!=\ell!\;{}_2F_0\!\left(\begin{matrix}\ell-n,1+\ell\\ \text{---}\end{matrix}\,\Big|\,-1\right). \qquad (6.19)$$ Now, define $a_n=A_{n,0}$, so $$a_n=\operatorname{per}(I+J)=\sum_{j=0}^{n}\frac{n!}{(n-j)!}=\sum_{j=0}^{n}A(n,j) \qquad (6.20)$$ is the total number of $j$-arrangements of $[n]$ for $j=0,1,\ldots,n$. Thus, we have a combinatorial interpretation of $A_{n,0}$.

6.2.1. Combinatorial Interpretation of $A_{n,\ell}$

We now give a combinatorial interpretation of $A_{n,\ell}$ for $\ell\ge1$.

When $\ell\ge 1$, recurrence (5.3) for $P_{n,\ell}(1,1)$ gives
$$A_{n,\ell}=A_{n,\ell-1}-A_{n-1,\ell-1}. \quad(6.21)$$

Proposition 6.10. $A_{n,0}=a_n$, the total number of arrangements of $[n]$. In general, for $\ell\ge 0$, $A_{n,\ell}$ is the number of arrangements of $[n]$ which contain $\{1,2,\dots,\ell\}$.

Proof. For $\ell\ge 0$, let $\mathcal{A}_{n,\ell}$ denote the set of arrangements of $[n]$ which contain $[\ell]$. With $[0]=\emptyset$, we note that $\mathcal{A}_{n,0}$ is the set of all arrangements. Let $B_{n,\ell}=|\mathcal{A}_{n,\ell}|$. We claim that $B_{n,\ell}=A_{n,\ell}$.
The initial values with $\ell=0$ are immediate. We show that $B_{n,\ell}$ satisfies recurrence (6.21).
Consider $\mathcal{A}_{n,\ell-1}$. Let $A\in\mathcal{A}_{n,\ell-1}$, so $A$ is an arrangement of $[n]$ containing $[\ell-1]$. If $\ell=1$, then $A\in\mathcal{A}_{n,0}$ is any arrangement. Now, either $\ell\in A$ or $\ell\notin A$.
If $\ell\in A$, then $A\in\mathcal{A}_{n,\ell}$, and so the number of arrangements in $\mathcal{A}_{n,\ell-1}$ which contain $\ell$ equals $B_{n,\ell}$.
If $\ell\notin A$, then by subtracting 1 from all parts of $A$ which are $\ge\ell+1$, we obtain an arrangement of $[n-1]$ which contains $[\ell-1]$, that is, an arrangement in $\mathcal{A}_{n-1,\ell-1}$. Conversely, given an arrangement in $\mathcal{A}_{n-1,\ell-1}$, adding 1 to all parts $\ge\ell$ yields an arrangement in $\mathcal{A}_{n,\ell-1}$ which does not contain $\ell$. Hence, the number of arrangements in $\mathcal{A}_{n,\ell-1}$ which do not contain $\ell$ equals $B_{n-1,\ell-1}$.
We conclude that $B_{n,\ell-1}=B_{n,\ell}+B_{n-1,\ell-1}$, that is, $B_{n,\ell}=B_{n,\ell-1}-B_{n-1,\ell-1}$, which is recurrence (6.21). This proves the result.

Example 6.11. When $n=3$, we have $a_3=A_{3,0}=16$, corresponding to the 16 arrangements of $[3]$: $\emptyset$, 1, 2, 3, 12, 21, 13, 31, 23, 32, 123, 132, 213, 231, 312, 321.
Then, $A_{3,1}=11$, corresponding to the 11 arrangements of $[3]$ which contain $\{1\}$: 1, 12, 21, 13, 31, 123, 132, 213, 231, 312, 321.
Then, $A_{3,2}=8$, corresponding to the 8 arrangements of $[3]$ which contain $\{1,2\}$: 12, 21, 123, 132, 213, 231, 312, 321.
Finally, $A_{3,3}=3!=6$, corresponding to the 6 arrangements of $[3]$ which contain $\{1,2,3\}$: 123, 132, 213, 231, 312, 321.
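Brute-force enumeration confirms both Proposition 6.10 and recurrence (6.21) in small cases. This Python sketch (names ours) lists every arrangement of $[n]$ and counts those whose point set contains $\{1,\dots,\ell\}$.

```python
from itertools import combinations, permutations

def arrangements(n):
    # every arrangement of [n]: a permutation of a j-subset, 0 <= j <= n
    for j in range(n + 1):
        for S in combinations(range(1, n + 1), j):
            yield from permutations(S)

def B(n, l):
    # number of arrangements of [n] whose point set contains {1, ..., l}
    need = set(range(1, l + 1))
    return sum(1 for A in arrangements(n) if need <= set(A))

print([B(3, l) for l in range(4)])  # [16, 11, 8, 6]
```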

Rearranging the factors in (5.9), we have
$$P_{n,\ell}(s,t)=\sum_{j=\ell}^{n}A(j,\ell)\,A(n-\ell,\,j-\ell)\,s^{n-j}t^{j}. \quad(6.22)$$
With $s=t=1$, this gives
$$A_{n,\ell}=\sum_{j=\ell}^{n}A(j,\ell)\,A(n-\ell,\,j-\ell). \quad(6.23)$$
Here is a combinatorial explanation of (6.23).

For any $j$, to obtain a $j$-arrangement $A$ of $[n]$ containing $[\ell]$, we may place the points of $\{1,2,\dots,\ell\}$ into the $j$ positions of $A$ in $A(j,\ell)$ ways. Then, the remaining $(j-\ell)$ positions in $A$ can be filled by a $(j-\ell)$-arrangement of the unused $(n-\ell)$ points in $A(n-\ell,\,j-\ell)$ ways.

Example 6.12. Consider
$$A_{5,2}=\sum_{j=2}^{5}A(j,2)\,A(3,\,j-2)=A(2,2)A(3,0)+A(3,2)A(3,1)+A(4,2)A(3,2)+A(5,2)A(3,3)=2\times1+6\times3+12\times6+20\times6=2+18+72+120=212. \quad(6.24)$$
Finally, from (2) of Theorem 5.3, with $s=1$ and $t=1$, we have
$$A_{n,\ell}=\sum_{j=0}^{\ell}(-1)^{j}\binom{\ell}{j}a_{n-j}. \quad(6.25)$$

Example 6.13. Consider
$$A_{5,2}=\sum_{j=0}^{2}(-1)^{j}\binom{2}{j}a_{5-j}=\binom{2}{0}a_{5}-\binom{2}{1}a_{4}+\binom{2}{2}a_{3}=1\times326-2\times65+1\times16=326-130+16=212. \quad(6.26)$$
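The two expressions (6.23) and (6.25) for $A_{n,\ell}$ can be cross-checked against each other in Python (names ours):

```python
from math import comb, factorial

def A(n, j):
    # number of j-arrangements of [n]
    return factorial(n) // factorial(n - j)

def a(n):
    # total number of arrangements of [n]
    return sum(A(n, j) for j in range(n + 1))

def A_by_placement(n, l):
    # eq. (6.23): place the points of [l], then arrange unused points
    return sum(A(j, l) * A(n - l, j - l) for j in range(l, n + 1))

def A_by_inclusion_exclusion(n, l):
    # eq. (6.25): alternating binomial sum over the totals a_{n-j}
    return sum((-1) ** j * comb(l, j) * a(n - j) for j in range(l + 1))
```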

6.2.2. Permanents from $I+J$

Theorem 4.1 specializes to
$$(I+J)_{\ell}=\sum_{k=0}^{\min(\ell,\,n-\ell)}A_{\ell,k}\,\mathrm{JS}^{n\ell}_{k}. \quad(6.27)$$
This can be written using the hypergeometric form
$$(I+J)_{\ell}=\sum_{k=0}^{\min(\ell,\,n-\ell)}k!\;{}_2F_0\!\left(\begin{matrix}k-\ell,\;1+k\\ \text{—}\end{matrix}\;\Big|\;{-1}\right)\mathrm{JS}^{n\ell}_{k}, \quad(6.28)$$
with spectrum
$$\text{eigenvalue }{}_2F_0\!\left(\begin{matrix}\alpha-\ell,\;n+1-\ell-\alpha\\ \text{—}\end{matrix}\;\Big|\;{-1}\right)\text{ occurring with multiplicity }\binom{n}{\alpha}-\binom{n}{\alpha-1},\quad 0\le\alpha\le\min(\ell,\,n-\ell), \quad(6.29)$$
by Corollary 4.3 and Proposition 4.2.

Example 6.14. For $n=5$, $\ell=3$, with the 3-subsets of $[5]$ listed in lexicographic order, we have
$$(I+J)_3=\begin{pmatrix}
16&11&11&11&11&8&11&11&8&8\\
11&16&11&11&8&11&11&8&11&8\\
11&11&16&8&11&11&8&11&11&8\\
11&11&8&16&11&11&11&8&8&11\\
11&8&11&11&16&11&8&11&8&11\\
8&11&11&11&11&16&8&8&11&11\\
11&11&8&11&8&8&16&11&11&11\\
11&8&11&8&11&8&11&16&11&11\\
8&11&11&8&8&11&11&11&16&11\\
8&8&8&11&11&11&11&11&11&16
\end{pmatrix}, \quad(6.30)$$
with characteristic polynomial
$$(\lambda-106)(\lambda-11)^{4}(\lambda-2)^{5}. \quad(6.31)$$
As for the case of derangements, the Johnson basis can be read off directly from the matrix $(I+J)_{\ell}$.
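Both the matrix and its spectrum can be regenerated: entries of $(I+J)_3$ are permanents of $3\times 3$ submatrices of $I+J$, and the eigenvalues in (6.29) are terminating ${}_2F_0$ series. A Python sketch (names ours):

```python
from itertools import combinations, permutations
from math import prod

def per(M):
    # permanent by direct expansion over permutations
    m = len(M)
    return sum(prod(M[i][p[i]] for i in range(m)) for p in permutations(range(m)))

def F20(b1, b2, z):
    # terminating 2F0(b1, b2; --; z) for nonpositive integer b1
    total, term = 0, 1
    for k in range(-b1 + 1):
        total += term
        term = term * (b1 + k) * (b2 + k) * z // (k + 1)
    return total

n, l = 5, 3
IpJ = [[2 if i == j else 1 for j in range(n)] for i in range(n)]  # I + J
subsets = list(combinations(range(n), l))
M = [[per([[IpJ[r][c] for c in C] for r in R]) for C in subsets] for R in subsets]
eigs = [F20(alpha - l, n + 1 - l - alpha, -1) for alpha in range(min(l, n - l) + 1)]
print(eigs)  # [106, 11, 2]
```

The trace check $1\cdot 106 + 4\cdot 11 + 5\cdot 2 = 160 = 10\cdot 16$ matches the multiplicities read off (6.31).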

Appendix

Generalized Derangement Numbers and Integer Sequences
The first two columns of the $D_{n,\ell}$ triangle, $D_{n,0}$ and $D_{n,1}$, give sequences A000166 and A000255 in the On-Line Encyclopedia of Integer Sequences [7]. The comments for A000255 do not contain our combinatorial interpretation.
The first two columns of the $A_{n,\ell}$ triangle, $A_{n,0}$ and $A_{n,1}$, give sequences A000522 and A001339. The comments contain our combinatorial interpretation. The next two columns, $A_{n,2}$ and $A_{n,3}$, give sequences A001340 and A001341; here, our combinatorial interpretation is not mentioned in the comments.

Generalized Derangement Triangles
$\ell=0$ is the leftmost column. The rows correspond to $n$ from 0 to 9. Values of $D_{n,\ell}$:
$$\begin{matrix}
1\\
0&1\\
1&1&2\\
2&3&4&6\\
9&11&14&18&24\\
44&53&64&78&96&120\\
265&309&362&426&504&600&720\\
1854&2119&2428&2790&3216&3720&4320&5040\\
14833&16687&18806&21234&24024&27240&30960&35280&40320\\
133496&148329&165016&183822&205056&229080&256320&287280&322560&362880
\end{matrix} \quad(\text{A.1})$$
Values of $A_{n,\ell}$:
$$\begin{matrix}
1\\
2&1\\
5&3&2\\
16&11&8&6\\
65&49&38&30&24\\
326&261&212&174&144&120\\
1957&1631&1370&1158&984&840&720\\
13700&11743&10112&8742&7584&6600&5760&5040\\
109601&95901&84158&74046&65304&57720&51120&45360&40320\\
986410&876809&780908&696750&622704&557400&499680&448560&403200&362880
\end{matrix} \quad(\text{A.2})$$
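Both triangles can be generated from their first columns: recurrence (6.21) gives $A_{n,\ell}=A_{n,\ell-1}-A_{n-1,\ell-1}$, and the derangement triangle satisfies the analogous recurrence with a plus sign, $D_{n,\ell}=D_{n,\ell-1}+D_{n-1,\ell-1}$ (consistent with the tabulated values). A Python sketch (names ours):

```python
def triangle(first_column, sign):
    # row n: T[n][0] = first_column[n]; T[n][l] = T[n][l-1] + sign * T[n-1][l-1]
    T = [[first_column[0]]]
    for n in range(1, len(first_column)):
        row = [first_column[n]]
        for l in range(1, n + 1):
            row.append(row[l - 1] + sign * T[n - 1][l - 1])
        T.append(row)
    return T

d = [1, 0]   # derangement numbers d_n
a = [1, 2]   # total arrangement counts a_n
for m in range(2, 10):
    d.append(m * d[m - 1] + (-1) ** m)  # d_n = n*d_{n-1} + (-1)^n
    a.append(m * a[m - 1] + 1)          # a_n = n*a_{n-1} + 1

D_tri = triangle(d, +1)
A_tri = triangle(a, -1)
print(D_tri[5])  # [44, 53, 64, 78, 96, 120]
```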

Exponential polynomials $h_{n,m}(s,t)$
Note that, as is common for matrix indexing, we have dropped the commas in the numerical subscripts.
$$n=0:\quad h_{00}=1,\quad h_{01}=t,\quad h_{02}=2t^2,\quad h_{03}=6t^3,\quad h_{04}=24t^4. \quad(\text{A.3})$$
$$n=1:\quad h_{10}=s+t,\quad h_{11}=st+2t^2,\quad h_{12}=2st^2+6t^3,\quad h_{13}=6st^3+24t^4,\quad h_{14}=24st^4+120t^5. \quad(\text{A.4})$$
$$n=2:\quad h_{20}=s^2+2st+2t^2,\quad h_{21}=s^2t+4st^2+6t^3,\quad h_{22}=2s^2t^2+12st^3+24t^4,\quad h_{23}=6s^2t^3+48st^4+120t^5,\quad h_{24}=24s^2t^4+240st^5+720t^6. \quad(\text{A.5})$$
$$n=3:\quad h_{30}=s^3+3s^2t+6st^2+6t^3,\quad h_{31}=s^3t+6s^2t^2+18st^3+24t^4,\quad h_{32}=2s^3t^2+18s^2t^3+72st^4+120t^5,\quad h_{33}=6s^3t^3+72s^2t^4+360st^5+720t^6,\quad h_{34}=24s^3t^4+360s^2t^5+2160st^6+5040t^7. \quad(\text{A.6})$$
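The tabulated values all follow the pattern $\sum_{k=0}^{n}\binom{n}{k}(k+m)!\,s^{n-k}t^{k+m}$ (our reading of the table; the symbol $h$ is ours). The Python sketch below encodes each polynomial as a dictionary mapping the exponent pair $(i,j)$ of $s^{i}t^{j}$ to its coefficient; note that setting $s=t=1$ recovers $A_{n+m,m}$, for example $h_{21}(1,1)=1+4+6=11=A_{3,1}$.

```python
from math import comb, factorial

def h(n, m):
    # coefficient of s^(n-k) t^(k+m) is C(n, k) * (k + m)!
    return {(n - k, k + m): comb(n, k) * factorial(k + m) for k in range(n + 1)}

print(h(2, 1))  # {(2, 1): 1, (1, 2): 4, (0, 3): 6}, i.e. s^2 t + 4 s t^2 + 6 t^3
```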