Journal of Discrete Mathematics

Volume 2014, Article ID 529804, 7 pages

http://dx.doi.org/10.1155/2014/529804

## The Coarse Structure of the Representation Algebra of a Finite Monoid

Mary Schaps

Department of Mathematics, Bar-Ilan University, 52900 Ramat Gan, Israel

Received 30 June 2013; Accepted 12 November 2013; Published 30 January 2014

Academic Editor: Nantel Bergeron

Copyright © 2014 Mary Schaps. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Let $M$ be a monoid, and let $L$ be a commutative idempotent submonoid of $M$. We show that we can find a complete set of orthogonal idempotents of the monoid algebra of $M$ such that there is a basis of the algebra adapted to this set of idempotents which is in one-to-one correspondence with the elements of the monoid. The basis graph describing the Peirce decomposition with respect to this set of idempotents gives a coarse structure of the algebra, of which any complete set of primitive idempotents gives a refinement, and we give some criterion for this coarse structure to actually be a fine structure, which means that the nonzero elements of the monoid are in one-to-one correspondence with the vertices and arrows of the basis graph with respect to a set of primitive idempotents, with this basis graph being a canonical object.

#### 1. Introduction

When we speak of a coarse structure, we mean the decomposition of the monoid algebra into Peirce components corresponding to the elements of a commutative idempotent submonoid. The fine structure is then the refinement which occurs when one breaks down each of these idempotents into a sum of primitive idempotents. The basic idea of this work is to try to understand the semigroup representation theory insofar as possible without delving into the group theory and to determine criteria for monoids for which there is a coarse structure which corresponds to the fine structure.

We assume that the field $k$ over which we are taking representations of a monoid $M$ is of characteristic which does not divide the order of any of the maximal subgroups $G_e$ of $M$, as $e$ runs over the idempotents of $M$. Then Maschke's theorem applies and the group algebras of the maximal subgroups are all semisimple.

The irreducible representations of the semigroup correspond to the disjoint union of the irreducible representations of the various maximal subgroups. Thus, one condition we will need for the coarse structure to coincide with the fine structure is that the monoid be aperiodic, that is to say, that maximal subgroups be trivial.

The quiver of a finite dimensional algebra is a combinatorial object of central importance in studying its representation theory. There has been in recent years a great deal of interest in determining properties of the quiver and relations of the monoid algebra, as in the work of Saliola [1] and of Margolis and Steinberg [2]. The object used in this paper, the basis graph, is closely related to the quiver. For some purposes, particularly deformation theory, the basis graph is preferable, and in this work we claim that for certain types of monoids, the basis graph gives a much clearer picture of the relationship between the monoid and the monoid algebra than the quiver does. These are the monoids for which the coarse structure described above can be obtained for a complete set of primitive idempotents, and one of the main results of the paper is that this happens when the monoid is aperiodic and has commutative idempotents. Examples of this phenomenon given below include the monoid of matrix units and the monoid of order-preserving and extensive maps from a finite set into itself.

Let us comment that there has also been other recent work on trying to unravel the algebra structure of the monoid algebra from invariants of the monoid. Thiéry [3] is able to derive information about the Cartan matrix of the algebra from a combinatorial Cartan matrix of the monoid, and similarly the third section in Denton's thesis [4] takes up this theme. The anonymous referee pointed out the similarity of our concept to the third section of [5], and we have in fact switched from our original notation to a version of their notation “lfix” and “rfix” to emphasize this similarity. In the case given in our Section 5 in which the coarse and fine structures coincide, the entries in the Cartan matrix appear to be the sizes of what we call the Peirce sets of the monoid, defined in our Section 3.

Let $M$ be a finite monoid, and let $E$ be its set of idempotents. We recall that the $\mathcal{J}$-class of an element $m$ of $M$ is the set of all $m' \in M$ such that $M m M = M m' M$. A $\mathcal{J}$-class is regular if it contains an idempotent. Two idempotents $e$ and $f$ are called *conjugate* if they lie in the same $\mathcal{J}$-class, and this happens if and only if there are elements $a, b$ in the monoid such that $ab = e$ and $ba = f$.

#### 2. The Basis Graph of a Finite Dimensional Algebra

For any complete, orthogonal set of idempotents $\{e_1, \dots, e_r\}$ in a finite dimensional algebra $A$ with Jacobson radical $J$, we can define a basis graph [6] to be the directed graph with one vertex $v_i$ labeled for each idempotent $e_i$ and the following loops and arrows: (i) $\dim_k e_i (A/J) e_i - 1$ loops of weight zero at $v_i$, (ii) $\dim_k e_j (J^t/J^{t+1}) e_i$ arrows (for $i \neq j$) or loops (for $i = j$) of weight $t$, for each $t \geq 1$.

*Definition 1. *A basis of $A$ which is a union of bases for the different Peirce components $e_j A e_i$ is said to *respect the idempotent set* $\{e_i\}$.

A refinement of $\{e_i\}$ is a complete set of orthogonal idempotents $\{f_j\}$ such that every element of $\{e_i\}$ is a sum of elements of $\{f_j\}$. If we have such a refinement, then we get a refinement of the basis graph, in which some of the loops are replaced by vertices and arrows. For example, the algebra of $2 \times 2$ upper triangular matrices over a field $k$, considered with respect to a set of idempotents containing only the identity matrix, would have a basis graph consisting of one vertex and two loops, one of weight zero and one of weight 1. We can take a refinement of the idempotent set consisting of the idempotent diagonal matrices $E_{11}$ and $E_{22}$. The refined basis graph will have two vertices corresponding to the two idempotents and one arrow of weight one.
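This refinement example is small enough to check by machine. The sketch below (in Python with NumPy, with names chosen purely for illustration) sorts a basis of the upper triangular $2 \times 2$ matrices into Peirce components for the refined idempotent set and recovers the two vertices and the single arrow.

```python
import numpy as np

# Basis of the 2x2 upper triangular matrices.
E11 = np.array([[1, 0], [0, 0]])
E12 = np.array([[0, 1], [0, 0]])
E22 = np.array([[0, 0], [0, 1]])
basis = {"E11": E11, "E12": E12, "E22": E22}

# The refined idempotent set {E11, E22} is complete and orthogonal.
assert (E11 @ E11 == E11).all() and (E22 @ E22 == E22).all()
assert (E11 @ E22 == 0).all()
assert (E11 + E22 == np.eye(2, dtype=int)).all()

# Sort each basis element into its Peirce component e A f.
components = {}
for name, b in basis.items():
    for i, e in (("1", E11), ("2", E22)):
        for j, f in (("1", E11), ("2", E22)):
            if (e @ b @ f == b).all():
                components.setdefault((i, j), []).append(name)

print(components)
```

The idempotents land on the diagonal components and $E_{12} = E_{11} E_{12} E_{22}$ is the lone off-diagonal basis element: the single arrow of weight one in the refined basis graph.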

The basis graph for any complete orthogonal set of *primitive* idempotents is an invariant of the algebra, since any two such sets are conjugate. If the algebra is basic, then the quiver is with respect to right modules just the basis graph with all arrows of weight greater than one erased, since the quiver is defined by considering the dimension of $e_j (J/J^2) e_i$ (if one uses left modules, one must take the dual). If the algebra is not basic, then all matrix blocks, together with the arrows of weight zero representing matrix units, must be shrunk to points in the quiver, with a corresponding coalescence of arrows.

#### 3. Finite Monoids with a Chosen Commutative Idempotent Submonoid

Our aim in this section is to construct an alternative basis of the monoid algebra which still corresponds one-to-one to the set of elements of the monoid but which behaves well with respect to the basis graph defined above. In particular, we will want the idempotents to be orthogonal and the other basis elements to respect the idempotent set as in Definition 1.

We now consider a finite monoid $M$ with a chosen commutative idempotent submonoid $L$. If $M$ has an absorbing zero element $z$, we assume that $z$ is also in $L$. A trivial choice would be to take $L$ to be the identity element, together with $z$ if it exists. Because of the commutativity, the set $L$ is partially ordered by the relation $e \leq f$ when $ef = e$ and is a semilattice under this ordering, with the greatest lower bound of two elements being their product. Since $L$ contains a maximal element, the identity, $L$ is also a lattice, since the least upper bound of two elements $e$ and $f$ is the greatest lower bound of all the elements of $L$ which dominate both $e$ and $f$, and this set is nonempty because the identity always dominates both $e$ and $f$.
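The lattice structure can be made concrete. In the sketch below (a hypothetical encoding, not from the paper), commuting idempotents are modeled as diagonal 0-1 matrices stored as bitmasks, so the product is bitwise AND and the least upper bound is computed exactly as described: the greatest lower bound of all elements dominating both arguments.

```python
from itertools import combinations

n = 4
TOP = (1 << n) - 1  # the identity idempotent (all-ones diagonal)

# A commutative idempotent submonoid: diagonal 0/1 matrices encoded
# as bitmasks; the product of two of them is bitwise AND.
L = {TOP, 0b0011, 0b0110, 0b0010, 0b0000}
assert all((e & f) in L for e in L for f in L)  # closed under the product

def leq(e, f):          # e <= f  iff  e * f = e
    return e & f == e

def join(e, f):
    # Greatest lower bound of all elements of L dominating both e and f;
    # the set is nonempty because TOP dominates everything.
    dominators = [g for g in L if leq(e, g) and leq(f, g)]
    m = TOP
    for g in dominators:
        m &= g
    return m

# join really is an upper bound lying in L:
for e, f in combinations(L, 2):
    j = join(e, f)
    assert j in L and leq(e, j) and leq(f, j)

print(bin(join(0b0011, 0b0110)))
```

Here the only element of `L` dominating both `0b0011` and `0b0110` is the identity, so their join is `TOP`, even though their join in the full lattice of subsets would be `0b0111`.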

*Definition 2. *Let $M$ be a monoid with operation $\cdot$. Let $L$ be a commutative idempotent submonoid. For an element $m$ of the monoid $M$, the *left $L$-idempotent* $\operatorname{lfix}(m)$ is the product of all the idempotents $e$ in the lattice $L$ such that $e \cdot m = m$. Similarly, the *right $L$-idempotent* $\operatorname{rfix}(m)$ is the product of all the idempotents $e$ in the lattice $L$ such that $m \cdot e = m$. At least one such idempotent always exists, since the identity of the monoid satisfies the condition.
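A minimal illustration of Definition 2, assuming the right-action composition convention used later for the endofunction monoid; the choice of the monoid of all maps on a two-element set, with the submonoid consisting of the identity and one constant map, is made here purely for illustration.

```python
from itertools import product

# Maps {0,1} -> {0,1} encoded as tuples; mul(f, g) applies f first,
# then g (a right action).
def mul(f, g):
    return tuple(g[f[x]] for x in range(len(f)))

M = list(product(range(2), repeat=2))   # all 4 maps
IDENT = (0, 1)
C0 = (0, 0)                             # a constant map, idempotent
L = [IDENT, C0]                         # commutative idempotent submonoid

def lfix(m):
    """Product of all e in L with e*m = m (the left L-idempotent)."""
    acc = IDENT
    for e in L:
        if mul(e, m) == m:
            acc = mul(acc, e)
    return acc

def rfix(m):
    """Product of all e in L with m*e = m (the right L-idempotent)."""
    acc = IDENT
    for e in L:
        if mul(m, e) == m:
            acc = mul(acc, e)
    return acc

for m in M:
    print(m, lfix(m), rfix(m))
```

The constant map `(1, 1)` has left idempotent `C0` but right idempotent the identity: left and right $L$-idempotents need not agree.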

*Remark 3. *If $M$ has an absorbing zero $z$, then $z$ is not the left or right idempotent of any element except $z$ itself.

We now pass to the reduced monoid algebra $A$ of the monoid $M$ over a field $k$, where $k$ is a field sufficiently large that the quotient of $A$ by its radical splits completely as a sum of matrix blocks over $k$. One might, for example, take $k$ to be algebraically closed. If $M$ does not contain an absorbing zero element, then we set $M_0 = M$ and $L_0 = L$ and define $A = kM$ and $\Lambda = kL$. If $M$ does contain a necessarily unique zero element $z$, then we let $A$ be the quotient $kM/kz$ with subalgebra $\Lambda = kL/kz$. This reduction is made because the full monoid algebra $kM$ is isomorphic to a direct product of an algebra isomorphic to $A$ and a copy of the field $k$ representing the ideal $kz$, which gives, among other disadvantages, a decomposable algebra with a disconnected quiver, where algebra representation theorists prefer to work with indecomposable algebras. From the point of view of representation theory the object of study is the algebra $A$. We define sets $M_0 = M \setminus \{z\}$ and $L_0 = L \setminus \{z\}$. Note that in the second case $M_0$ and $L_0$ may no longer be monoids, since the product of two elements different from $z$ might be $z$.

For each $m \in M_0$, we let $\bar{m}$ be equal to $m$ in the case without the absorbing zero and to the image of $m$ in the quotient in the case with the absorbing zero. This will allow us to treat both cases together. We take as bases of $A$ and $\Lambda$ the sets $\bar{M}_0 = \{\bar{m} \mid m \in M_0\}$ and $\bar{L}_0 = \{\bar{e} \mid e \in L_0\}$, respectively.

*Definition 4. *A *multiplicative basis* in a finite dimensional algebra is a basis such that the product of two elements of the basis is either zero or an element of the basis. Thus a basis $S$ is a multiplicative basis of $A$ if and only if $A$ is the semigroup algebra of $S$.

The basis $\bar{M}_0$ is a multiplicative basis of the reduced monoid algebra $A$ defined above, and $\bar{L}_0$ is a multiplicative basis for $\Lambda$.

By the Munn-Ponizovskii theorem [7], the isomorphism classes of simple modules are in one-to-one correspondence with the simple modules of the various isomorphism classes of maximal subgroups, one for each regular $\mathcal{J}$-class, and we also assume that the characteristic of $k$ does not divide the orders of any of the maximal subgroups.

It is a standard fact about rings that if $e$ and $f$ are idempotents satisfying $ef = fe = e$, then $f - e$ is an idempotent orthogonal to $e$. By a construction of Solomon [8], the reduced monoid algebra $\Lambda$, which is a subalgebra of $A$, has a basis of orthogonal idempotents in one-to-one correspondence with the elements of $L_0$, obtained by a process of Moebius inversion. These idempotents are actually primitive in $\Lambda$, but they are not necessarily primitive in $A$, as the example of taking $L$ to be the trivial submonoid containing only the identity demonstrates. The Moebius function $\mu(e, f)$ for elements of a partially ordered set is defined recursively to be $1$ if $e = f$, to be $0$ if $e \not\leq f$, and to be $-\sum_{e \leq g < f} \mu(e, g)$ for $e < f$.
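Solomon's construction can be checked numerically on a small semilattice. The sketch below (an illustrative encoding: subsets of a two-element set with meet given by intersection) implements the recursive Moebius function from the text and verifies that the resulting elements $\eta_e = \sum_{f \leq e} \mu(f, e) f$ are orthogonal idempotents summing to the identity.

```python
# Semilattice L: subsets of {0,1} as bitmasks; the meet is bitwise AND.
L = [0b00, 0b01, 0b10, 0b11]

def leq(e, f):
    return e & f == e

def mu(e, f):
    # Recursive Moebius function of the poset, as in the text.
    if e == f:
        return 1
    if not leq(e, f):
        return 0
    return -sum(mu(e, g) for g in L if leq(e, g) and leq(g, f) and g != f)

# Elements of the semilattice algebra: {semilattice element: coefficient};
# multiplication is convolution through the meet e & f.
def amul(a, b):
    out = {}
    for e, ce in a.items():
        for f, cf in b.items():
            out[e & f] = out.get(e & f, 0) + ce * cf
    return {e: c for e, c in out.items() if c}

# Moebius inversion: eta_e = sum over f <= e of mu(f, e) * f.
eta = {e: {f: mu(f, e) for f in L if leq(f, e) and mu(f, e)} for e in L}

# The eta_e form a complete orthogonal set of idempotents.
for e in L:
    for f in L:
        assert amul(eta[e], eta[f]) == (eta[e] if e == f else {})
total = {}
for e in L:
    for f, c in eta[e].items():
        total[f] = total.get(f, 0) + c
assert {f: c for f, c in total.items() if c} == {0b11: 1}
print("orthogonal, idempotent, and complete")
```

For instance $\eta_{01} = \overline{01} - \overline{00}$ and $\eta_{11} = \overline{11} - \overline{01} - \overline{10} + \overline{00}$, matching the inclusion-exclusion pattern used throughout the paper.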

*Definition 5. *For each $e \in L_0$, let $\eta_e$ be the corresponding primitive idempotent of $\Lambda$. The set of all the $\eta_e$ will be denoted by $N$.

For any $m \in M_0$, let
$$b_m = \eta_{\operatorname{lfix}(m)}\, \bar{m}\, \eta_{\operatorname{rfix}(m)}.$$

Note that each idempotent $\eta_e$ is the sum of the element $\bar{e}$ with a linear combination of lower idempotents $\bar{f}$, $f < e$. The set of all $b_m$ will be denoted by $B$, so that for each $m \in M_0$, we have an element $\bar{m}$ of the multiplicative basis $\bar{M}_0$ of $A$ and an element $b_m$ of the set $B$, also lying in $A$. Our aim in this section is to prove that $B$ is an alternative basis of $A$ which behaves well with respect to the basis graph.

Lemma 6. *An idempotent $e \in L$ is its own left and right $L$-idempotent.*

*Proof. *Obviously $e \cdot e = e$. However, if $f$ is any other element of $L$ such that $e \cdot f = e$, then by the definition of the partial ordering on idempotents $e \leq f$. This shows that $e$ is the minimal right idempotent fixing $e$, and the proof on the left is dual.

We define a collection $\mathcal{P}$ of subsets of $M_0$ by
$$P_{e,f} = \{\, m \in M_0 \mid \operatorname{lfix}(m) = e,\ \operatorname{rfix}(m) = f \,\}, \quad e, f \in L_0.$$
This is the full collection of $L$-*Peirce sets* of $M$ if there is no zero in the monoid. When there is a zero $z$, there would be an additional Peirce set $\{z\}$, which we ignore because it vanishes under the passage to the reduced monoid algebra. In mild abuse of notation, we refer to the sets $P_{e,f}$ as $L$-Peirce sets.

We impose a linear ordering on $L_0$ which is subordinate to the natural partial ordering on $L_0$ and then a linear ordering on $M_0$ which is subordinate to the induced ordering on the $L$-Peirce sets. We impose the same ordering on $B$.

Lemma 7. *The linear transformation which maps each $\bar{m}$ to $b_m$, written in the ordered basis $\bar{M}_0$, is upper triangular with $1$ on the diagonal and thus invertible.*

*Proof. *By the construction of the primitive idempotents of $\Lambda$, the idempotent $\eta_e$ is given by $\bar{e}$ with coefficient $1$ plus a linear combination of lower idempotents. Thus $b_m$ is the sum of $\bar{m}$ with coefficient $1$ and a linear combination of elements of $\bar{M}_0$ from strictly smaller $L$-Peirce sets. This gives the desired result.

Proposition 8. *The set $B$ is a basis for $A$ whose elements lie in the components of the Peirce decomposition of $A$ by the idempotent set $N$. Thus the dimension of each Peirce component can be obtained from the monoid by calculating the number of elements with left idempotent $e$ and right idempotent $f$.*

*Proof. *By Lemma 7, the matrix mapping $\bar{M}_0$ to $B$ is invertible. Since $\bar{M}_0$ is a basis of $A$, so is $B$.

Since every $b_m$ has left and right idempotents from $N$, the set $B$ respects the set of idempotents $N$ as in Definition 1, and each $b_m$ lies in an $N$-Peirce component. Since they are linearly independent, the number of $b_m$ in any $N$-Peirce component of $A$ is less than or equal to its dimension, but since the total number of elements in $B$ is equal to the dimension of $A$, each inequality must in fact be an equality.

This proposition demonstrates the possibility of choosing a coarse set of idempotents which, after conversion to appropriate representation algebra idempotents using inclusion-exclusion methods, will give a one-to-one correspondence between semigroup elements and basis elements for a basis of the monoid algebra respecting $N$, thus indicating that the aspects of the monoid algebra visible at this level of refinement are natural to the semigroup. To get the maximum use out of the proposition, one should choose the coarse set of idempotents as fine as possible without losing commutativity. The best situation of all is that in which the set of idempotents is actually a complete set of primitive idempotents, and we will address that case in the last section.

#### 4. Examples

In general $B$ will not be a multiplicative basis, and even when it is, as in Example 1 below, the multiplication will not necessarily coincide with the multiplication in the monoid; that is, we may have $b_m b_{m'} \neq b_{m m'}$.

Our first example is a very natural one, in which $N$ is a complete set of primitive idempotents.

*Example 1 (monoid with multiplicative basis in which the $b$-operator does not respect multiplication). *Let $M$ be the monoid of $n \times n$ matrix units $E_{ij}$, together with an identity element $I$ and a zero element $z$. The submonoid $L$ is taken to be the matrix units $E_{ii}$ of the diagonal, together with $I$ and $z$, so that in fact $L$ is all of $E$. The reduced monoid algebra $A$ is isomorphic to the semisimple algebra $M_n(k) \times k$. In this case, since the poset of idempotents is given by $z < E_{ii} < I$ for all $i$, the inclusion and exclusion processes are very simple, giving $\eta_z = z$, $\eta_{E_{ii}} = E_{ii} - z$ for each $i$, and $\eta_I = I - \sum_i E_{ii} + (n-1)z$. Dividing by the ideal $kz$ to get $A$, we then have $\eta_{E_{ii}} = \bar{E}_{ii}$ for each $i$ and $\eta_I = \bar{I} - \sum_i \bar{E}_{ii}$. The basis graph is a directed graph with an isolated vertex for $\eta_I$ and vertices $\eta_{E_{11}}, \dots, \eta_{E_{nn}}$, with one arrow in each direction between any pair of the latter vertices. In this case the basis $B$ is multiplicative, since $b_{E_{ij}} b_{E_{kl}} = \delta_{jk} b_{E_{il}}$ and $b_{E_{ii}} b_I = 0$. Note that this last equation shows that we do not always have $b_m b_{m'} = b_{m m'}$, even when the basis is multiplicative.
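The computations in Example 1 are easy to verify by machine. The following sketch (with an ad hoc encoding of the matrix-unit monoid, names invented here) checks in the reduced algebra that $\eta_I$ is an idempotent orthogonal to every diagonal unit, so the $b$-operator cannot respect the monoid product $E_{ii} \cdot I = E_{ii}$.

```python
n = 2
I, Z = "I", "Z"

def mmul(a, b):
    # Multiplication in the monoid of matrix units (i, j) = E_ij with
    # identity I and absorbing zero Z.
    if a == I:
        return b
    if b == I:
        return a
    if Z in (a, b):
        return Z
    (i, j), (k, l) = a, b
    return (i, l) if j == k else Z

def amul(x, y):
    # Product in the reduced monoid algebra: coefficients of Z are dropped.
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            p = mmul(a, b)
            if p != Z:
                out[p] = out.get(p, 0) + ca * cb
    return {m: c for m, c in out.items() if c}

# Inclusion-exclusion idempotent eta_I = I - sum_i E_ii (image of z is 0).
eta_I = {I: 1}
for i in range(n):
    eta_I[(i, i)] = -1

assert amul(eta_I, eta_I) == eta_I           # eta_I is idempotent
assert amul({(0, 0): 1}, eta_I) == {}        # orthogonal to E_00
print("checked for n =", n)
```

The last assertion is exactly the statement $b_{E_{ii}} b_I = 0$, even though $E_{ii} \cdot I = E_{ii}$ in the monoid.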

This monoid has three $\mathcal{J}$-classes, one containing all the matrix units, one containing the identity, and one containing only $z$. The quiver of $A$ consists of two points, since the equations $E_{ij} E_{ji} = E_{ii}$ and $E_{ji} E_{ij} = E_{jj}$ show that all of the idempotents $E_{ii}$ are conjugate, as defined in the Introduction.

Example 1 represents one extreme possibility, in which almost all idempotents of $M$ are in a single $\mathcal{J}$-class. The opposite extreme, in which each element of $L$ corresponds to a different $\mathcal{J}$-class, is actually a common situation for the important class of linear algebraic monoids, which have been extensively studied by Putcha [9].

*Definition 9. *A *cross-section lattice* for $M$ is a commutative idempotent submonoid $L$ which contains exactly one element from each regular $\mathcal{J}$-class.

Linear algebraic monoids arise as the Zariski closures of linear algebraic groups, but there are more general examples. The cross section lattices in the linear algebraic monoids arise as the set of idempotents inside a maximal torus. Most linear algebraic monoids are infinite, but they have finite versions defined over finite fields.

*Example 2 (monoid with cross-section lattice). *If $A$ is any finite dimensional algebra, then its multiplicative monoid is a linear algebraic monoid. Over a finite field, this will be a finite monoid. Let $f_1, \dots, f_r$ be a complete set of primitive idempotents for $A$ as an algebra, ordered so that conjugate idempotents are adjacent, with $f_1, \dots, f_s$ being representatives of the conjugacy classes of idempotents. If, as a concrete instance of this construction, $A$ was the matrix algebra $M_n(\mathbb{F}_2)$ over the field of two elements, then the matrix units $E_{ii}$, $i = 1, \dots, n$, would be such a set of primitive idempotents, but they would all be conjugate, so we would have $s = 1$.

Returning to the general case, for every subset $T \subseteq \{1, \dots, s\}$, let $e_T = \sum_{i \in T} f_i$. Then the set of all the $e_T$ is a cross-section lattice of the monoid, and there are $2^s$ regular $\mathcal{J}$-classes.

Most finite monoids do not have cross-section lattices. When a cross-section lattice exists, it is a natural but not inevitable choice for the commutative idempotent submonoid $L$.

*Example 3 (another monoid with cross-section lattice). *Let $P$ be a poset with elements $\{1, \dots, n\}$ arranged so that $i \leq j$ in $P$ implies that $i \leq j$ as integers. Let $M$ be the submonoid of the $n \times n$ matrices over the field $\mathbb{F}_2$ of two elements in which an element $a$ satisfies $a_{ij} = 0$ whenever $i \not\leq j$ in $P$. In particular, all the matrices in $M$ are upper triangular. Let $L$ be the set of all diagonal matrices in $M$, which are all idempotents because $0$ and $1$ are idempotents of the field. The submonoid $L$ contains the zero matrix and is a cross-section lattice as in Definition 9 above.

The idempotents $e$ in $L$ are primitive only when the diagonal matrix has only one nonzero entry. Otherwise the submonoid $eMe$ has a nontrivial maximal subgroup $G_e$, and in order to find a set of primitive idempotents for $A$, we will have to decompose the group algebra $kG_e$ into its irreducible representations.

In an earlier paper [10], we proposed an even coarser set of idempotents based on intervals along the diagonal.

There are many important examples of cross section lattices. The one we will consider with particular care, which motivated our search for a general coarse structure, is the monoid of endofunctions on a finite set. This monoid has no zero element.

*Example 4 (obtaining a complete decomposition of the identity into orthogonal idempotents for a specific representation of $T_n$). *Let $T_n$ be the monoid of functions from the set $\{1, \dots, n\}$ into itself, acting on the right, with composition as the operation. The monoid has $n^n$ elements. This monoid has a natural filtration by the two-sided ideals
$$I_r = \{\, f \in T_n \mid |\operatorname{im}(f)| \leq r \,\},$$
with $I_1 \subset I_2 \subset \dots \subset I_n = T_n$. We define a representation $\rho$ of $T_n$ as operating from the right on a vector space $V$ by sending $f$ to the matrix $\rho(f)$ with entries $1$ in positions $(i, f(i))$ and $0$ everywhere else. Then $f \in I_r$ if and only if $\rho(f)$ has rank at most $r$. If we denote the constant map with unique image $j$ by $c_j$, then $\rho(c_j)$ is the matrix with all ones in column $j$. These are the only matrices in the image of $\rho$ of rank 1, so
$$I_1 = \{c_1, \dots, c_n\}.$$
These constant maps will play a special role in what follows, so we note for future reference that if $f$ is an arbitrary element of $T_n$, then
$$c_j \cdot f = c_{f(j)}, \qquad f \cdot c_j = c_j.$$

We now fix the distinguished index $1$ and define embeddings $\phi_i$ of $T_i$ in $T_n$ by letting $\phi_i(f)$ be the element of $T_n$ which acts like $f$ on $\{1, \dots, i\}$ and is constant, equal to $1$, on the remaining elements. This embedding is a homomorphism of semigroups, though not a homomorphism of monoids. If we let $1_i$ represent the identity element of $T_i$, then we set $e_i = \phi_i(1_i)$. Note that $e_n = 1$ and $e_1 = c_1$.

Since each $e_i$ is the image of an idempotent under a homomorphism of semigroups, it is itself an idempotent. Furthermore, $e_i e_j = e_j e_i = e_{\min(i,j)}$, so that, under the ordering of idempotents in a semigroup by which $e \leq f$ when $ef = fe = e$, we have that the ordering of the $e_i$ is according to the ordering of the indices and by the order of $\mathcal{J}$-classes. Furthermore, the set $L = \{e_1, \dots, e_n\}$ is, in fact, a cross-section lattice in $T_n$. We define
$$\eta_1 = e_1, \qquad \eta_i = e_i - e_{i-1}, \quad i = 2, \dots, n,$$
which is the Moebius inversion for a chain. Then the idempotent $\eta_i$ is orthogonal to all the $e_j$ with $j < i$ and thus to all the idempotents $\eta_j$ with $j < i$. A simple telescoping argument shows that
$$\sum_{i=1}^{n} \eta_i = e_n = 1,$$
so $\{\eta_1, \dots, \eta_n\}$ is a complete, orthogonal set of idempotents for $kT_n$.
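These claims are easy to verify computationally for small $n$. The sketch below uses 0-based indexing, with $0$ playing the role of the distinguished index, and checks orthogonality and the telescoping sum inside the monoid algebra of the endofunctions on three points.

```python
n = 3

def mul(f, g):          # right action: f acts first, then g
    return tuple(g[f[x]] for x in range(n))

IDENT = tuple(range(n))

def e(i):
    # acts as the identity on {0,...,i} and sends the rest to 0
    return tuple(x if x <= i else 0 for x in range(n))

def amul(a, b):
    # Elements of the monoid algebra as {map: coefficient} dictionaries.
    out = {}
    for f, cf in a.items():
        for g, cg in b.items():
            h = mul(f, g)
            out[h] = out.get(h, 0) + cf * cg
    return {f: c for f, c in out.items() if c}

# Moebius inversion along the chain e_0 < e_1 < ... < e_{n-1} = identity.
eta = [{e(0): 1}] + [{e(i): 1, e(i - 1): -1} for i in range(1, n)]

for i in range(n):
    for j in range(n):
        # pairwise orthogonal idempotents
        assert amul(eta[i], eta[j]) == (eta[i] if i == j else {})

total = {}
for t in eta:
    for f, c in t.items():
        total[f] = total.get(f, 0) + c
assert {f: c for f, c in total.items() if c} == {IDENT: 1}  # telescoping sum
print("complete orthogonal set verified for n =", n)
```

The telescoping is visible directly: $\eta_1 + (\eta_2 - \eta_1$-style cancellations$)$ collapse to the top idempotent, which is the identity map.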

*Definition 10. *The right rank $\operatorname{rrank}(f)$ of a map $f$ is the width of its image, that is, the largest number in the image. If $\operatorname{rrank}(f) = r$, then the last $n - r$ columns of $\rho(f)$ are zero and the $r$th column is nonzero. The left rank $\operatorname{lrank}(f)$ of a map is the index of the largest number $j$ such that $f(j) \neq f(1)$, unless the map is a constant; in such case the left rank is $1$. In terms of matrices, this means that $\operatorname{lrank}(f)$ is the lowest row of $\rho(f)$ which is not equal to the first row.

In fact, the right rank of $f$ is the smallest $r$ such that $f \cdot e_r = f$, since $e_r$ acts as the identity on the first $r$ columns and the “tail” of $\rho(f)$ does not act because all those columns are zero. The product is composition, but because the action is from the right, $f$ first acts and then $e_r$. The left rank of $f$ is the smallest $i$ such that $e_i \cdot f = f$, since the first $i$ elements go as they go in $f$, and the remainder go to $f(1)$. The constant map $c_j$ has right rank $j$ and left rank $1$. The left and right ranks of $e_i$ are both $i$.
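The two rank characterizations can be tested exhaustively for small $n$. As in the previous sketch, the code below uses 0-based indexing with $0$ as the distinguished index, so both ranks are shifted down by one relative to the text.

```python
from itertools import product

n = 3

def mul(f, g):                      # right action: f first, then g
    return tuple(g[f[x]] for x in range(n))

def e(i):                           # identity on {0,...,i}, rest to 0
    return tuple(x if x <= i else 0 for x in range(n))

def rrank(f):                       # largest value in the image
    return max(f)

def lrank(f):                       # largest j with f(j) != f(0), else 0
    return max((j for j in range(n) if f[j] != f[0]), default=0)

for f in product(range(n), repeat=n):
    # right rank = least r with f * e_r = f
    assert rrank(f) == min(r for r in range(n) if mul(f, e(r)) == f)
    # left rank = least i with e_i * f = f
    assert lrank(f) == min(i for i in range(n) if mul(e(i), f) == f)
print("rank characterizations hold for all", n ** n, "maps")
```

The constant map sending everything to $j$ has right rank $j$ and left rank $0$ here, matching the text's $c_j$ (right rank $j$, left rank $1$) after the index shift.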

The basis graph of $kT_1$ is the single point $\eta_1$. The basis graph of $kT_2$ with respect to the idempotent set $\{\eta_1, \eta_2\}$ consists of the two vertices, an arrow from $\eta_1$ to $\eta_2$ given by the map $c_2$, and a loop at $\eta_2$ given by the map which transposes $1$ and $2$. This will be the appropriate coarse structure. If we refine this by splitting $\eta_2$ into two primitive idempotents, then the loop vanishes to produce the second idempotent and the arrow comes out of only one of the primitive idempotents, corresponding to the nontrivial representation of the corresponding maximal subgroup $S_2$.

It should be pointed out that, where a choice is involved in the selection of the lattice $L$, the operation of sending $e$ to $\eta_e$ may behave strangely with regard to the $\mathcal{J}$-classes. In the example of the mappings, $T_n$, all the constant maps $c_j$ are idempotents lying in the lowest $\mathcal{J}$-class. By the choice we made of the lattice $L$, $c_1 = e_1$ remains an idempotent, whereas all the other $c_j$ become arrows in the radical. All the various choices are conjugate, obtained one from the other by permutations of the underlying set on which the mappings act. From the point of view of algebra representation theory this is quite natural, since the difference between two conjugate idempotents in a basic algebra is a linear combination of nonlooped arrows. However, it does mean that not all members of a $\mathcal{J}$-class have similar representations in the basis graph.

One solution to this problem, then, would be to restrict ourselves to $\mathcal{J}$-trivial monoids, those for which there is a unique idempotent in each regular $\mathcal{J}$-class. However, Example 1 above shows that this would be too restrictive.

Returning to the question of multiplicative bases, it is not hard to check, by direct calculation, in the case of $T_2$ of Example 4 above that the basis $B$ is multiplicative. We turn now to $T_3$ and the $14$-dimensional component with right and left idempotent $\eta_3$, in order to show that $B$ is not, in general, a multiplicative basis. The elements not in the radical correspond to the six permutations of 1, 2, and 3, and the elements in the radical-squared correspond, under the mapping of $f$ to the sequence of its values $(f(1), f(2), f(3))$, to all the arrangements of two distinct numbers such that $3$ is among them, and the first and last numbers are different. These are $(1,1,3)$, $(1,3,3)$, $(3,1,1)$, $(3,3,1)$, $(2,2,3)$, $(2,3,3)$, $(3,2,2)$, and $(3,3,2)$. Let us set $u = b_f$ and $v = b_g$ for a suitable pair of these elements and calculate the product $uv$. A slightly tedious calculation of the sixteen terms in the product shows that eight cancel each other out, while the remaining eight sum to a combination which is not a single element of $B$. Thus the basis $B$ is not multiplicative.

#### 5. The Fine Structure

In semigroup theory, a pseudovariety is a collection of finite semigroups closed under homomorphic images, subsemigroups, and finite direct products. Although we have been studying monoids, the property of being a monoid is not stable under taking subsemigroups, as we see from Example 1, since the matrix units together with the zero are a semigroup. Since every semigroup can be converted into a monoid by adjoining an identity element, we will content ourselves with trying to find pseudovarieties for which there is a coarse structure which coincides with the fine structure. The point at which this correspondence most readily breaks down is at the point where the local subgroups are broken down into irreducible representations. Thus for discussing the fine structure we will consider only semigroups for which the maximal subgroups are all trivial. This is the pseudovariety of aperiodic semigroups.

*Example 5 (a $\mathcal{J}$-trivial monoid). *Consider the monoid of all partial 1-1 maps from $\{1, \dots, n\}$ to itself for some natural number $n$. This has a submonoid, consisting of all the partial maps which are order preserving and extensive, which means that if $f(i)$ is defined for some $i$, then $f(i) \geq i$. These monoids are not just aperiodic; they are actually $\mathcal{J}$-trivial [5].

For this particular example, the connected components of the basis graph are determined by a natural number $r$ which is the common cardinality of the domain and image. The right and left idempotents are the identity maps on the domain and image. The quiver is the Hasse diagram of the partial order on subsets $U$ and $V$ of size $r$ given by $U \preceq V$ if there is an order-preserving, extensive map with domain $U$ and image $V$. The basis graph is the entire diagram generated by the partial order.

The representation theory of $\mathcal{J}$-trivial monoids has received considerable attention recently [5]. There is a homomorphism from a $\mathcal{J}$-trivial monoid into its lattice of idempotents, and when this is extended to the monoid algebras, the kernel corresponds to the radical in the representation algebra. There is a one-to-one correspondence between the idempotents of the monoid and the irreducible representations in the representation algebra. See [5] for details.

The idempotents of a $\mathcal{J}$-trivial monoid do not, in general, commute, and thus we cannot apply our theory to every $\mathcal{J}$-trivial monoid. However, in Example 5 the idempotents are partial identity maps, which do commute with each other. Thus the idempotents in that example form a lattice in which the meet is given by multiplication. We conclude that the idempotents are all primitive and thus, by Proposition 8, the coarse and fine structures coincide.
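The claim that the idempotents of Example 5 are partial identities (and hence commute) can be confirmed by brute force for a small ground set; the encoding below of partial maps as sorted tuples of (point, image) pairs is an implementation choice, not notation from the paper.

```python
from itertools import combinations

n = 3
X = range(n)

def partial_maps():
    # all 1-1, order-preserving, extensive partial maps on {0,...,n-1}
    for dom in (c for r in range(n + 1) for c in combinations(X, r)):
        for img in combinations(X, len(dom)):
            f = dict(zip(dom, img))       # the order-preserving pairing
            if all(f[i] >= i for i in dom):   # extensive
                yield tuple(sorted(f.items()))

def compose(f, g):
    # f acts first, then g (composition of partial maps)
    df, dg = dict(f), dict(g)
    return tuple(sorted((x, dg[df[x]]) for x in df if df[x] in dg))

C = list(partial_maps())
idem = [f for f in C if compose(f, f) == f]

# every idempotent is a partial identity map ...
assert all(x == y for f in idem for x, y in f)
# ... hence the idempotents commute with each other
assert all(compose(e, f) == compose(f, e) for e in idem for f in idem)
print(len(C), "maps,", len(idem), "idempotents")
```

For an injective map, $f(f(x)) = f(x)$ forces $f(x) = x$, which is why only the partial identities survive the idempotency check; the assertions confirm this computationally.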

The simplest case in which we can find a semilattice of idempotents is a case like Example 5 where all the idempotents commute, the pseudovariety **IC**. Ash [11] proved that if a semigroup lies in this pseudovariety, then it is a homomorphic image of a subsemigroup of an inverse semigroup, one for which to each element $s$ is associated a unique element $s^{-1}$ such that $s s^{-1} s = s$ and $s^{-1} s s^{-1} = s^{-1}$. By uniqueness, since for any idempotent $e$ the choice $e^{-1} = e$ satisfies this condition, each idempotent is its own partial inverse.

*Example 6 (monoid where the coarse and fine structures coincide). *Let $X = \{1, \dots, n\}$ for some natural number $n$, and let $M$ be the set of one-to-one order-preserving partial maps from $X$ to $X$. The idempotents in this monoid are the partial identity maps and thus commute with each other. It is aperiodic because, for any idempotent $e$, the only element in the maximal subgroup $G_e$ is $e$ itself, for if $U$ is the common domain and range of $e$, the only order-preserving one-to-one map from $U$ to itself is the identity. Although aperiodic, the monoid is not $\mathcal{J}$-trivial. The $\mathcal{J}$-classes are determined by the number $r$ of elements in the domain, so that the $\mathcal{J}$-class of rank $r$ contains the idempotents corresponding to all the subsets of $X$ with $r$ elements, as well as all order-preserving partial maps whose domain and range have $r$ elements. Since for each pair of sets $U$ and $V$ with $r$ elements there is a unique one-to-one order-preserving map between them, the principal factor corresponding to each $\mathcal{J}$-class has a reduced algebra isomorphic to the matrix algebra whose size is the number of idempotents in the class. Since these matrix algebras generate the matrix blocks of the Munn-Ponizovskii theorem, the idempotents $\eta_e$ are in fact primitive, since they correspond to the diagonal idempotents of the matrix blocks.
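The counting underlying Example 6 is easy to check by enumeration: choosing a domain and a range of common size $r$ determines the map uniquely (pair the elements off in increasing order), so the monoid has $\sum_r \binom{n}{r}^2$ elements with $\binom{n}{r}$ idempotents in each rank-$r$ class. A small sketch:

```python
from itertools import combinations
from math import comb

n = 3
X = range(n)

# One-to-one order-preserving partial maps: a domain U and a range V of
# the same size determine the map uniquely, since the only
# order-preserving bijection U -> V pairs elements off in increasing order.
maps = [tuple(zip(u, v))
        for r in range(n + 1)
        for u in combinations(X, r)
        for v in combinations(X, r)]

assert len(maps) == sum(comb(n, r) ** 2 for r in range(n + 1))

# The idempotents are exactly the partial identities (u = v).
idem = [f for f in maps if all(x == y for x, y in f)]
assert len(idem) == 2 ** n
print(len(maps), "maps,", len(idem), "idempotents")
```

For $n = 3$ this gives $1 + 9 + 9 + 1 = 20$ elements and $8$ idempotents, one per subset of $X$.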

For any semigroup $S$, an ideal $I$ is a subset closed under multiplication by elements of $S$, and the Rees quotient semigroup $S/I$ is the semigroup obtained by replacing the elements of $I$ by a single zero element $z$. The *principal factor* of a semigroup corresponding to a $\mathcal{J}$-class $J$ is $I(J)/I'(J)$, where $I(J)$ is the ideal generated by $J$ and $I'(J) = I(J) \setminus J$; if $I'(J)$ is empty, then we understand the quotient to be $I(J)$ itself. The semigroups which can appear as principal factors are of a limited number of types. They can be semigroups with zero multiplication, $0$-simple semigroups, or ideal simple semigroups.

Proposition 11. *Let $M$ be a monoid with zero $z$. If the monoid is aperiodic and has commuting idempotents, the coarse structure of $A$ with respect to the set of all nonzero idempotents will coincide with its fine structure. There is a one-to-one correspondence between the nonzero elements of the monoid and the set of vertices and arrows of the basis graph.*

*Remark 12. *The monoid thus belongs to the intersection of the pseudovariety of aperiodic semigroups and the pseudovariety **IC** of semigroups with commuting idempotents. We mention this notation because some of the theorems we rely on are phrased in the language of pseudovarieties.

*Proof. *Because we are in the pseudovariety **IC** we can choose $L$ to be the set of all idempotents and it will be a submonoid. We let $J_1, \dots, J_r$ be the regular $\mathcal{J}$-classes in $M$, where $J_1$ is the class of the identity and $J_r$ is the class of $z$. By the Munn-Ponizovskii theorem, the radical quotient of the representation algebra of $M$ is of the form
$$A/\operatorname{rad} A \cong \bigoplus_i M_{d_i}(k G_i),$$
where the $G_i$ are the maximal subgroups of the regular $\mathcal{J}$-classes $J_i$ and $d_i$ is a number depending on the structure of the class $J_i$. We have assumed that the maximal subgroups are trivial, so each regular $\mathcal{J}$-class gives a single matrix block. Furthermore, by the standard reference [12], we find that $d_i$ for a monoid in **IC** is exactly the number of idempotents in the $\mathcal{J}$-class.

By general properties of the semigroups in the pseudovariety **IC**, the set of elements in regular $\mathcal{J}$-classes is a submonoid of $M$. Since each of these regular $\mathcal{J}$-classes contains an idempotent, the regular principal factors cannot have zero multiplication, and since we have assumed that $M$ has a zero, they cannot be ideal simple, so each must be $0$-simple, and, in fact, completely $0$-simple since it contains a regular $\mathcal{J}$-class. A regular semigroup with commuting idempotents is an inverse semigroup; that is, every element $s$ has a unique inverse $s^{-1}$ belonging to the same $\mathcal{J}$-class as $s$. Each regular principal factor is thus also an inverse semigroup and is, in fact, a Brandt semigroup, a completely $0$-simple semigroup which is an inverse semigroup. It is also aperiodic, and a finite aperiodic Brandt semigroup is a semigroup of matrix units. Thus, for each regular $\mathcal{J}$-class of a monoid in **IC**, the principal factor is a matrix unit semigroup. If $e$ and $f$ are equivalent idempotents, there must be elements $a$ and $b$ such that $ab = e$ and $ba = f$. In this case $a$ and $b$ are unique and are precisely the matrix units which transfer from $e$ to $f$ and back. The degree of the corresponding matrix block is equal to the number of idempotents equivalent to $e$. The basis graph for the corresponding matrix block is the complete directed graph with a number of vertices equal to the degree of the matrix block, where for a directed graph, completeness gives one arrow in each direction between any pair of vertices.

Thus for each regular $\mathcal{J}$-class, the orthogonal idempotents obtained by inclusion-exclusion from the idempotents in the $\mathcal{J}$-class are equal in number to the degree of the matrix block. If so, they must be primitive, and the coarse structure determined by this choice of idempotents does indeed coincide with the fine structure. Thus, by applying Proposition 8, we have a one-to-one correspondence between the elements of the monoid and the vertices and arrows of the basis graph.

If we were to require the semigroup idempotents to be primitive, then the regular part would have to be an annihilating sum of Brandt semigroups, as mentioned in the remark after Theorem 3 of [13]. However, since we only require the algebra idempotents after inclusion-exclusion to be primitive, we have access to a much richer collection of semigroups.

#### Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

#### References

1. F. V. Saliola, “The quiver of the semigroup algebra of a left regular band,” *International Journal of Algebra and Computation*, vol. 17, no. 8, pp. 1593–1610, 2007.
2. S. Margolis and B. Steinberg, “Quivers of monoids with basic algebras,” *Compositio Mathematica*, vol. 148, no. 5, pp. 1516–1560, 2012.
3. N. M. Thiéry, “Cartan invariant matrices for finite monoids: expression and computation using characters,” *DMTCS Proceedings*, pp. 887–898, 2012.
4. T. Denton, *Excursions into Algebra and Combinatorics at q = 0*, Ph.D. thesis, 2012.
5. T. Denton, F. Hivert, A. Schilling, and N. M. Thiéry, “On the representation theory of finite $J$-trivial monoids,” *Séminaire Lotharingien de Combinatoire*, vol. 64, article B64d, 2011.
6. M. Schaps, “Deformations of finite-dimensional algebras and their idempotents,” *Transactions of the American Mathematical Society*, vol. 307, no. 2, pp. 843–856, 1988.
7. W. D. Munn, “Matrix representations of semigroups,” *Proceedings of the Cambridge Philosophical Society*, vol. 53, pp. 5–12, 1957.
8. L. Solomon, “The Burnside algebra of a finite group,” *Journal of Combinatorial Theory*, vol. 2, pp. 603–615, 1967.
9. M. S. Putcha, *Linear Algebraic Monoids*, vol. 133 of *London Mathematical Society Lecture Note Series*, 1988.
10. M. Gerstenhaber and M. Schaps, “Finite posets and their representation algebras,” *International Journal of Algebra and Computation*, vol. 20, no. 1, pp. 27–38, 2010.
11. C. J. Ash, “Finite semigroups with commuting idempotents,” *Journal of the Australian Mathematical Society, Series A*, vol. 43, no. 1, pp. 81–90, 1987.
12. A. H. Clifford and G. B. Preston, *The Algebraic Theory of Semigroups*, AMS, 1961.
13. G. Lallement and M. Petrich, “Some results concerning completely 0-simple semigroups,” *Bulletin of the American Mathematical Society*, vol. 70, pp. 777–778, 1964.