Abstract

This paper investigates the necessary and sufficient condition for a set of (real or complex) matrices to commute. It is proved that the commutator [A, B] = 0 for two matrices A and B if and only if a vector v(B), defined uniquely from the matrix B, lies in the null space of a well-structured matrix defined as the Kronecker sum A ⊕ (−A*), which is always rank defective. This result extends directly to any countable set of commuting matrices. Complementary results are derived concerning the commutators of certain matrices with functions of matrices f(A), which extend the well-known sufficiency-type commuting result [A, f(A)] = 0.

1. Introduction

The problem of commuting operators, and of matrices in particular, is very relevant in a significant number of problems across several, often mutually linked, branches of science, as cited hereinafter.

(1) In several fields of interest in Applied Mathematics or Linear Algebra [1–22], including Fourier transform theory and graph theory, where, for instance, the commutativity of adjacency matrices is relevant [1, 17–19, 21–35], and Lyapunov stability theory, with conditional and unconditional stability of switched dynamic systems involving discrete systems, delayed systems, and hybrid systems, covering a wide class of topics including their corresponding adaptive versions with estimation schemes (see, e.g., [23–41]). Generally speaking, linear operators, and in particular matrices, which commute share some common eigenspaces. On the other hand, a known mathematical result is that two graphs with the same vertex set commute if their adjacency matrices commute [16]. Graphs are abstract representations of sets of objects (vertices) where some pairs of them are connected by links (arcs/edges). Graphs are often used to describe the behavior of multiconfiguration switched systems, where nodes represent each parameterized dynamics and arcs describe allowed switching transitions [35]. They are also used to describe automata in Computer Science. Also, it has been proven that equalities of products involving two linear combinations of any-length products having orthogonal projectors (i.e., Hermitian idempotent matrices) as factors are equivalent to a commutation property [21].

(2) In some fields of Engineering, such as multimodel regulation and parallel multiestimation [36–41]. Generally speaking, switching among configurations can improve the transient behavior. Switching can be performed arbitrarily (i.e., at any time instant) through time while guaranteeing closed-loop stability, provided that a subset of the set of configurations is stable and a common Lyapunov function exists for them. This property is directly related to certain pairwise commutators of the matrices describing the configuration dynamics being zero [7, 10, 11, 14, 15]. Thus, the problem of commuting matrices is in fact of relevant interest in dynamic switched systems, namely, those which possess several parameterized configurations, one of which becomes active on each current time interval. If the matrices of dynamics of all the parameterizations commute, then there exists a common Lyapunov function for all those parameterizations, and any arbitrary switching rule operating at any time instant maintains the global stability of the switched system provided that all the parameterizations are stable [7]. This property has also been described in [23–25, 28–30] and many other references therein. In particular, there are recent studies which prove that, in these circumstances, arbitrary switching is possible while guaranteeing closed-loop stability if the matrices of dynamics of the various configurations commute. This principle holds not only in the continuous-time delay-free case and in the discrete-time one but also in configurations involving time-delay and hybrid systems. See, for instance, [10–15, 27–30, 34–41] and references therein.
The set of involved problems is wide, including, for instance, switched multimodel techniques [27–30, 35, 36, 40, 41], switched multiestimation techniques with incorporated parallel multiestimation schemes involving adaptive control [34, 38–40], time-delay and hybrid systems with several configurations under mutual switching, and so forth [10, 11, 14, 15] and references therein. Multimodel tools and their adaptive versions incorporating parallel multiestimation are useful to improve the regulation and tracking transients, including those related to triggering circuits with transients regulated via multiestimation [36], master-slave tandems [39], and so forth. However, it often happens that there is no common Lyapunov function for all the parameterizations becoming active on certain time intervals. Then, a minimum residence (or dwell) time at each active parameterization has to be respected before performing the next switch in order to guarantee the global stability of the whole switched system, so that the switching rule among distinct parameterizations is not arbitrary [7, 12, 13, 27–30, 34–41].

(3) In some problems of Signal Processing. See, for instance, [1, 17, 18] concerning the construction of DFT (Discrete Fourier Transform)-commuting matrices. In particular, a complete orthogonal set of eigenvectors can be obtained for several types of offset DFTs and DCTs under commutation properties.

(4) In certain areas of Physics and, in particular, in problems related to Quantum Mechanics. See, for instance, [22, 42, 43]. Basically, a complete set of commuting observables is a set of commuting operators whose eigenvalues completely specify the state of a system, since they share eigenvectors and can be simultaneously measured [22, 42, 43]. These Quantum Mechanics tools have also inspired other branches of Science. For instance, the above-mentioned reference [18] investigates a commuting matrix whose eigenvalue spectrum is very close to that of the Gauss-Hermite differential operator. It is proven that it furnishes two generators of the group of matrices which commute with the discrete Fourier transform. It is also pointed out that the associated research is inspired by Quantum Mechanics principles. There are also other relevant basic scientific applications of commuting operators. For instance, the symmetry operators in the point group of a molecule always commute with its Hamiltonian operator [20]. The problem of commuting matrices is also relevant to the analysis of normal modes in dynamic systems and to the discussion of commuting matrices dependent on a parameter (see, e.g., [2, 3]).

It is well known that commuting matrices have at least one common eigenvector and also a common generalized eigenspace [4, 5]. A less restrictive problem of interest in the above context is that of almost commuting matrices where, roughly speaking, the norm of the commutator is sufficiently small [5, 6]. A very relevant related result is that the sum of matrices which commute is an infinitesimal generator of a C₀-semigroup. This leads to a well-known result in Systems Theory establishing that the matrix function e^{A_1t_1+A_2t_2} = e^{A_1t_1}e^{A_2t_2} is a fundamental (or state-transition) matrix for the cascade of the time-invariant differential systems ẋ_1(t) = A_1x_1(t), operating on a time t_1, and ẋ_2(t) = A_2x_2(t), operating on a time t_2, provided that A_1 and A_2 commute (see, e.g., [7–11]).
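The exponential factorization above can be sketched numerically as follows (a minimal illustration with matrices of our own choosing; a truncated Taylor series stands in for the exact matrix exponential, which is adequate for these small, well-scaled matrices):

```python
import numpy as np

def expm_taylor(M, terms=40):
    # Truncated Taylor series for the matrix exponential; sufficient
    # for the small, well-scaled illustrative matrices used here.
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# A2 is a polynomial in A1, so [A1, A2] = 0.
A1 = np.array([[0.1, 0.2], [0.0, 0.3]])
A2 = A1 @ A1 + 2.0 * A1
t1, t2 = 0.7, 1.3

lhs = expm_taylor(A1 * t1 + A2 * t2)
rhs = expm_taylor(A1 * t1) @ expm_taylor(A2 * t2)
assert np.allclose(A1 @ A2, A2 @ A1)   # the pair commutes
assert np.allclose(lhs, rhs)           # exponentials factorize
```

For a noncommuting pair the factorization fails in general, which is precisely why commutation conditions matter for cascaded state-transition matrices.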

Most of the abundant existing research concerning sets of commuting operators in general, and matrices in particular, is based on the assumption of the existence of such sets, implying that each pair of mutual commutators is zero. There is a gap in giving complete conditions guaranteeing that such commutators within the target set are zero. This paper formulates the necessary and sufficient condition for any countable set of (real or complex) matrices to commute. The sequence of obtained results is as follows. Firstly, the commutation of two real matrices is investigated in Section 2. The necessary and sufficient condition for two matrices to commute is that a vector defined uniquely from the entries of either of the two given matrices belongs to the null space of the Kronecker sum of the other matrix and its minus transpose. This result allows a simple algebraic characterization and computation of the set of matrices commuting with a given one. It also exhibits counterparts giving the necessary and sufficient condition for two matrices not to commute. The results are then extended to the necessary and sufficient condition for commutation of any set of real matrices in Section 3. In Section 4, the previous results are extended directly to the case of complex matrices in two very simple ways, namely, either by decomposing the associated algebraic system of complex matrices into two real ones or by manipulating it directly as a complex algebraic system of equations. Basically, the results for the real case extend directly by replacing transposes with conjugate transposes. Finally, further results concerning the commutators of matrices with matrix functions are also discussed in Section 4. The proofs of the main results of Sections 2, 3, and 4 are given in the corresponding Appendices A, B, and C. It may be pointed out that the main result carries the following implicit duality: since a necessary and sufficient condition for a set of matrices to commute is formulated and proven, the necessary and sufficient condition for a set of matrices not to commute is simply the failure of the former to hold.

1.1. Notation

[A, B] = AB − BA is the commutator of the square matrices A and B.

A ⊗ B = (a_ij B) is the Kronecker (or direct) product of A = (a_ij) and B.

A ⊕ B = A ⊗ I_n + I_n ⊗ B is the Kronecker sum of the square matrices A = (a_ij) and B, both of order n, where I_n is the n-th identity matrix.
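The definition above can be checked numerically. The following sketch (with arbitrary illustrative matrices) builds A ⊕ B and verifies the standard fact that its spectrum consists of the pairwise sums of the eigenvalues of A and B:

```python
import numpy as np

def kron_sum(A, B):
    # A ⊕ B = A ⊗ I_n + I_n ⊗ B for n-square A and B.
    n = A.shape[0]
    return np.kron(A, np.eye(n)) + np.kron(np.eye(n), B)

A = np.array([[1.0, 2.0], [0.0, 3.0]])   # eigenvalues 1, 3
B = np.array([[4.0, 0.0], [1.0, 5.0]])   # eigenvalues 4, 5
S = kron_sum(A, B)

# Standard property: sigma(A ⊕ B) = {a + b : a in sigma(A), b in sigma(B)}.
sums = np.sort(np.add.outer(np.linalg.eigvals(A), np.linalg.eigvals(B)).ravel())
assert np.allclose(np.sort(np.linalg.eigvals(S)), sums)
```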

Aᵀ is the transpose of the matrix A, and A* is the conjugate transpose of the complex matrix A. For any matrix A, Im A and Ker A are its associated range (or image) subspace and null space, respectively. Also, rank(A) is the rank of A, which is the dimension of Im(A), and det(A) is the determinant of the square matrix A.

v(A) = (a_1ᵀ, a_2ᵀ, …, a_nᵀ)ᵀ ∈ 𝐂^{n²}, where a_iᵀ = (a_{i1}, a_{i2}, …, a_{in}) is the i-th row of the square matrix A.

σ(A) is the spectrum of A; n̄ := {1, 2, …, n}. If λ_i ∈ σ(A), then there exist positive integers μ_i and ν_i ≤ μ_i which are, respectively, its algebraic and geometric multiplicities; that is, the number of times it is repeated in the characteristic polynomial of A and the number of its associated Jordan blocks, respectively. The integer μ ≤ n is the number of distinct eigenvalues, and the integer m_i, subject to 1 ≤ m_i ≤ μ_i, is the index of λ_i ∈ σ(A); ∀i ∈ μ̄, that is, its multiplicity in the minimal polynomial of A.

A ∼ B denotes a similarity transformation from A to B = T⁻¹AT for given A, B ∈ 𝐑^{n×n} and some nonsingular T ∈ 𝐑^{n×n}. A ≈ B = EAF means that there is an equivalence transformation for given A, B ∈ 𝐑^{n×n} and some nonsingular E, F ∈ 𝐑^{n×n}.

A linear transformation from 𝐑ⁿ to 𝐑ⁿ, represented by the matrix T ∈ 𝐑^{n×n}, is denoted identically to that matrix in order to simplify the notation. If V ⊆ Dom T ≡ 𝐑ⁿ is a subspace of 𝐑ⁿ, then Im T(V) = {Tz : z ∈ V} and Ker T(V) = {z ∈ V : Tz = 0 ∈ 𝐑ⁿ}. If V ≡ 𝐑ⁿ, the notation is simplified to Im T = {Tz : z ∈ 𝐑ⁿ} and Ker T = {z ∈ 𝐑ⁿ : Tz = 0 ∈ 𝐑ⁿ}.

The symbols “∧’’ and “∨’’ stand for logic conjunction and disjunction, respectively. The abbreviation “iff’’ stands for “if and only if.’’ The notation card U stands for the cardinality of the set U. C_A (resp., C̄_A) is the set of matrices which commute (resp., do not commute) with a matrix A. C_𝐀 (resp., C̄_𝐀) is the set of matrices which commute (resp., do not commute) with every square matrix A_i belonging to a given set 𝐀.

2. Results Concerning the Sets of Commuting and Noncommuting Matrices with a Given One

Consider the sets C_A = {X ∈ 𝐑^{n×n} : [A, X] = 0}, of matrices which commute with A, and C̄_A = {X ∈ 𝐑^{n×n} : [A, X] ≠ 0}, of matrices which do not commute with A; ∀A ∈ 𝐑^{n×n}. Note that 0 ∈ C_A; that is, the zero n-matrix commutes with any n-matrix, so that, equivalently, 0 ∉ C̄_A, and then C_A ∩ C̄_A = ∅; ∀A ∈ 𝐑^{n×n}. The two basic results which follow are concerned with the commutation and noncommutation of two real matrices A and X. The tool used relies on the calculation of the null space and the range space of the Kronecker sum of the matrix A with its minus transpose. A vector built with all the entries of the other matrix X has to belong to one of those spaces for A and X to commute, and to the other one for A and X not to commute.

Proposition 2.1. (i) C_A = {X ∈ 𝐑^{n×n} : v(X) ∈ Ker(A ⊕ (−Aᵀ))}.
(ii) C̄_A = 𝐑^{n×n} \ C_A = {X ∈ 𝐑^{n×n} : v(X) ∉ Ker(A ⊕ (−Aᵀ))} = {X ∈ 𝐑^{n×n} : 0 ≠ v(X) ∈ Im(A ⊕ (−Aᵀ))}.
(iii) B ∈ C_A if and only if A ∈ C_B, that is, if and only if v(A) ∈ Ker(B ⊕ (−Bᵀ)).
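A minimal numerical illustration of Proposition 2.1 (the matrices A, X, and Y are illustrative choices of our own): row-stacking X into v(X), commutation with A is equivalent to v(X) lying in the null space of A ⊕ (−Aᵀ):

```python
import numpy as np

def K(A):
    # K(A) = A ⊕ (−A^T) = A ⊗ I_n − I_n ⊗ A^T, acting on row-stacked v(X).
    n = A.shape[0]
    return np.kron(A, np.eye(n)) - np.kron(np.eye(n), A.T)

def v(X):
    # Row-stacking: v(X) = (x_1^T, ..., x_n^T)^T.
    return X.reshape(-1)

A = np.array([[2.0, 1.0], [0.0, 2.0]])
X = np.array([[5.0, 3.0], [0.0, 5.0]])   # X = 3A - I, so [A, X] = 0
Y = np.array([[0.0, 0.0], [1.0, 0.0]])   # does not commute with A

assert np.allclose(K(A) @ v(X), 0)        # X is in C_A
assert not np.allclose(K(A) @ v(Y), 0)    # Y is in the complement of C_A
# The underlying identity: K(A) v(X) = v(AX - XA).
assert np.allclose(K(A) @ v(Y), v(A @ Y - Y @ A))
```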

Note that, according to Proposition 2.1, the set C_A of matrices which commute with the square matrix A and its complement C̄_A (i.e., the set of matrices which do not commute with A) can be redefined equivalently by using the given expanded vector forms.

Proposition 2.2. One has rank(A ⊕ (−Aᵀ)) < n² ⇔ Ker(A ⊕ (−Aᵀ)) ≠ {0} ⇔ 0 ∈ σ(A ⊕ (−Aᵀ)) ⇒ ∃X(≠0) ∈ C_A; ∀A ∈ 𝐑^{n×n}. (2.1)

The subsequent mathematical result is stronger than Proposition 2.2 and is based on the characterization of the spectrum and eigenspaces of A ⊕ (−Aᵀ).

Theorem 2.3. The following properties hold.
(i) The spectrum of A ⊕ (−Aᵀ) is σ(A ⊕ (−Aᵀ)) = {λ_ij = λ_i − λ_j : λ_i, λ_j ∈ σ(A); i, j ∈ n̄}, and its Jordan canonical form possesses ν Jordan blocks, subject to the constraints n² ≥ ν = dim S = (Σ_{i=1}^{μ} ν_i)² ≥ ν(0); furthermore, 0 ∈ σ(A ⊕ (−Aᵀ)) with algebraic multiplicity μ(0) and geometric multiplicity ν(0) subject to the constraints
n² = (Σ_{i=1}^{μ} μ_i)² ≥ μ(0) ≥ Σ_{i=1}^{μ} μ_i² ≥ ν(0) = Σ_{i=1}^{μ} ν_i² ≥ n, (2.3)
where
(a) S = span{z_i ⊗ x_j : i, j ∈ n̄}; μ_i = μ(λ_i) and ν_i = ν(λ_i) are, respectively, the algebraic and geometric multiplicities of λ_i ∈ σ(A), i ∈ n̄; μ ≤ n is the number of distinct λ_i ∈ σ(A) (i ∈ μ̄); and μ_ij = μ(λ_ij) and ν_ij = ν(λ_ij) are, respectively, the algebraic and geometric multiplicities of λ_ij = (λ_i − λ_j) ∈ σ(A ⊕ (−Aᵀ)), i, j ∈ n̄;
(b) x_j and z_i are, respectively, right eigenvectors of A and −Aᵀ with respective associated eigenvalues λ_j and −λ_i; i, j ∈ n̄.
(ii) One has
dim Im(A ⊕ (−Aᵀ)) = rank(A ⊕ (−Aᵀ)) = n² − ν(0), dim Ker(A ⊕ (−Aᵀ)) = ν(0); ∀A ∈ 𝐑^{n×n}. (2.4)
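The spectrum and rank statements can be sketched numerically as follows (a generic random matrix is used purely for illustration; for such a matrix all n eigenvalues are distinct with ν_i = 1, so ν(0) = n):

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
K = np.kron(A, np.eye(n)) - np.kron(np.eye(n), A.T)   # A ⊕ (−A^T)

lam = np.linalg.eigvals(A)
diffs = np.array([li - lj for li in lam for lj in lam])
eigK = np.linalg.eigvals(K)
# Every pairwise difference lambda_i - lambda_j appears in sigma(A ⊕ (−A^T)).
for d in diffs:
    assert np.min(np.abs(eigK - d)) < 1e-6

# Generic A: nu(0) = n, hence rank(A ⊕ (−A^T)) = n^2 − n.
assert np.linalg.matrix_rank(K) == n * n - n
```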

Expressions which calculate the sets of matrices which commute and which do not commute with a given one are obtained in the subsequent result.

Theorem 2.4. The following properties hold.
(i) One has X ∈ C_A iff (A ⊕ (−Aᵀ))v(X) = 0; equivalently, X ∈ C_A iff
v(X) = F(vᵀ(X̂_1), vᵀ(X̂_2))ᵀ with v(X̂_1) = −Â_11⁻¹Â_12 v(X̂_2), (2.5)
for any v(X̂_2) ∈ Ker(Â_22 − Â_21Â_11⁻¹Â_12), where E, F ∈ 𝐑^{n²×n²} are permutation matrices and X̂ ∈ 𝐑^{n×n} and v(X̂) ∈ 𝐑^{n²} are defined as follows.
(a) One has
v(X̂) = F⁻¹v(X), Â = E(A ⊕ (−Aᵀ))F; ∀X ∈ C_A, (2.6)
where v(X̂) = (vᵀ(X̂_1), vᵀ(X̂_2))ᵀ ∈ 𝐑^{n²} with v(X̂_1) ∈ 𝐑^{ν(0)} and v(X̂_2) ∈ 𝐑^{n²−ν(0)}.
(b) The matrix Â_11 ∈ 𝐑^{ν(0)×ν(0)} is nonsingular in the block partition Â = Block matrix(Â_ij; i, j ∈ 2̄), with Â_12 ∈ 𝐑^{ν(0)×(n²−ν(0))}, Â_21 ∈ 𝐑^{(n²−ν(0))×ν(0)}, and Â_22 ∈ 𝐑^{(n²−ν(0))×(n²−ν(0))}.
(ii) X ∈ C̄_A, for any given A(≠0) ∈ 𝐑^{n×n}, if and only if
(A ⊕ (−Aᵀ))v(X) = v(M) (2.7)
for some M(≠0) ∈ 𝐑^{n×n} such that
rank(A ⊕ (−Aᵀ)) = rank[A ⊕ (−Aᵀ), v(M)] = n² − ν(0). (2.8)
Also,
C̄_A = {X ∈ 𝐑^{n×n} : (A ⊕ (−Aᵀ))v(X) = v(M) for any M(≠0) ∈ 𝐑^{n×n} satisfying rank(A ⊕ (−Aᵀ)) = rank[A ⊕ (−Aᵀ), v(M)] = n² − ν(0)}. (2.9)
Also, with the same definitions of E, F, and X̂ as in (i), X ∈ C̄_A if and only if
v(X) = F(vᵀ(X̂_1), vᵀ(X̂_2))ᵀ with v(X̂_1) = Â_11⁻¹(v(M̂_1) − Â_12 v(X̂_2)), (2.10)
where v(X̂_2) is any solution of the compatible algebraic system
(Â_22 − Â_21Â_11⁻¹Â_12)v(X̂_2) = v(M̂_2) − Â_21Â_11⁻¹v(M̂_1) (2.11)
for some M(≠0) ∈ 𝐑^{n×n}, where X̂, M̂ ∈ 𝐑^{n×n} are defined according to v(X) = Fv(X̂) and M̂ = EMF, with v(M̂) = Ev(M) = (vᵀ(M̂_1), vᵀ(M̂_2))ᵀ.
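In computational terms, Theorem 2.4(i) says that the whole set C_A can be parameterized from a null-space basis of A ⊕ (−Aᵀ). The following sketch obtains such a basis by SVD rather than by the permutation-based reduction of the theorem (an implementation choice of ours, not the paper's construction):

```python
import numpy as np

def commutant_basis(A, tol=1e-10):
    # Basis of Ker(A ⊕ (−A^T)) via SVD; each null vector, reshaped
    # row-wise into an n×n matrix, commutes with A.
    n = A.shape[0]
    K = np.kron(A, np.eye(n)) - np.kron(np.eye(n), A.T)
    _, s, Vt = np.linalg.svd(K)
    return [w.reshape(n, n) for w in Vt[s < tol]]

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # a single 2×2 Jordan block
basis = commutant_basis(A)
# A nonderogatory matrix has commutant span{I, A, ..., A^{n-1}}: dimension n.
assert len(basis) == 2
for X in basis:
    assert np.allclose(A @ X, X @ A)
```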

3. Results Concerning Sets of Pairwise Commuting Matrices

Consider the following sets.

(1) A set of p ≥ 2 distinct nonzero pairwise commuting matrices 𝐀_C = {A_i ∈ 𝐑^{n×n} : [A_i, A_j] = 0; ∀i, j ∈ p̄}.
(2) The set of matrices MC_{𝐀_C} = {X ∈ 𝐑^{n×n} : [X, A_i] = 0; ∀A_i ∈ 𝐀_C} which commute with the set 𝐀_C of pairwise commuting matrices.
(3) A set of matrices C_𝐀 = {X ∈ 𝐑^{n×n} : [X, A_i] = 0; ∀A_i ∈ 𝐀} which commute with a given set of p nonzero matrices 𝐀 = {A_i ∈ 𝐑^{n×n} : i ∈ p̄} which are not necessarily pairwise commuting.

The complements of MC_{𝐀_C} and C_𝐀 are MC̄_{𝐀_C} and C̄_𝐀, respectively, so that 𝐑^{n×n} ∋ B ∈ MC̄_{𝐀_C} iff B ∉ MC_{𝐀_C}, and 𝐑^{n×n} ∋ B ∈ C̄_𝐀 iff B ∉ C_𝐀. Note that C_{𝐀_C} = MC_{𝐀_C} for a set of pairwise commuting matrices 𝐀_C, so that the notation MC_{𝐀_C} refers directly to the set of matrices which commute with all those in a set of pairwise commuting matrices. The following two basic results are concerned with the commutation and noncommutation properties of two matrices.

Proposition 3.1. The following properties hold.
(i) One has
A_i ∈ 𝐀_C; ∀i ∈ p̄ ⇔ v(A_i) ∈ ∩_{j(≠i)∈p̄} Ker(A_j ⊕ (−A_jᵀ)); ∀i ∈ p̄ ⇔ v(A_i) ∈ ∩_{i+1≤j≤p} Ker(A_j ⊕ (−A_jᵀ)); ∀i ∈ p̄. (3.1)
(ii) Define
N_i(𝐀_C) = [(A_1 ⊕ (−A_1ᵀ))ᵀ, (A_2 ⊕ (−A_2ᵀ))ᵀ, …, (A_{i−1} ⊕ (−A_{i−1}ᵀ))ᵀ, (A_{i+1} ⊕ (−A_{i+1}ᵀ))ᵀ, …, (A_p ⊕ (−A_pᵀ))ᵀ]ᵀ ∈ 𝐑^{(p−1)n²×n²}. (3.2)
Then A_i ∈ 𝐀_C; ∀i ∈ p̄ if and only if v(A_i) ∈ Ker N_i(𝐀_C); ∀i ∈ p̄.
(iii) One has
MC_{𝐀_C} = {X ∈ 𝐑^{n×n} : v(X) ∈ ∩_{i∈p̄} Ker(A_i ⊕ (−A_iᵀ)); ∀A_i ∈ 𝐀_C} = {X ∈ 𝐑^{n×n} : v(X) ∈ Ker N(𝐀_C)} ⊇ 𝐀_C ∪ {0} ⊂ 𝐑^{n×n}, (3.3)
where N(𝐀_C) = [(A_1 ⊕ (−A_1ᵀ))ᵀ, (A_2 ⊕ (−A_2ᵀ))ᵀ, …, (A_p ⊕ (−A_pᵀ))ᵀ]ᵀ ∈ 𝐑^{pn²×n²}; ∀A_i ∈ 𝐀_C.
(iv) One has
MC̄_{𝐀_C} = {X ∈ 𝐑^{n×n} : v(X) ∈ ∪_{i∈p̄} Im(A_i ⊕ (−A_iᵀ)); A_i ∈ 𝐀_C} = {X ∈ 𝐑^{n×n} : v(X) ∈ Im N(𝐀_C)}. (3.4)
(v) One has
C_𝐀 = {X ∈ 𝐑^{n×n} : v(X) ∈ ∩_{i∈p̄} Ker(A_i ⊕ (−A_iᵀ)); ∀A_i ∈ 𝐀} = {X ∈ 𝐑^{n×n} : v(X) ∈ Ker N(𝐀)}, (3.5)
where N(𝐀) = [(A_1 ⊕ (−A_1ᵀ))ᵀ, (A_2 ⊕ (−A_2ᵀ))ᵀ, …, (A_p ⊕ (−A_pᵀ))ᵀ]ᵀ ∈ 𝐑^{pn²×n²}; ∀A_i ∈ 𝐀.
(vi) One has
C̄_𝐀 = {X ∈ 𝐑^{n×n} : v(X) ∈ ∪_{i∈p̄} Im(A_i ⊕ (−A_iᵀ)); A_i ∈ 𝐀} = {X ∈ 𝐑^{n×n} : v(X) ∈ Im N(𝐀)}. (3.6)

Concerning Proposition 3.1(v)-(vi), note that if X ∈ C̄_𝐀, then X ≠ 0, since 𝐑^{n×n} ∋ 0 ∈ C_𝐀. The following result is related to the rank defectiveness of the matrix N(𝐀_C) and of any of its submatrices, since 𝐀_C is a set of pairwise commuting matrices.

Proposition 3.2. The following properties hold:
n² > rank N(𝐀_C) ≥ rank N_i(𝐀_C) ≥ rank(A_j ⊕ (−A_jᵀ)); ∀A_j ∈ 𝐀_C; ∀i, j ∈ p̄, (3.7)
and, equivalently,
det(Nᵀ(𝐀_C)N(𝐀_C)) = det(N_iᵀ(𝐀_C)N_i(𝐀_C)) = det(A_j ⊕ (−A_jᵀ)) = 0; ∀A_j ∈ 𝐀_C; ∀i, j ∈ p̄. (3.8)
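A small numerical sketch of the rank defectiveness of N(𝐀_C) (the pairwise commuting pair used here, a nilpotent shift matrix and its square, is an illustrative choice of ours):

```python
import numpy as np

def K(A):
    # A ⊕ (−A^T) acting on row-stacked v(X).
    n = A.shape[0]
    return np.kron(A, np.eye(n)) - np.kron(np.eye(n), A.T)

# A pairwise commuting set: a nilpotent matrix and its square.
A1 = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [0.0, 0.0, 0.0]])
A2 = A1 @ A1
N = np.vstack([K(A1), K(A2)])   # N(A_C) stacks the Kronecker sums

n = 3
assert np.linalg.matrix_rank(N) < n * n   # rank defective, as (3.7) states
# v(A1) and v(A2) lie in Ker N(A_C), consistent with Proposition 3.1.
assert np.allclose(N @ A1.reshape(-1), 0)
assert np.allclose(N @ A2.reshape(-1), 0)
```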

Results giving sufficient conditions for a set of matrices to commute pairwise are abundant in the literature. For instance, diagonal matrices are always pairwise commuting. Any set of matrices obtained via multiplication of any given arbitrary matrix by real scalars is a set of pairwise commuting matrices. Any set of matrices obtained by linear combinations of one of the above sets also consists of pairwise commuting matrices. Any matrix commutes with any of its matrix functions, and so forth. In the following, a simple, although restrictive, sufficient condition for the rank defectiveness of N(𝐀) for a set 𝐀 of p square real n-matrices is discussed. Such a condition may be useful as a practical test to elucidate the existence of a nonzero n-square matrix which commutes with all matrices in the set. Another useful test obtained from the following result relies on a necessary condition to elucidate whether the given set consists of pairwise commuting matrices.

Theorem 3.3. Consider any arbitrary set of nonzero n-square real matrices 𝐀 = {A_1, A_2, …, A_p} for any integer p ≥ 1 and define the matrices
N_i(𝐀) = [(A_1 ⊕ (−A_1ᵀ))ᵀ, (A_2 ⊕ (−A_2ᵀ))ᵀ, …, (A_{i−1} ⊕ (−A_{i−1}ᵀ))ᵀ, (A_{i+1} ⊕ (−A_{i+1}ᵀ))ᵀ, …, (A_p ⊕ (−A_pᵀ))ᵀ]ᵀ,
N(𝐀) = [(A_1 ⊕ (−A_1ᵀ))ᵀ, (A_2 ⊕ (−A_2ᵀ))ᵀ, …, (A_p ⊕ (−A_pᵀ))ᵀ]ᵀ. (3.9)
Then, the following properties hold.
(i) rank(A_i ⊕ (−A_iᵀ)) ≤ rank N_i(𝐀) ≤ rank N(𝐀) < n²; ∀i ∈ p̄.
(ii) ∩_{i∈p̄} Ker(A_i ⊕ (−A_iᵀ)) ≠ {0}, so that ∃X(≠0) ∈ C_𝐀, with
X ∈ C_𝐀 ⇔ v(X) ∈ ∩_{i∈p̄} Ker(A_i ⊕ (−A_iᵀ)), X ∈ C̄_𝐀 ⇔ v(X) ∈ ∪_{i∈p̄} Im(A_i ⊕ (−A_iᵀ)). (3.10)
(iii) If 𝐀 = 𝐀_C is a set of pairwise commuting matrices, then
v(A_i) ∈ ∩_{j∈p̄_i} Ker(A_j ⊕ (−A_jᵀ)); ∀i ∈ p̄ ⇔ v(A_i) ∈ ∩_{j∈p̄} Ker(A_j ⊕ (−A_jᵀ)); ∀i ∈ p̄ ⇔ v(A_i) ∈ ∩_{j∈p̄\{i}} Ker(A_j ⊕ (−A_jᵀ)); ∀i ∈ p̄, (3.11)
where p̄_i := {i+1, i+2, …, p}.
(iv) One has
MC_{𝐀_C} = {X ∈ 𝐑^{n×n} : v(X) ∈ ∩_{i∈p̄} Ker(A_i ⊕ (−A_iᵀ)); ∀A_i ∈ 𝐀_C} ⊃ 𝐀_C ∪ {0} ⊂ 𝐑^{n×n}, (3.12)
with the above set inclusion being proper.

Note that Theorem 3.3(ii) extends Proposition 3.1(v), since it is proved that C_𝐀 ≠ {0} because every nonzero Λ = diag(λ, λ, …, λ) ∈ 𝐑^{n×n} belongs to C_𝐀 for any λ(≠0) ∈ 𝐑 and any set of matrices 𝐀. Note also that Theorem 3.3(iii) establishes that v(A_i) ∈ ∩_{j∈p̄\{i}} Ker(A_j ⊕ (−A_jᵀ)); ∀i ∈ p̄ is a necessary and sufficient condition for 𝐀 to be a set of commuting matrices, and it is simpler to test (by taking advantage of the symmetry property of the commutators) than the equivalent condition v(A_i) ∈ ∩_{j∈p̄} Ker(A_j ⊕ (−A_jᵀ)); ∀i ∈ p̄. Further results about pairwise commuting matrices, and about the existence of nonzero matrices commuting with a given set, are obtained in the subsequent result based on the Kronecker sums of the relevant Jordan canonical forms.
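The existence claim of Theorem 3.3(ii) can be illustrated numerically: even for a noncommuting pair, the stacked matrix N(𝐀) has a nontrivial null space, which in this example recovers exactly the scalar matrices (the matrices are illustrative choices, and the null space is computed by SVD as an implementation choice):

```python
import numpy as np

def K(A):
    # A ⊕ (−A^T) acting on row-stacked v(X).
    n = A.shape[0]
    return np.kron(A, np.eye(n)) - np.kron(np.eye(n), A.T)

# Two matrices that do NOT commute.
A1 = np.array([[0.0, 1.0], [0.0, 0.0]])
A2 = np.array([[0.0, 0.0], [1.0, 0.0]])
assert not np.allclose(A1 @ A2, A2 @ A1)

N = np.vstack([K(A1), K(A2)])
_, s, Vt = np.linalg.svd(N)
null = Vt[s < 1e-10]          # Ker N(A) = intersection of the two kernels
assert len(null) >= 1         # nonzero common commuting matrices exist

X = null[0].reshape(2, 2)     # here necessarily a multiple of the identity
assert np.allclose(X @ A1, A1 @ X) and np.allclose(X @ A2, A2 @ X)
```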

Theorem 3.4. The following properties hold for any given set of n-square real matrices 𝐀 = {A_1, A_2, …, A_p}.
(i) The set C_𝐀 of matrices X ∈ 𝐑^{n×n} which commute with all matrices in 𝐀 is defined by
C_𝐀 = {X ∈ 𝐑^{n×n} : v(X) ∈ ∩_{i=1}^{p} Ker[(J_{A_i} ⊕ (−J_{A_i}ᵀ))(P_i⁻¹ ⊗ P_iᵀ)]}
= {X ∈ 𝐑^{n×n} : v(X) ∈ ∩_{i=1}^{p} Im[(P_i ⊗ P_i^{−ᵀ})(Y_i)], Y_i ∈ Ker(J_{A_i} ⊕ (−J_{A_i}ᵀ)); ∀i ∈ p̄}
= {X ∈ 𝐑^{n×n} : v(X) ∈ ∩_{i=1}^{p} Im[(P_i ⊗ P_i^{−ᵀ})(Y)], Y ∈ ∩_{i=1}^{p} Ker(J_{A_i} ⊕ (−J_{A_i}ᵀ))}, (3.13)
where P_i ∈ 𝐑^{n×n} is a nonsingular transformation matrix such that A_i ∼ J_{A_i} = P_i⁻¹A_iP_i, with J_{A_i} being the Jordan canonical form of A_i.
(ii) One has
dim span{v(X) : X ∈ C_𝐀} ≤ min_{i∈p̄} dim Ker(J_{A_i} ⊕ (−J_{A_i}ᵀ)) = min_{i∈p̄} ν_i(0) = min_{i∈p̄} Σ_{j=1}^{ρ_i} ν_ij² ≤ min_{i∈p̄} Σ_{j=1}^{ρ_i} μ_ij² ≤ min_{i∈p̄} μ_i(0), (3.14)
where ν_i(0) and ν_ij are, respectively, the geometric multiplicities of 0 ∈ σ(A_i ⊕ (−A_iᵀ)) and of λ_ij ∈ σ(A_i), and μ_i(0) and μ_ij are, respectively, the algebraic multiplicities of 0 ∈ σ(A_i ⊕ (−A_iᵀ)) and of λ_ij ∈ σ(A_i); ∀j ∈ ρ̄_i (ρ_i being the number of distinct eigenvalues of A_i), ∀i ∈ p̄.
(iii) The set 𝐀 consists of pairwise commuting matrices, namely C_𝐀 = MC_𝐀, if and only if v(A_j) ∈ ∩_{i(≠j)=1}^{p} Ker[(J_{A_i} ⊕ (−J_{A_i}ᵀ))(P_i⁻¹ ⊗ P_iᵀ)]; ∀j ∈ p̄. Equivalent conditions follow from the second and third equivalent definitions of C_𝐀 in Property (i).

Theorems 3.3 and 3.4 are concerned with MC_𝐀 ⊋ {0} ⊂ 𝐑^{n×n} for an arbitrary set 𝐀 of real square matrices and for a pairwise commuting set, respectively.

4. Further Results and Extensions

The extension of the results to the commutation of complex matrices is direct in several ways. It is first possible to decompose the commutator into its real and imaginary parts and then apply the results of Sections 2 and 3 for real matrices to both parts, as follows. Let A = A_re + iA_im and B = B_re + iB_im be complex matrices in 𝐂^{n×n}, with A_re and B_re being their respective real parts and A_im and B_im, all in 𝐑^{n×n}, their respective imaginary parts, where i = √(−1) is the imaginary unit. Direct computation of the commutator of A and B yields
[A, B] = ([A_re, B_re] − [A_im, B_im]) + i([A_im, B_re] + [A_re, B_im]). (4.1)
The following three results are direct and make it possible to reduce the problem of commutation of a pair of complex matrices to the discussion of four real commutators.

Proposition 4.1. One has B ∈ C_A ⇔ (([A_re, B_re] = [A_im, B_im]) ∧ ([A_im, B_re] = [B_im, A_re])).

Proposition 4.2. One has (B_re ∈ (C_{A_re} ∩ C_{A_im}) ∧ B_im ∈ (C_{A_im} ∩ C_{A_re})) ⇒ B ∈ C_A.

Proposition 4.3. One has (A_re ∈ (C_{B_re} ∩ C_{B_im}) ∧ A_im ∈ (C_{B_im} ∩ C_{B_re})) ⇒ B ∈ C_A.

Proposition 4.1 leads to the subsequent result.

Theorem 4.4. The following properties hold.
(i) Assume that the matrices A and B_re are given. Then, B ∈ C_A if and only if B_im satisfies the linear algebraic equation
[A_re ⊕ (−A_reᵀ); A_im ⊕ (−A_imᵀ)] v(B_re) = [A_im ⊕ (−A_imᵀ); −(A_re ⊕ (−A_reᵀ))] v(B_im) (4.2)
(the semicolon denoting column-wise stacking of the two blocks), for which a necessary condition is
rank[A_im ⊕ (−A_imᵀ); −(A_re ⊕ (−A_reᵀ))] = rank[[A_im ⊕ (−A_imᵀ); −(A_re ⊕ (−A_reᵀ))], [A_re ⊕ (−A_reᵀ); A_im ⊕ (−A_imᵀ)] v(B_re)]. (4.3)
(ii) Assume that the matrices A and B_im are given. Then, B ∈ C_A if and only if B_re satisfies (4.2), for which a necessary condition is
rank[A_re ⊕ (−A_reᵀ); A_im ⊕ (−A_imᵀ)] = rank[[A_re ⊕ (−A_reᵀ); A_im ⊕ (−A_imᵀ)], [A_im ⊕ (−A_imᵀ); −(A_re ⊕ (−A_reᵀ))] v(B_im)]. (4.4)
(iii) Also, there exists B ≠ 0 such that B ∈ C_A with B_re = 0, and there exists B ≠ 0 such that B ∈ C_A with B_im = 0.

A more general result than Theorem 4.4 is the following.

Theorem 4.5. The following properties hold.
(i) B ∈ C_A ⊂ 𝐂^{n×n} if and only if v(B) is a solution of the linear algebraic system
[A_re ⊕ (−A_reᵀ)   −(A_im ⊕ (−A_imᵀ)); A_im ⊕ (−A_imᵀ)   A_re ⊕ (−A_reᵀ)] (vᵀ(B_re), vᵀ(B_im))ᵀ = 0. (4.5)
Nonzero solutions B ∈ C_A, satisfying
(vᵀ(B_re), vᵀ(B_im))ᵀ ∈ Ker[A_re ⊕ (−A_reᵀ)   −(A_im ⊕ (−A_imᵀ)); A_im ⊕ (−A_imᵀ)   A_re ⊕ (−A_reᵀ)], (4.6)
always exist, since this kernel is nonzero in 𝐑^{2n²}, (4.7)
and, equivalently, since the rank of the above coefficient matrix is less than 2n². (4.8)
(ii) Property (i) is equivalent to
B ∈ C_A ⇔ (A ⊕ (−A*))v(B) = 0, (4.9)
which always has nonzero solutions since rank(A ⊕ (−A*)) < n².
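A numerical sketch of Theorem 4.5(i) with an illustrative complex matrix of our own choosing: a null vector of the 2n²-dimensional real block system yields a complex matrix commuting with A (the null space is computed by SVD as an implementation choice):

```python
import numpy as np

def K(M):
    # M ⊕ (−M^T) acting on row-stacked v(X), for a real n×n matrix M.
    n = M.shape[0]
    return np.kron(M, np.eye(n)) - np.kron(np.eye(n), M.T)

Are = np.array([[1.0, 2.0], [0.0, 1.0]])
Aim = np.array([[0.0, 1.0], [0.0, 0.0]])

# Block system of Theorem 4.5(i) acting on (v(B_re), v(B_im)).
S = np.block([[K(Are), -K(Aim)],
              [K(Aim),  K(Are)]])
_, s, Vt = np.linalg.svd(S)
null = Vt[s < 1e-10]
assert len(null) >= 1      # rank < 2 n^2, so nonzero solutions exist

w = null[0]
Bre, Bim = w[:4].reshape(2, 2), w[4:].reshape(2, 2)
A = Are + 1j * Aim
B = Bre + 1j * Bim
assert np.allclose(A @ B, B @ A)   # B commutes with the complex A
```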

The various results of Section 3, for a set of distinct complex matrices to commute pairwise and for characterizing the set of complex matrices which commute with those in a given set, may be discussed via larger algebraic systems like the above one, built from the four-block matrices
[A_jre ⊕ (−A_jreᵀ)   −(A_jim ⊕ (−A_jimᵀ)); A_jim ⊕ (−A_jimᵀ)   A_jre ⊕ (−A_jreᵀ)] (4.10)
for each j ∈ p̄ in the whole algebraic system. Theorem 4.5 extends directly to sets of complex matrices commuting with a given one, and to complex matrices commuting with a set of commuting complex matrices, as follows.

Theorem 4.6. The following properties hold.
(i) Consider the sets of nonzero distinct complex matrices 𝐀 = {A_i ∈ 𝐂^{n×n} : i ∈ p̄} and C_𝐀 = {X ∈ 𝐂^{n×n} : [X, A_i] = 0; ∀A_i ∈ 𝐀, ∀i ∈ p̄} for p ≥ 2. Then, C_𝐀 ∋ X = X_re + iX_im if and only if
N_c (vᵀ(X_re), vᵀ(X_im))ᵀ = 0, (4.11)
where N_c ∈ 𝐑^{2pn²×2n²} is obtained by stacking the p block matrices (4.10) for j = 1, 2, …, p; a nonzero solution X ∈ C_𝐀 exists since the rank of the coefficient matrix of (4.11) is less than 2n².
(ii) Consider the sets of nonzero distinct pairwise commuting complex matrices 𝐀_C = {A_i ∈ 𝐂^{n×n} : i ∈ p̄} and MC_{𝐀_C} = {X ∈ 𝐂^{n×n} : [X, A_i] = 0; ∀A_i ∈ 𝐀_C, ∀i ∈ p̄} for p ≥ 2. Then, MC_{𝐀_C} ∋ X = X_re + iX_im if and only if v(X_re) and v(X_im) are solutions of (4.11).
(iii) Properties (i) and (ii) are equivalently formulated through the algebraic system of complex equations
[(A_1 ⊕ (−A_1*))ᵀ, (A_2 ⊕ (−A_2*))ᵀ, …, (A_p ⊕ (−A_p*))ᵀ]ᵀ v(X) = 0. (4.12)

Remark 4.7. Note that all the proved results of Sections 2 and 3 extend directly to complex commuting matrices, by simply replacing transposes with conjugate transposes, without requiring a separate decomposition into real and imaginary parts, as discussed in Theorems 4.5(ii) and 4.6(iii).

Let f: 𝐂 → 𝐂 be an analytic function in an open set D ⊃ σ(A) for some matrix A ∈ 𝐂^{n×n}, and let p(λ) be a polynomial fulfilling p^{(i)}(λ_k) = f^{(i)}(λ_k); ∀λ_k ∈ σ(A), ∀i ∈ {0, 1, …, m_k − 1}; ∀k ∈ μ̄ (μ being the number of distinct elements of σ(A)), where m_k is the index of λ_k, that is, its multiplicity in the minimal polynomial of A. Then, f(A) is a function of the matrix A if f(A) = p(A) [8]. Some results follow concerning the commutators of functions of matrices.

Theorem 4.8. Consider a nonzero matrix B ∈ C_A ⊂ 𝐂^{n×n} for any given nonzero A ∈ 𝐂^{n×n}. Then, f(B) ∈ C_A ⊂ 𝐂^{n×n}, and equivalently v(f(B)) ∈ Ker(A ⊕ (−A*)), for any function f: 𝐂^{n×n} → 𝐂^{n×n} of the matrix B.
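A numerical sketch of Theorem 4.8 (illustrative real matrices of our own choosing; f is taken as a scaled matrix exponential, computed by a truncated Taylor series):

```python
import numpy as np

def expm_taylor(M, terms=40):
    # Truncated Taylor series for the matrix exponential; adequate for
    # the small, well-scaled illustrative matrix used here.
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[2.0, 1.0], [0.0, 2.0]])
B = 3.0 * A - np.eye(2)          # B commutes with A
assert np.allclose(A @ B, B @ A)

fB = expm_taylor(B * 0.1)        # f(B) = exp(0.1 B), a function of B
assert np.allclose(A @ fB, fB @ A)   # f(B) is in C_A, as the theorem states

# Equivalently, v(f(B)) lies in Ker(A ⊕ (−A^T)) (real case, so A* = A^T).
K = np.kron(A, np.eye(2)) - np.kron(np.eye(2), A.T)
assert np.allclose(K @ fB.reshape(-1), 0)
```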

The following corollaries follow directly from Theorem 4.8 via the subsequent facts.

(1) A ∈ C_A; ∀A ∈ 𝐂^{n×n}.
(2) One has
[A, B] = 0 ⇒ [A, g(B)] = 0 ⇒ [f(A), g(B)] = [p(A), g(B)] = [Σ_{i=0}^{μ} α_iA^i, g(B)] = Σ_{i=0}^{μ} α_i[A^i, g(B)] = 0 ⇒ g(B) ∈ C_{f(A)} ⊂ 𝐂^{n×n}, (4.13)
where f(A) = p(A), from the definition of f being a function of the matrix A, with p(λ) being a polynomial fulfilling p^{(i)}(λ_k) = f^{(i)}(λ_k); ∀λ_k ∈ σ(A), ∀i ∈ {0, 1, …, m_k − 1}; ∀k ∈ μ̄ (μ being the number of distinct elements of σ(A)), where m_k is the index of λ_k, that is, its multiplicity in the minimal polynomial of A.
(3) Theorem 4.8 extends to any countable set {f_i(B)} of matrix functions of B.

Corollary 4.9. Consider a nonzero matrix B ∈ C_A ⊂ 𝐂^{n×n} for any given nonzero A ∈ 𝐂^{n×n}. Then, g(B) ∈ C_{f(A)} ⊂ 𝐂^{n×n} for any function f: 𝐂^{n×n} → 𝐂^{n×n} of the matrix A and any function g: 𝐂^{n×n} → 𝐂^{n×n} of the matrix B.

Corollary 4.10. f(A) ∈ C_A ⊂ 𝐂^{n×n}, and equivalently v(f(A)) ∈ Ker(A ⊕ (−A*)), for any function f: 𝐂^{n×n} → 𝐂^{n×n} of the matrix A.

Corollary 4.11. If B ∈ C_A ⊂ 𝐂^{n×n}, then any countable set of matrix functions {f_i(B)} is contained in C_A and in MC_A.

Corollary 4.12. Consider any countable set of matrix functions C_F = {f_i(A); i ∈ p̄} ⊂ C_A for any given nonzero A ∈ 𝐂^{n×n}. Then, ∩_{f_i∈C_F} Ker(f_i(A) ⊕ (−f_i(A)*)) ⊇ Ker(A ⊕ (−A*)).

Note that matrices which commute and are simultaneously triangularizable through the same similarity transformation maintain a zero commutator after such a transformation is performed.

Theorem 4.13. Assume that B ∈ C_A ⊂ 𝐂^{n×n}. Then, Λ_B ∈ C_{Λ_A} ⊂ 𝐂^{n×n} provided that there exists a nonsingular matrix T ∈ 𝐂^{n×n} such that Λ_A = T⁻¹AT and Λ_B = T⁻¹BT.
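Theorem 4.13 rests on the fact that the commutator transforms as T⁻¹[A, B]T under a common similarity, so a zero commutator stays zero. A minimal sketch with illustrative diagonal (hence commuting) matrices and a random transformation:

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.diag([1.0, 2.0, 3.0])
B = np.diag([4.0, 5.0, 6.0])          # diagonal matrices commute
T = rng.standard_normal((3, 3))       # generically nonsingular
Ti = np.linalg.inv(T)

LA, LB = Ti @ A @ T, Ti @ B @ T
# [LA, LB] = T^{-1} [A, B] T = 0, so the transformed pair also commutes.
assert np.allclose(A @ B, B @ A)
assert np.allclose(LA @ LB, LB @ LA)
```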

A direct consequence of Theorem 4.13 is that, if a set of matrices is simultaneously triangularizable to their real canonical forms by a common transformation matrix, then their pairwise commuting properties are identical to those of their respective Jordan forms.

Appendices

A. Proofs of the Results of Section 2

Proof of Proposition 2.1. (i)-(ii) First note by inspection that C_A ⊇ {0, A}; ∀A ∈ 𝐑^{n×n}. Also,
[A, X] = AX − XA = 0 ⇔ (A ⊗ I_n − I_n ⊗ Aᵀ)v(X) = (A ⊕ (−Aᵀ))v(X) = 0 ⇔ v(X) ∈ Ker(A ⊕ (−Aᵀ)), (A.1)
and Proposition 2.1(i)-(ii) has been proved, since there is an isomorphism f: 𝐑^{n²} → 𝐑^{n×n} defined by f(v(X)) = X; ∀X ∈ 𝐑^{n×n}, for v(X) = (x_1ᵀ, x_2ᵀ, …, x_nᵀ)ᵀ ∈ 𝐑^{n²}, where x_iᵀ = (x_{i1}, x_{i2}, …, x_{in}) is the i-th row of the square matrix X.
(iii) It is a direct consequence of Property (i) and of the symmetry property of the commutator of two commuting matrices: B ∈ C_A ⇔ [A, B] = −[B, A] = 0 ⇔ A ∈ C_B.

Proof of Proposition 2.2. [A, A] = 0; ∀A ∈ 𝐑^{n×n} ⇒ 𝐑^{n²} ∋ 0 ≠ v(A) ∈ Ker(A ⊕ (−Aᵀ)); ∀A(≠0) ∈ 𝐑^{n×n}. As a result,
Ker(A ⊕ (−Aᵀ)) ≠ {0} ⊂ 𝐑^{n²}; ∀A ∈ 𝐑^{n×n} ⇔ rank(A ⊕ (−Aᵀ)) < n²; ∀A ∈ 𝐑^{n×n}, (A.2)
so that 0 ∈ σ(A ⊕ (−Aᵀ)).
Also, ∃X(≠0) ∈ 𝐑^{n×n} such that [A, X] = 0, that is, X ∈ C_A, since Ker(A ⊕ (−Aᵀ)) ≠ {0} ⊂ 𝐑^{n²}.
Proposition 2.2 has been proved.

Proof of Theorem 2.3. (i) Note that
σ(A ⊕ (−Aᵀ)) = σ(A) + σ(−Aᵀ) = {η ∈ 𝐂 : η = λ_k − λ_l; λ_k, λ_l ∈ σ(A); k, l ∈ n̄} = σ_0(A ⊕ (−Aᵀ)) ∪ σ̄_0(A ⊕ (−Aᵀ)), (A.3)
where
σ_0(A ⊕ (−Aᵀ)) = {λ ∈ σ(A ⊕ (−Aᵀ)) : λ = 0}, σ̄_0(A ⊕ (−Aᵀ)) = {λ ∈ σ(A ⊕ (−Aᵀ)) : λ ≠ 0} = σ(A ⊕ (−Aᵀ)) \ σ_0(A ⊕ (−Aᵀ)). (A.4)
Furthermore, σ(A ⊕ (−Aᵀ)) = {λ ∈ 𝐂 : λ = λ_j − λ_i; λ_i, λ_j ∈ σ(A); i, j ∈ n̄}, and z_i ⊗ x_j is a right eigenvector of A ⊕ (−Aᵀ) associated with its eigenvalue λ_ji = λ_j − λ_i, which has algebraic and geometric multiplicities μ_ji and ν_ji, respectively; i, j ∈ n̄, since x_j and z_i are, respectively, right eigenvectors of A and −Aᵀ with associated eigenvalues λ_j and −λ_i; i, j ∈ n̄.
Let J_A be the Jordan canonical form of A. It is first proved that there exists a nonsingular T ∈ 𝐑^{n²×n²} such that J_A ⊕ (−J_Aᵀ) = T⁻¹(A ⊕ (−Aᵀ))T. The proof is made by direct verification, using the properties of the Kronecker product, with T = P ⊗ P^{−ᵀ} for a nonsingular P ∈ 𝐑^{n×n} such that A ∼ J_A = P⁻¹AP, as follows:
T⁻¹(A ⊕ (−Aᵀ))T = (P ⊗ P^{−ᵀ})⁻¹(A ⊗ I_n)(P ⊗ P^{−ᵀ}) − (P ⊗ P^{−ᵀ})⁻¹(I_n ⊗ Aᵀ)(P ⊗ P^{−ᵀ}) = (P⁻¹AP) ⊗ I_n − I_n ⊗ (PᵀAᵀP^{−ᵀ}) = J_A ⊗ I_n − I_n ⊗ J_Aᵀ = J_A ⊕ (−J_Aᵀ), (A.5)
and the result has been proved. Thus, rank(A ⊕ (−Aᵀ)) = rank(J_A ⊕ (−J_Aᵀ)). It turns out that P is, furthermore, unique except for multiplication by any nonzero real constant; otherwise, if T ≠ P ⊗ P^{−ᵀ}, there would exist a nonsingular Q ∈ 𝐑^{n×n}, with Q ≠ αI_n for all α ∈ 𝐑, such that T = Q(P ⊗ P^{−ᵀ}), so that T⁻¹(A ⊕ (−Aᵀ))T ≠ J_A ⊕ (−J_Aᵀ), contradicting
(P ⊗ P^{−ᵀ})⁻¹(A ⊕ (−Aᵀ))(P ⊗ P^{−ᵀ}) = J_A ⊕ (−J_Aᵀ). (A.6)
Thus, note that
card σ(A ⊕ (−Aᵀ)) = n² = Σ_{i=1}^{μ}Σ_{j=1}^{μ} μ_iμ_j = (Σ_{i=1}^{μ} μ_i)², μ(0) ≥ Σ_{i=1}^{μ} μ_i², ν = (Σ_{i=1}^{μ} ν_i)² = ν(0) + 2Σ_{i=1}^{μ}Σ_{j(>i)=1}^{μ} ν_iν_j ≥ ν(0) = Σ_{i=1}^{μ} ν_i² ≥ n. (A.7)
These results follow directly from the properties of the Kronecker sum A ⊕ B of n-square real matrices A and B = −Aᵀ, since direct inspection leads to the following.
(1) 0 ∈ σ(A ⊕ (−Aᵀ)) with algebraic multiplicity μ(0) ≥ Σ_{i=1}^{μ} μ_i² ≥ Σ_{i=1}^{μ} ν_i² ≥ n, since there are at least Σ_{i=1}^{μ} μ_i² zeros in σ(A ⊕ (−Aᵀ)) (i.e., the algebraic multiplicity of 0 ∈ σ(A ⊕ (−Aᵀ)) is at least Σ_{i=1}^{μ} μ_i²) and since ν_i ≥ 1; ∀i ∈ μ̄. Also, a simple count of the number of eigenvalues of A ⊕ (−Aᵀ) yields card σ(A ⊕ (−Aᵀ)) = n² = (Σ_{i=1}^{μ} μ_i)².
(2) The number of linearly independent vectors in S is ν = Σ_{i=1}^{μ}Σ_{j=1}^{μ} ν_iν_j = (Σ_{i=1}^{μ} ν_i)², since the total number of Jordan blocks in the Jordan canonical form of A is Σ_{i=1}^{μ} ν_i.
(3) The number of Jordan blocks associated with 0 ∈ σ(A ⊕ (−Aᵀ)) in the Jordan canonical form of A ⊕ (−Aᵀ) is ν(0) = Σ_{i=1}^{μ} ν_i² ≤ ν, with ν_ii = ν_i²; ∀i ∈ μ̄. Thus,
card σ_0(A ⊕ (−Aᵀ)) = Σ_{i=1}^{μ} μ_i², card σ̄_0(A ⊕ (−Aᵀ)) = n² − Σ_{i=1}^{μ} μ_i², rank(A ⊕ (−Aᵀ)) = n² − ν(0) = n² − Σ_{i=1}^{μ} ν_i², dim Ker(A ⊕ (−Aᵀ)) = ν(0) = Σ_{i=1}^{μ} ν_i². (A.8)
(4) There are at least ν(0) linearly independent vectors in S = span{z_i ⊗ x_j : i, j ∈ n̄}. Also, the total number of Jordan blocks in the Jordan canonical form of A ⊕ (−Aᵀ) is ν = dim S = Σ_{i=1}^{μ}Σ_{j=1}^{μ} ν_iν_j = (Σ_{i=1}^{μ} ν_i)² = ν(0) + 2Σ_{i=1}^{μ}Σ_{j(>i)=1}^{μ} ν_iν_j ≥ ν(0).
Property (i) has been proved. Property (ii) follows directly from the orthogonality in 𝐑^{n²} of its range and null subspaces.

Proof of Theorem 2.4. First note from Proposition 2.1 that X ∈ C_A if and only if (A ⊕ (−Aᵀ))v(X) = 0, since v(X) ∈ Ker(A ⊕ (−Aᵀ)). Note also from Proposition 2.1 that X ∈ C̄_A if and only if v(X) ∈ Im(A ⊕ (−Aᵀ)). Thus, X ∈ C̄_A if and only if v(X) is a solution of the compatible linear algebraic system
(A ⊕ (−Aᵀ))v(X) = v(M) (A.9)
for some M(≠0) ∈ 𝐑^{n×n} such that
rank(A ⊕ (−Aᵀ)) = rank[A ⊕ (−Aᵀ), v(M)] = n² − ν(0). (A.10)
From Theorem 2.3, the nullity and the rank of A ⊕ (−Aᵀ) are, respectively, dim Ker(A ⊕ (−Aᵀ)) = ν(0) and rank(A ⊕ (−Aᵀ)) = n² − ν(0). Therefore, there exist permutation matrices E, F ∈ 𝐑^{n²×n²} defining the equivalence transformation
A ⊕ (−Aᵀ) ≈ Â = E(A ⊕ (−Aᵀ))F = Block matrix(Â_ij; i, j ∈ 2̄) (A.11)
such that Â_11 is square, nonsingular, and of order ν(0). Define M̂ = EMF for M(≠0) ∈ 𝐑^{n×n}. Then, the linear algebraic system (A ⊕ (−Aᵀ))v(X) = v(M) and
E(A ⊕ (−Aᵀ))F v(X̂) = (Â_11, Â_12; Â_21, Â_22)(vᵀ(X̂_1), vᵀ(X̂_2))ᵀ = (vᵀ(M̂_1), vᵀ(M̂_2))ᵀ, v(X̂_1) = Â_11⁻¹(v(M̂_1) − Â_12 v(X̂_2)), (Â_22 − Â_21Â_11⁻¹Â_12)v(X̂_2) = v(M̂_2) − Â_21Â_11⁻¹v(M̂_1) (A.12)
are identical if X̂ and M̂ are defined according to v(X) = Fv(X̂) and v(M̂) = Ev(M). As a result, Properties (i) and (ii) follow directly from (A.12) for M = M̂ = 0 and for any M(≠0) satisfying rank(A ⊕ (−Aᵀ)) = rank[A ⊕ (−Aᵀ), v(M)] = n² − ν(0), respectively.

B. Proofs of the Results of Section 3

Proof of Proposition 3.1. (i) The first part of Property (i) follows directly from Proposition 2.1, since all the matrices of 𝐀_C commute pairwise and any matrix commutes with itself (thus j = i may be removed from the intersections of kernels in the first double implication). The last part of Property (i) follows from the antisymmetry property of the commutator, [A_i, A_j] = −[A_j, A_i] = 0; ∀A_i, A_j ∈ 𝐀_C, which implies A_i ∈ 𝐀_C; ∀i ∈ p̄ ⇔ v(A_i) ∈ ∩_{i+1≤j≤p} Ker(A_j ⊕ (−A_jᵀ)); ∀A_i, A_j ∈ 𝐀_C.
(ii) It follows from its equivalence with Property (i), since Ker N_i(𝐀_C) = ∩_{j(≠i)∈p̄} Ker(A_j ⊕ (−A_jᵀ)).
(iii) Property (iii) is similar to Property (i) for the whole set MC_{𝐀_C} of matrices which commute with the set 𝐀_C, so that it contains 𝐀_C and, furthermore, Ker N(𝐀_C) = ∩_{i∈p̄} Ker(A_i ⊕ (−A_iᵀ)).
(iv) It follows from ∪_{j∈p̄} Im(A_j ⊕ (−A_jᵀ)) being the complement of ∩_{j∈p̄} Ker(A_j ⊕ (−A_jᵀ)); ∀A_j ∈ 𝐀_C, and from 𝐑^{n²} ∋ 0 ∈ Ker(A_j ⊕ (−A_jᵀ)) ∩ Im(A_j ⊕ (−A_jᵀ)); but 𝐑^{n×n} ∋ X = 0 commutes with any matrix in 𝐑^{n×n}, so that 𝐑^{n×n} ∋ 0 ∈ MC_{𝐀_C} and 𝐑^{n×n} ∋ 0 ∉ MC̄_{𝐀_C} for any given 𝐀_C.
(v)-(vi) The proofs are similar to those of (ii)–(iv), except that the members of 𝐀 do not necessarily commute.

Proof of Proposition 3.2. It is a direct consequence of Proposition 3.1(i)-(ii), since the existence of nonzero pairwise commuting matrices (all the members of 𝐀_C) implies that the matrices N(𝐀_C), N_i(𝐀_C), and A_j ⊕ (−A_jᵀ) are all rank defective and have at least as many rows as columns. Therefore, the square matrices Nᵀ(𝐀_C)N(𝐀_C), N_iᵀ(𝐀_C)N_i(𝐀_C), and A_j ⊕ (−A_jᵀ) are all singular.

Proof of Theorem 3.3. (i) Any nonzero matrix Λ = diag(λ, λ, …, λ) with λ(≠0) ∈ 𝐑 is such that Λ(≠0) ∈ C_(Aᵢ) (∀i ∈ p̄), so that Λ ∈ C_𝐀. Thus, 0 ≠ v(Λ) ∈ Ker N(𝐀), so that

n² > rank N(𝐀) ≥ rank N_i(𝐀) ≥ rank(Aⱼ ⊕ (−Aⱼᵀ)); ∀j(≠i) ∈ p̄, ∀i ∈ p̄,

for any given set 𝐀. Property (i) has been proved.
(ii) The first part follows by contradiction. Assume ⋂_(i∈p̄) Ker(Aᵢ ⊕ (−Aᵢᵀ)) = {0}; then 0 ≠ v(Λ) ∉ Ker N(𝐀), so that Λ = diag(λ, λ, …, λ) ∉ C_𝐀 for any λ(≠0) ∈ 𝐑, which contradicts (i). Also, X ∈ C_(Aᵢ) ⇔ v(X) ∈ Ker(Aᵢ ⊕ (−Aᵢᵀ)); ∀i ∈ p̄, so that X ∈ C_𝐀 ⇔ v(X) ∈ ⋂_(i∈p̄) Ker(Aᵢ ⊕ (−Aᵢᵀ)), which is equivalent to its contrapositive logic proposition X ∉ C_𝐀 ⇔ v(X) ∈ ⋃_(i∈p̄) Im(Aᵢ ⊕ (−Aᵢᵀ)).
(iii) Let 𝐀 = 𝐀_C. Then

Aᵢ ∈ C_(Aⱼ); ∀j(≠i) ∈ p̄, ∀i ∈ p̄ ⇔ Aᵢ ∈ C_(Aⱼ); ∀j, i ∈ p̄ (since Aᵢ ∈ C_(Aᵢ); ∀i ∈ p̄)
⇔ v(Aᵢ) ∈ ⋂_(j∈p̄) Ker(Aⱼ ⊕ (−Aⱼᵀ)); ∀i ∈ p̄ ⇔ v(Aᵢ) ∈ ⋂_(j∈p̄∖{i}) Ker(Aⱼ ⊕ (−Aⱼᵀ)); ∀i ∈ p̄.  (B.1)

On the other hand,

v(Aᵢ) ∈ ⋂_(j(>i)∈p̄) Ker(Aⱼ ⊕ (−Aⱼᵀ)) ⇔ Aᵢ ∈ C_(Aⱼ); ∀j(>i) ∈ p̄, for any i(<p) ∈ p̄.  (B.2)

This implies directly that

Aᵢ ∈ C_(Aⱼ); ∀j(>i) ∈ p̄ ⇒ Aᵢ₊₁ ∈ C_(Aⱼ); ∀j(>i+1) ∈ p̄, for any i(<p) ∈ p̄,  (B.3)

which, together with v(Aᵢ₊₁) ∈ ⋂_(j(>i+1)∈p̄) Ker(Aⱼ ⊕ (−Aⱼᵀ)), implies that

Aᵢ₊₁ ∈ C_(Aⱼ); ∀j(>i+1) ∈ p̄ ⇔ v(Aᵢ₊₁) ∈ ⋂_(j(>i+1)∈p̄) Ker(Aⱼ ⊕ (−Aⱼᵀ)), for (i+1) ∈ p̄.  (B.4)

Thus, it follows by complete induction that 𝐀 = 𝐀_C ⇔ v(Aᵢ) ∈ ⋂_(j∈p̄∖{i}) Ker(Aⱼ ⊕ (−Aⱼᵀ)); ∀i ∈ p̄, and Property (iii) has been proved.
(iv) The definition of MC_(𝐀_C) follows from Property (iii) in order to guarantee that [X, Aᵢ] = 0; ∀Aᵢ ∈ 𝐀. The fact that such a set properly contains 𝐀_C ∪ {0} follows directly from 𝐑ⁿˣⁿ ∋ Λ = diag(λ, λ, …, λ) ∈ MC_(𝐀_C) ∖ (𝐀_C ∪ {0}) for any λ(≠0) ∈ 𝐑.
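The key fact used in (i) and (iv), that Λ = diag(λ, …, λ) commutes with every member of any set, forces rank defectiveness of the stacked matrix N(𝐀) even when the members do not commute among themselves. A small numerical sketch (not from the paper, row-stacking v(·) assumed):

```python
import numpy as np

def kron_sum(A):
    """A ⊕ (−Aᵀ) = A ⊗ I_n − I_n ⊗ Aᵀ."""
    n = A.shape[0]
    return np.kron(A, np.eye(n)) - np.kron(np.eye(n), A.T)

rng = np.random.default_rng(2)
n, p = 3, 4
mats = [rng.standard_normal((n, n)) for _ in range(p)]  # arbitrary, not commuting

lam = 2.5
v_lambda = (lam * np.eye(n)).flatten()   # v(Λ) with Λ = diag(λ, ..., λ)
for A in mats:
    # Λ commutes with every A_i, so v(Λ) lies in every kernel
    assert np.allclose(kron_sum(A) @ v_lambda, 0)

# hence the stacked N(A) is rank defective for ANY finite set of matrices
N = np.vstack([kron_sum(A) for A in mats])
assert np.linalg.matrix_rank(N) < n * n
```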

Proof of Theorem 3.4. If Aᵢ → J_(Aᵢ) = Pᵢ⁻¹AᵢPᵢ, with J_(Aᵢ) being the Jordan canonical form of Aᵢ, then Aᵢ ⊕ (−Aᵢᵀ) → J_(Aᵢ) ⊕ (−J_(Aᵢ)ᵀ) = Tᵢ⁻¹(Aᵢ ⊕ (−Aᵢᵀ))Tᵢ with Tᵢ = Pᵢ ⊗ Pᵢ⁻ᵀ ∈ 𝐑^(n²×n²) (see the proof of Theorem 2.3) being nonsingular; ∀i ∈ p̄. Thus, Aᵢ ⊕ (−Aᵢᵀ) = Tᵢ(J_(Aᵢ) ⊕ (−J_(Aᵢ)ᵀ))Tᵢ⁻¹, so that

N(𝐀) = [(A₁ ⊕ (−A₁ᵀ))ᵀ, (A₂ ⊕ (−A₂ᵀ))ᵀ, …, (A_p ⊕ (−A_pᵀ))ᵀ]ᵀ = 𝐓𝐉𝐓ₐ,  (B.5)

where

𝐓 = Block Diag[T₁, T₂, …, T_p] ∈ 𝐑^(pn²×pn²),
𝐓ₐ = [T₁⁻ᵀ, T₂⁻ᵀ, …, T_p⁻ᵀ]ᵀ ∈ 𝐑^(pn²×n²),
𝐉 = Block Diag[J_(A₁) ⊕ (−J_(A₁)ᵀ), J_(A₂) ⊕ (−J_(A₂)ᵀ), …, J_(A_p) ⊕ (−J_(A_p)ᵀ)] ∈ 𝐑^(pn²×pn²).  (B.6)

Then,

Ker N(𝐀) = ⋂_(i=1)^p Ker(Aᵢ ⊕ (−Aᵢᵀ)) = Ker(𝐉𝐓ₐ) = ⋂_(i=1)^p Ker((J_(Aᵢ) ⊕ (−J_(Aᵢ)ᵀ))(Pᵢ⁻¹ ⊗ Pᵢᵀ))  (B.7)

since 𝐓 is nonsingular. Thus, for any X ∈ 𝐑ⁿˣⁿ with v(X) ∈ 𝐑^(n²):

X ∈ C_𝐀 ⇔ v(X) ∈ Ker N(𝐀) ⇔ v(X) ∈ ⋂_(i=1)^p Ker(Aᵢ ⊕ (−Aᵢᵀ)) ⇔ v(X) ∈ ⋂_(i=1)^p Ker((J_(Aᵢ) ⊕ (−J_(Aᵢ)ᵀ))(Pᵢ⁻¹ ⊗ Pᵢᵀ)) ⇔ v(X) ∈ (Pᵢ ⊗ Pᵢ⁻ᵀ) Ker(J_(Aᵢ) ⊕ (−J_(Aᵢ)ᵀ)); ∀i ∈ p̄ ⇔ v(X) = (Pᵢ ⊗ Pᵢ⁻ᵀ) Yᵢ for some Yᵢ ∈ Ker(J_(Aᵢ) ⊕ (−J_(Aᵢ)ᵀ)); ∀i ∈ p̄.  (B.8)

Property (i) has been proved. The first inequality of Property (ii) follows directly from Property (i). The equalities and inequalities in the second line of Property (ii) follow from the first inequality by taking into account Theorem 2.3. Property (iii) follows from the proved equivalent definitions of C_𝐀 in Property (i) by taking into account that [Aⱼ, Aⱼ] = 0; ∀j ∈ p̄, so that

v(Aⱼ) ∈ ⋂_(i=1)^p Ker((J_(Aᵢ) ⊕ (−J_(Aᵢ)ᵀ))(Pᵢ⁻¹ ⊗ Pᵢᵀ)) ⇔ v(Aⱼ) ∈ ⋂_(i(≠j)=1)^p Ker((J_(Aᵢ) ⊕ (−J_(Aᵢ)ᵀ))(Pᵢ⁻¹ ⊗ Pᵢᵀ)); ∀j ∈ p̄.  (B.9)
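The similarity step Aᵢ ⊕ (−Aᵢᵀ) = Tᵢ(J_(Aᵢ) ⊕ (−J_(Aᵢ)ᵀ))Tᵢ⁻¹ with Tᵢ = Pᵢ ⊗ Pᵢ⁻ᵀ can be verified numerically. The sketch below (not from the paper) uses a symmetric test matrix so the eigendecomposition plays the role of the Jordan form with a real diagonal J:

```python
import numpy as np

def kron_sum(A):
    """A ⊕ (−Aᵀ) = A ⊗ I_n − I_n ⊗ Aᵀ."""
    n = A.shape[0]
    return np.kron(A, np.eye(n)) - np.kron(np.eye(n), A.T)

rng = np.random.default_rng(3)
n = 3
R = rng.standard_normal((n, n))
A = R + R.T                      # symmetric, hence diagonalizable with real spectrum
w, P = np.linalg.eigh(A)         # A = P J P⁻¹ with J = diag(w); stands in for J_A
J = np.diag(w)

T = np.kron(P, np.linalg.inv(P).T)   # T = P ⊗ P⁻ᵀ, nonsingular
lhs = kron_sum(A)
rhs = T @ kron_sum(J) @ np.linalg.inv(T)
assert np.allclose(lhs, rhs)         # A ⊕ (−Aᵀ) = T (J_A ⊕ (−J_Aᵀ)) T⁻¹
```

The identity follows from (P⁻¹ ⊗ Pᵀ)(A ⊗ I)(P ⊗ P⁻ᵀ) = J ⊗ I and (P⁻¹ ⊗ Pᵀ)(I ⊗ Aᵀ)(P ⊗ P⁻ᵀ) = I ⊗ Jᵀ.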

C. Proofs of the Results of Section 4

Proofs of Propositions 4.1–4.3. Proposition 4.1 follows by inspection of (4.1). Proposition 4.2 implies that Proposition 4.1 holds with the four involved commutators being zero. Then the left condition of Proposition 4.2 implies that B ∈ C_A from Proposition 4.1, so that Proposition 4.2 holds. Proposition 4.3 is equivalent to Proposition 4.2.

Proof of Theorem 4.4. (i) Equation (4.2) is a rearrangement, as an equivalent algebraic system, of Proposition 4.1 in the unknown v(B_im) for given A and B_re. The system is compatible if (4.2) holds, by the Kronecker–Capelli theorem. The proof of Property (ii) is similar to that of (i) with the roles of B_re and B_im interchanged.
(iii) Since

rank[(A_im ⊕ (−A_imᵀ))ᵀ, (A_re ⊕ (−A_reᵀ))ᵀ]ᵀ < n²  (C.1)

from Theorem 3.3(i), then 0 ≠ B = B_re ∈ C_A if and only if B_re ∈ C_(A_re) ∩ C_(A_im) (⊆ C_A). The same proof follows for 0 ≠ B = 𝐢B_im ∈ C_A since

rank[(A_re ⊕ (−A_reᵀ))ᵀ, (A_im ⊕ (−A_imᵀ))ᵀ]ᵀ = rank[(A_im ⊕ (−A_imᵀ))ᵀ, (A_re ⊕ (−A_reᵀ))ᵀ]ᵀ < n².  (C.2)

Proof of Theorem 4.5. (i) It follows in the same way as that of Theorem 4.4 by rewriting the algebraic system (4.3) in the form (4.5), which has nonzero solutions if (4.8) holds. But (4.8) always holds since B = A ∈ C_A ⊂ 𝐂ⁿˣⁿ is nonzero if A is nonzero, and if A = 0 ∈ 𝐂ⁿˣⁿ then C_A = 𝐂ⁿˣⁿ.
(ii) Direct calculations yield the equivalence of (4.5) with the separation into real and imaginary parts of the subsequent algebraic system:

(A ⊗ Iₙ − Iₙ ⊗ Aᵀ) v(B) = ((A_re + 𝐢A_im) ⊗ Iₙ − Iₙ ⊗ (A_reᵀ + 𝐢A_imᵀ))(v(B_re) + 𝐢v(B_im)) = 0,  (C.3)

which is always solvable with a nonzero solution (i.e., compatible) since rank(A ⊗ Iₙ − Iₙ ⊗ Aᵀ) < n² (otherwise, A(≠0) ∉ C_A would follow).
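The separation of (C.3) into real and imaginary parts can be checked numerically. A minimal sketch (not from the paper), with row-stacking v(·) and a commuting B built as a complex polynomial in A:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

In = np.eye(n)
M = np.kron(A, In) - np.kron(In, A.T)     # A ⊗ I_n − I_n ⊗ Aᵀ over C
# A itself is a nonzero commuting solution, so M is rank defective:
assert np.allclose(M @ A.flatten(), 0)
assert np.linalg.matrix_rank(M) < n * n

# Separating into real and imaginary parts gives an equivalent real system
# in (v(B_re), v(B_im)), as in (C.3): M v(B) = 0 with B = B_re + i B_im.
B = A @ A + (1 - 2j) * A                  # another matrix commuting with A
big = np.block([[M.real, -M.imag],
                [M.imag,  M.real]])
vb = np.concatenate([B.real.flatten(), B.imag.flatten()])
assert np.allclose(big @ vb, 0)
```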

Outline of Proof of Theorem 4.6
(i) It is a direct extension of Theorem 4.5, obtained by decomposing the involved complex matrices into their real and imaginary parts: from Theorem 3.3(i), both left block matrices in the coefficient matrix of (4.11) have rank less than n². As a result, such a coefficient matrix has rank less than 2n², so that nonzero solutions exist to the algebraically compatible system of linear equations (4.11). As a result, a nonzero n-square complex commuting matrix exists.
(ii) The proof is close to that of (i), but the rank condition for compatibility of the algebraic system is not needed, since the coefficient matrix of (4.11) is rank defective: for Aⱼ ∈ 𝐀_C, the vector (vᵀ(A_(j,re)), vᵀ(A_(j,im)))ᵀ is in the null space of the coefficient matrix; ∀j ∈ p̄.
(iii) Its proof is close to that of Theorem 4.5(ii) and is then omitted.

Proof of Theorem 4.8. For any B ∈ C_A ⊂ 𝐂ⁿˣⁿ:

[A, B] = 0 ⇒ (λIₙ − B)A = A(λIₙ − B); ∀λ ∈ 𝐂 ⇒ (λIₙ − B)⁻¹A = A(λIₙ − B)⁻¹; ∀λ ∈ 𝐂 ∖ σ(B)

⇒ [A, f(B)] = A (1/2π𝐢) ∮_C f(λ)(λIₙ − B)⁻¹ dλ − ((1/2π𝐢) ∮_C f(λ)(λIₙ − B)⁻¹ dλ) A
= (1/2π𝐢) ∮_C f(λ)(A(λIₙ − B)⁻¹ − (λIₙ − B)⁻¹A) dλ = 0 ⇒ [f(B), A] = 0,  (C.4)

where C is the boundary of D and consists of a set of closed rectifiable Jordan curves which contains no point of σ(B), so that λ ∈ C ⊂ 𝐂 ∖ σ(B) and the identity (λIₙ − B)⁻¹A = A(λIₙ − B)⁻¹ is true on C. Then, f(B) ∈ C_A ⊂ 𝐂ⁿˣⁿ has been proved. From Theorem 4.5, this is equivalent to v(f(B)) ∈ Ker(A ⊕ (−Aᵀ)).
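The conclusion [A, f(B)] = 0 can be illustrated numerically. A minimal sketch (not from the paper) taking f = exp, approximated here by a truncated power series to stay dependency-free, with v(·) as row-stacking:

```python
import math
import numpy as np

rng = np.random.default_rng(5)
n = 4
A = rng.standard_normal((n, n)) / n
B = A @ A + A + np.eye(n)                 # B ∈ C_A (a polynomial in A)
assert np.allclose(A @ B - B @ A, 0)

# f(B) ≈ exp(B) via a truncated power series (20 terms suffice at this scale)
fB = sum(np.linalg.matrix_power(B, k) / math.factorial(k) for k in range(20))
assert np.allclose(A @ fB - fB @ A, 0)    # [A, f(B)] = 0 as well

In = np.eye(n)
M = np.kron(A, In) - np.kron(In, A.T)     # A ⊕ (−Aᵀ)
assert np.allclose(M @ fB.flatten(), 0)   # v(f(B)) ∈ Ker(A ⊕ (−Aᵀ))
```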

Proof of Theorem 4.13. B ∈ C_A ⇔ F[A, B]G = 0 for any nonsingular F, G ∈ 𝐂ⁿˣⁿ. By choosing F⁻¹ = G = T, it follows that

T⁻¹[A, B]T = T⁻¹AT T⁻¹BT − T⁻¹BT T⁻¹AT = [Λ_A, Λ_B] = 0,  (C.5)

with Λ_A = T⁻¹AT and Λ_B = T⁻¹BT.
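The similarity identity (C.5), namely that conjugation by any nonsingular T maps the commutator of A and B to the commutator of their transforms, holds for arbitrary A and B. A minimal numerical sketch (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
T = rng.standard_normal((n, n)) + n * np.eye(n)   # almost surely nonsingular
Ti = np.linalg.inv(T)

comm = lambda X, Y: X @ Y - Y @ X
LA, LB = Ti @ A @ T, Ti @ B @ T
# T⁻¹[A, B]T = [T⁻¹AT, T⁻¹BT], so B ∈ C_A if and only if Λ_B ∈ C_{Λ_A}
assert np.allclose(Ti @ comm(A, B) @ T, comm(LA, LB))
```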

Acknowledgments

The author is grateful to the Spanish Ministry of Education for its partial support of this work through Grant DPI2006-00714. He is also grateful to the Basque Government for its support through Grants GIC07143-IT-269-07 and SAIOTEK S-PE08UN15. Finally, he is grateful to the reviewers for their interesting suggestions.