Abstract

This paper investigates the necessary and sufficient condition for a set of (real or complex) matrices to commute. It is proved that the commutator [A, B] = 0 for two matrices A and B if and only if a vector v(B), defined uniquely from the entries of B, lies in the null space of a well-structured matrix defined as the Kronecker sum A ⊕ (−A*), which is always rank defective. This result extends directly to any countable set of commuting matrices. Complementary results are derived concerning the commutators of certain matrices with functions of matrices f(A), which extend the well-known sufficiency-type commuting result [A, f(A)] = 0.

1. Introduction

The problem of commuting operators, and of commuting matrices in particular, is relevant to a significant number of problems in several branches of science, which are often mutually linked and are cited hereinafter.

(1) In several fields of interest in Applied Mathematics and Linear Algebra [1–22], including Fourier transform theory; graph theory, where, for instance, the commutativity of adjacency matrices is relevant [1, 17–19, 21–35]; and Lyapunov stability theory, covering the conditional and unconditional stability of switched dynamic systems involving discrete, delayed, and hybrid systems, together with their adaptive versions incorporating estimation schemes (see, e.g., [23–41]). Generally speaking, linear operators, and in particular matrices, which commute share some common eigenspaces. On the other hand, a known mathematical result is that two graphs with the same vertex set commute if their adjacency matrices commute [16]. Graphs are abstract representations of sets of objects (vertices) where some pairs of them are connected by links (arcs/edges). Graphs are often used to describe the behavior of multiconfiguration switched systems, where nodes represent each parameterized dynamics and arcs describe the allowed switching transitions [35]. They are also used to describe automata in Computer Science. Also, it has been proven that equalities of products involving two linear combinations of arbitrary-length products having orthogonal projectors (i.e., Hermitian idempotent matrices) as factors are equivalent to a commutation property [21].

(2) In some fields of Engineering, such as multimodel regulation and parallel multiestimation [36–41]. Generally speaking, switching among configurations can improve the transient behavior. Switching can be performed arbitrarily (i.e., at any time instant) while guaranteeing closed-loop stability if a subset of the set of configurations is stable and a common Lyapunov function exists for them. This property is directly related to certain pairwise commutators of the matrices describing the configuration dynamics being zero [7, 10, 11, 14, 15]. Thus, the problem of commuting matrices is of relevant interest in dynamic switched systems, namely, those which possess several parameterized configurations, one of which is active during each time interval. If the matrices of dynamics of all the parameterizations commute, then there exists a common Lyapunov function for all of them, and any arbitrary switching rule operating at any time instant maintains the global stability of the switched system provided that all the parameterizations are stable [7]. This property has also been described in [23–25, 28–30] and many other references therein. In particular, there are recent studies which prove that, in these circumstances, arbitrary switching is possible, while guaranteeing closed-loop stability, if the matrices of dynamics of the various configurations commute. This principle holds not only in the continuous-time delay-free case and in the discrete-time one, but also in configurations involving time-delay and hybrid systems. See, for instance, [10–15, 27–30, 34–41] and references therein. The set of involved problems is wide, including, for instance, switched multimodel techniques [27–30, 35, 36, 40, 41], switched multiestimation techniques with incorporated parallel multiestimation schemes involving adaptive control [34, 38–40], time-delay and hybrid systems with several configurations under mutual switching, and so forth [10, 11, 14, 15] and references therein. Multimodel tools and their adaptive versions incorporating parallel multiestimation are useful to improve the regulation and tracking transients, including those related to triggering circuits with transients regulated via multiestimation [36], master-slave tandems [39], and so forth. However, it often happens that there is no common Lyapunov function for all the parameterizations becoming active at certain time intervals. Then, a minimum residence (or dwelling) time at each active parameterization has to be respected before performing the next switching in order to guarantee the global stability of the whole switched system, so that the switching rule among distinct parameterizations is not arbitrary [7, 12, 13, 27–30, 34–41].

(3) In some problems of Signal Processing. See, for instance, [1, 17, 18] concerning the construction of DFT (Discrete Fourier Transform)-commuting matrices. In particular, a complete orthogonal set of eigenvectors can be obtained for several types of offset DFTs and DCTs under commutation properties.

(4) In certain areas of Physics and, in particular, in problems related to Quantum Mechanics. See, for instance, [22, 42, 43]. Basically, a complete set of commuting observables is a set of commuting operators whose eigenvalues completely specify the state of a system, since they share eigenvectors and can be simultaneously measured [22, 42, 43]. These Quantum Mechanics tools have also inspired other branches of Science. For instance, the above-mentioned reference [18] investigates a commuting matrix whose eigenvalue spectrum is very close to that of the Gauss–Hermite differential operator. It is proven that it furnishes two generators of the group of matrices which commute with the discrete Fourier transform. It is also pointed out that the associated research is inspired by Quantum Mechanics principles. There are also other relevant basic scientific applications of commuting operators. For instance, the symmetry operators in the point group of a molecule always commute with its Hamiltonian operator [20]. The problem of commuting matrices is also relevant to the analysis of normal modes in dynamic systems and to the discussion of commuting matrices depending on a parameter (see, e.g., [2, 3]).

It is well known that commuting matrices have at least a common eigenvector and also a common generalized eigenspace [4, 5]. A less restrictive problem of interest in the above context is that of almost commuting matrices, for which, roughly speaking, the norm of the commutator is sufficiently small [5, 6]. A very relevant related result is that the sum of matrices which commute is an infinitesimal generator of a C_0-semigroup. This leads to a well-known result in Systems Theory establishing that the matrix function e^{A_1 t_1 + A_2 t_2} = e^{A_1 t_1} e^{A_2 t_2} is a fundamental (or state transition) matrix for the cascade of the time-invariant differential systems ẋ_1(t) = A_1 x_1(t), operating over a time t_1, and ẋ_2(t) = A_2 x_2(t), operating over a time t_2, provided that A_1 and A_2 commute (see, e.g., [7–11]).
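The factorization of the state transition matrix can be checked numerically. The following short Python/NumPy sketch (an illustration only; the matrices A_1, A_2 and the times t_1, t_2 are arbitrary choices made here) builds a pair of commuting matrices by taking A_2 as a polynomial of A_1 and verifies that e^{A_1 t_1 + A_2 t_2} = e^{A_1 t_1} e^{A_2 t_2}:

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A1 = 0.3 * rng.standard_normal((3, 3))
A2 = 0.5 * A1 + A1 @ A1            # a polynomial in A1, hence [A1, A2] = 0
t1, t2 = 0.7, 1.3

lhs = expm(A1 * t1 + A2 * t2)
rhs = expm(A1 * t1) @ expm(A2 * t2)
print(np.allclose(lhs, rhs))       # True because A1 and A2 commute

If A2 is replaced by a generic (noncommuting) matrix, the equality fails, which is why the commutation assumption is needed.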

Most of the abundant existing research concerning sets of commuting operators in general, and matrices in particular, is based on the assumption that such sets exist, implying that each pair of mutual commutators is zero. There is a gap in giving complete conditions guaranteeing that such commutators within the target set are zero. This paper formulates the necessary and sufficient condition for any countable set of (real or complex) matrices to commute. The sequence of obtained results is as follows. Firstly, the commutation of two real matrices is investigated in Section 2. The necessary and sufficient condition for two matrices to commute is that a vector defined uniquely from the entries of either of the two given matrices belongs to the null space of the Kronecker sum of the other matrix and its minus transpose. The above result allows a simple algebraic characterization and computation of the set of matrices commuting with a given one. It also exhibits counterparts for the necessary and sufficient condition for two matrices not to commute. The results are then extended to the necessary and sufficient condition for the commutation of any set of real matrices in Section 3. In Section 4, the previous results are extended directly to the case of complex matrices in two very simple ways, namely, either by decomposing the associated algebraic system of complex matrices into two real ones or by manipulating it directly as a complex algebraic system of equations. Basically, the results for the real case extend directly by replacing transposes with conjugate transposes. Finally, further results concerning the commutators of matrices with matrix functions are also discussed in Section 4. The proofs of the main results in Sections 2, 3, and 4 are given in the corresponding Appendices A, B, and C. It may be pointed out that the main result has the following implicit duality: since a necessary and sufficient condition for a set of matrices to commute is formulated and proven, the necessary and sufficient condition for a set of matrices not to commute is just the failure of the above one to hold.

1.1. Notation

[𝐴,đĩ] is the commutator of the square matrices 𝐴 and đĩ.

𝐴⊗đĩâˆļ=(𝑎𝑖𝑗đĩ) is the Kronecker (or direct) product of 𝐴âˆļ=(𝑎𝑖𝑗) and đĩ.

𝐴⊕đĩâˆļ=𝐴⊗đŧ𝑛+đŧ𝑛⊗đĩ is the Kronecker sum of the square matrices 𝐴âˆļ=(𝑎𝑖𝑗) and both of order 𝑛, where đŧ𝑛 is the nth identity matrix.

A^T is the transpose of the matrix A and A* is the conjugate transpose of the complex matrix A. For any matrix A, Im A and Ker A are its associated range (or image) subspace and null space, respectively. Also, rank(A) is the rank of A, which is the dimension of Im(A), and det(A) is the determinant of the square matrix A.

đ‘Ŗ(𝐴)=(𝑎𝑇1,𝑎𝑇2,â€Ļ,𝑎𝑇𝑛)𝑇∈𝐂𝑛2 if 𝑎𝑇𝑖âˆļ=(𝑎𝑖1,𝑎𝑖2,â€Ļ,𝑎𝑖𝑛) is the ith row of the square matrix 𝐴.

σ(A) is the spectrum of A; n̄ := {1, 2, …, n}. If λ_i ∈ σ(A), then there exist positive integers μ_i and ν_i ≤ μ_i which are, respectively, its algebraic and geometric multiplicities, that is, the number of times it is repeated in the characteristic polynomial of A and the number of its associated Jordan blocks, respectively. The integer μ ≤ n is the number of distinct eigenvalues, and the integer m_i, subject to 1 ≤ m_i ≤ μ_i, is the index of λ_i ∈ σ(A), ∀i ∈ μ̄, that is, its multiplicity in the minimal polynomial of A.

𝐴âˆŧđĩ denotes a similarity transformation from 𝐴 to đĩ=𝑇−1𝐴𝑇 for given 𝐴,đĩâˆˆđ‘đ‘›Ã—đ‘› for some nonsingular đ‘‡âˆˆđ‘đ‘›Ã—đ‘›. 𝐴≈đĩ=𝐸𝐴𝐹 means that there is an equivalence transformation for given 𝐴,đĩâˆˆđ‘đ‘›Ã—đ‘› for some nonsingular 𝐸,đšâˆˆđ‘đ‘›Ã—đ‘›.

A linear transformation from R^n to R^n, represented by the matrix T ∈ R^{n×n}, is denoted identically to such a matrix in order to simplify the notation. If V (≠ Dom T ≡ R^n) is a subspace of R^n, then Im T(V) := {Tz : z ∈ V} and Ker T(V) := {z ∈ V : Tz = 0 ∈ R^n}. If V ≡ R^n, the notation is simplified to Im T := {Tz : z ∈ R^n} and Ker T := {z ∈ R^n : Tz = 0 ∈ R^n}.

The symbols “⋀” and “∨” stand for logic conjunction and disjunction, respectively. The abbreviation “iff” stands for “if and only if.” The notation card U stands for the cardinal of the set U. C_A (resp., C̄_A) is the set of matrices which commute (resp., do not commute) with a matrix A. C_𝐀 (resp., C̄_𝐀) is the set of matrices which commute (resp., do not commute) with all square matrices A_i belonging to a given set 𝐀.

2. Results Concerning the Sets of Commuting and Noncommuting Matrices with a Given One

Consider the sets C_A := {X ∈ R^{n×n} : [A, X] = 0} ≠ ∅ of matrices which commute with A and C̄_A := {X ∈ R^{n×n} : [A, X] ≠ 0} of matrices which do not commute with A; ∀A ∈ R^{n×n}. Note that 0 ∈ R^{n×n} ∩ C_A; that is, the zero n-matrix commutes with any n-matrix so that, equivalently, 0 ∉ R^{n×n} ∩ C̄_A and then C_A ∩ C̄_A = ∅; ∀A ∈ R^{n×n}. The two basic results which follow are concerned with the commutation and noncommutation of two real matrices A and X. The tool used relies on the calculation of the null space and the range space of the Kronecker sum of the matrix A with its minus transpose. A vector built with all the entries of the other matrix X has to belong to the former space for A and X to commute, and to the latter for A and X not to commute.

Proposition 2.1. (i) C_A = {X ∈ R^{n×n} : v(X) ∈ Ker(A ⊕ (−A^T))}.
(ii) C̄_A = R^{n×n} \ C_A = {X ∈ R^{n×n} : v(X) ∉ Ker(A ⊕ (−A^T))} ≡ {X ∈ R^{n×n} : v(X) ∈ Im(A ⊕ (−A^T))}.
(iii) B ∈ C_A ⟺ A ∈ C_B := {X ∈ R^{n×n} : v(X) ∈ Ker(B ⊕ (−B^T))}.

Note that, according to Proposition 2.1, the set of matrices C_A which commute with the square matrix A and its complement C̄_A (i.e., the set of matrices which do not commute with A) can be redefined in an equivalent way by using their given expanded vector forms.
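Proposition 2.1 can be tested numerically. The following Python/NumPy sketch (illustrative only; the helper names and the example matrices are assumptions made here, not part of the formal development) builds A ⊕ (−A^T) = A ⊗ I_n − I_n ⊗ A^T for the row-stacking vectorization v(·) of the Notation section, and checks the null-space membership of v(X) for a matrix X which commutes with A (a polynomial of A) and for a generic matrix Y which does not:

import numpy as np

def kron_sum_minus_transpose(A):
    # A ⊕ (−A^T) = A ⊗ I_n − I_n ⊗ A^T, an n² × n² matrix
    n = A.shape[0]
    I = np.eye(n)
    return np.kron(A, I) - np.kron(I, A.T)

def v(X):
    # row-stacking vectorization v(X) (NumPy's C-order flattening stacks rows)
    return X.reshape(-1)

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
X = 3.0 * np.eye(n) + 2.0 * A + A @ A      # a polynomial in A, hence X ∈ C_A
Y = rng.standard_normal((n, n))            # a generic matrix, almost surely Y ∉ C_A
M = kron_sum_minus_transpose(A)

print(np.allclose(M @ v(X), 0.0))          # True:  v(X) ∈ Ker(A ⊕ (−A^T))
print(np.allclose(A @ X - X @ A, 0.0))     # True:  [A, X] = 0
print(np.allclose(M @ v(Y), 0.0))          # False: v(Y) ∉ Ker(A ⊕ (−A^T))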

Proposition 2.2. One has

rank(A ⊕ (−A^T)) < n² ⟺ Ker(A ⊕ (−A^T)) ≠ {0} ⟺ 0 ∈ σ(A ⊕ (−A^T)) ⟺ ∃X(≠0) ∈ C_A; ∀A ∈ R^{n×n}. (2.1)

Proof. One has [A, A] = 0; ∀A ∈ R^{n×n} ⟹ ∃ R^{n²} ∋ 0 ≠ v(A) ∈ Ker(A ⊕ (−A^T)); ∀A ∈ R^{n×n}. As a result,

Ker(A ⊕ (−A^T)) ≠ {0} ∈ R^{n²}; ∀A ∈ R^{n×n} ⟺ rank(A ⊕ (−A^T)) < n²; ∀A ∈ R^{n×n}, (2.2)

so that 0 ∈ σ(A ⊕ (−A^T)).
Also, ∃X(≠0) ∈ R^{n×n} : [A, X] = 0 ⟺ X ∈ C_A since Ker(A ⊕ (−A^T)) ≠ {0} ∈ R^{n²}.
Then, Proposition 2.2 has been proved.

The subsequent mathematical result is stronger than Proposition 2.2 and is based on the characterization of the spectrum and the eigenspaces of A ⊕ (−A^T).

Theorem 2.3. The following properties hold.
(i) The spectrum of A ⊕ (−A^T) is σ(A ⊕ (−A^T)) = {λ_{ij} = λ_i − λ_j : λ_i, λ_j ∈ σ(A); ∀i, j ∈ n̄}, and its Jordan canonical form possesses ν Jordan blocks, subject to the constraints n² ≥ ν = dim S = (Σ_{i=1}^{μ} ν_i)² ≥ ν(0); furthermore, 0 ∈ σ(A ⊕ (−A^T)) with algebraic multiplicity μ(0) and geometric multiplicity ν(0) subject to the constraints

n² = (Σ_{i=1}^{μ} μ_i)² ≥ μ(0) ≥ Σ_{i=1}^{μ} μ_i² ≥ ν(0) = Σ_{i=1}^{μ} ν_i² ≥ n, (2.3)

where

(a) S := span{z_i ⊗ x_j, ∀i, j ∈ n̄}; μ_i = μ(λ_i) and ν_i = ν(λ_i) are, respectively, the algebraic and geometric multiplicities of λ_i ∈ σ(A), ∀i ∈ n̄; μ ≤ n is the number of distinct λ_i ∈ σ(A) (i ∈ μ̄); μ_{ij} = μ(λ_{ij}) and ν_{ij} = ν(λ_{ij}) are, respectively, the algebraic and geometric multiplicities of λ_{ij} = (λ_i − λ_j) ∈ σ(A ⊕ (−A^T)), ∀i, j ∈ n̄; μ ≤ n;

(b) x_j and z_i are, respectively, right eigenvectors of A and A^T with respective associated eigenvalues λ_j and λ_i; ∀i, j ∈ n̄.

(ii) One has

dim Im(A ⊕ (−A^T)) = rank(A ⊕ (−A^T)) = n² − ν(0) ⟺ dim Ker(A ⊕ (−A^T)) = ν(0); ∀A ∈ R^{n×n}. (2.4)
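The spectral characterization in Theorem 2.3(i) and the rank identity in Theorem 2.3(ii) can be illustrated numerically. The sketch below (assumptions: a generic real matrix with n distinct eigenvalues, so that ν_i = 1 for every i and hence ν(0) = Σν_i² = n) checks that every pairwise difference λ_i − λ_j belongs to σ(A ⊕ (−A^T)) and that the nullity of A ⊕ (−A^T) equals n:

import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))            # almost surely n distinct eigenvalues
I = np.eye(n)
M = np.kron(A, I) - np.kron(I, A.T)        # A ⊕ (−A^T)

eig_A = np.linalg.eigvals(A)
eig_M = np.linalg.eigvals(M)
diffs = [li - lj for li in eig_A for lj in eig_A]

# every pairwise difference λ_i − λ_j appears (numerically) in σ(A ⊕ (−A^T))
print(all(np.min(np.abs(eig_M - d)) < 1e-8 for d in diffs))

# dim Ker(A ⊕ (−A^T)) = ν(0) = Σ ν_i² = n when the eigenvalues are distinct
print(n * n - np.linalg.matrix_rank(M) == n)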

Expressions which calculate the sets of matrices which commute and which do not commute with a given one are obtained in the subsequent result.

Theorem 2.4. The following properties hold.
(i) One has X ∈ C_A iff (A ⊕ (−A^T)) v(X) = 0; equivalently, X ∈ C_A iff

v(X) = F(−v^T(X̄_2) Ā_{12}^T Ā_{11}^{−T}, v^T(X̄_2))^T (2.5)

for any v(X̄_2) ∈ Ker(Ā_{22} − Ā_{21} Ā_{11}^{−1} Ā_{12}), where E, F ∈ R^{n²×n²} are permutation matrices and X̄ ∈ R^{n×n} and v(X̄) ∈ R^{n²} are defined as follows.

(a) One has

v(X̄) := F^{−1} v(X), A ⊕ (−A^T) ≈ Ā := E(A ⊕ (−A^T))F, ∀X ∈ C_A, (2.6)

where v(X̄) = (v^T(X̄_1), v^T(X̄_2))^T ∈ R^{n²} with v(X̄_1) ∈ R^{ν(0)} and v(X̄_2) ∈ R^{n²−ν(0)}.

(b) The matrix Ā_{11} ∈ R^{ν(0)×ν(0)} is nonsingular in the block partition Ā := Block matrix(Ā_{ij}; i, j ∈ 2̄) with Ā_{12} ∈ R^{ν(0)×(n²−ν(0))}, Ā_{21} ∈ R^{(n²−ν(0))×ν(0)}, and Ā_{22} ∈ R^{(n²−ν(0))×(n²−ν(0))}.

(ii) X ∈ C̄_A, for any given A(≠0) ∈ R^{n×n}, if and only if

(A ⊕ (−A^T)) v(X) = v(M) (2.7)

for some M(≠0) ∈ R^{n×n} such that

rank(A ⊕ (−A^T)) = rank(A ⊕ (−A^T), v(M)) = n² − ν(0). (2.8)

Also,

C̄_A := {X ∈ R^{n×n} : (A ⊕ (−A^T)) v(X) = v(M) for any M(≠0) ∈ R^{n×n} satisfying rank(A ⊕ (−A^T)) = rank(A ⊕ (−A^T), v(M)) = n² − ν(0)}. (2.9)

Also, with the same definitions of E, F, and X̄ as in (i), X ∈ C̄_A if and only if

v(X) = F(v^T(M̄_1) Ā_{11}^{−T} − v^T(X̄_2) Ā_{12}^T Ā_{11}^{−T}, v^T(X̄_2))^T, (2.10)

where v(X̄_2) is any solution of the compatible algebraic system

(Ā_{22} − Ā_{21} Ā_{11}^{−1} Ā_{12}) v(X̄_2) = v(M̄_2) − Ā_{21} Ā_{11}^{−1} v(M̄_1) (2.11)

for some M(≠0) ∈ R^{n×n}, where X̄, M̄ ∈ R^{n×n} are defined according to v(X) = F v(X̄) and v(M̄) = E v(M) = (v^T(M̄_1), v^T(M̄_2))^T with M̄ ≈ M(≠0) ∈ R^{n×n}.

3. Results Concerning Sets of Pairwise Commuting Matrices

Consider the following sets.

(1) A set of p ≥ 2 distinct nonzero pairwise commuting matrices 𝐀_C := {A_i ∈ R^{n×n} : [A_i, A_j] = 0; ∀i, j ∈ p̄}.
(2) The set of matrices MC_{𝐀_C} := {X ∈ R^{n×n} : [X, A_i] = 0; ∀A_i ∈ 𝐀_C} which commute with the set 𝐀_C of pairwise commuting matrices.
(3) A set of matrices C_𝐀 := {X ∈ R^{n×n} : [X, A_i] = 0; ∀A_i ∈ 𝐀} which commute with a given set of p nonzero matrices 𝐀 := {A_i ∈ R^{n×n}; ∀i ∈ p̄} which are not necessarily pairwise commuting.

The complementary sets of MC_{𝐀_C} and C_𝐀 are M̄C_{𝐀_C} and C̄_𝐀, respectively, so that R^{n×n} ∋ B ∈ M̄C_{𝐀_C} if B ∉ MC_{𝐀_C} and R^{n×n} ∋ B ∈ C̄_𝐀 if B ∉ C_𝐀. Note that C_{𝐀_C} = MC_{𝐀_C} for a set of pairwise commuting matrices 𝐀_C, so that the notation MC_{𝐀_C} refers directly to a set of matrices which commute with all those in a set of pairwise commuting matrices. The following two basic results are concerned with the commutation and noncommutation properties relative to these sets of matrices.

Proposition 3.1. The following properties hold.

(i) One has

A_i ∈ 𝐀_C; ∀i ∈ p̄ ⟺ v(A_i) ∈ ⋂_{j(≠i)∈p̄} Ker(A_j ⊕ (−A_j^T)); ∀i ∈ p̄ ⟺ v(A_i) ∈ ⋂_{i+1≤j≤p} Ker(A_j ⊕ (−A_j^T)); ∀i ∈ p̄. (3.1)

(ii) Define

N_i(𝐀_C) := [A_1^T ⊕ (−A_1), A_2^T ⊕ (−A_2), …, A_{i−1}^T ⊕ (−A_{i−1}), A_{i+1}^T ⊕ (−A_{i+1}), …, A_p^T ⊕ (−A_p)]^T ∈ R^{(p−1)n²×n²}. (3.2)

Then A_i ∈ 𝐀_C; ∀i ∈ p̄ if and only if v(A_i) ∈ Ker N_i(𝐀_C); ∀i ∈ p̄.

(iii) One has

MC_{𝐀_C} := {X ∈ R^{n×n} : v(X) ∈ ⋂_{i∈p̄} Ker(A_i ⊕ (−A_i^T)); A_i ∈ 𝐀_C} = {X ∈ R^{n×n} : v(X) ∈ Ker N(𝐀_C)} ⊃ C_{𝐀_C} ⊃ 𝐀_C ∪ {0} ∈ R^{n×n}, (3.3)

where N(𝐀_C) := [A_1^T ⊕ (−A_1), A_2^T ⊕ (−A_2), …, A_p^T ⊕ (−A_p)]^T ∈ R^{pn²×n²}, A_i ∈ 𝐀_C.

(iv) One has

M̄C_{𝐀_C} := {X ∈ R^{n×n} : v(X) ∈ ⋃_{i∈p̄} Im(A_i ⊕ (−A_i^T)); A_i ∈ 𝐀_C} = {X ∈ R^{n×n} : v(X) ∈ Im N(𝐀_C)}. (3.4)

(v) One has

C_𝐀 := {X ∈ R^{n×n} : v(X) ∈ ⋂_{i∈p̄} Ker(A_i ⊕ (−A_i^T)); A_i ∈ 𝐀} = {X ∈ R^{n×n} : v(X) ∈ Ker N(𝐀)}, (3.5)

where N(𝐀) := [A_1^T ⊕ (−A_1), A_2^T ⊕ (−A_2), …, A_p^T ⊕ (−A_p)]^T ∈ R^{pn²×n²}, A_i ∈ 𝐀.

(vi) One has

C̄_𝐀 := {X ∈ R^{n×n} : v(X) ∈ ⋃_{i∈p̄} Im(A_i ⊕ (−A_i^T)); A_i ∈ 𝐀} = {X ∈ R^{n×n} : v(X) ∈ Im N(𝐀)}. (3.6)

Concerning Proposition 3.1(v)-(vi), note that if X ∈ C̄_𝐀, then X ≠ 0 since R^{n×n} ∋ 0 ∈ C_𝐀. The following result is related to the rank defectiveness of the matrix N(𝐀_C) and of any of its submatrices N_i(𝐀_C), since 𝐀_C is a set of pairwise commuting matrices.

Proposition 3.2. The following properties hold:

n² > rank N(𝐀_C) ≥ rank N_i(𝐀_C) ≥ rank(A_j ⊕ (−A_j^T)); ∀A_j ∈ 𝐀_C; ∀i, j ∈ p̄ (3.7)

and, equivalently,

det(N^T(𝐀_C) N(𝐀_C)) = det(N_i^T(𝐀_C) N_i(𝐀_C)) = det(A_j ⊕ (−A_j^T)) = 0; ∀A_j ∈ 𝐀_C; ∀i, j ∈ p̄. (3.8)

Results related to sufficient conditions for a set of matrices to pairwise commute are abundant in the literature. For instance, diagonal matrices always pairwise commute. Any set of matrices obtained by multiplying a given arbitrary matrix by real scalars is a set of pairwise commuting matrices. Any set of matrices obtained by linear combinations of one of the above sets also consists of pairwise commuting matrices. Any matrix commutes with any of its matrix functions, and so forth. In the following, a simple, although restrictive, sufficient condition for the rank defectiveness of N(𝐀) for some set 𝐀 of p square real n-matrices is discussed. Such a condition may be useful as a practical test to elucidate the existence of a nonzero n-square matrix which commutes with all the matrices in this set. Another useful test obtained from the following result relies on a necessary condition to elucidate whether the given set consists of pairwise commuting matrices.

Theorem 3.3. Consider any arbitrary set of nonzero n-square real matrices 𝐀 := {A_1, A_2, …, A_p} for any integer p ≥ 1 and define the matrices

N_i(𝐀) := [A_1^T ⊕ (−A_1), A_2^T ⊕ (−A_2), …, A_{i−1}^T ⊕ (−A_{i−1}), A_{i+1}^T ⊕ (−A_{i+1}), …, A_p^T ⊕ (−A_p)]^T,
N(𝐀) := [A_1^T ⊕ (−A_1), A_2^T ⊕ (−A_2), …, A_p^T ⊕ (−A_p)]^T. (3.9)

Then, the following properties hold:

(i) rank(A_i ⊕ (−A_i^T)) ≤ rank N_i(𝐀) ≤ rank N(𝐀) < n²; ∀i ∈ p̄.

(ii) ⋂_{i∈p̄} Ker(A_i ⊕ (−A_i^T)) ≠ {0}, so that ∃X(≠0) ∈ C_𝐀, and

X ∈ C_𝐀 ⟺ v(X) ∈ ⋂_{i∈p̄} Ker(A_i ⊕ (−A_i^T)), X ∈ C̄_𝐀 ⟺ v(X) ∈ ⋃_{i∈p̄} Im(A_i ⊕ (−A_i^T)). (3.10)

(iii) If 𝐀 = 𝐀_C is a set of pairwise commuting matrices, then

v(A_i) ∈ ⋂_{j∈p̄\{i}} Ker(A_j ⊕ (−A_j^T)); ∀i ∈ p̄ ⟺ v(A_i) ∈ ⋂_{j∈p̄} Ker(A_j ⊕ (−A_j^T)); ∀i ∈ p̄ ⟺ v(A_i) ∈ ⋂_{i+1≤j≤p} Ker(A_j ⊕ (−A_j^T)); ∀i ∈ p̄. (3.11)

(iv) One has

MC_{𝐀_C} := {X ∈ R^{n×n} : v(X) ∈ ⋂_{i∈p̄} Ker(A_i ⊕ (−A_i^T)), ∀A_i ∈ 𝐀_C} ⊃ 𝐀_C ∪ {0} ∈ R^{n×n} (3.12)

with the above set inclusion being proper.

Note that Theorem 3.3(ii) extends Proposition 3.1(v), since it is proved that C_𝐀 \ {0} ≠ ∅ because every nonzero R^{n×n} ∋ Λ = diag(λ λ ⋯ λ) ∈ C_𝐀 for any 𝐑 ∋ λ ≠ 0 and any set of matrices 𝐀. Note that Theorem 3.3(iii) establishes that v(A_i) ∈ ⋂_{j∈p̄\{i}} Ker(A_j ⊕ (−A_j^T)); ∀i ∈ p̄ is a necessary and sufficient condition for 𝐀 to be a set of commuting matrices, which is simpler to test (by taking advantage of the symmetry property of the commutators) than the equivalent condition v(A_i) ∈ ⋂_{j∈p̄} Ker(A_j ⊕ (−A_j^T)); ∀i ∈ p̄. Further results about pairwise commuting matrices, or about the existence of nonzero matrices commuting with a given set, are obtained in the subsequent result based on the Kronecker sums of the relevant Jordan canonical forms.
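The common-commutant characterization of Theorem 3.3(ii) is easy to exercise numerically: the intersection of the kernels of the blocks A_i ⊕ (−A_i^T) is the kernel of their vertical stack, and a null-space basis of that stack can be reshaped into matrices commuting with every A_i. In the sketch below (illustrative assumptions: the set consists of polynomials of a single random matrix, so it is pairwise commuting; the stacked matrix built here has the same kernel as the matrix N(𝐀) of the text):

import numpy as np
from scipy.linalg import null_space

def stacked_kron_sums(mats):
    # vertical stack of the blocks A_i ⊕ (−A_i^T); its kernel is ⋂_i Ker(A_i ⊕ (−A_i^T))
    n = mats[0].shape[0]
    I = np.eye(n)
    return np.vstack([np.kron(A, I) - np.kron(I, A.T) for A in mats])

def common_commutant_basis(mats):
    n = mats[0].shape[0]
    K = null_space(stacked_kron_sums(mats))
    return [k.reshape(n, n) for k in K.T]

rng = np.random.default_rng(4)
A1 = rng.standard_normal((3, 3))
A2 = A1 @ A1 - 2.0 * A1                     # commutes with A1
A3 = np.eye(3) + 0.5 * A1                   # commutes with A1 and A2
mats = [A1, A2, A3]

basis = common_commutant_basis(mats)
print(len(basis) >= 1)                      # Theorem 3.3(ii): never empty (Λ = λI always qualifies)
print(all(np.allclose(A @ X - X @ A, 0) for A in mats for X in basis))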

Theorem 3.4. The following properties hold for any given set of n-square real matrices 𝐀 = {A_1, A_2, …, A_p}.
(i) The set C_𝐀 of matrices X ∈ R^{n×n} which commute with all the matrices in 𝐀 is defined by

C_𝐀 := {X ∈ R^{n×n} : v(X) ∈ ⋂_{i=1}^{p} Ker[(J_{A_i} ⊕ (−J_{A_i}^T))(P_i^{−1} ⊗ P_i^{−T})]}
= {X ∈ R^{n×n} : v(X) ∈ ⋂_{i=1}^{p} Im(P_i ⊗ P_i^{−1})(Y_i) ∧ Y_i ∈ Ker(J_{A_i} ⊕ (−J_{A_i}^T)); ∀i ∈ p̄}
= {X ∈ R^{n×n} : v(X) ∈ ⋂_{i=1}^{p} Im(P_i ⊗ P_i^{−1})(Y), Y ∈ ⋂_{i=1}^{p} Ker(J_{A_i} ⊕ (−J_{A_i}^T))}, (3.13)

where P_i ∈ R^{n×n} is a nonsingular transformation matrix such that A_i ∼ J_{A_i} = P_i^{−1} A_i P_i, J_{A_i} being the Jordan canonical form of A_i.
(ii) One has

dim span{v(X) : X ∈ C_𝐀} ≤ min_{i∈p̄} dim Ker(J_{A_i} ⊕ (−J_{A_i}^T)) = min_{i∈p̄} ν_i(0) = min_{i∈p̄} (Σ_{j=1}^{ρ_i} ν_{ij}²) ≤ min_{i∈p̄} (Σ_{j=1}^{ρ_i} μ_{ij}²) ≤ min_{i∈p̄} μ_i(0), (3.14)

where ν_i(0) and ν_{ij} are, respectively, the geometric multiplicities of 0 ∈ σ(A_i ⊕ (−A_i^T)) and of λ_{ij} ∈ σ(A_i), and μ_i(0) and μ_{ij} are, respectively, the algebraic multiplicities of 0 ∈ σ(A_i ⊕ (−A_i^T)) and of λ_{ij} ∈ σ(A_i); ∀j ∈ ρ̄_i (ρ_i being the number of distinct eigenvalues of A_i), ∀i ∈ p̄.
(iii) The set 𝐀 consists of pairwise commuting matrices, namely, C_𝐀 = MC_𝐀, if and only if v(A_j) ∈ ⋂_{i(≠j)=1}^{p} Ker[(J_{A_i} ⊕ (−J_{A_i}^T))(P_i^{−1} ⊗ P_i^{−T})]; ∀j ∈ p̄. Equivalent conditions follow from the second and third equivalent definitions of C_𝐀 in Property (i).

Theorems 3.3 and 3.4 are concerned with MC_𝐀 ≠ {0} ∈ R^{n×n} for an arbitrary set of real square matrices 𝐀 and for a pairwise commuting set, respectively.

4. Further Results and Extensions

The extensions of the results to the commutation of complex matrices are direct in several ways. It is first possible to decompose the commutator into its real and imaginary parts and then apply the results of Sections 2 and 3 for real matrices to both parts as follows. Let A = A_re + iA_im and B = B_re + iB_im be complex matrices in C^{n×n}, with A_re and B_re being their respective real parts, A_im and B_im, all in R^{n×n}, their respective imaginary parts, and i = √−1 the imaginary complex unit. Direct computations with the commutator of A and B yield

[A, B] = [A_re, B_re] − [A_im, B_im] + i([A_im, B_re] + [A_re, B_im]). (4.1)

The following three results are direct and allow the problem of commutation of a pair of complex matrices to be reduced to the discussion of four real commutators.
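The decomposition (4.1) is an identity which can be confirmed directly; the short sketch below (with arbitrary complex test matrices chosen here only for illustration) evaluates both sides numerically:

import numpy as np

def comm(X, Y):
    return X @ Y - Y @ X

rng = np.random.default_rng(5)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Are, Aim, Bre, Bim = A.real, A.imag, B.real, B.imag

lhs = comm(A, B)
rhs = comm(Are, Bre) - comm(Aim, Bim) + 1j * (comm(Aim, Bre) + comm(Are, Bim))
print(np.allclose(lhs, rhs))    # the identity (4.1) holds for any A, B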

Proposition 4.1. One has B ∈ C_A ⟺ (([A_re, B_re] = [A_im, B_im]) ⋀ ([A_im, B_re] = [B_im, A_re])).

Proposition 4.2. One has (B_re ∈ (C_{A_re} ∩ C_{A_im}) ⋀ B_im ∈ (C_{A_im} ∩ C_{A_re})) ⟹ B ∈ C_A.

Proposition 4.3. One has (A_re ∈ (C_{B_re} ∩ C_{B_im}) ⋀ A_im ∈ (C_{B_im} ∩ C_{B_re})) ⟹ B ∈ C_A.

Proposition 4.1 leads to the subsequent result.

Theorem 4.4. The following properties hold.
(i) Assume that the matrices A and B_re are given. Then, B ∈ C_A if and only if B_im satisfies the following linear algebraic equation:

⎡A_re ⊕ (−A_re^T)⎤            ⎡ A_im ⊕ (−A_im^T)  ⎤
⎣A_im ⊕ (−A_im^T)⎦ v(B_re) =  ⎣−(A_re ⊕ (−A_re^T))⎦ v(B_im), (4.2)

for which a necessary condition is

rank ⎡ A_im ⊕ (−A_im^T)  ⎤ = rank ⎡ A_im ⊕ (−A_im^T)    (A_re ⊕ (−A_re^T)) v(B_re)⎤. (4.3)
     ⎣−(A_re ⊕ (−A_re^T))⎦        ⎣−(A_re ⊕ (−A_re^T))   (A_im ⊕ (−A_im^T)) v(B_re)⎦

(ii) Assume that the matrices A and B_im are given. Then, B ∈ C_A if and only if B_re satisfies (4.2), for which a necessary condition is

rank ⎡A_re ⊕ (−A_re^T)⎤ = rank ⎡A_re ⊕ (−A_re^T)     (A_im ⊕ (−A_im^T)) v(B_im)⎤. (4.4)
     ⎣A_im ⊕ (−A_im^T)⎦        ⎣A_im ⊕ (−A_im^T)   −(A_re ⊕ (−A_re^T)) v(B_im)⎦
(iii) Also, ∃B ≠ 0 such that B ∈ C_A with B_re = 0, and ∃B ≠ 0 such that B ∈ C_A with B_im = 0.

A more general result than Theorem 4.4 is the following.

Theorem 4.5. The following properties hold.
(i) đĩ∈đļđ´âˆŠđ‚đ‘›Ã—đ‘› if and only if đ‘Ŗ(đĩ) is a solution to the following linear algebraic system: îƒŦ𝐴re⊕−𝐴𝑇re−𝐴im⊕𝐴𝑇im𝐴im⊕−𝐴𝑇im−𝐴re⊕𝐴𝑇ređ‘ŖđĩîƒŦređ‘Ŗđĩim=0.(4.5) Nonzero solutions đĩ∈đļ𝐴, satisfying îƒŦđ‘Ŗđĩređ‘ŖđĩimîƒŦ𝐴∈Kerre⊕−𝐴𝑇re−𝐴im⊕𝐴𝑇im𝐴im⊕−𝐴𝑇im−𝐴re⊕𝐴𝑇re,(4.6) always exist since îƒŦ𝐴Kerre⊕−𝐴𝑇re−𝐴im⊕𝐴𝑇im𝐴im⊕−𝐴𝑇im−𝐴re⊕𝐴𝑇re≠{0}∈𝐑2𝑛2,(4.7) and equivalently, since îƒŦ𝐴rankre⊕−𝐴𝑇re−𝐴im⊕𝐴𝑇im𝐴im⊕−𝐴𝑇im−𝐴re⊕𝐴𝑇re<2𝑛2.(4.8)
(ii) Property (i) is equivalent to

B ∈ C_A ⟺ (A ⊕ (−A*)) v(B) = 0, (4.9)

which always has nonzero solutions since rank(A ⊕ (−A*)) < n².

The various results of Section 3 for a set of distinct complex matrices to pairwise commute, and for characterizing the set of complex matrices which commute with those in a given set, may be discussed through more general algebraic systems like the above one, built with the four block matrices

⎡A_{j re} ⊕ (−A_{j re}^T)   −(A_{j im} ⊕ (−A_{j im}^T))⎤
⎣A_{j im} ⊕ (−A_{j im}^T)     A_{j re} ⊕ (−A_{j re}^T) ⎦ (4.10)

for each j ∈ p̄ in the whole algebraic system. Theorem 4.5 extends directly to sets of complex matrices commuting with a given one and to complex matrices commuting with a set of commuting complex matrices as follows.

Theorem 4.6. The following properties hold.
(i) Consider the sets of nonzero distinct complex matrices 𝐀 := {A_i ∈ C^{n×n} : i ∈ p̄} and C_𝐀 := {X ∈ C^{n×n} : [X, A_i] = 0; A_i ∈ 𝐀, ∀i ∈ p̄} for p ≥ 2. Thus, C_𝐀 ∋ X = X_re + iX_im if and only if

⎡A_{1 re} ⊕ (−A_{1 re}^T)   −(A_{1 im} ⊕ (−A_{1 im}^T))⎤
⎢A_{1 im} ⊕ (−A_{1 im}^T)     A_{1 re} ⊕ (−A_{1 re}^T) ⎥
⎢A_{2 re} ⊕ (−A_{2 re}^T)   −(A_{2 im} ⊕ (−A_{2 im}^T))⎥
⎢A_{2 im} ⊕ (−A_{2 im}^T)     A_{2 re} ⊕ (−A_{2 re}^T) ⎥ (v^T(X_re), v^T(X_im))^T = 0, (4.11)
⎢           ⋮                            ⋮             ⎥
⎢A_{p re} ⊕ (−A_{p re}^T)   −(A_{p im} ⊕ (−A_{p im}^T))⎥
⎣A_{p im} ⊕ (−A_{p im}^T)     A_{p re} ⊕ (−A_{p re}^T) ⎦

and a nonzero solution X ∈ C_𝐀 exists since the rank of the coefficient matrix of (4.11) is less than 2n².
(ii) Consider the sets of nonzero distinct commuting complex matrices 𝐀_C := {A_i ∈ C^{n×n} : i ∈ p̄} and MC_{𝐀_C} := {X ∈ C^{n×n} : [X, A_i] = 0; A_i ∈ 𝐀_C, ∀i ∈ p̄} for p ≥ 2. Thus, MC_{𝐀_C} ∋ X = X_re + iX_im if and only if v(X_re) and v(X_im) are solutions to (4.11).
(iii) Properties (i) and (ii) are equivalently formulated from the algebraic set of complex equations:

[A_1* ⊕ (−A_1), A_2* ⊕ (−A_2), …, A_p* ⊕ (−A_p)]* v(X) = 0. (4.12)

Remark 4.7. Note that all the proved results of Sections 2 and 3 extend directly to complex commuting matrices, by simply replacing transposes with conjugate transposes, without requiring a separate decomposition into real and imaginary parts, as discussed in Theorems 4.5(ii) and 4.6(iii).

Let f : C → C be an analytic function in an open set D ⊃ σ(A) for some matrix A ∈ C^{n×n} and let p(λ) be a polynomial fulfilling p^{(i)}(λ_k) = f^{(i)}(λ_k); ∀λ_k ∈ σ(A), ∀i ∈ {0, 1, …, m_k − 1}; ∀k ∈ μ̄ (μ being the number of distinct elements of σ(A)), where m_k is the index of λ_k, that is, its multiplicity in the minimal polynomial of A. Then, f(A) is a function of the matrix A if f(A) = p(A) [8]. Some results follow concerning the commutators of functions of matrices.

Theorem 4.8. Consider a nonzero matrix B ∈ C_A ∩ C^{n×n} for any given nonzero A ∈ C^{n×n}. Then, f(B) ∈ C_A ∩ C^{n×n}, and equivalently v(f(B)) ∈ Ker(A ⊕ (−A*)), for any function f : C^{n×n} → C^{n×n} of the matrix B.
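The first claim of Theorem 4.8, that any matrix function of B inherits the commutation with A, is easy to check numerically. The sketch below does so for real matrices (so that A* = A^T) with f taken as the matrix exponential; the example matrices are arbitrary choices made here for illustration:

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(6)
n = 3
A = 0.4 * rng.standard_normal((n, n))
B = np.eye(n) + 0.5 * A + A @ A                       # B ∈ C_A (a polynomial of A)
fB = expm(B)                                          # a matrix function of B

print(np.allclose(A @ fB - fB @ A, 0))                # f(B) ∈ C_A
M = np.kron(A, np.eye(n)) - np.kron(np.eye(n), A.T)   # A ⊕ (−A^T), which equals A ⊕ (−A*) for real A
print(np.allclose(M @ fB.reshape(-1), 0))             # v(f(B)) ∈ Ker(A ⊕ (−A^T))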

The following corollaries follow directly from Theorem 4.8 and from the subsequent facts:

(1) A ∈ C_A; ∀A ∈ C^{n×n}.
(2) One has

[A, B] = 0 ⟹ [A, g(B)] = 0 ⟹ [f(A), g(B)] = [p(A), g(B)] = Σ_{i=0}^{μ} α_i [A^i, g(B)] = Σ_{i=0}^{μ} α_i A^{i−1}[A, g(B)] = 0 ⟺ g(B) ∈ C_{f(A)} ∩ C^{n×n}, (4.13)

where f(A) = p(A), from the definition of f being a function of the matrix A, with p(λ) being a polynomial fulfilling p^{(i)}(λ_k) = f^{(i)}(λ_k); ∀λ_k ∈ σ(A), ∀i ∈ {0, 1, …, m_k − 1}; ∀k ∈ μ̄ (μ being the number of distinct elements of σ(A)), where m_k is the index of λ_k, that is, its multiplicity in the minimal polynomial of A.
(3) Theorem 4.8 is extendable to any countable set {f_i(B)} of matrix functions of B.

Corollary 4.9. Consider a nonzero matrix B ∈ C_A ∩ C^{n×n} for any given nonzero A ∈ C^{n×n}. Then, g(B) ∈ C_{f(A)} ∩ C^{n×n} for any function f : C^{n×n} → C^{n×n} of the matrix A and any function g : C^{n×n} → C^{n×n} of the matrix B.

Corollary 4.10. f(A) ∈ C_A ∩ C^{n×n}, and equivalently v(f(A)) ∈ Ker(A ⊕ (−A*)), for any function f : C^{n×n} → C^{n×n} of the matrix A.

Corollary 4.11. If B ∈ C_A ∩ C^{n×n}, then any countable set of matrix functions {f_i(B)} is in C_A and in MC_A.

Corollary 4.12. Consider any countable set of matrix functions C_F := {f_i(A); ∀i ∈ p̄} ⊂ C_A for any given nonzero A ∈ C^{n×n}. Then, ⋂_{f_i∈C_F} Ker(f_i(A) ⊕ (−f_i(A*))) ⊃ Ker(A ⊕ (−A*)).

Note that matrices which commute and are simultaneously triangularizable through the same similarity transformation maintain a zero commutator after such a transformation is performed.

Theorem 4.13. Assume that B ∈ C_A ∩ C^{n×n}. Then, Λ_B ∈ C_{Λ_A} ∩ C^{n×n} provided that there exists a nonsingular matrix T ∈ C^{n×n} such that Λ_A = T^{−1}AT and Λ_B = T^{−1}BT.

A direct consequence of Theorem 4.13 is that if a set of matrices is simultaneously triangularizable to their real canonical forms by a common transformation matrix, then their pairwise commuting properties are identical to those of their respective Jordan forms.
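Theorem 4.13 amounts to the observation that a similarity transformation applied simultaneously to A and B preserves a zero commutator. A short numerical confirmation (with an arbitrary, almost surely nonsingular, transformation matrix chosen at random purely for illustration):

import numpy as np

rng = np.random.default_rng(7)
n = 3
A = rng.standard_normal((n, n))
B = 2.0 * np.eye(n) + A @ A                   # B ∈ C_A
T = rng.standard_normal((n, n))               # almost surely nonsingular

LA = np.linalg.solve(T, A @ T)                # Λ_A = T^{-1} A T
LB = np.linalg.solve(T, B @ T)                # Λ_B = T^{-1} B T
print(np.allclose(A @ B - B @ A, 0))          # [A, B] = 0
print(np.allclose(LA @ LB - LB @ LA, 0))      # [Λ_A, Λ_B] = 0 as well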

Appendices

A. Proofs of the Results of Section 2

Proof of Proposition 2.1. (i)-(ii) First note by inspection that ∅ ≠ C_A ⊃ {0, A}; ∀A ∈ R^{n×n}. Also,

[A, X] = AX − XA = 0 ⟺ (A ⊗ I_n − I_n ⊗ A^T) v(X) = (A ⊕ (−A^T)) v(X) = 0 ⟹ v(X) ∈ Ker(A ⊕ (−A^T)), (A.1)

and Proposition 2.1(i)-(ii) has been proved, since there is an isomorphism f : R^{n²} ↔ R^{n×n} defined by f(v(X)) = X; ∀X ∈ R^{n×n}, for v(X) = (x_1^T, x_2^T, …, x_n^T)^T ∈ R^{n²}, where x_i^T := (x_{i1}, x_{i2}, …, x_{in}) is the i-th row of the square matrix X.
(iii) It is a direct consequence of Proposition 2.1(i) and the symmetry property of the commutator of two commuting matrices: B ∈ C_A ⟺ [A, B] = [B, A] = 0 ⟺ A ∈ C_B.

Proof of Proposition 2.2. [A, A] = 0; ∀A ∈ R^{n×n} ⟹ ∃ R^{n²} ∋ 0 ≠ v(A) ∈ Ker(A ⊕ (−A^T)); ∀A ∈ R^{n×n}. As a result,

Ker(A ⊕ (−A^T)) ≠ {0} ∈ R^{n²}; ∀A ∈ R^{n×n} ⟺ rank(A ⊕ (−A^T)) < n²; ∀A ∈ R^{n×n}, (A.2)

so that 0 ∈ σ(A ⊕ (−A^T)).
Also, ∃X(≠0) ∈ R^{n×n} : [A, X] = 0 ⟺ X ∈ C_A since Ker(A ⊕ (−A^T)) ≠ {0} ∈ R^{n²}.
Proposition 2.2 has been proved.

Proof of Theorem 2.3. (i) Note that

σ(A) = σ(A^T) ⟹ σ(A ⊕ (−A^T)) := {C ∋ η = λ_k − λ_ℓ : λ_k, λ_ℓ ∈ σ(A); ∀k, ℓ ∈ n̄} = σ_0(A ⊕ (−A^T)) ∪ σ̄_0(A ⊕ (−A^T)), (A.3)

where

σ_0(A ⊕ (−A^T)) = {λ ∈ σ(A ⊕ (−A^T)) : λ = 0}, σ̄_0(A ⊕ (−A^T)) = {λ ∈ σ(A ⊕ (−A^T)) : λ ≠ 0} = σ(A ⊕ (−A^T)) \ σ_0(A ⊕ (−A^T)). (A.4)

Furthermore, σ(A ⊕ (−A^T)) := {C ∋ λ = λ_j − λ_i : λ_i, λ_j ∈ σ(A); ∀i, j ∈ n̄} and z_i ⊗ x_j is a right eigenvector of A ⊕ (−A^T) associated with its eigenvalue λ_{ji} = λ_j − λ_i. The eigenvalue λ = λ_j − λ_i ∈ σ(A ⊕ (−A^T)) has algebraic and geometric multiplicities μ_{ji} and ν_{ji}, respectively; ∀i, j ∈ n̄, since x_j and z_i are, respectively, right eigenvectors of A and A^T with associated eigenvalues λ_j and λ_i; ∀i, j ∈ n̄.
Let đŊ𝐴 be the Jordan canonical form of 𝐴. It is first proved that there exists a nonsingular 𝑇∈𝐑𝑛2Ã—đ‘›2 such that đŊ𝐴⊕(−đŊ𝐴𝑇)=𝑇−1(𝐴⊕(−𝐴𝑇))𝑇. The proof is made by direct verification by using the properties of the Kronecker product, with 𝑇=𝑃⊗𝑃𝑇 for a nonsingular đ‘ƒâˆˆđ‘đ‘›Ã—đ‘› such that 𝐴âˆŧđŊ𝐴=𝑃−1𝐴𝑃, as follows: 𝑇−1𝐴⊕−𝐴𝑇𝑇=𝑃⊗𝑃𝑇−1𝐴⊗đŧ𝑛𝑃⊗𝑃𝑇−𝑃⊗𝑃𝑇−1đŧ𝑛⊗𝐴𝑇𝑃⊗𝑃𝑇=𝑃−1⊗𝑃𝐴𝑃−𝑇đŧ𝑛𝑃𝑇−𝑃−1đŧ𝑛𝑃⊗𝑃−𝑇𝐴𝑇𝑃𝑇=𝑃−1𝐴𝑃⊗đŧ𝑛−đŧ𝑛⊗𝑃−𝑇𝐴𝑇𝑃𝑇=đŊ𝐴⊗đŧ𝑛−đŧ𝑛⊗đŊ𝐴𝑇=đŊ𝐴⊗đŧ𝑛+đŧ𝑛⊗−đŊ𝐴𝑇=đŊ𝐴⊕−đŊ𝐴𝑇(A.5) and the result has been proved. Thus, rank(𝐴⊕(−𝐴𝑇))=rank(đŊ𝐴⊕(−đŊ𝐴𝑇)). It turns out that 𝑃 is, furthermore, unique except for multiplication by any nonzero real constant. Otherwise, if 𝑇≠𝑃⊗𝑃𝑇, then there would exist a nonsingular đ‘„âˆˆđ‘đ‘›Ã—đ‘› with 𝑄≠đ›ŧđŧ𝑛;∀đ›ŧ∈𝐑 such that 𝑇=𝑄(𝑃⊗𝑃𝑇)−1𝑄 so that 𝑇−1(𝐴⊕(−𝐴𝑇))𝑇≠đŊ𝐴⊕(−đŊ𝐴𝑇) provided that 𝑃⊗𝑃𝑇−1𝐴⊕−𝐴𝑇𝑃⊗𝑃𝑇=đŊ𝐴⊕−đŊ𝐴𝑇.(A.6) Thus, note that card𝜎𝐴⊕−𝐴𝑇=𝑛2=𝜇𝑖=1𝜇𝑖𝑖=𝜇𝑖=1𝜇𝑖îƒĒ2â‰Ĩ𝜇(0)=𝜇𝑖=1𝜇𝑖𝑖=𝜇𝑖=1𝜇2𝑖â‰Ĩ𝜈â‰Ĩ𝜈(0)=𝜇𝑖=1𝜈𝑖𝑖=𝜇𝑖=1𝜈2𝑖=𝜇𝜇𝑖=1𝑗=1𝜈𝑖𝑗îƒĒ2−2𝜇𝑖=10đ‘Ĩ0200𝑑𝜇𝑗(≠𝑖)=1𝜈𝑖𝑗=𝜈−2𝜇𝑖=10đ‘Ĩ0200𝑑𝜇𝑗(≠𝑖)=1𝜈𝑖𝑗â‰Ĩ𝑛.(A.7) Those results follow directly from the properties of the Kronecker sum 𝐴⊕đĩ of n-square real matrices 𝐴 and đĩ=−𝐴𝑇 since direct inspection leads to the following.(1)0∈𝜎(𝐴⊕(−𝐴𝑇)) with algebraic multiplicity ∑𝜇(0)â‰Ĩ𝜇𝑖=1𝜇𝑖𝑖=∑𝜇𝑖=1𝜇2𝑖â‰Ĩ∑𝜇𝑖=1𝜈2𝑖â‰Ĩ𝑛 since there are at least ∑𝑛𝑖=1𝜇2𝑖 zeros in 𝜎(𝐴⊕(−𝐴𝑇)) (i.e., the algebraic multiplicity of 0∈𝜎(𝐴⊕(−𝐴𝑇)) is at least ∑𝑛𝑖=1𝜇2𝑖) and since 𝜈𝑖â‰Ĩ1; ∀𝑖∈𝑛. Also, a simple computation of the number of eigenvalues of 𝐴⊕(−𝐴𝑇) yields card𝜎(𝐴⊕(−𝐴𝑇))=𝑛2=∑𝜇𝑖=1𝜇𝑖𝑖∑=(𝜇𝑖=1𝜇𝑖)2.(2)The number of linearly independent vectors in 𝑆 is ∑𝜈=𝜇𝑖=1∑𝜇𝑗=1𝜈𝑖𝑗∑=(𝜇𝑖=1𝜈𝑖)2â‰Ĩ∑𝜇𝑖=1𝜈𝑖𝑖=∑𝜇𝑖=1𝜈2𝑖 since the total number of Jordan blocks in the Jordan canonical form of 𝐴 is ∑𝜇𝑖=1𝜈𝑖.(3)The number of Jordan blocks associated with 0∈𝜎(𝐴⊕(−𝐴𝑇)) in the Jordan canonical form of (𝐴⊕(−𝐴𝑇)) is ∑𝜈(0)=𝜇𝑖=1đ‘Ŗ2𝑖≤𝜈, with 𝜈𝑖𝑖=𝜈2𝑖𝑖; ∀𝑖∈𝑛. Thus, card𝜎0𝐴⊕−𝐴𝑇=𝜇𝑖=1𝜇𝑖𝑖=𝜇𝑖=1𝜇2𝑖,card𝜎0𝐴⊕−𝐴𝑇=𝑛2−𝜇𝑖=1𝜇2𝑖,rank𝐴⊕−𝐴𝑇=𝑛2−𝜈(0)=𝑛2−𝜇𝑖=1đ‘Ŗ2𝑖,dimKer𝐴⊕−𝐴𝑇=𝜈(0)=𝜇𝑖=1đ‘Ŗ2𝑖.(A.8)(4)There are at least 𝜈(0) linearly independent vectors in 𝑆âˆļ=span{𝑧𝑖⊗đ‘Ĩ𝑗,∀𝑖,𝑗∈𝑛}. Also, the total number of Jordan blocks in the Jordan canonical form of (𝐴⊕(−𝐴𝑇)) is ∑𝜈=dim𝑆=(𝜇𝑖=1∑𝜇𝑗=1𝜈𝑖𝑗∑)=(𝜇𝑖=1𝜈𝑖)2=∑𝜈(0)+2𝜇𝑖=1∑𝜇𝑗(≠𝑖)=1𝜈𝑖𝑗â‰Ĩ𝜈(0).
Property (i) has been proved. Property (ii) follows directly from the orthogonality in R^{n²} of the range and null subspaces of A ⊕ (−A^T).

Proof of Theorem 2.4. First note from Proposition 2.1 that X ∈ C_A if and only if (A ⊕ (−A^T)) v(X) = 0, since v(X) ∈ Ker(A ⊕ (−A^T)). Note also from Proposition 2.1 that X ∈ C̄_A if and only if v(X) ∈ Im(A ⊕ (−A^T)). Thus, X ∈ C̄_A if and only if v(X) is a solution to the compatible linear algebraic system

(A ⊕ (−A^T)) v(X) = v(M) (A.9)

for any M(≠0) ∈ R^{n×n} such that

rank(A ⊕ (−A^T)) = rank(A ⊕ (−A^T), v(M)) = n² − ν(0). (A.10)

From Theorem 2.3, the nullity and the rank of A ⊕ (−A^T) are, respectively, dim Ker(A ⊕ (−A^T)) = ν(0) and rank(A ⊕ (−A^T)) = n² − ν(0). Therefore, there exist permutation matrices E, F ∈ R^{n²×n²} such that there exists an equivalence transformation

A ⊕ (−A^T) ≈ Ā := E(A ⊕ (−A^T))F = Block matrix(Ā_{ij}; i, j ∈ 2̄) (A.11)

such that Ā_{11} is square, nonsingular, and of order ν(0). Define M̄ ≈ M(≠0) ∈ R^{n×n}. Then, the linear algebraic system (A ⊕ (−A^T)) v(X) = v(M) and

E(A ⊕ (−A^T))F v(X̄) = ⎡Ā_{11}  Ā_{12}⎤ ⎡v(X̄_1)⎤ = ⎡v(M̄_1)⎤,
                        ⎣Ā_{21}  Ā_{22}⎦ ⎣v(X̄_2)⎦   ⎣v(M̄_2)⎦
v(X̄_1) = Ā_{11}^{−1}(v(M̄_1) − Ā_{12} v(X̄_2)) ⟺ (Ā_{22} − Ā_{21} Ā_{11}^{−1} Ā_{12}) v(X̄_2) = v(M̄_2) − Ā_{21} Ā_{11}^{−1} v(M̄_1) (A.12)

are identical if X̄ and M̄ are defined according to v(X) = F v(X̄) and v(M̄) = E v(M). As a result, Properties (i) and (ii) follow directly from (A.12) for M = M̄ = 0 and for any M satisfying rank(A ⊕ (−A^T)) = rank(A ⊕ (−A^T), v(M)) = n² − ν(0), respectively.

B. Proofs of the Results of Section 3

Proof of Proposition 3.1. (i) The first part of Property (i) follows directly from Proposition 2.1, since all the matrices of 𝐀_C pairwise commute and any matrix commutes with itself (thus j = i may be removed from the intersections of kernels in the first double implication). The last part of Property (i) follows from the antisymmetry property of the commutator, [A_i, A_j] = [A_j, A_i] = 0; ∀A_i, A_j ∈ 𝐀_C, which implies A_i ∈ 𝐀_C; ∀i ∈ p̄ ⟺ v(A_i) ∈ ⋂_{i+1≤j≤p} Ker(A_j ⊕ (−A_j^T)); ∀A_i, A_j ∈ 𝐀_C.
(ii) It follows from its equivalence with Property (i), since Ker N_i(𝐀_C) ≡ ⋂_{j(≠i)∈p̄} Ker(A_j ⊕ (−A_j^T)).
(iii) Property (iii) is similar to Property (i) for the whole set MC_{𝐀_C} of matrices which commute with the set 𝐀_C, so that it contains 𝐀_C and, furthermore, Ker N(𝐀_C) ≡ ⋂_{i∈p̄} Ker(A_i ⊕ (−A_i^T)).
(iv) It follows from ⋃_{j∈p̄} Im(A_j ⊕ (−A_j^T)) = R^{n²} \ ⋂_{j∈p̄} Ker(A_j ⊕ (−A_j^T)); A_j ∈ 𝐀_C, and from R^{n²} ∋ 0 ∈ Ker(A_j ⊕ (−A_j^T)) ∩ Im(A_j ⊕ (−A_j^T)); but R^{n×n} ∋ X = 0 commutes with any matrix in R^{n×n}, so that R^{n×n} ∋ 0 ∈ MC_{𝐀_C} ⟺ R^{n×n} ∋ 0 ∉ M̄C_{𝐀_C} for any given 𝐀_C.
(v) and (vi) are similar to (ii)–(iv) except that the members of 𝐀 do not necessarily commute.

Proof of Proposition 3.2. It is a direct consequence of Proposition 3.1(i)-(ii), since the existence of nonzero pairwise commuting matrices (all the members of 𝐀_C) implies that the matrices N(𝐀_C), N_i(𝐀_C), and A_j ⊕ (−A_j^T) are all rank defective and have at least as many rows as columns. Therefore, the square matrices N^T(𝐀_C)N(𝐀_C), N_i^T(𝐀_C)N_i(𝐀_C), and A_j ⊕ (−A_j^T) are all singular.

Proof of Theorem 3.3. (i) Any nonzero matrix Λ = diag(λ λ ⋯ λ), λ(≠0) ∈ 𝐑, is such that Λ(≠0) ∈ C_{A_i} (∀i ∈ p̄), so that Λ ∈ C_𝐀. Thus, 0 ≠ v(Λ) ∈ Ker N(𝐀) ⟺ n² > rank N(𝐀) ≥ rank N_i(𝐀) ≥ rank(A_i ⊕ (−A_i^T)); ∀i ∈ p̄ and any given set 𝐀. Property (i) has been proved.
(ii) The first part follows by contradiction. Assume ⋂_{i∈p̄} Ker(A_i ⊕ (−A_i^T)) = {0}; then 0 ≠ v(Λ) ∉ Ker N(𝐀), so that Λ = diag(λ λ ⋯ λ) ∉ C_𝐀 for any λ(≠0) ∈ 𝐑, which contradicts (i). Also, X ∈ C_{A_i} ⟺ v(X) ∈ Ker(A_i ⊕ (−A_i^T)); ∀i ∈ p̄, so that X ∈ C_𝐀 ⟺ v(X) ∈ ⋂_{i∈p̄} Ker(A_i ⊕ (−A_i^T)), which is equivalent to its contrapositive logic proposition X ∈ C̄_𝐀 ⟺ v(X) ∈ ⋃_{i∈p̄} Im(A_i ⊕ (−A_i^T)).
(iii) Let 𝐀 = 𝐀_C ⟺ A_i ∈ C_{A_j}; ∀j(≠i) ∈ p̄, ∀i ∈ p̄ ⟺ A_i ∈ C_{A_j}; ∀j, i ∈ p̄ (since A_i ∈ C_{A_i}; ∀i ∈ p̄), so that

v(A_i) ∈ ⋂_{j∈p̄} Ker(A_j ⊕ (−A_j^T)); ∀i ∈ p̄ ⟺ v(A_i) ∈ ⋂_{j∈p̄\{i}} Ker(A_j ⊕ (−A_j^T)); ∀i ∈ p̄. (B.1)

On the other hand, assume that

(v(A_i) ∈ ⋂_{j∈p̄\{i}} Ker(A_j ⊕ (−A_j^T)) ⟺ A_i ∈ C_{A_j}; ∀j ∈ p̄) for any i(<p) ∈ p̄. (B.2)

This assumption implies directly that

(A_i ∈ C_{A_j}; ∀j ∈ p̄) ∧ (A_{i+1} ∈ ⋂_{1≤j≤i+1} C_{A_j}) for any i(<p) ∈ p̄, (B.3)

which, together with v(A_{i+1}) ∈ ⋂_{j∈p̄\{i+1}} Ker(A_j ⊕ (−A_j^T)), implies that

A_{i+1} ∈ C_{A_j}; ∀j ∈ p̄ ⟹ (v(A_{i+1}) ∈ ⋂_{j∈p̄\{i+1}} Ker(A_j ⊕ (−A_j^T))) for (i+1) ∈ p̄. (B.4)

Thus, it follows by complete induction that 𝐀 = 𝐀_C ⟺ v(A_i) ∈ ⋂_{j∈p̄\{i}} Ker(A_j ⊕ (−A_j^T)); ∀i ∈ p̄, and Property (iii) has been proved.
(iv) The definition of MC_{𝐀_C} follows from Property (iii) in order to guarantee that [X, A_i] = 0;