The Scientific World Journal

Volume 2013 (2013), Article ID 640350, 4 pages

http://dx.doi.org/10.1155/2013/640350

## A Note on Decomposing a Square Matrix as Sum of Two Square Nilpotent Matrices over an Arbitrary Field

^{1}Department of Mathematics, Harbin Institute of Technology, Harbin 150001, China

^{2}School of Mathematical Sciences, Heilongjiang University, Harbin 150080, China

Received 31 August 2013; Accepted 1 October 2013

Academic Editors: A. Badawi and P. Bracken

Copyright © 2013 Xiaofei Song et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Let $\mathbb{K}$ be an arbitrary field and $A$ a square matrix over $\mathbb{K}$. Then $A$ is a sum of two square nilpotent matrices over $\mathbb{K}$ if and only if, for every algebraic extension $\mathbb{L}$ of $\mathbb{K}$ and arbitrary nonzero $\lambda \in \mathbb{L}$, there exist idempotent matrices $P_1$ and $P_2$ over $\mathbb{L}$ such that $A = \lambda P_1 - \lambda P_2$.

#### 1. Introduction

Botha (see [1]) proved that a square matrix $A$ over a field $\mathbb{K}$ is a sum of two square nilpotent matrices over $\mathbb{K}$ if and only if $A$ is similar to a matrix of a particular form. In an earlier paper, Pazzis (see [2]) gave necessary and sufficient conditions under which a matrix can be decomposed as a linear combination of two idempotents with given nonzero coefficients. However, the relation between these two results has not been formally discussed yet (more details in [3–9]). The goal of this paper is to build a bridge that connects the result obtained in [1] with the result obtained in [2].

Unless stated otherwise, the meanings of the notations introduced in this paragraph hold throughout the paper. $\mathbb{K}$ denotes an arbitrary field, $\overline{\mathbb{K}}$ is its algebraic closure, $\mathbb{L}$ is an arbitrary algebraic extension of $\mathbb{K}$, and $p$ is the characteristic of $\mathbb{K}$. $\mathbb{N}^*$ denotes the set of all positive integers, and $n \in \mathbb{N}^*$ is fixed. $M_n(\mathbb{K})$ denotes the space consisting of all $n \times n$ matrices over $\mathbb{K}$; $A \in M_n(\mathbb{K})$. $\operatorname{rk}(A)$ is the rank of $A$. $V$ denotes a vector space over $\mathbb{K}$ and $\dim V$ is the dimension of $V$. A matrix is called *square nilpotent* if its square is the zero matrix. $A$ is called *NS in $\mathbb{K}$* if there exist square nilpotent $B$ and $C$ over $\mathbb{K}$ such that $A = B + C$, while $A$ is called a *$(\lambda, \mu)$ composite in $\mathbb{L}$* if there exist idempotent $P_1$ and $P_2$ over $\mathbb{L}$ such that $A = \lambda P_1 + \mu P_2$, where $\lambda, \mu \in \mathbb{L} \setminus \{0\}$ (Definition 1 in [2]); in particular, $A$ is called *ID* if $A$ is a $(\lambda, -\lambda)$ composite in $\mathbb{L}$ for every algebraic extension $\mathbb{L}$ of $\mathbb{K}$ and arbitrary nonzero $\lambda \in \mathbb{L}$ (when $p = 2$, we have $-\lambda = \lambda$, and we still use $(\lambda, -\lambda)$ for the meaning of $(\lambda, \lambda)$ composites).

For $A \in M_n(\mathbb{K})$, on the one hand, we will prove that $A$ is NS in $\mathbb{K}$ implies $A$ is ID; that is, the form provided by Botha satisfies the condition as in [2]; on the other hand, we will also prove that $A$ is ID implies $A$ is NS in $\mathbb{K}$; that is, we can derive the form provided in [1] from the results obtained in [2]. In fact, the following theorem is the main result of this paper.

Theorem 1 (main theorem). * Suppose $\mathbb{K}$ is an arbitrary field and $A \in M_n(\mathbb{K})$; then $A$ is NS in $\mathbb{K}$ if and only if $A$ is ID. *
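Both directions of the equivalence can be checked on a concrete example. The following sketch (the matrix $\operatorname{diag}(1, -1)$ over $\mathbb{Q}$ and the particular $B$, $C$, $P_1$, $P_2$ below are illustrative choices, not taken from the paper) exhibits a sum of two square-zero matrices and a decomposition $\lambda P_1 - \lambda P_2$ with $\lambda = 1$:

```python
from fractions import Fraction as F

def mul(X, Y):
    # Naive exact matrix product over the rationals.
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[F(1), F(0)], [F(0), F(-1)]]                 # A = diag(1, -1)
zero = [[F(0), F(0)], [F(0), F(0)]]

# A as a sum of two square-zero (square nilpotent) matrices: A = B + C.
B = [[F(1, 2), F(1)], [F(-1, 4), F(-1, 2)]]
C = [[F(1, 2), F(-1)], [F(1, 4), F(-1, 2)]]
assert mul(B, B) == zero and mul(C, C) == zero
assert [[B[i][j] + C[i][j] for j in range(2)] for i in range(2)] == A

# A as a difference of scaled idempotents: A = lam*P1 - lam*P2 with lam = 1.
P1, P2 = [[F(1), F(0)], [F(0), F(0)]], [[F(0), F(0)], [F(0), F(1)]]
assert mul(P1, P1) == P1 and mul(P2, P2) == P2
assert [[P1[i][j] - P2[i][j] for j in range(2)] for i in range(2)] == A
```
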

In Section 2, we will state some related theorems and notations from [2] and we will give some necessary corollaries. The proof of Theorem 1 will be carried out in Section 3.

#### 2. More Notations and Necessary Corollaries

Suppose $k \in \mathbb{N}^*$ and $\lambda \in \overline{\mathbb{K}}$; we denote by $J_k(\lambda)$ the following matrix with $k$ rows and $k$ columns:
$$J_k(\lambda) = \begin{pmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{pmatrix}.$$

*Notation 1 (Notation 2 in [2]). * Let $A \in M_n(\mathbb{K})$, $\lambda \in \overline{\mathbb{K}}$, and $k \in \mathbb{N}^*$; we denote by(1) $n_k(A, \lambda)$ the number of blocks of size $k$ for the eigenvalue $\lambda$ in the Jordan reduction of $A$; (2) $m_k(A, \lambda)$ the number of blocks of size greater than or equal to $k$ for the eigenvalue $\lambda$ in the Jordan reduction of $A$.

*Definition 2 (Definition 3 in [2]). * Two sequences $(a_k)_{k \geq 1}$ and $(b_k)_{k \geq 1}$ are said to be intertwined if $a_{k+1} \leq b_k$, $b_{k+1} \leq a_k$, for every $k \in \mathbb{N}^*$.
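For finite sequences padded with zeros, the intertwining condition, read as $a_{k+1} \leq b_k$ and $b_{k+1} \leq a_k$ for every $k$ (an assumed reading of Definition 3 in [2]), can be checked mechanically:

```python
def intertwined(a, b):
    # Pad both (assumed non-increasing) sequences with zeros and test
    # a[k+1] <= b[k] and b[k+1] <= a[k] for every index k.
    n = max(len(a), len(b)) + 1
    a = list(a) + [0] * (n - len(a))
    b = list(b) + [0] * (n - len(b))
    return all(a[k + 1] <= b[k] and b[k + 1] <= a[k] for k in range(n - 1))

# One block of size 1 at lambda, none at -lambda: intertwined.
assert intertwined([1], [])
# One block of size 2 at lambda (m_1 = m_2 = 1), none at -lambda: fails.
assert not intertwined([1, 1], [])
```

Equal sequences are always intertwined, since $a_{k+1} \leq a_k = b_k$ holds for a non-increasing sequence.
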

*Notation 2 (Notation 4 in [2]). * Given a monic polynomial $p(x) = x^m + a_{m-1} x^{m-1} + \cdots + a_1 x + a_0$, denote the following by $C(p)$, its companion matrix:
$$C(p) = \begin{pmatrix} 0 & & & -a_0 \\ 1 & \ddots & & -a_1 \\ & \ddots & 0 & \vdots \\ & & 1 & -a_{m-1} \end{pmatrix}.$$
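The companion matrix can be built and sanity-checked in a few lines; the convention below (ones on the subdiagonal, $-a_i$ in the last column) and the sample polynomial $p(x) = x^3 - 2x - 1$ are illustrative assumptions, and the assertion confirms the standard identity $p(C(p)) = 0$:

```python
from fractions import Fraction as F

def companion(a):
    # Companion matrix of p(x) = x^m + a[m-1]*x^(m-1) + ... + a[1]*x + a[0],
    # with ones on the subdiagonal and -a[i] in the last column.
    m = len(a)
    return [[F(1) if j == i - 1 else (F(-a[i]) if j == m - 1 else F(0))
             for j in range(m)] for i in range(m)]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# Sample polynomial p(x) = x^3 - 2x - 1, i.e. (a_0, a_1, a_2) = (-1, -2, 0).
Cp = companion([-1, -2, 0])
C3 = mul(Cp, mul(Cp, Cp))
# p(C(p)) = C^3 - 2C - I must vanish (Cayley-Hamilton for the companion form).
pC = [[C3[i][j] - 2 * Cp[i][j] - (F(1) if i == j else F(0))
       for j in range(3)] for i in range(3)]
assert pC == [[F(0)] * 3 for _ in range(3)]
```
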

Theorem 3 (Theorem 1 in [2]). * Assume $p \neq 2$ and let $\lambda \in \mathbb{K} \setminus \{0\}$, $A \in M_n(\mathbb{K})$. Then $A$ is a $(\lambda, -\lambda)$ composite if and only if all the following conditions hold.*(1)*The sequences $(m_k(A, \lambda))_{k \geq 1}$ and $(m_k(A, -\lambda))_{k \geq 1}$ are intertwined; *(2)* $n_k(A, \mu) = n_k(A, -\mu)$ for all $\mu \in \overline{\mathbb{K}} \setminus \{0, \lambda, -\lambda\}$, $k \in \mathbb{N}^*$. *

Theorem 4 (Theorem 5 in [2]). * Assume $p = 2$ and let $\lambda \in \mathbb{K} \setminus \{0\}$, $A \in M_n(\mathbb{K})$. Then $A$ is a $(\lambda, -\lambda)$ composite if and only if, for every $\mu \in \overline{\mathbb{K}} \setminus \{0, \lambda\}$, all blocks in the Jordan reduction of $A$ with respect to $\mu$ have an even size. *

Suppose $A \in M_n(\mathbb{K})$ is ID, where $p \neq 2$. Then $A$ is a $(\lambda, -\lambda)$ composite and a $(\mu, -\mu)$ composite in $\mathbb{L}$ for some algebraic extension $\mathbb{L}$ of $\mathbb{K}$, where $\lambda, \mu \in \mathbb{L} \setminus \{0\}$, with $\lambda \notin \{\mu, -\mu\}$. By Theorem 3, the following statements are true: (1) $n_k(A, \alpha) = n_k(A, -\alpha)$ for all $\alpha \in \overline{\mathbb{K}} \setminus \{0, \lambda, -\lambda\}$, $k \in \mathbb{N}^*$; (2) $n_k(A, \alpha) = n_k(A, -\alpha)$ for all $\alpha \in \overline{\mathbb{K}} \setminus \{0, \mu, -\mu\}$, $k \in \mathbb{N}^*$. So $n_k(A, \alpha) = n_k(A, -\alpha)$ for every $\alpha \in \overline{\mathbb{K}} \setminus \{0\}$, $k \in \mathbb{N}^*$.

On the other hand, note that for nonzero $\lambda$ with $n_k(A, \lambda) = n_k(A, -\lambda)$ for all $k \in \mathbb{N}^*$, the sequences $(m_k(A, \lambda))_{k \geq 1}$ and $(m_k(A, -\lambda))_{k \geq 1}$ are equal and hence intertwined. Then $n_k(A, \alpha) = n_k(A, -\alpha)$, for every $\alpha \in \overline{\mathbb{K}} \setminus \{0\}$ and $k \in \mathbb{N}^*$, implies that for every algebraic extension $\mathbb{L}$ of $\mathbb{K}$ and arbitrary nonzero $\lambda \in \mathbb{L}$, $A$ is a $(\lambda, -\lambda)$ composite in $\mathbb{L}$; that is, $A$ is ID.

Therefore the following corollary is true.

Corollary 5. *Assume $p \neq 2$ and let $A \in M_n(\mathbb{K})$. Then $A$ is ID if and only if $n_k(A, \lambda) = n_k(A, -\lambda)$ for every $\lambda \in \overline{\mathbb{K}} \setminus \{0\}$ and every $k \in \mathbb{N}^*$. *

Similarly, we can derive the following corollary from Theorem 4.

Corollary 6. * Assume $p = 2$ and let $A \in M_n(\mathbb{K})$. Then $A$ is ID if and only if, for every $\lambda \in \overline{\mathbb{K}} \setminus \{0\}$, all blocks in the Jordan reduction of $A$ with respect to $\lambda$ have an even size. *
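In characteristic 2 the even-size condition can be observed by brute force in $M_2(\mathbb{F}_2)$: the single even-size Jordan block $J_2(1)$ splits as a sum of two square-zero matrices, while the identity matrix (two odd-size blocks at eigenvalue $1$) does not. A small exhaustive search (illustrative, not from the paper):

```python
from itertools import product

def mul2(X, Y):
    # 2x2 matrix product over GF(2).
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) % 2
             for j in range(2)] for i in range(2)]

def add2(X, Y):
    return [[(X[i][j] + Y[i][j]) % 2 for j in range(2)] for i in range(2)]

Z = [[0, 0], [0, 0]]
mats = [[[a, b], [c, d]] for a, b, c, d in product(range(2), repeat=4)]
square_zero = [M for M in mats if mul2(M, M) == Z]

def sum_of_two_square_zero(A):
    # Brute-force search over all pairs of square-zero matrices.
    return any(add2(B, C) == A for B in square_zero for C in square_zero)

assert sum_of_two_square_zero([[1, 1], [0, 1]])      # J_2(1): one even-size block
assert not sum_of_two_square_zero([[1, 0], [0, 1]])  # I_2: two odd-size blocks
```
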

Naturally, we derive the following corollary from the above two corollaries.

Corollary 7. * Every nilpotent $A \in M_n(\mathbb{K})$ is ID. *

In fact, an arbitrary nilpotent $A \in M_n(\mathbb{K})$ is not only ID but also NS.

Lemma 8. * Every nilpotent $A \in M_n(\mathbb{K})$ is NS in $\mathbb{K}$. *

*Proof. *For an arbitrary field $\mathbb{K}$, let $A \in M_n(\mathbb{K})$ be nilpotent; then $A$ is similar to $\operatorname{diag}(A_1, \ldots, A_t)$, where, for every $i \in \{1, \ldots, t\}$, $A_i = J_{n_i}(0)$, $n_1 + \cdots + n_t = n$, and both the characteristic polynomial and minimal polynomial of $A_i$ are $x^{n_i}$. Furthermore, $A_i$ splits as follows:
$$A_i = B_i + C_i, \qquad B_i = \sum_{\substack{1 \leq j < n_i \\ j \text{ odd}}} E_{j, j+1}, \qquad C_i = \sum_{\substack{1 \leq j < n_i \\ j \text{ even}}} E_{j, j+1},$$
where $E_{j,k}$ denotes the matrix unit with $1$ in position $(j, k)$ and $0$ elsewhere. That is, the superdiagonal entries of $A_i$ are split by the parity of their row index.

When $n_i$ is even, $\operatorname{rk}(B_i) = n_i/2$ and $\operatorname{rk}(C_i) = n_i/2 - 1$; when $n_i$ is odd, $\operatorname{rk}(B_i) = \operatorname{rk}(C_i) = (n_i - 1)/2$. Note that both $B_i$ and $C_i$ are square nilpotent matrices ($B_i$ annihilates the odd-indexed basis vectors and maps the even-indexed ones onto them, and analogously for $C_i$); then $A_i$ is NS, and that $A$ is NS follows. Hence $A$ is NS in $\mathbb{K}$.
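The parity splitting of a single nilpotent Jordan block can be verified directly; the size $5$ below is an illustrative choice. Splitting the superdiagonal entries of $J_5(0)$ by the parity of their row index yields two square-zero summands:

```python
def jordan_zero(m):
    # J_m(0): ones on the superdiagonal, zeros elsewhere.
    return [[1 if j == i + 1 else 0 for j in range(m)] for i in range(m)]

def mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

m = 5
J = jordan_zero(m)
# Keep superdiagonal entries in even-indexed rows in B, odd-indexed rows in C.
B = [[J[i][j] if i % 2 == 0 else 0 for j in range(m)] for i in range(m)]
C = [[J[i][j] if i % 2 == 1 else 0 for j in range(m)] for i in range(m)]

Z = [[0] * m for _ in range(m)]
assert mul(B, B) == Z and mul(C, C) == Z
assert [[B[i][j] + C[i][j] for j in range(m)] for i in range(m)] == J
```
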

#### 3. Proof of Main Theorem

$\Rightarrow$: Suppose $A$ is NS in $\mathbb{K}$; that is, there exist square nilpotent matrices $B$ and $C$ such that $A = B + C$. It will take two steps to prove that $A$ is ID.

*Step 1. * If $A$ is nonsingular, then $A$ is ID.

Since $A = B + C$ with $B^2 = C^2 = 0$, inspect the eigenspaces of $B$ and $C$ with respect to $0$, namely $\ker B$ and $\ker C$. Note that $B$ and $C$ are square nilpotent matrices, so $\operatorname{im} B \subseteq \ker B$ and $\operatorname{im} C \subseteq \ker C$, and their ranks satisfy the following inequality:
$$\operatorname{rk}(B) \leq \frac{n}{2}, \qquad \operatorname{rk}(C) \leq \frac{n}{2}, \qquad \dim(\ker B \cap \ker C) \geq n - \operatorname{rk}(B) - \operatorname{rk}(C) \geq 0,$$
where the last quantity equals $0$ if and only if $\operatorname{rk}(B) = \operatorname{rk}(C) = n/2$.

At first, $A$ is nonsingular implies that $0$ is not its eigenvalue. Secondly, if the inequality $\operatorname{rk}(B) + \operatorname{rk}(C) \leq n$ is strict, then the intersection of the eigenspaces of $B$ and $C$ contains nonzero vectors; that is, there exists nonzero $x$ such that $Bx = Cx = 0$, which implies that $0$ is one of the eigenvalues of $A$. This is a contradiction. Hence, $\operatorname{rk}(B) = \operatorname{rk}(C) = n/2$; that is, $n$ is even, $\ker B = \operatorname{im} B$, $\ker C = \operatorname{im} C$, and $B$ and $C$ are similar but not equal.

Because $B$ is square nilpotent with $\operatorname{rk}(B) = n/2$, we can choose $n/2$ linearly independent vectors from the set of its column vectors, which make up a basis of the eigenspace $\ker B = \operatorname{im} B$ of $B$; denote by $B_1$ the $n \times (n/2)$ matrix consisting of these columns. Correspondingly, we have the $n \times (n/2)$ matrix $C_1$ with all columns from the set of columns of $C$. Because $0$ is the only vector in the intersection of the eigenspaces of $B$ and $C$, the matrix $T = (B_1, C_1)$ is nonsingular.

$\operatorname{im} B = \ker B$ implies that the nonzero column vectors of $B C_1$ are eigenvectors of $B$ with respect to $0$, and $\ker B \cap \ker C = \{0\}$ implies $\operatorname{rk}(B C_1) = n/2$. Hence, $B C_1$ and $B_1$ are equal under a certain column transformation; that is, there is an invertible matrix $R$ such that $B C_1 = B_1 R$. Correspondingly, there is an invertible matrix $S$ such that $C B_1 = C_1 S$.

Let $\binom{U}{V}$ be the inverse of $T = (B_1, C_1)$, where $U$ and $V$ are $(n/2) \times n$ matrices. Naturally, the following equations are true:
$$U B_1 = I, \qquad U C_1 = 0, \qquad V B_1 = 0, \qquad V C_1 = I.$$
Now, we carry out the same similarity transformation on $B$ and $C$ as follows:
$$T^{-1} B T = \binom{U}{V} (B B_1, B C_1), \qquad T^{-1} C T = \binom{U}{V} (C B_1, C C_1).$$
Note that $B B_1 = 0$ and $C C_1 = 0$; the above three equations imply that $B$ is similar to $\left(\begin{smallmatrix} 0 & R \\ 0 & 0 \end{smallmatrix}\right)$ and $C$ is similar to $\left(\begin{smallmatrix} 0 & 0 \\ S & 0 \end{smallmatrix}\right)$ under the same transformation $T$.

Hence, $A$ is similar to $\left(\begin{smallmatrix} 0 & R \\ S & 0 \end{smallmatrix}\right)$. For every algebraic extension $\mathbb{L}$ of $\mathbb{K}$ and arbitrary nonzero $\lambda \in \mathbb{L}$, this matrix decomposes as follows:
$$\begin{pmatrix} 0 & R \\ S & 0 \end{pmatrix} = \lambda \begin{pmatrix} I & \lambda^{-1} R \\ 0 & 0 \end{pmatrix} - \lambda \begin{pmatrix} I & 0 \\ -\lambda^{-1} S & 0 \end{pmatrix},$$
where both matrices on the right are idempotent. That is, $A$ is ID.
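The final decomposition of Step 1 can be checked numerically; the blocks $R$, $S$ and the scalar $\lambda = 3$ below are illustrative choices. The assertions confirm that $\left(\begin{smallmatrix} I & \lambda^{-1} R \\ 0 & 0 \end{smallmatrix}\right)$ and $\left(\begin{smallmatrix} I & 0 \\ -\lambda^{-1} S & 0 \end{smallmatrix}\right)$ are idempotent and that their scaled difference is $\left(\begin{smallmatrix} 0 & R \\ S & 0 \end{smallmatrix}\right)$:

```python
from fractions import Fraction as F

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def block4(TL, TR, BL, BR):
    # Assemble a 4x4 matrix from four 2x2 blocks.
    top = [TL[i] + TR[i] for i in range(2)]
    bot = [BL[i] + BR[i] for i in range(2)]
    return top + bot

lam = F(3)
I2 = [[F(1), F(0)], [F(0), F(1)]]
Z2 = [[F(0), F(0)], [F(0), F(0)]]
R = [[F(1), F(2)], [F(0), F(1)]]      # invertible sample block
S = [[F(2), F(0)], [F(1), F(1)]]      # invertible sample block

Rl = [[x / lam for x in row] for row in R]       # lambda^{-1} R
Sl = [[-x / lam for x in row] for row in S]      # -lambda^{-1} S

P1 = block4(I2, Rl, Z2, Z2)
P2 = block4(I2, Z2, Sl, Z2)
M  = block4(Z2, R, S, Z2)                        # the target block form

assert mul(P1, P1) == P1 and mul(P2, P2) == P2
diff = [[lam * (P1[i][j] - P2[i][j]) for j in range(4)] for i in range(4)]
assert diff == M
```
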

*Step 2. * Suppose $A$ is singular and similar to $\operatorname{diag}(A_1, A_2)$, where $A_1$ is nonsingular and $A_2$ is nilpotent. Then $A$ is ID.

At first, we need to prove that $A_1$ is NS in $\mathbb{K}$. Without loss of generality, we assume $A = \operatorname{diag}(A_1, A_2)$ in the following proof, since the property of being NS in $\mathbb{K}$ is preserved under similarity transformations.

Let $B = \left(\begin{smallmatrix} B_1 & B_2 \\ B_3 & B_4 \end{smallmatrix}\right)$ and $C = \left(\begin{smallmatrix} C_1 & C_2 \\ C_3 & C_4 \end{smallmatrix}\right)$, where the order of $B_1$ (resp., $C_1$) is the same as that of $A_1$ and the order of $B_4$ (resp., $C_4$) is the same as that of $A_2$. Then $B^2 = 0$ implies that the following equations are true:
$$B_1^2 + B_2 B_3 = 0, \qquad B_1 B_2 + B_2 B_4 = 0, \qquad B_3 B_1 + B_4 B_3 = 0, \qquad B_3 B_2 + B_4^2 = 0.$$

Since $C = A - B$, we get the following equations after replacing $C_1$ with $A_1 - B_1$, $C_4$ with $A_2 - B_4$, $C_2$ with $-B_2$, and $C_3$ with $-B_3$ in $C^2 = 0$:
$$(A_1 - B_1)^2 + B_2 B_3 = 0, \qquad (A_1 - B_1) B_2 + B_2 (A_2 - B_4) = 0, \qquad B_3 (A_1 - B_1) + (A_2 - B_4) B_3 = 0, \qquad B_3 B_2 + (A_2 - B_4)^2 = 0.$$

We can derive the following equations from the corresponding off-diagonal equations in the above two sets of equations:
$$A_1 B_2 = -B_2 A_2, \qquad B_3 A_1 = -A_2 B_3.$$

Note that $A_2$ is nilpotent; assume its index is $q$; that is, $A_2^q = 0$ and $A_2^{q-1} \neq 0$. After multiplying the right side of the equation $A_1 B_2 = -B_2 A_2$ by $A_2^{q-1}$, we get $A_1 B_2 A_2^{q-1} = -B_2 A_2^q = 0$. $A_1$ is nonsingular implies $B_2 A_2^{q-1} = 0$. Repeating the operation, we eventually get $B_2 = 0$. Similarly, we can also get $B_3 = 0$.

So $B = \operatorname{diag}(B_1, B_4)$ is quasidiagonal, and $C = \operatorname{diag}(C_1, C_4)$ is also quasidiagonal through a similar proof; that is, $B_1$, $B_4$, $C_1$, and $C_4$ are square nilpotent, the same as the corresponding parts of $B$ and $C$, so $A_1 = B_1 + C_1$ and $A_2 = B_4 + C_4$ are NS in $\mathbb{K}$. Finally, we prove that $A$ is ID.

Since $A_1$ is ID by Step 1 and $A_2$ is ID by Corollary 7, it is true that $A = \operatorname{diag}(A_1, A_2)$ is ID.

$\Leftarrow$: Suppose $A$ is ID. If $A$ is similar to $\operatorname{diag}(A_1, A_2)$, where $A_1$ is nonsingular and $A_2$ is nilpotent, then $A$ is ID if and only if $A_1$ is ID by Corollaries 5, 6, and 7; moreover, $A_2$ is NS in $\mathbb{K}$ by Lemma 8. Without loss of generality, we can assume that $A$ is nonsingular. Furthermore, suppose $A$ is nonsingular and similar to $\operatorname{diag}(G, H)$, where all eigenvalues of $G$ are not in $\mathbb{K}$ and all eigenvalues of $H$ are in $\mathbb{K}$. Then $A$ is NS if and only if $G$ is NS and $H$ is NS. It will take two steps to prove that $A$ is NS in $\mathbb{K}$.

*Step 3. * Suppose $A$ is ID and all eigenvalues of $A$ are not in $\mathbb{K}$; then for arbitrary nonzero $\lambda \in \mathbb{K}$, $A$ is a $(\lambda, -\lambda)$ composite in $\mathbb{K}$; that is, there exist idempotent matrices $P_1$ and $P_2$ over $\mathbb{K}$ such that $A = \lambda P_1 - \lambda P_2$.

Let $V_1$ be the eigenspace of $P_1$ with respect to $1$, $V_0$ the eigenspace of $P_1$ with respect to $0$, let $W_1$ be the eigenspace of $P_2$ with respect to $1$, and $W_0$ the eigenspace of $P_2$ with respect to $0$. Both $\lambda$ and $-\lambda$ are not eigenvalues of $A$ implies that $V_1 \cap W_0 = V_0 \cap W_1 = \{0\}$; since $A$ is nonsingular, also $V_1 \cap W_1 = V_0 \cap W_0 = \{0\}$. Then $\dim V_1 = \dim W_1 = \dim V_0 = \dim W_0 = n/2$ (otherwise, there exists nonzero $x$ such that $Ax = \lambda x$ or $Ax = -\lambda x$, etc.); that is, $n$ is even.

Suppose $U_1$ and $U_2$ are $n \times (n/2)$ matrices whose columns satisfy $\operatorname{im} U_1 = V_1$ and $\operatorname{im} U_2 = W_1$; then $T = (U_1, U_2)$ is a nonsingular matrix. Let $\binom{X}{Y}$ be its inverse; that is,
$$X U_1 = I, \qquad X U_2 = 0, \qquad Y U_1 = 0, \qquad Y U_2 = I.$$
Then we carry out the same similarity transformation on $P_1$ and $P_2$ as follows:
$$T^{-1} P_1 T = \begin{pmatrix} I & K \\ 0 & 0 \end{pmatrix}, \qquad T^{-1} P_2 T = \begin{pmatrix} 0 & 0 \\ L & I \end{pmatrix},$$
where $P_1 U_1 = U_1$, $P_2 U_2 = U_2$, $P_1 U_2 = U_1 K$, and $P_2 U_1 = U_2 L$; $P_1$ and $P_2$ idempotent implies that $\left(\begin{smallmatrix} I & K \\ 0 & 0 \end{smallmatrix}\right)$ and $\left(\begin{smallmatrix} 0 & 0 \\ L & I \end{smallmatrix}\right)$ are idempotent, and $V_0 \cap W_1 = V_1 \cap W_0 = \{0\}$ implies that $K$ and $L$ are invertible. Hence, $A = \lambda P_1 - \lambda P_2$ is similar to the following matrix:
$$\begin{pmatrix} \lambda I & \lambda K \\ -\lambda L & -\lambda I \end{pmatrix} = \begin{pmatrix} \lambda I & \lambda K \\ -\lambda K^{-1} & -\lambda I \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ \lambda K^{-1} - \lambda L & 0 \end{pmatrix},$$
where both matrices on the right are square nilpotent. That is, $A$ is NS in $\mathbb{K}$.

When $p = 2$, $A$ is a $(\lambda, \lambda)$ composite for arbitrary nonzero $\lambda \in \mathbb{K}$; we can similarly prove that $A$ is NS in $\mathbb{K}$ after replacing $-\lambda$ with $\lambda$ in the previous proof.

*Step 4. * Suppose $p \neq 2$, $A$ is ID, and all eigenvalues of $A$ are in $\mathbb{K}$; then by Corollary 5, $n_k(A, \lambda) = n_k(A, -\lambda)$ for every $\lambda \in \mathbb{K} \setminus \{0\}$ and arbitrary $k \in \mathbb{N}^*$.

Moreover, $A$ is similar to $\operatorname{diag}(G_1, \ldots, G_t)$, where both the characteristic polynomial and the minimal polynomial of $G_i$ are $(x - \lambda_i)^{m_i} (x + \lambda_i)^{m_i}$, with $G_i$ similar to $\operatorname{diag}(J_{m_i}(\lambda_i), J_{m_i}(-\lambda_i))$, and $\lambda_i$ is one of the eigenvalues of $A$ for every $i \in \{1, \ldots, t\}$. Without loss of generality, we just need to prove that $G_1$ is NS.

Since $p \neq 2$, $G_1$ is similar to $\left(\begin{smallmatrix} 0 & D \\ D & 0 \end{smallmatrix}\right)$ as follows:
$$\begin{pmatrix} I & I \\ I & -I \end{pmatrix}^{-1} \begin{pmatrix} D & 0 \\ 0 & -D \end{pmatrix} \begin{pmatrix} I & I \\ I & -I \end{pmatrix} = \begin{pmatrix} 0 & D \\ D & 0 \end{pmatrix},$$
where $D = J_{m_1}(\lambda_1)$ and $J_{m_1}(-\lambda_1)$ is similar to $-D$. We have
$$\begin{pmatrix} 0 & D \\ D & 0 \end{pmatrix} = \begin{pmatrix} 0 & D \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ D & 0 \end{pmatrix}.$$
Obviously, both $\left(\begin{smallmatrix} 0 & D \\ 0 & 0 \end{smallmatrix}\right)$ and $\left(\begin{smallmatrix} 0 & 0 \\ D & 0 \end{smallmatrix}\right)$ are square nilpotent matrices; that is, $G_1$ is NS. Hence, $A$ is NS in $\mathbb{K}$.
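The similarity and the splitting used in Step 4 can be verified on a sample block; $D = J_2(3)$ over $\mathbb{Q}$ is an illustrative choice (characteristic $\neq 2$ is needed so that $T^{-1} = T/2$):

```python
from fractions import Fraction as F

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def block4(TL, TR, BL, BR):
    # Assemble a 4x4 matrix from four 2x2 blocks.
    top = [TL[i] + TR[i] for i in range(2)]
    bot = [BL[i] + BR[i] for i in range(2)]
    return top + bot

I2 = [[F(1), F(0)], [F(0), F(1)]]
Z2 = [[F(0), F(0)], [F(0), F(0)]]
D  = [[F(3), F(1)], [F(0), F(3)]]                 # D = J_2(3), invertible
nD = [[-x for x in row] for row in D]
nI = [[-x for x in row] for row in I2]

G    = block4(D, Z2, Z2, nD)                      # diag(D, -D)
T    = block4(I2, I2, I2, nI)
Tinv = [[x / F(2) for x in row] for row in T]     # T^{-1} = T/2

M = mul(Tinv, mul(G, T))
assert M == block4(Z2, D, D, Z2)                  # similar to [[0,D],[D,0]]

B, C = block4(Z2, D, Z2, Z2), block4(Z2, Z2, D, Z2)
Z4 = [[F(0)] * 4 for _ in range(4)]
assert mul(B, B) == Z4 and mul(C, C) == Z4
assert [[B[i][j] + C[i][j] for j in range(4)] for i in range(4)] == M
```
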

When $p = 2$, all blocks in the Jordan reduction of $A$ with respect to every nonzero eigenvalue have an even size by Corollary 6; that is, both the characteristic polynomial and the minimal polynomial of every block $G_i$ with respect to the eigenvalue $\lambda_i$ are $(x - \lambda_i)^{2 m_i}$, where $2 m_i$ is even. Similarly, we can also prove that $A$ is NS in $\mathbb{K}$.

#### References

1. J. D. Botha, “Sums of two square-zero matrices over an arbitrary field,” *Linear Algebra and Its Applications*, vol. 436, no. 3, pp. 516–524, 2012.
2. C. D. Pazzis, “On linear combinations of two idempotent matrices over an arbitrary field,” *Linear Algebra and Its Applications*, vol. 433, no. 3, pp. 625–636, 2010.
3. R. E. Hartwig and M. S. Putcha, “When is a matrix a difference of two idempotents?” *Linear and Multilinear Algebra*, vol. 26, no. 4, pp. 267–277, 1990.
4. J. H. Wang and P. Y. Wu, “Sums of square-zero operators,” *Studia Mathematica*, vol. 99, no. 2, pp. 115–127, 1991.
5. F. R. Gantmacher, *The Theory of Matrices*, vol. 1, Chelsea, New York, NY, USA, 1959.
6. H. Flanders, “Elementary divisors of $AB$ and $BA$,” *Proceedings of the American Mathematical Society*, vol. 2, no. 6, pp. 871–874, 1951.
7. W. E. Roth, “The equations $AX-YB=C$ and $AX-XB=C$ in matrices,” *Proceedings of the American Mathematical Society*, vol. 3, pp. 392–396, 1952.
8. C. Pearcy and D. Topping, “Sums of small numbers of idempotents,” *The Michigan Mathematical Journal*, vol. 14, pp. 453–465, 1967.
9. R. E. Hartwig and M. S. Putcha, “When is a matrix a sum of idempotents?” *Linear and Multilinear Algebra*, vol. 26, no. 4, pp. 279–286, 1990.