ISRN Applied Mathematics, Volume 2011, Article ID 578352, 12 pages. http://dx.doi.org/10.5402/2011/578352
Research Article

## On Generalized Rotation Matrices

Oskar Maria Baksalary¹ and Götz Trenkler²

¹Faculty of Physics, Adam Mickiewicz University, ul. Umultowska 85, 61-614 Poznań, Poland
²Department of Statistics, Dortmund University of Technology, Vogelpothsweg 87, 44221 Dortmund, Germany

Received 11 March 2011; Accepted 17 April 2011

Academic Editors: D. Kuhl and K. Takaba

Copyright © 2011 Oskar Maria Baksalary and Götz Trenkler. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

A general class of matrices, covering, for instance, an important set of proper rotations, is considered. Several characteristics of the class are established, which deal with such notions and properties as determinant, eigenspaces, eigenvalues, idempotency, Moore-Penrose inverse, or orthogonality.

#### 1. Introduction and Basic Properties

Let $\mathbb{C}_{m,n}$ denote the set of $m \times n$ complex matrices. The symbols $\mathbf{A}'$, $\mathbf{A}^*$, $\mathcal{R}(\mathbf{A})$, and $\operatorname{rk}(\mathbf{A})$ will stand for the transpose, conjugate transpose, column space, and rank, respectively, of $\mathbf{A} \in \mathbb{C}_{m,n}$. Further, $\mathbf{A}^\dagger$ will be the Moore-Penrose inverse of $\mathbf{A}$, that is, the unique matrix satisfying the equations
$$\mathbf{A}\mathbf{A}^\dagger\mathbf{A} = \mathbf{A}, \quad \mathbf{A}^\dagger\mathbf{A}\mathbf{A}^\dagger = \mathbf{A}^\dagger, \quad (\mathbf{A}\mathbf{A}^\dagger)^* = \mathbf{A}\mathbf{A}^\dagger, \quad (\mathbf{A}^\dagger\mathbf{A})^* = \mathbf{A}^\dagger\mathbf{A}, \tag{1.1}$$
and $\mathbf{I}_n$ will mean the identity matrix of order $n$. The Moore-Penrose inverse is useful in representing the orthogonal (in the sense of the standard inner product) projectors onto $\mathcal{R}(\mathbf{A})$ and $\mathcal{R}(\mathbf{A}^*)$, denoted by $\mathbf{P}_{\mathbf{A}}$ and $\mathbf{P}_{\mathbf{A}^*}$, as well as the orthogonal projectors onto the orthogonal complements of these subspaces, denoted by $\overline{\mathbf{P}}_{\mathbf{A}}$ and $\overline{\mathbf{P}}_{\mathbf{A}^*}$. To be precise, for $\mathbf{A} \in \mathbb{C}_{n,n}$,
$$\mathbf{P}_{\mathbf{A}} = \mathbf{A}\mathbf{A}^\dagger, \quad \mathbf{P}_{\mathbf{A}^*} = \mathbf{A}^\dagger\mathbf{A}, \quad \overline{\mathbf{P}}_{\mathbf{A}} = \mathbf{I}_n - \mathbf{A}\mathbf{A}^\dagger, \quad \overline{\mathbf{P}}_{\mathbf{A}^*} = \mathbf{I}_n - \mathbf{A}^\dagger\mathbf{A}. \tag{1.2}$$
With respect to a scalar, say $\alpha \in \mathbb{C}$, the inverse is defined as $\alpha^\dagger = \alpha^{-1}$ when $\alpha \neq 0$ and $\alpha^\dagger = 0$ when $\alpha = 0$.

The considerations of the present paper concern matrices and vectors having either complex or real entries; in the latter case the corresponding sets will be denoted by $\mathbb{R}_{m,n}$ and $\mathbb{R}_{m,1}$, respectively. Customarily, the symbol $\|\mathbf{x}\|$ will stand for the Euclidean norm of $\mathbf{x} \in \mathbb{C}_{m,1}$, that is, $\|\mathbf{x}\| = (\mathbf{x}^*\mathbf{x})^{1/2}$. Let $\mathbf{A}_a$ be generated by $\mathbf{a} = (a_1, a_2, a_3)' \in \mathbb{C}_{3,1}$, that is,
$$\mathbf{A}_a = \begin{pmatrix} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{pmatrix}. \tag{1.3}$$
The matrix $\mathbf{A}_a$ can be used to define the vector cross product in $\mathbb{C}_{3,1}$, with $\mathbf{a} \times \mathbf{b} = \mathbf{A}_a\mathbf{b}$ for any $\mathbf{b} \in \mathbb{C}_{3,1}$; see [1]. Further properties of $\mathbf{A}_a$ are listed in the following lemma, whose proof is easy and thus omitted.
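Since all subsequent results revolve around (1.3), a brief numerical sketch may be helpful; the helper name `cross_matrix` and the test vectors are illustrative choices, not part of the paper's notation.

```python
import numpy as np

def cross_matrix(a):
    """Return the matrix A_a of (1.3), so that cross_matrix(a) @ b equals a x b."""
    a1, a2, a3 = a
    return np.array([[0.0, -a3,  a2],
                     [ a3, 0.0, -a1],
                     [-a2,  a1, 0.0]])

a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.5, 2.0])
A = cross_matrix(a)
assert np.allclose(A @ b, np.cross(a, b))   # A_a b = a x b
assert np.allclose(A.T, -A)                 # A_a is skew-symmetric
# the identity A_a^2 = aa' - (a'a) I_3, used repeatedly in what follows
assert np.allclose(A @ A, np.outer(a, a) - (a @ a) * np.eye(3))
```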

Lemma 1.1. Let $\mathbf{a}, \mathbf{b}, \mathbf{c}, \mathbf{d} \in \mathbb{R}_{3,1}$. Moreover, let $\mathbf{A}_x$ for $\mathbf{x} \in \{\mathbf{a}, \mathbf{b}\}$ be of the form (1.3). Then, (i) $\mathbf{A}_a' = -\mathbf{A}_a$, (ii) $\mathbf{A}_a\mathbf{b} = -\mathbf{A}_b\mathbf{a}$, (iii) $\mathbf{A}_a\mathbf{a} = \mathbf{0}$, (iv) $\mathbf{A}_a^2 = \mathbf{a}\mathbf{a}' - (\mathbf{a}'\mathbf{a})\mathbf{I}_3$, (v) $\mathbf{A}_a^3 = -(\mathbf{a}'\mathbf{a})\mathbf{A}_a$, (vi) $\mathbf{A}_a^\dagger = -(\mathbf{a}'\mathbf{a})^\dagger\mathbf{A}_a$, (vii) $\operatorname{rk}(\mathbf{A}_a) = 2$ whenever $\mathbf{a} \neq \mathbf{0}$, (viii) $\mathbf{A}_a\mathbf{A}_b = \mathbf{b}\mathbf{a}' - (\mathbf{a}'\mathbf{b})\mathbf{I}_3$, (ix) $\mathbf{A}_a\mathbf{A}_b - \mathbf{A}_b\mathbf{A}_a = \mathbf{b}\mathbf{a}' - \mathbf{a}\mathbf{b}'$, (x) $\mathbf{a}'(\mathbf{b} \times \mathbf{c}) = \mathbf{b}'(\mathbf{c} \times \mathbf{a}) = \mathbf{c}'(\mathbf{a} \times \mathbf{b})$, (xi) $\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = \mathbf{b}(\mathbf{a}'\mathbf{c}) - \mathbf{c}(\mathbf{a}'\mathbf{b})$, (xii) $(\mathbf{a} \times \mathbf{b})'(\mathbf{c} \times \mathbf{d}) = (\mathbf{a}'\mathbf{c})(\mathbf{b}'\mathbf{d}) - (\mathbf{a}'\mathbf{d})(\mathbf{b}'\mathbf{c})$, (xiii) $\mathbf{A}_{a \times b} = \mathbf{b}\mathbf{a}' - \mathbf{a}\mathbf{b}'$.

Relationships listed in Lemma 1.1 are available in the literature; see Trenkler [2, 3], Groß et al. [4], Bernstein [5, Ch. 3], and G. Trenkler and D. Trenkler [6]. It is noteworthy that the three scalars involved in point (x) represent the scalar triple products of $\mathbf{a}$, $\mathbf{b}$, and $\mathbf{c}$. Moreover, the right-hand side equalities in points (xi) and (xii) are known as the Grassmann and Lagrange identities, respectively. From the point of view of the present paper, conditions (iii)–(v) of the lemma are of particular importance and will be extensively utilized in the subsequent derivations.

It is known that if $\mathbf{a} \in \mathbb{R}_{3,1}$ generating $\mathbf{A}_a$ given in (1.3) is such that $\|\mathbf{a}\| = 1$, then the matrices of the form
$$\mathbf{R} = \mathbf{I}_3 + \sin\theta\,\mathbf{A}_a + (1 - \cos\theta)\mathbf{A}_a^2 \tag{1.4}$$
describe proper rotations in $\mathbb{R}_{3,1}$, with $\theta$ being the angle of rotation about the axis given by $\mathbf{a}$; see Noble [7, Ch. 12] or Murray et al. [8, Ch. 2]. In what follows we consider a more general class of matrices than the one spanned by matrices of the form (1.4), namely,
$$\mathcal{G} = \left\{\mathbf{G} \in \mathbb{C}_{3,3} : \mathbf{G} = \alpha\mathbf{I}_3 + \beta\mathbf{A}_a + \gamma\mathbf{a}\mathbf{a}',\ \alpha, \beta, \gamma \in \mathbb{C}\right\}, \tag{1.5}$$
where $\mathbf{a} \in \mathbb{R}_{3,1}$ is a fixed vector satisfying $\|\mathbf{a}\| = 1$. It can be verified that $\mathcal{G}$ covers all proper rotations of the form (1.4), for on account of point (iv) of Lemma 1.1 the matrix (1.4) coincides with the element of $\mathcal{G}$ determined by $\alpha = \cos\theta$, $\beta = \sin\theta$, $\gamma = 1 - \cos\theta$. Moreover, it comprises also improper rotations, that is, orthogonal matrices with determinant equal to −1 [9, Ch. VIII], symmetric elementary matrices [10, Sec. 1], and all matrices commuting with $\mathbf{A}_a$ [11].
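As a sanity check of the claim that (1.4) lies in the class (1.5), the following sketch builds the rotation and matches it against the representation with $\alpha = \cos\theta$, $\beta = \sin\theta$, $\gamma = 1 - \cos\theta$; the angle and axis are arbitrary choices.

```python
import numpy as np

def cross_matrix(a):
    a1, a2, a3 = a
    return np.array([[0.0, -a3, a2], [a3, 0.0, -a1], [-a2, a1, 0.0]])

theta = 0.7
a = np.array([1.0, 2.0, 2.0]); a /= np.linalg.norm(a)      # unit rotation axis
A = cross_matrix(a)
R = np.eye(3) + np.sin(theta) * A + (1 - np.cos(theta)) * (A @ A)   # form (1.4)
assert np.allclose(R.T @ R, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)
# the same matrix written in the form (1.5)
G = np.cos(theta)*np.eye(3) + np.sin(theta)*A + (1 - np.cos(theta))*np.outer(a, a)
assert np.allclose(R, G)
```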

The purpose of the present paper is to identify various properties of the class of matrices specified in (1.5). As a result, several characteristics of the class are established, dealing with such notions and properties as idempotency, determinant, eigenvalues, Moore-Penrose inverse, orthogonality, or eigenspaces.

#### 2. Results

Subsequently, the symbol $\mathbf{G}_{\alpha,\beta,\gamma}$ is interpreted as $\mathbf{G}_{\alpha,\beta,\gamma} = \alpha\mathbf{I}_3 + \beta\mathbf{A}_a + \gamma\mathbf{a}\mathbf{a}'$, that is, as the general element of the class $\mathcal{G}$ specified in (1.5). The theorem below states that $\mathcal{G}$ is closed under multiplication.

Theorem 2.1. Let $\mathbf{G}_1 = \mathbf{G}_{\alpha_1,\beta_1,\gamma_1}$ and $\mathbf{G}_2 = \mathbf{G}_{\alpha_2,\beta_2,\gamma_2}$, with $\mathbf{G}_1, \mathbf{G}_2 \in \mathcal{G}$. Then $\mathbf{G}_1\mathbf{G}_2 \in \mathcal{G}$.

Proof. Direct calculations based on points (iii) and (iv) of Lemma 1.1 show that
$$\mathbf{G}_1\mathbf{G}_2 = \mathbf{G}_{\alpha_3,\beta_3,\gamma_3}, \quad \text{where } \alpha_3 = \alpha_1\alpha_2 - \beta_1\beta_2,\ \beta_3 = \alpha_1\beta_2 + \beta_1\alpha_2,\ \gamma_3 = \alpha_1\gamma_2 + \gamma_1\alpha_2 + \beta_1\beta_2 + \gamma_1\gamma_2, \tag{2.1}$$
establishing the assertion.

It is clear that, besides ensuring the closure property, multiplication in $\mathcal{G}$ is also associative. Furthermore, $\mathcal{G}$ contains the identity element, namely $\mathbf{I}_3 = \mathbf{G}_{1,0,0}$. On the other hand, since $\mathcal{G}$ includes also singular matrices, not every $\mathbf{G} \in \mathcal{G}$ has an inverse element in $\mathcal{G}$. Thus, the set $\mathcal{G}$ is a semigroup under (matrix) multiplication. (As will be seen subsequently, the Moore-Penrose inverse of every $\mathbf{G} \in \mathcal{G}$ belongs to $\mathcal{G}$; in particular, $\mathbf{G}^{-1} \in \mathcal{G}$ for each nonsingular $\mathbf{G} \in \mathcal{G}$.)
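The closure property can be probed numerically; the coefficient formulas used below are the ones given in (2.1) as reconstructed above, and the parameter values are arbitrary.

```python
import numpy as np

def cross_matrix(a):
    a1, a2, a3 = a
    return np.array([[0.0, -a3, a2], [a3, 0.0, -a1], [-a2, a1, 0.0]])

a = np.array([2.0, -1.0, 2.0]); a /= np.linalg.norm(a)
A, P, I = cross_matrix(a), np.outer(a, a), np.eye(3)
G = lambda al, be, ga: al*I + be*A + ga*P        # an element of (1.5)

al1, be1, ga1 = 0.3, -1.2, 0.8
al2, be2, ga2 = 1.1, 0.4, -0.5
# coefficients of the product, following (2.1)
al3 = al1*al2 - be1*be2
be3 = al1*be2 + be1*al2
ga3 = al1*ga2 + ga1*al2 + be1*be2 + ga1*ga2
assert np.allclose(G(al1, be1, ga1) @ G(al2, be2, ga2), G(al3, be3, ga3))
```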

The next theorem provides necessary and sufficient conditions for $\mathbf{G} \in \mathcal{G}$ to be idempotent.

Theorem 2.2. Let $\mathbf{G} = \mathbf{G}_{\alpha,\beta,\gamma} \in \mathcal{G}$. Then $\mathbf{G}$ is idempotent if and only if
$$\alpha^2 - \beta^2 = \alpha, \quad 2\alpha\beta = \beta, \quad 2\alpha\gamma + \beta^2 + \gamma^2 = \gamma. \tag{2.2}$$

Proof. The equivalence established in the theorem follows straightforwardly from Theorem 2.1.

In view of Theorem 2.2, we can distinguish two subsets of idempotent matrices belonging to $\mathcal{G}$, corresponding to $\beta \neq 0$ and $\beta = 0$. In the former of them, $\alpha = \frac{1}{2}$, $\beta = \pm\frac{i}{2}$, and $\gamma = \pm\frac{1}{2}$, with the signs chosen independently. Hence, since the four resulting combinations can be expressed through the matrices
$$\mathbf{K}_\pm = \tfrac{1}{2}\left(\mathbf{I}_3 - \mathbf{a}\mathbf{a}' \mp i\mathbf{A}_a\right), \tag{2.3}$$
it follows that $\mathbf{G} \in \{\mathbf{K}_+, \mathbf{K}_-, \mathbf{K}_+ + \mathbf{a}\mathbf{a}', \mathbf{K}_- + \mathbf{a}\mathbf{a}'\}$. (Note that matrices $\mathbf{K}_+$, $\mathbf{K}_-$ were considered by Trenkler [3] to characterize certain eigenspaces.) On the other hand, if $\beta = 0$, then Theorem 2.2 entails four cases, namely, $\alpha = 0$, $\gamma = 0$; $\alpha = 0$, $\gamma = 1$; $\alpha = 1$, $\gamma = -1$; and $\alpha = 1$, $\gamma = 0$, leading to $\mathbf{G} = \mathbf{0}$, $\mathbf{G} = \mathbf{a}\mathbf{a}'$, $\mathbf{G} = \mathbf{I}_3 - \mathbf{a}\mathbf{a}'$, and $\mathbf{G} = \mathbf{I}_3$, respectively.
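A small script can confirm that all eight matrices listed above are indeed idempotent; the names `Kp`/`Km` for $\mathbf{K}_\pm$ are ours, and the axis vector is an arbitrary choice.

```python
import numpy as np

def cross_matrix(a):
    a1, a2, a3 = a
    return np.array([[0.0, -a3, a2], [a3, 0.0, -a1], [-a2, a1, 0.0]])

a = np.array([1.0, 2.0, 2.0]); a /= np.linalg.norm(a)
A, P, I = cross_matrix(a), np.outer(a, a), np.eye(3)
Kp = 0.5 * (I - P - 1j*A)                            # K_+ of (2.3)
Km = 0.5 * (I - P + 1j*A)                            # K_-
candidates = [np.zeros((3, 3)), P, I - P, I,         # the beta = 0 idempotents
              Kp, Km, Kp + P, Km + P]                # the beta != 0 idempotents
for Q in candidates:
    assert np.allclose(Q @ Q, Q)
```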

The next task is to characterize the eigenvalues of matrices belonging to $\mathcal{G}$. The subsequent theorem expresses the determinant of $\mathbf{G} \in \mathcal{G}$ in terms of the scalars $\alpha$, $\beta$, and $\gamma$. Its proof is based on the so-called Leverrier–Souriau–Frame algorithm, which provides a useful tool to calculate the coefficients of a characteristic polynomial. Since the algorithm is not widely known, it is restated in the following lemma; see, for example, Meyer [12, page 504]. Customarily, $\operatorname{tr}(\cdot)$ denotes the trace of a matrix argument.

Lemma 2.3. Let $\mathbf{A} \in \mathbb{C}_{n,n}$, and let $\lambda^n + c_{n-1}\lambda^{n-1} + \cdots + c_1\lambda + c_0 = 0$ be the characteristic equation for $\mathbf{A}$. Then
$$c_{n-k} = -\frac{1}{k}\operatorname{tr}(\mathbf{A}\mathbf{B}_{k-1}), \quad k = 1, 2, \ldots, n,$$
where $\mathbf{B}_0 = \mathbf{I}_n$ and $\mathbf{B}_k = \mathbf{A}\mathbf{B}_{k-1} + c_{n-k}\mathbf{I}_n$, $k = 1, 2, \ldots, n-1$.

Theorem 2.4. Let $\mathbf{G} = \mathbf{G}_{\alpha,\beta,\gamma} \in \mathcal{G}$, and let $\delta = \alpha^2 + \beta^2$. Then $\det(\mathbf{G}) = \delta(\alpha + \gamma)$.

Proof. From Lemma 2.3 it follows that $c_2$ is given by $c_2 = -\operatorname{tr}(\mathbf{G}) = -(3\alpha + \gamma)$, whence it is seen that $\mathbf{B}_1$ takes the form $\mathbf{B}_1 = \mathbf{G} - (3\alpha + \gamma)\mathbf{I}_3$. Further, since Theorem 2.1 yields $\operatorname{tr}(\mathbf{G}^2) = 3\alpha^2 - 2\beta^2 + 2\alpha\gamma + \gamma^2$, in consequence we get $c_1 = -\frac{1}{2}\operatorname{tr}(\mathbf{G}\mathbf{B}_1) = 3\alpha^2 + \beta^2 + 2\alpha\gamma$, leading to $\mathbf{B}_2 = \mathbf{G}\mathbf{B}_1 + c_1\mathbf{I}_3$. Finally, straightforward calculations yield $c_0 = -\frac{1}{3}\operatorname{tr}(\mathbf{G}\mathbf{B}_2) = -(\alpha^2 + \beta^2)(\alpha + \gamma)$. Combining this result with the property $\det(\mathbf{G}) = -c_0$ completes the proof.
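Under the representation assumed here, the determinant formula of Theorem 2.4 is easy to validate numerically with randomly drawn parameters:

```python
import numpy as np

def cross_matrix(a):
    a1, a2, a3 = a
    return np.array([[0.0, -a3, a2], [a3, 0.0, -a1], [-a2, a1, 0.0]])

rng = np.random.default_rng(0)
a = rng.normal(size=3); a /= np.linalg.norm(a)       # random unit axis
al, be, ga = rng.normal(size=3)                      # random real coefficients
G = al*np.eye(3) + be*cross_matrix(a) + ga*np.outer(a, a)
assert np.isclose(np.linalg.det(G), (al**2 + be**2) * (al + ga))
```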

By virtue of Theorem 2.4, it is easy to determine the spectrum of matrices belonging to $\mathcal{G}$.

Theorem 2.5. Let $\mathbf{G} = \mathbf{G}_{\alpha,\beta,\gamma} \in \mathcal{G}$. Then the eigenvalues of $\mathbf{G}$ are solutions to the equation $[(\alpha - \lambda)^2 + \beta^2](\alpha - \lambda + \gamma) = 0$.

Proof. It is clear that $\mathbf{G} - \lambda\mathbf{I}_3 = \mathbf{G}_{\alpha - \lambda, \beta, \gamma} \in \mathcal{G}$. Hence, on account of Theorem 2.4, we get $\det(\mathbf{G} - \lambda\mathbf{I}_3) = [(\alpha - \lambda)^2 + \beta^2](\alpha - \lambda + \gamma)$, establishing the assertion.

Theorem 2.5 leads to what follows.

Corollary 2.6. Let $\mathbf{G} = \mathbf{G}_{\alpha,\beta,\gamma} \in \mathcal{G}$. Then the eigenvalues of $\mathbf{G}$ are
$$\lambda_1 = \alpha + i\beta, \quad \lambda_2 = \alpha - i\beta, \quad \lambda_3 = \alpha + \gamma. \tag{2.11}$$

An expected result originating from Corollary 2.6 is that $\lambda_1\lambda_2\lambda_3 = \det(\mathbf{G})$, with $\det(\mathbf{G})$ given in Theorem 2.4. Furthermore, when $\alpha = 0$, $\beta = 1$, and $\gamma = 0$, then the eigenvalues given in (2.11) reduce to $i$, $-i$, and $0$, that is, to the eigenvalues of $\mathbf{A}_a$; see [3, Theorem 2].
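The eigenvalue triple (2.11) can be compared against a general-purpose eigensolver; the particular $\alpha$, $\beta$, $\gamma$ below are arbitrary:

```python
import numpy as np

def cross_matrix(a):
    a1, a2, a3 = a
    return np.array([[0.0, -a3, a2], [a3, 0.0, -a1], [-a2, a1, 0.0]])

a = np.array([3.0, 0.0, 4.0]); a /= np.linalg.norm(a)
al, be, ga = 0.9, -0.4, 1.7
G = al*np.eye(3) + be*cross_matrix(a) + ga*np.outer(a, a)
# expected spectrum per (2.11): alpha +/- i beta and alpha + gamma
expected = np.sort_complex(np.array([al + 1j*be, al - 1j*be, al + ga]))
assert np.allclose(np.sort_complex(np.linalg.eigvals(G)), expected)
```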

The following theorem will be useful in the subsequent calculations of the Moore-Penrose inverses of matrices belonging to $\mathcal{G}$.

Theorem 2.7. Let $\mathbf{S} \in \mathbb{C}_{3,3}$ be such that $\mathbf{S} = \mu\mathbf{I}_3 + \mathbf{A}_c$, with $\mu \in \mathbb{C}$ and $\mathbf{A}_c$ of the form (1.3) generated by nonzero $\mathbf{c} \in \mathbb{R}_{3,1}$. Moreover, let $\delta = \mu^2 + \mathbf{c}'\mathbf{c}$, with $\|\mathbf{c}\| = (\mathbf{c}'\mathbf{c})^{1/2}$. Then, (i) $\det(\mathbf{S}) = \mu\delta$, (ii) the eigenvalues of $\mathbf{S}$ are $\mu$, $\mu + i\|\mathbf{c}\|$, and $\mu - i\|\mathbf{c}\|$, (iii) if $\mu = 0$, then $\operatorname{rk}(\mathbf{S}) = 2$ and $\mathbf{S}^\dagger = -\|\mathbf{c}\|^{-2}\mathbf{A}_c$, (iv) if $\mu \neq 0$, $\delta = 0$, then $\operatorname{rk}(\mathbf{S}) = 2$ and $\mathbf{S}^\dagger = \frac{1}{4\mu}\mathbf{I}_3 + \frac{1}{4\mu^2}\mathbf{A}_c - \frac{3}{4\mu^3}\mathbf{c}\mathbf{c}'$, (v) if $\mu \neq 0$, $\delta \neq 0$, then $\operatorname{rk}(\mathbf{S}) = 3$ and $\mathbf{S}^\dagger = \mathbf{S}^{-1} = \frac{1}{\delta}\left(\mu\mathbf{I}_3 - \mathbf{A}_c + \frac{1}{\mu}\mathbf{c}\mathbf{c}'\right)$.

Proof. Assertion (i) follows from Theorem 2.4 by setting $\alpha = \mu$, $\beta = \|\mathbf{c}\|$, $\gamma = 0$, with $\mathbf{a} = \|\mathbf{c}\|^{-1}\mathbf{c}$. Statement (ii) is a consequence of (i), whereas the validity of points (iii) and (v) can be confirmed by straightforward calculations; see [3, Theorem 1]. For the proof of statement (iv) note that $\mu \neq 0$ and $\mu^2 + \mathbf{c}'\mathbf{c} = 0$ imply $\mu^2 = -\|\mathbf{c}\|^2$, that is, $\mu$ is purely imaginary. Taking this fact into account, in view of $\overline{\mu} = -\mu$, the validity of the formula for $\mathbf{S}^\dagger$ given in point (iv) is seen by direct verification of conditions (1.1). Similarly, the formula for $\mathbf{S}^{-1}$ provided in point (v) can be confirmed by examining the condition $\mathbf{S}\mathbf{S}^{-1} = \mathbf{I}_3$. The proof is thus complete, for the expressions for $\operatorname{rk}(\mathbf{S})$ given in points (iv) and (v) are easily obtainable.

Note that regardless of whether $\mu$ and/or $\delta$ in Theorem 2.7 are zero or not, the matrix $\mathbf{S}$ is such that $\mathbf{S}\mathbf{S}^\dagger = \mathbf{S}^\dagger\mathbf{S}$, or, in other words, $\mathcal{R}(\mathbf{S}) = \mathcal{R}(\mathbf{S}^*)$, that is, $\mathbf{S}$ is an EP matrix. Another observation is that setting $\mu = 1$ in point (v) of Theorem 2.7 leads to the relationship
$$(\mathbf{I}_3 + \mathbf{A}_c)^{-1} = \frac{1}{1 + \mathbf{c}'\mathbf{c}}\left(\mathbf{I}_3 - \mathbf{A}_c + \mathbf{c}\mathbf{c}'\right).$$
Hence, the so-called Cayley transform of $\mathbf{A}_c$ (see [13, p. 219]), being of the form $\mathbf{K} = (\mathbf{I}_3 - \mathbf{A}_c)(\mathbf{I}_3 + \mathbf{A}_c)^{-1}$, of which it is known that it is orthogonal (see [9, Theorem 8.1.10]), takes the form
$$\mathbf{K} = \frac{1}{1 + \mathbf{c}'\mathbf{c}}\left[(1 - \mathbf{c}'\mathbf{c})\mathbf{I}_3 - 2\mathbf{A}_c + 2\mathbf{c}\mathbf{c}'\right].$$
Since $\det(\mathbf{K}) = 1$, the matrix $\mathbf{K}$ represents in fact a proper rotation.
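Both the closed form of the Cayley transform given above and its properties (orthogonality, unit determinant) admit a direct numerical check; the vector `c` is an arbitrary nonzero choice.

```python
import numpy as np

def cross_matrix(a):
    a1, a2, a3 = a
    return np.array([[0.0, -a3, a2], [a3, 0.0, -a1], [-a2, a1, 0.0]])

c = np.array([0.5, -1.0, 2.0])                    # any nonzero real vector
Ac, I = cross_matrix(c), np.eye(3)
K = (I - Ac) @ np.linalg.inv(I + Ac)              # Cayley transform of A_c
n = c @ c
K_closed = ((1 - n)*I - 2*Ac + 2*np.outer(c, c)) / (1 + n)
assert np.allclose(K, K_closed)                   # matches the closed form
assert np.allclose(K.T @ K, I) and np.isclose(np.linalg.det(K), 1.0)
```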

We now have the tools necessary to establish formulae for the Moore-Penrose inverses of matrices belonging to $\mathcal{G}$.

Theorem 2.8. Let $\mathbf{G} = \mathbf{G}_{\alpha,\beta,\gamma} \in \mathcal{G}$ be decomposed as $\mathbf{G} = \mathbf{S} + \gamma\mathbf{a}\mathbf{a}'$, where $\mathbf{S} = \alpha\mathbf{I}_3 + \beta\mathbf{A}_a$. Moreover, let $\delta = \alpha^2 + \beta^2$ and $\lambda_3 = \alpha + \gamma$. Then, (i) if $\beta = 0$, then $\mathbf{G}^\dagger = \alpha^\dagger(\mathbf{I}_3 - \mathbf{a}\mathbf{a}') + \lambda_3^\dagger\,\mathbf{a}\mathbf{a}'$, (ii) if $\beta \neq 0$, $\delta = 0$, $\lambda_3 = 0$, then $\mathbf{G}^\dagger = \frac{1}{4\alpha}\mathbf{I}_3 + \frac{\beta}{4\alpha^2}\mathbf{A}_a - \frac{1}{4\alpha}\mathbf{a}\mathbf{a}'$, (iii) if $\beta \neq 0$, $\delta = 0$, $\lambda_3 \neq 0$, then $\mathbf{G}^\dagger = \frac{1}{4\alpha}\mathbf{I}_3 + \frac{\beta}{4\alpha^2}\mathbf{A}_a + \left(\frac{1}{\lambda_3} - \frac{1}{4\alpha}\right)\mathbf{a}\mathbf{a}'$, (iv) if $\beta \neq 0$, $\delta \neq 0$, $\lambda_3 = 0$, then $\mathbf{G}^\dagger = \frac{1}{\delta}\left(\alpha\mathbf{I}_3 - \beta\mathbf{A}_a - \alpha\,\mathbf{a}\mathbf{a}'\right)$, (v) if $\beta \neq 0$, $\delta \neq 0$, $\lambda_3 \neq 0$, then $\mathbf{G}^\dagger = \mathbf{G}^{-1} = \frac{1}{\delta}\left(\alpha\mathbf{I}_3 - \beta\mathbf{A}_a\right) + \left(\frac{1}{\lambda_3} - \frac{\alpha}{\delta}\right)\mathbf{a}\mathbf{a}'$.

Proof. The first observation is that if $\beta = 0$, then $\mathbf{G} = \alpha(\mathbf{I}_3 - \mathbf{a}\mathbf{a}') + \lambda_3\,\mathbf{a}\mathbf{a}'$, and on account of Theorem 3.1.1 in [14] we have $\mathbf{G}^\dagger = \alpha^\dagger(\mathbf{I}_3 - \mathbf{a}\mathbf{a}') + \lambda_3^\dagger\,\mathbf{a}\mathbf{a}'$. Hence, by utilizing the facts that $\mathbf{I}_3 - \mathbf{a}\mathbf{a}'$ and $\mathbf{a}\mathbf{a}'$ are orthogonal projectors and that $(\mathbf{a}\mathbf{a}')(\mathbf{I}_3 - \mathbf{a}\mathbf{a}') = \mathbf{0}$, the formula for $\mathbf{G}^\dagger$ given in point (i) follows.
Assume now that $\beta \neq 0$, in which case we can still have $\delta = 0$ or $\delta \neq 0$. In the former of these situations, point (iv) of Theorem 2.7 implies $\mathbf{S}\mathbf{S}^\dagger\mathbf{a} = \mathbf{a}$, whence $\mathbf{a} \in \mathcal{R}(\mathbf{S})$, or, in other words, $\mathcal{R}(\mathbf{a}\mathbf{a}') \subseteq \mathcal{R}(\mathbf{S})$. This inclusion is clearly satisfied also when $\delta \neq 0$, for then $\operatorname{rk}(\mathbf{S}) = 3$.
In order to apply the results of Baksalary et al. [15], we introduce $\mathbf{b} = \gamma\mathbf{a}$ and $\mathbf{c} = \mathbf{a}$. Then $\mathbf{G} = \mathbf{S} + \mathbf{b}\mathbf{c}'$, that is, $\mathbf{G}$ is a rank-one modification of $\mathbf{S}$. As in [15], we define also the vectors $\mathbf{d} = \mathbf{S}^\dagger\mathbf{b}$ and $\mathbf{e} = (\mathbf{S}^\dagger)^*\mathbf{c}$. As is seen from Theorem 2.7,
$$\mathbf{d} = \frac{\gamma}{\alpha}\,\mathbf{a}, \quad \mathbf{e} = \frac{1}{\overline{\alpha}}\,\mathbf{a}, \quad \lambda = 1 + \mathbf{c}^*\mathbf{S}^\dagger\mathbf{b} = \frac{\lambda_3}{\alpha}, \tag{2.15}$$
where $\lambda$ is the scalar so specified in [15].
Let us first consider case (ii) of the theorem, characterized, in addition to $\beta \neq 0$, by $\delta = 0$ and $\lambda_3 = 0$. On account of Theorem 1.1 in [15], this case corresponds to $\lambda = 0$. In consequence, combining (2.15) with $\mathbf{S}^\dagger$ given in point (iv) of Theorem 2.7 and with formula (2.1) in [15] yields the expression for $\mathbf{G}^\dagger$ claimed in point (ii) of the theorem.
According to Theorem 1.1 in [15], another case which corresponds to $\lambda = 0$ is given in point (iv) of the theorem, where $\delta \neq 0$ and $\lambda_3 = 0$. Direct calculations with the use of $\mathbf{S}^{-1}$ given in point (v) of Theorem 2.7 show that formulae (2.15) remain valid also in this case. Hence, from relationship (2.1) in [15] we obtain the expression claimed in point (iv) of the theorem.
Another conclusion originating from Theorem 1.1 in [15] is that case (iii) of the theorem, in which $\delta = 0$ and $\lambda_3 \neq 0$, corresponds to $\lambda \neq 0$. Substituting (2.15) and $\mathbf{S}^\dagger$ given in point (iv) of Theorem 2.7 into formula (2.2) in [15] leads to $\mathbf{G}^\dagger = \mathbf{S}^\dagger - \frac{\gamma}{\alpha\lambda_3}\mathbf{a}\mathbf{a}'$, that is, to the expression for $\mathbf{G}^\dagger$ given in the theorem.
Case (v), in which $\delta \neq 0$ and $\lambda_3 \neq 0$, is left to be considered. According to the remark on p. 210 in [15], in such a situation $\mathbf{G}$ is nonsingular. Hence, $\mathbf{G}^\dagger = \mathbf{G}^{-1}$, and the validity of the formula given in the theorem can be confirmed by direct verification of the condition $\mathbf{G}\mathbf{G}^{-1} = \mathbf{I}_3$.

A conclusion originating from Theorem 2.8 is that every $\mathbf{G} \in \mathcal{G}$ satisfies $\mathbf{G}^\dagger \in \mathcal{G}$, the property which was already mentioned in the remark following Theorem 2.1. Further consequences of Theorem 2.8 are given in what follows.
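As an illustration, the following sketch compares the closed-form expression of point (iv) of Theorem 2.8, as reconstructed above, with a numerical pseudoinverse, and confirms the EP property; the parameter choice below deliberately forces $\lambda_3 = 0$, so that $\mathbf{G}$ is singular.

```python
import numpy as np

def cross_matrix(a):
    a1, a2, a3 = a
    return np.array([[0.0, -a3, a2], [a3, 0.0, -a1], [-a2, a1, 0.0]])

a = np.array([1.0, -2.0, 2.0]); a /= np.linalg.norm(a)
A, P, I = cross_matrix(a), np.outer(a, a), np.eye(3)
al, be, ga = 0.6, 0.8, -0.6            # lambda_3 = al + ga = 0, so G is singular
G = al*I + be*A + ga*P
d = al**2 + be**2                      # delta of Theorem 2.8
Gp = np.linalg.pinv(G)
assert np.allclose(Gp, (al*I - be*A - al*P) / d)   # point (iv), as reconstructed
assert np.allclose(G @ Gp, Gp @ G)                 # G is EP
```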

Corollary 2.9. Let $\mathbf{G} = \mathbf{G}_{\alpha,\beta,\gamma} \in \mathcal{G}$, and let $\delta = \alpha^2 + \beta^2$, $\lambda_3 = \alpha + \gamma$. Then $\mathbf{G}\mathbf{G}^\dagger = \mathbf{G}^\dagger\mathbf{G}$, where (i) if $\beta = 0$, then $\mathbf{G}\mathbf{G}^\dagger = \alpha^\dagger\alpha(\mathbf{I}_3 - \mathbf{a}\mathbf{a}') + \lambda_3^\dagger\lambda_3\,\mathbf{a}\mathbf{a}'$, (ii) if $\beta \neq 0$, $\delta = 0$, $\lambda_3 = 0$, then $\mathbf{G}\mathbf{G}^\dagger = \mathbf{K}_-$ provided that $\beta = i\alpha$ and $\mathbf{G}\mathbf{G}^\dagger = \mathbf{K}_+$ provided that $\beta = -i\alpha$, (iii) if $\beta \neq 0$, $\delta = 0$, $\lambda_3 \neq 0$, then $\mathbf{G}\mathbf{G}^\dagger = \mathbf{K}_- + \mathbf{a}\mathbf{a}'$ provided that $\beta = i\alpha$ and $\mathbf{G}\mathbf{G}^\dagger = \mathbf{K}_+ + \mathbf{a}\mathbf{a}'$ provided that $\beta = -i\alpha$, (iv) if $\beta \neq 0$, $\delta \neq 0$, $\lambda_3 = 0$, then $\mathbf{G}\mathbf{G}^\dagger = \mathbf{I}_3 - \mathbf{a}\mathbf{a}'$, (v) if $\beta \neq 0$, $\delta \neq 0$, $\lambda_3 \neq 0$, then $\mathbf{G}\mathbf{G}^\dagger = \mathbf{I}_3$, with $\mathbf{K}_+$, $\mathbf{K}_-$ as specified in (2.3).

Proof. The corollary is established by direct calculations.

Point (v) of Theorem 2.8 enables us to formulate necessary and sufficient conditions for $\mathbf{G} \in \mathcal{G}$ to be orthogonal.

Theorem 2.10. Let $\mathbf{G} = \mathbf{G}_{\alpha,\beta,\gamma} \in \mathcal{G}$ with nonzero $\beta$. Moreover, let $\lambda_3 = \alpha + \gamma$. Then $\mathbf{G}$ is orthogonal if and only if
$$\alpha^2 + \beta^2 = 1 \quad \text{and} \quad \lambda_3^2 = 1. \tag{2.17}$$

Proof. The matrix $\mathbf{G}$ is orthogonal if and only if it is nonsingular and $\mathbf{G}^{-1} = \mathbf{G}'$. On account of point (v) of Theorem 2.8, it is seen that $\mathbf{G}^{-1} = \mathbf{G}'$ is equivalent to $\delta = 1$ and $\lambda_3^{-1} - \alpha = \gamma$ or, in other words, $\alpha^2 + \beta^2 = 1$ and $\lambda_3^2 = 1$. Taking into account that $\lambda_3^2 = 1$ implies $\lambda_3 \neq 0$, the assertion follows.

Observe that the right-hand side condition in (2.17) admits two possibilities, namely, either $\lambda_3 = 1$ or $\lambda_3 = -1$. Since $\det(\mathbf{G}) = (\alpha^2 + \beta^2)\lambda_3$, Theorem 2.4 ensures that in the former situation $\mathbf{G}$ is a proper rotation, whereas in the latter one $\mathbf{G}$ is an improper rotation.
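Condition (2.17) is straightforward to exercise numerically; below, $\alpha$ and $\beta$ are parametrized by an angle so that $\alpha^2 + \beta^2 = 1$ holds automatically, and $\gamma$ is chosen to realize both signs of $\lambda_3$.

```python
import numpy as np

def cross_matrix(a):
    a1, a2, a3 = a
    return np.array([[0.0, -a3, a2], [a3, 0.0, -a1], [-a2, a1, 0.0]])

a = np.array([2.0, 1.0, -2.0]); a /= np.linalg.norm(a)
A, P, I = cross_matrix(a), np.outer(a, a), np.eye(3)
t = 1.1
al, be = np.cos(t), np.sin(t)                        # alpha^2 + beta^2 = 1
for ga, det_expected in ((1 - al, 1.0), (-1 - al, -1.0)):   # lambda_3 = +/- 1
    G = al*I + be*A + ga*P
    assert np.allclose(G.T @ G, I)                   # orthogonality
    assert np.isclose(np.linalg.det(G), det_expected)  # proper vs improper
```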

The next theorem concerns eigenspaces attributed to the eigenvalues of matrices belonging to $\mathcal{G}$.

Theorem 2.11. Let $\mathbf{G} = \mathbf{G}_{\alpha,\beta,\gamma} \in \mathcal{G}$, and let $\mathcal{E}_1$, $\mathcal{E}_2$, $\mathcal{E}_3$ be the eigenspaces of $\mathbf{G}$ associated with its eigenvalues given in (2.11). Then, (i) if $\beta = 0$, then $\mathcal{E}_1 = \mathcal{E}_2$, $\mathcal{E}_1 = \mathcal{R}(\mathbf{I}_3 - \gamma^\dagger\gamma\,\mathbf{a}\mathbf{a}')$, (ii) if $\beta \neq 0$, then $\mathcal{E}_1 = \mathcal{R}(\mathbf{K}_+ + \mathbf{a}\mathbf{a}')$ provided that $\gamma = i\beta$ and $\mathcal{E}_1 = \mathcal{R}(\mathbf{K}_+)$ otherwise and, simultaneously, $\mathcal{E}_2 = \mathcal{R}(\mathbf{K}_- + \mathbf{a}\mathbf{a}')$ provided that $\gamma = -i\beta$ and $\mathcal{E}_2 = \mathcal{R}(\mathbf{K}_-)$ otherwise, (iii) if $\beta = 0$, then $\mathcal{E}_3 = \mathcal{R}(\mathbf{I}_3 - \gamma^\dagger\gamma(\mathbf{I}_3 - \mathbf{a}\mathbf{a}'))$, (iv) if $\beta \neq 0$, then $\mathcal{E}_3 = \mathcal{R}(\mathbf{K}_+ + \mathbf{a}\mathbf{a}')$ provided that $\gamma = i\beta$, $\mathcal{E}_3 = \mathcal{R}(\mathbf{K}_- + \mathbf{a}\mathbf{a}')$ provided that $\gamma = -i\beta$, and $\mathcal{E}_3 = \mathcal{R}(\mathbf{a}\mathbf{a}')$ otherwise, where $\mathbf{K}_+$, $\mathbf{K}_-$ are as specified in (2.3).

Proof. It is known that
$$\mathcal{E}_i = \mathcal{N}(\mathbf{G} - \lambda_i\mathbf{I}_3) = \mathcal{R}\left(\mathbf{I}_3 - (\mathbf{G} - \lambda_i\mathbf{I}_3)(\mathbf{G} - \lambda_i\mathbf{I}_3)^\dagger\right), \tag{2.19}$$
where $i = 1, 2, 3$; see, for example, [3]. Clearly, for each $i$, matrix $\mathbf{G} - \lambda_i\mathbf{I}_3$ can be written as
$$\mathbf{G}_i = (\alpha - \lambda_i)\mathbf{I}_3 + \beta\mathbf{A}_a + \gamma\mathbf{a}\mathbf{a}', \tag{2.20}$$
where $\lambda_i$ is given in (2.11). For $\lambda_3 = \alpha + \gamma$, we have $\mathbf{G}_3 = -\gamma\mathbf{I}_3 + \beta\mathbf{A}_a + \gamma\mathbf{a}\mathbf{a}'$, for which the scalar corresponding to $\lambda_3$ vanishes. By virtue of the equivalence $\beta = 0 \Leftrightarrow \mathbf{G}_3 = -\gamma(\mathbf{I}_3 - \mathbf{a}\mathbf{a}')$, statement (i) of Corollary 2.9 leads to point (iii) of the theorem. If, however, $\beta \neq 0$, then, since the scalar corresponding to $\lambda_3$ is zero, cases (iii) and (v) of Corollary 2.9, characterized by $\lambda_3 \neq 0$, are not attainable in the present situation. Further observations are that the scalar corresponding to $\delta$ is now $\beta^2 + \gamma^2$ and that $\beta^2 + \gamma^2 = 0$ is equivalent to $\gamma = \pm i\beta$. In view of these facts, it is seen that statements (ii) and (iv) of Corollary 2.9 lead to the characterizations of $\mathcal{E}_3$ given in point (iv) of the theorem.
Next we consider eigenvalue $\lambda_1 = \alpha + i\beta$, which ensures that $\alpha - \lambda_1$ occurring in (2.20) is given by $-i\beta$. Since $\beta = 0$ is equivalent to $\mathbf{G}_1 = \gamma\mathbf{a}\mathbf{a}'$, on account of statement (i) of Corollary 2.9 we arrive at the characterization of $\mathcal{E}_1$ given in point (i) of the theorem. On the other hand, if $\beta \neq 0$, then the scalar corresponding to $\delta$, being equal to $(-i\beta)^2 + \beta^2$, is necessarily equal to zero, which means that cases (iv) and (v) of Corollary 2.9 are to be excluded from the present considerations. Furthermore, the scalar corresponding to $\lambda_3$ equals $\gamma - i\beta$ and vanishes if and only if $\gamma = i\beta$. With these facts taken into account, we conclude that statements (ii) and (iii) of Corollary 2.9 lead to the characterizations of $\mathcal{E}_1$ provided in point (ii) of the theorem.
The last eigenvalue to be considered is $\lambda_2 = \alpha - i\beta$, for which $\alpha - \lambda_2 = i\beta$. In this case, analogous arguments to the ones used with respect to $\lambda_1$ lead to the eigenspaces $\mathcal{E}_2$ specified in points (i) and (ii) of the theorem. The proof is complete.

Observe that if $\alpha = 0$, $\beta = 1$, and $\gamma = 0$, then from Theorem 2.11 we get $\mathcal{E}_1 = \mathcal{R}(\mathbf{K}_+)$, $\mathcal{E}_2 = \mathcal{R}(\mathbf{K}_-)$, and $\mathcal{E}_3 = \mathcal{R}(\mathbf{a}\mathbf{a}')$, that is, the eigenspaces of $\mathbf{A}_a$ identified by Trenkler [3, Sec. 3].
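The spectral roles of $\mathbf{K}_+$, $\mathbf{K}_-$, and $\mathbf{a}\mathbf{a}'$ described in Theorem 2.11 can be checked directly, since under the representation assumed here $\mathbf{G}\mathbf{K}_+ = \lambda_1\mathbf{K}_+$, $\mathbf{G}\mathbf{K}_- = \lambda_2\mathbf{K}_-$, and $\mathbf{G}\mathbf{a}\mathbf{a}' = \lambda_3\mathbf{a}\mathbf{a}'$:

```python
import numpy as np

def cross_matrix(a):
    a1, a2, a3 = a
    return np.array([[0.0, -a3, a2], [a3, 0.0, -a1], [-a2, a1, 0.0]])

a = np.array([0.0, 3.0, 4.0]); a /= np.linalg.norm(a)
A, P, I = cross_matrix(a), np.outer(a, a), np.eye(3)
Kp, Km = 0.5*(I - P - 1j*A), 0.5*(I - P + 1j*A)      # K_+ and K_- of (2.3)
al, be, ga = 0.2, 1.5, -0.7                          # three distinct eigenvalues
G = al*I + be*A + ga*P
assert np.allclose(G @ Kp, (al + 1j*be) * Kp)        # columns of K_+ span E_1
assert np.allclose(G @ Km, (al - 1j*be) * Km)        # columns of K_- span E_2
assert np.allclose(G @ P, (al + ga) * P)             # a spans E_3
```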

Theorem 2.11 is supplemented with examples demonstrating its applicability. Let $\mathbf{a} \in \mathbb{R}_{3,1}$ be a fixed vector with $\|\mathbf{a}\| = 1$. Then (1.3) determines $\mathbf{A}_a$, leading to $\mathbf{G} = \mathbf{G}_{\alpha,\beta,\gamma}$ of eigenvalues $\lambda_1 = \alpha + i\beta$, $\lambda_2 = \alpha - i\beta$, $\lambda_3 = \alpha + \gamma$. From the right-hand side formula in (2.19) we get the orthogonal projectors onto the eigenspaces, and the projectors $\mathbf{K}_+$, $\mathbf{K}_-$, $\mathbf{a}\mathbf{a}'$ involved in Theorem 2.11 take explicit forms once $\mathbf{a}$ is specified. Hence, from Theorem 2.11 we obtain what follows.

If $\beta = 0$, then $\mathcal{E}_1 = \mathcal{E}_2 = \mathcal{R}(\mathbf{I}_3 - \mathbf{a}\mathbf{a}')$ provided that $\gamma \neq 0$ and $\mathcal{E}_1 = \mathcal{E}_2 = \mathbb{C}_{3,1}$ otherwise.

Next, if $\beta \neq 0$, then $\mathcal{E}_1 = \mathcal{R}(\mathbf{K}_+ + \mathbf{a}\mathbf{a}')$ provided that $\gamma = i\beta$ and $\mathcal{E}_1 = \mathcal{R}(\mathbf{K}_+)$ otherwise and, simultaneously, $\mathcal{E}_2 = \mathcal{R}(\mathbf{K}_- + \mathbf{a}\mathbf{a}')$ provided that $\gamma = -i\beta$ and $\mathcal{E}_2 = \mathcal{R}(\mathbf{K}_-)$ otherwise.

Further, if $\beta = 0$, then $\mathcal{E}_3 = \mathcal{R}(\mathbf{a}\mathbf{a}')$ provided that $\gamma \neq 0$ and $\mathcal{E}_3 = \mathbb{C}_{3,1}$ otherwise.

Finally, if $\beta \neq 0$, then $\mathcal{E}_3 = \mathcal{R}(\mathbf{K}_+ + \mathbf{a}\mathbf{a}')$ provided that $\gamma = i\beta$, $\mathcal{E}_3 = \mathcal{R}(\mathbf{K}_- + \mathbf{a}\mathbf{a}')$ provided that $\gamma = -i\beta$, and $\mathcal{E}_3 = \mathcal{R}(\mathbf{a}\mathbf{a}')$ otherwise.

Further consequences of Theorem 2.11 deal with the proper rotation matrices; for a more detailed discussion see [16]. G. Trenkler and D. Trenkler [6, Sec. 3] identified three types of proper rotations, namely, Type I, covering the identity matrix $\mathbf{I}_3$; Type II, covering matrices of the form $2\mathbf{a}\mathbf{a}' - \mathbf{I}_3$, where $\|\mathbf{a}\| = 1$; and Type III, covering matrices of the form (1.4), where $\theta$ is not an integer multiple of $\pi$. Moreover, it was pointed out in [6] that each proper rotation can be attributed to one of these types. Direct calculations show that rotations of Type I are obtained from the representation (1.5) by taking $\alpha = 1$, $\beta = 0$, $\gamma = 0$; Type II by taking $\alpha = -1$, $\beta = 0$, $\gamma = 2$; and Type III by taking $\alpha = \cos\theta$, $\beta = \sin\theta$, $\gamma = 1 - \cos\theta$, where $\sin\theta \neq 0$. Combining these observations with Corollary 2.6 leads to the conclusion that rotations of Type I have eigenvalues $1$, $1$, $1$; Type II have eigenvalues $-1$, $-1$, $1$; and Type III have eigenvalues $e^{i\theta}$, $e^{-i\theta}$, $1$. Furthermore, from Theorem 2.11 we obtain the following characterizations of the eigenspaces.

Corollary 2.12. Let $\mathbf{R} \in \mathbb{R}_{3,3}$ be a proper rotation, and let $\mathcal{E}_1$, $\mathcal{E}_2$, $\mathcal{E}_3$ be the eigenspaces of $\mathbf{R}$ associated with its eigenvalues $\lambda_1$, $\lambda_2$, $\lambda_3$. Then, (i) if $\mathbf{R}$ is of Type I, then $\mathcal{E}_1 = \mathcal{E}_2 = \mathcal{E}_3 = \mathbb{C}_{3,1}$, (ii) if $\mathbf{R}$ is of Type II, then $\mathcal{E}_1 = \mathcal{E}_2 = \mathcal{R}(\mathbf{I}_3 - \mathbf{a}\mathbf{a}')$ and $\mathcal{E}_3 = \mathcal{R}(\mathbf{a}\mathbf{a}')$, (iii) if $\mathbf{R}$ is of Type III, then $\mathcal{E}_1 = \mathcal{R}(\mathbf{K}_+)$, $\mathcal{E}_2 = \mathcal{R}(\mathbf{K}_-)$, $\mathcal{E}_3 = \mathcal{R}(\mathbf{a}\mathbf{a}')$, where $\mathbf{K}_+$, $\mathbf{K}_-$ are as specified in (2.3).
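The eigenvalue patterns of the three types of proper rotations can be verified as follows; the axis and the Type III angle are arbitrary choices.

```python
import numpy as np

def cross_matrix(a):
    a1, a2, a3 = a
    return np.array([[0.0, -a3, a2], [a3, 0.0, -a1], [-a2, a1, 0.0]])

a = np.array([1.0, 1.0, 1.0]); a /= np.linalg.norm(a)
A, P, I = cross_matrix(a), np.outer(a, a), np.eye(3)
t = 2.0                                              # a Type III angle, 0 < t < pi
cases = [(I, [1, 1, 1]),                             # Type I
         (2*P - I, [1, -1, -1]),                     # Type II (half-turn about a)
         (np.cos(t)*I + np.sin(t)*A + (1 - np.cos(t))*P,
          [1, np.exp(1j*t), np.exp(-1j*t)])]         # Type III
for R, ev in cases:
    assert np.allclose(np.sort_complex(np.linalg.eigvals(R)),
                       np.sort_complex(np.array(ev, dtype=complex)))
```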

#### References

1. T. G. Room, “The composition of rotations in euclidean three-space,” The American Mathematical Monthly, vol. 59, pp. 688–692, 1952.
2. G. Trenkler, “Vector equations and their solutions,” International Journal of Mathematical Education in Science and Technology, vol. 29, no. 3, pp. 455–459, 1998.
3. G. Trenkler, “The vector cross product from an algebraic point of view,” Discussiones Mathematicae. General Algebra and Applications, vol. 21, no. 1, pp. 67–82, 2001.
4. J. Groß, G. Trenkler, and S.-O. Troschke, “The vector cross product in $\mathbb{C}^3$,” International Journal of Mathematical Education in Science and Technology, vol. 30, no. 4, pp. 549–555, 1999.
5. D. S. Bernstein, Matrix Mathematics: Theory, Facts, and Formulas, Princeton University Press, Princeton, NJ, USA, 2nd edition, 2009.
6. G. Trenkler and D. Trenkler, “On the product of rotations,” International Journal of Mathematical Education in Science and Technology, vol. 39, no. 1, pp. 94–104, 2008.
7. B. Noble, Applied Linear Algebra, Prentice Hall, Englewood Cliffs, NJ, USA, 1969.
8. R. M. Murray, Z. X. Li, and S. S. Sastry, A Mathematical Introduction to Robotic Manipulation, CRC Press, Boca Raton, Fla, USA, 1994.
9. L. Mirsky, An Introduction to Linear Algebra, Dover, New York, NY, USA, 1990.
10. A. S. Householder, The Theory of Matrices in Numerical Analysis, Dover, New York, NY, USA, 1964.
11. D. Trenkler and G. Trenkler, “Problem 29–12. Matrices commuting with the vector cross product,” IMAGE—The Bulletin of the International Linear Algebra Society, vol. 29, p. 35, 2002.
12. C. Meyer, Matrix Analysis and Applied Linear Algebra, SIAM, Philadelphia, Pa, USA, 2000.
13. P. Lancaster and M. Tismenetsky, The Theory of Matrices, Computer Science and Applied Mathematics, Academic Press, Orlando, Fla, USA, 2nd edition, 1985.
14. S. L. Campbell and C. D. Meyer Jr., Generalized Inverses of Linear Transformations, Pitman, London, UK, 1979.
15. J. K. Baksalary, O. M. Baksalary, and G. Trenkler, “A revisitation of formulae for the Moore-Penrose inverse of modified matrices,” Linear Algebra and Its Applications, vol. 372, pp. 207–224, 2003.
16. O. M. Baksalary and G. Trenkler, “Eigenspaces of the proper rotation matrices,” International Journal of Mathematical Education in Science and Technology, vol. 41, no. 6, pp. 827–829, 2010.