ISRN Applied Mathematics
Volume 2011, Article ID 578352, 12 pages
http://dx.doi.org/10.5402/2011/578352
Research Article

On Generalized Rotation Matrices

Oskar Maria Baksalary1 and Götz Trenkler2

1Faculty of Physics, Adam Mickiewicz University, ul. Umultowska 85, 61-614 Poznań, Poland
2Department of Statistics, Dortmund University of Technology, Vogelpothsweg 87, 44221 Dortmund, Germany

Received 11 March 2011; Accepted 17 April 2011

Academic Editors: D. Kuhl and K. Takaba

Copyright © 2011 Oskar Maria Baksalary and Götz Trenkler. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A general class of matrices, covering, for instance, an important set of proper rotations, is considered. Several characteristics of the class are established, which deal with such notions and properties as determinant, eigenspaces, eigenvalues, idempotency, Moore-Penrose inverse, or orthogonality.

1. Introduction and Basic Properties

Let $\mathbb{C}_{m,n}$ denote the set of $m \times n$ complex matrices. The symbols $\mathbf{L}^T$, $\mathbf{L}^*$, $\mathcal{R}(\mathbf{L})$, and $\mathrm{rk}(\mathbf{L})$ will stand for the transpose, conjugate transpose, column space, and rank, respectively, of $\mathbf{L} \in \mathbb{C}_{m,n}$. Further, $\mathbf{L}^\dagger \in \mathbb{C}_{n,m}$ will be the Moore-Penrose inverse of $\mathbf{L} \in \mathbb{C}_{m,n}$, that is, the unique matrix satisfying the equations
\[ \mathbf{L}\mathbf{L}^\dagger\mathbf{L} = \mathbf{L}, \quad \mathbf{L}^\dagger\mathbf{L}\mathbf{L}^\dagger = \mathbf{L}^\dagger, \quad (\mathbf{L}\mathbf{L}^\dagger)^* = \mathbf{L}\mathbf{L}^\dagger, \quad (\mathbf{L}^\dagger\mathbf{L})^* = \mathbf{L}^\dagger\mathbf{L}, \quad (1.1) \]
and $\mathbf{I}_n$ will mean the identity matrix of order $n$. The Moore-Penrose inverse $\mathbf{L}^\dagger$ is useful in representing the orthogonal (in the sense of the standard inner product) projectors onto $\mathcal{R}(\mathbf{L})$ and $\mathcal{R}(\mathbf{L}^*)$, denoted by $\mathbf{P}_{\mathbf{L}}$ and $\mathbf{P}_{\mathbf{L}^*}$, as well as the orthogonal projectors onto the orthogonal complements of these subspaces, denoted by $\mathbf{Q}_{\mathbf{L}}$ and $\mathbf{Q}_{\mathbf{L}^*}$. To be precise, for $\mathbf{L} \in \mathbb{C}_{m,n}$,
\[ \mathbf{P}_{\mathbf{L}} = \mathbf{L}\mathbf{L}^\dagger, \quad \mathbf{P}_{\mathbf{L}^*} = \mathbf{L}^\dagger\mathbf{L}, \quad \mathbf{Q}_{\mathbf{L}} = \mathbf{I}_m - \mathbf{L}\mathbf{L}^\dagger, \quad \mathbf{Q}_{\mathbf{L}^*} = \mathbf{I}_n - \mathbf{L}^\dagger\mathbf{L}. \quad (1.2) \]
With respect to a scalar, say $\alpha$, the inverse $\alpha^\dagger$ is defined as $\alpha^\dagger = 0$ when $\alpha = 0$ and $\alpha^\dagger = \alpha^{-1}$ when $\alpha \neq 0$.

The considerations of the present paper concern $3 \times 3$ matrices and $3 \times 1$ vectors having either complex or real entries; in the latter case the corresponding sets will be denoted by $\mathbb{R}_{3,3}$ and $\mathbb{R}_{3,1}$, respectively. Customarily, the symbol $\|\mathbf{l}\|$ will stand for the Euclidean norm of $\mathbf{l} \in \mathbb{R}_{3,1}$, that is, $\|\mathbf{l}\| = \sqrt{\mathbf{l}^T\mathbf{l}}$. Let
\[ \mathbf{T}_{\mathbf{a}} = \begin{pmatrix} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{pmatrix} \quad (1.3) \]
be generated by $\mathbf{a} = (a_1, a_2, a_3)^T \in \mathbb{R}_{3,1}$. The matrix $\mathbf{T}_{\mathbf{a}}$ can be used to define the vector cross product in $\mathbb{R}_{3,1}$, with $\mathbf{T}_{\mathbf{a}}\mathbf{l} = \mathbf{a} \times \mathbf{l}$ for any $\mathbf{l} \in \mathbb{R}_{3,1}$; see [1]. Further properties of $\mathbf{T}_{\mathbf{a}}$ are listed in the following lemma, whose proof is easy and thus omitted.
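The correspondence between $\mathbf{T}_{\mathbf{a}}$ and the cross product can be checked numerically. The following NumPy sketch (our addition, not part of the original article) builds the matrix of (1.3) and compares its action with numpy.cross.

```python
import numpy as np

def T(v):
    # skew-symmetric matrix of (1.3); T(v) @ l equals the cross product v x l
    v1, v2, v3 = v
    return np.array([[0.0, -v3, v2],
                     [v3, 0.0, -v1],
                     [-v2, v1, 0.0]])

a = np.array([1.0, 2.0, 3.0])
l = np.array([-4.0, 0.5, 2.0])
assert np.allclose(T(a) @ l, np.cross(a, l))
assert np.allclose(T(a).T, -T(a))   # T_a is skew-symmetric
```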

Lemma 1.1. Let $\lambda, \mu \in \mathbb{R}$ and $\mathbf{a}, \mathbf{b}, \mathbf{c}, \mathbf{d} \in \mathbb{R}_{3,1}$. Moreover, let $\mathbf{T}_{\mathbf{l}}$ for some $\mathbf{l} \in \mathbb{R}_{3,1}$ be of the form (1.3). Then,
(i) $\mathbf{T}_{\lambda\mathbf{a}+\mu\mathbf{b}} = \lambda\mathbf{T}_{\mathbf{a}} + \mu\mathbf{T}_{\mathbf{b}}$,
(ii) $\mathbf{T}_{\mathbf{a}}\mathbf{b} = -\mathbf{T}_{\mathbf{b}}\mathbf{a}$,
(iii) $\mathbf{T}_{\mathbf{a}}^T = -\mathbf{T}_{\mathbf{a}}$,
(iv) $\mathbf{T}_{\mathbf{a}}\mathbf{a} = \mathbf{0}$,
(v) $\mathbf{T}_{\mathbf{a}}\mathbf{T}_{\mathbf{b}} = \mathbf{b}\mathbf{a}^T - (\mathbf{a}^T\mathbf{b})\mathbf{I}_3$,
(vi) $(\mathbf{T}_{\mathbf{a}}\mathbf{T}_{\mathbf{b}})^T = \mathbf{T}_{\mathbf{b}}\mathbf{T}_{\mathbf{a}}$,
(vii) $\mathbf{T}_{\mathbf{a}}\mathbf{T}_{\mathbf{b}}\mathbf{T}_{\mathbf{c}} = \mathbf{T}_{\mathbf{a}}\mathbf{c}\mathbf{b}^T - (\mathbf{b}^T\mathbf{c})\mathbf{T}_{\mathbf{a}}$,
(viii) $\mathbf{T}_{\mathbf{a}}\mathbf{T}_{\mathbf{b}}\mathbf{T}_{\mathbf{a}} = -(\mathbf{a}^T\mathbf{b})\mathbf{T}_{\mathbf{a}}$,
(ix) $\mathbf{T}_{\mathbf{a}}^3 = -\|\mathbf{a}\|^2\mathbf{T}_{\mathbf{a}}$,
(x) $\mathbf{c}^T\mathbf{T}_{\mathbf{a}}\mathbf{b} = \mathbf{a}^T\mathbf{T}_{\mathbf{b}}\mathbf{c} = \mathbf{b}^T\mathbf{T}_{\mathbf{c}}\mathbf{a}$,
(xi) $\mathbf{T}_{\mathbf{a}}\mathbf{T}_{\mathbf{b}}\mathbf{c} = (\mathbf{a}^T\mathbf{c})\mathbf{b} - (\mathbf{a}^T\mathbf{b})\mathbf{c} = \mathbf{a} \times (\mathbf{b} \times \mathbf{c})$,
(xii) $(\mathbf{T}_{\mathbf{a}}\mathbf{b})^T\mathbf{T}_{\mathbf{c}}\mathbf{d} = (\mathbf{a}^T\mathbf{c})(\mathbf{b}^T\mathbf{d}) - (\mathbf{a}^T\mathbf{d})(\mathbf{b}^T\mathbf{c}) = (\mathbf{a} \times \mathbf{b})^T(\mathbf{c} \times \mathbf{d})$,
(xiii) $\mathbf{T}_{\mathbf{T}_{\mathbf{a}}\mathbf{b}} = \mathbf{b}\mathbf{a}^T - \mathbf{a}\mathbf{b}^T = \mathbf{T}_{\mathbf{a}}\mathbf{T}_{\mathbf{b}} - \mathbf{T}_{\mathbf{b}}\mathbf{T}_{\mathbf{a}}$.
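Several of the identities in Lemma 1.1 are easy to confirm numerically. The sketch below (an illustration added here, assuming nothing beyond the lemma itself) spot-checks points (v), (ix), and (xi) on random real vectors.

```python
import numpy as np

def T(v):
    # skew-symmetric matrix of (1.3)
    v1, v2, v3 = v
    return np.array([[0.0, -v3, v2], [v3, 0.0, -v1], [-v2, v1, 0.0]])

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))
I3 = np.eye(3)

# (v): T_a T_b = b a^T - (a^T b) I_3
assert np.allclose(T(a) @ T(b), np.outer(b, a) - (a @ b) * I3)
# (ix): T_a^3 = -||a||^2 T_a
assert np.allclose(T(a) @ T(a) @ T(a), -(a @ a) * T(a))
# (xi): T_a T_b c = a x (b x c), the Grassmann identity
assert np.allclose(T(a) @ T(b) @ c, np.cross(a, np.cross(b, c)))
```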

Relationships listed in Lemma 1.1 are available in the literature; see Trenkler [2, 3], Groß et al. [4], Bernstein [5, Ch. 3], and G. Trenkler and D. Trenkler [6]. It is noteworthy that the three scalars involved in point (x) represent scalar triple products, for example, $\mathbf{c}^T\mathbf{T}_{\mathbf{a}}\mathbf{b} = (\mathbf{a} \times \mathbf{b})^T\mathbf{c}$. Moreover, the right-hand side equalities in points (xi) and (xii) are known as the Grassmann and Lagrange identities, respectively. From the point of view of the present paper, conditions (iii)-(v) of the lemma are of particular importance and will be extensively utilized in the subsequent derivations.

It is known that if $\mathbf{a} \in \mathbb{R}_{3,1}$ generating $\mathbf{T}_{\mathbf{a}}$ given in (1.3) is such that $\|\mathbf{a}\| = 1$, then the matrices of the form
\[ \mathbf{R} = \sin\theta\,\mathbf{T}_{\mathbf{a}} + \mathbf{I}_3 + (1 - \cos\theta)\mathbf{T}_{\mathbf{a}}^2 \quad (1.4) \]
describe proper rotations in $\mathbb{R}_{3,1}$, with $\theta$ being the angle of rotation about the axis given by $\mathbf{a}$; see Noble [7, Ch. 12] or Murray et al. [8, Ch. 2]. In what follows we consider a more general class of matrices than the one spanned by matrices of the form (1.4), namely,
\[ \Upsilon = \left\{ \mathbf{T} \in \mathbb{C}_{3,3} : \mathbf{T} = \alpha\mathbf{T}_{\mathbf{a}} + \beta\mathbf{I}_3 + \gamma\mathbf{a}\mathbf{a}^T,\ \alpha, \beta, \gamma \in \mathbb{C},\ \mathbf{a} \in \mathbb{R}_{3,1},\ \mathbf{a} \neq \mathbf{0} \right\}. \quad (1.5) \]
It can be verified that $\Upsilon$ covers all proper rotations of the form (1.4), for, on account of $\mathbf{T}_{\mathbf{a}}^2 = \mathbf{a}\mathbf{a}^T - \|\mathbf{a}\|^2\mathbf{I}_3$ with $\|\mathbf{a}\| = 1$, the matrix (1.4) equals $\sin\theta\,\mathbf{T}_{\mathbf{a}} + \cos\theta\,\mathbf{I}_3 + (1 - \cos\theta)\mathbf{a}\mathbf{a}^T$. Moreover, it comprises also improper rotations, that is, orthogonal matrices with determinant equal to $-1$ [9, Ch. VIII], symmetric elementary matrices [10, Sec. 1], and all matrices commuting with $\mathbf{T}_{\mathbf{a}}$ [11].
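That every matrix of the form (1.4) lies in $\Upsilon$, with $\alpha = \sin\theta$, $\beta = \cos\theta$, and $\gamma = 1 - \cos\theta$, can be illustrated as follows (a numerical sketch added here, not part of the original article).

```python
import numpy as np

def T(v):
    # skew-symmetric matrix of (1.3)
    v1, v2, v3 = v
    return np.array([[0.0, -v3, v2], [v3, 0.0, -v1], [-v2, v1, 0.0]])

a = np.array([2.0, -1.0, 2.0])
a = a / np.linalg.norm(a)                # unit rotation axis, as required in (1.4)
th = 0.7
I3 = np.eye(3)

R = np.sin(th) * T(a) + I3 + (1 - np.cos(th)) * (T(a) @ T(a))        # (1.4)
# the same matrix written in the form (1.5)
R2 = np.sin(th) * T(a) + np.cos(th) * I3 + (1 - np.cos(th)) * np.outer(a, a)
assert np.allclose(R, R2)
# R is indeed a proper rotation
assert np.allclose(R @ R.T, I3) and np.isclose(np.linalg.det(R), 1.0)
```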

The purpose of the present paper is to identify various properties of the class of matrices specified in (1.5). As a result, several characteristics of the class Υ are established, dealing with such notions and properties as idempotency, determinant, eigenvalues, Moore-Penrose inverse, orthogonality, or eigenspaces.

2. Results

Subsequently, the symbol $i$ is interpreted as $i = \sqrt{-1}$. The theorem below states that $\Upsilon$ is closed under multiplication.

Theorem 2.1. Let $\mathbf{T}_1, \mathbf{T}_2 \in \Upsilon$ with $\mathbf{T}_k = \alpha_k\mathbf{T}_{\mathbf{a}} + \beta_k\mathbf{I}_3 + \gamma_k\mathbf{a}\mathbf{a}^T$, $k = 1, 2$. Then $\mathbf{T}_1\mathbf{T}_2 \in \Upsilon$.

Proof. Direct calculations show that
\[ \mathbf{T}_1\mathbf{T}_2 = (\alpha_1\beta_2 + \beta_1\alpha_2)\mathbf{T}_{\mathbf{a}} + (\beta_1\beta_2 - \alpha_1\alpha_2\|\mathbf{a}\|^2)\mathbf{I}_3 + (\alpha_1\alpha_2 + \beta_1\gamma_2 + \beta_2\gamma_1 + \gamma_1\gamma_2\|\mathbf{a}\|^2)\mathbf{a}\mathbf{a}^T, \quad (2.1) \]
establishing the assertion.
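The multiplication formula (2.1) can be verified numerically. The following sketch (our addition) compares both sides for sample complex coefficients.

```python
import numpy as np

def T(v):
    # skew-symmetric matrix of (1.3)
    v1, v2, v3 = v
    return np.array([[0.0, -v3, v2], [v3, 0.0, -v1], [-v2, v1, 0.0]])

a = np.array([1.0, 2.0, -2.0])
n2 = a @ a                               # ||a||^2 = 9

def ups(al, be, ga):
    # a member of the class (1.5)
    return al * T(a) + be * np.eye(3) + ga * np.outer(a, a)

al1, be1, ga1 = 0.3 + 0.1j, -1.2, 0.5j
al2, be2, ga2 = -0.7, 2.0 + 1.0j, 0.25
lhs = ups(al1, be1, ga1) @ ups(al2, be2, ga2)
rhs = ups(al1 * be2 + be1 * al2,
          be1 * be2 - al1 * al2 * n2,
          al1 * al2 + be1 * ga2 + be2 * ga1 + ga1 * ga2 * n2)
assert np.allclose(lhs, rhs)             # (2.1)
```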

It is clear that, besides ensuring the closure property, multiplication in $\Upsilon$ is associative. Furthermore, $\Upsilon$ contains the identity element, namely $\mathbf{I}_3$. On the other hand, since $\Upsilon$ includes also singular matrices, not every $\mathbf{T} \in \Upsilon$ has an inverse element in $\Upsilon$. Thus, the set $\Upsilon$ is a semigroup under (matrix) multiplication. (As will be seen subsequently, the Moore-Penrose inverse of every $\mathbf{T} \in \Upsilon$ belongs to $\Upsilon$; in particular, $\mathbf{T}^{-1} \in \Upsilon$ for each nonsingular $\mathbf{T} \in \Upsilon$.)

The next theorem provides necessary and sufficient conditions for $\mathbf{T}^2 = \mathbf{T}$.

Theorem 2.2. Let $\mathbf{T} \in \Upsilon$. Then $\mathbf{T}$ is idempotent if and only if
\[ 2\alpha\beta = \alpha, \quad \beta^2 - \alpha^2\|\mathbf{a}\|^2 = \beta, \quad \alpha^2 + 2\beta\gamma + \gamma^2\|\mathbf{a}\|^2 = \gamma. \quad (2.2) \]

Proof. The equivalence established in the theorem follows straightforwardly from Theorem 2.1.

In view of Theorem 2.2, we can distinguish two subsets of idempotent matrices belonging to $\Upsilon$, corresponding to $\alpha \neq 0$ and $\alpha = 0$. In the former of them, $\alpha = \pm i/(2\|\mathbf{a}\|)$, $\beta = 1/2$, and $\gamma = \pm 1/(2\|\mathbf{a}\|^2)$. Hence, since $\mathbf{a}^\dagger = (1/\|\mathbf{a}\|^2)\mathbf{a}^T$, it follows that $\mathbf{T} \in \{\mathbf{P}_1, \mathbf{P}_2, \mathbf{P}_3, \mathbf{P}_4\}$, with
\[ \mathbf{P}_1 = \frac{1}{2}\left(\frac{i\mathbf{T}_{\mathbf{a}}}{\|\mathbf{a}\|} + \mathbf{I}_3 + \mathbf{P}_{\mathbf{a}}\right), \quad \mathbf{P}_2 = \frac{1}{2}\left(\frac{i\mathbf{T}_{\mathbf{a}}}{\|\mathbf{a}\|} + \mathbf{Q}_{\mathbf{a}}\right), \quad \mathbf{P}_3 = \frac{1}{2}\left(-\frac{i\mathbf{T}_{\mathbf{a}}}{\|\mathbf{a}\|} + \mathbf{I}_3 + \mathbf{P}_{\mathbf{a}}\right), \quad \mathbf{P}_4 = \frac{1}{2}\left(-\frac{i\mathbf{T}_{\mathbf{a}}}{\|\mathbf{a}\|} + \mathbf{Q}_{\mathbf{a}}\right). \quad (2.3) \]
(Note that the matrices $\mathbf{P}_k$, $k = 1, \ldots, 4$, were considered by Trenkler [3] to characterize certain eigenspaces.) On the other hand, if $\alpha = 0$, then Theorem 2.2 entails four cases, namely, $\beta = 0$, $\gamma = 0$; $\beta = 0$, $\gamma = 1/\|\mathbf{a}\|^2$; $\beta = 1$, $\gamma = 0$; and $\beta = 1$, $\gamma = -1/\|\mathbf{a}\|^2$, leading to $\mathbf{P}_5 = \mathbf{0}$, $\mathbf{P}_6 = \mathbf{P}_{\mathbf{a}}$, $\mathbf{P}_7 = \mathbf{I}_3$, and $\mathbf{P}_8 = \mathbf{Q}_{\mathbf{a}}$, respectively.
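The idempotency of the matrices listed above is easily confirmed numerically (an illustrative sketch added here, not part of the original article).

```python
import numpy as np

def T(v):
    # skew-symmetric matrix of (1.3)
    v1, v2, v3 = v
    return np.array([[0.0, -v3, v2], [v3, 0.0, -v1], [-v2, v1, 0.0]])

a = np.array([3.0, 0.0, 4.0])
n = np.linalg.norm(a)                    # ||a|| = 5
I3 = np.eye(3)
Pa = np.outer(a, a) / n**2               # projector onto span(a)
Qa = I3 - Pa

# the alpha != 0 family: alpha = i/(2||a||), beta = 1/2, gamma = +-1/(2||a||^2)
P1 = 0.5 * (1j * T(a) / n + I3 + Pa)
P2 = 0.5 * (1j * T(a) / n + Qa)
for P in (P1, P2, Pa, Qa):
    assert np.allclose(P @ P, P)         # each matrix is idempotent
```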

The next task is to characterize the eigenvalues of matrices belonging to $\Upsilon$. The subsequent theorem expresses the determinant of $\mathbf{T} \in \Upsilon$ in terms of the scalars $\alpha$, $\beta$, and $\gamma$. Its proof is based on the so-called Leverrier-Souriau-Frame algorithm, which provides a useful tool for calculating the coefficients of a characteristic polynomial. Since the algorithm is not widely known, it is restated in the following lemma; see, for example, Meyer [12, page 504]. Customarily, $\mathrm{tr}(\cdot)$ denotes the trace of a matrix argument.

Lemma 2.3. Let $\mathbf{L} \in \mathbb{C}_{n,n}$, and let $\mu^n + c_1\mu^{n-1} + c_2\mu^{n-2} + \cdots + c_n = 0$ be the characteristic equation for $\mathbf{L}$. Then
\[ c_1 = -\mathrm{tr}(\mathbf{L}), \quad c_k = -\frac{1}{k}\mathrm{tr}(\mathbf{L}\mathbf{B}_{k-1}), \quad k = 2, 3, \ldots, n, \quad (2.4) \]
where $\mathbf{B}_1 = c_1\mathbf{I}_n + \mathbf{L}$ and $\mathbf{B}_k = c_k\mathbf{I}_n + \mathbf{L}\mathbf{B}_{k-1}$, $k = 2, 3, \ldots, n-1$.

Theorem 2.4. Let $\mathbf{T} \in \Upsilon$, and let $\tau = \alpha^2\|\mathbf{a}\|^2 + \beta^2$. Then $\det(\mathbf{T}) = \tau(\beta + \gamma\|\mathbf{a}\|^2)$.

Proof. From Lemma 2.3 it follows that $c_1 = -\mathrm{tr}(\mathbf{T})$ is given by $c_1 = -(3\beta + \gamma\|\mathbf{a}\|^2)$, whence it is seen that $\mathbf{B}_1 = c_1\mathbf{I}_3 + \mathbf{T}$ takes the form
\[ \mathbf{B}_1 = \alpha\mathbf{T}_{\mathbf{a}} - (2\beta + \gamma\|\mathbf{a}\|^2)\mathbf{I}_3 + \gamma\mathbf{a}\mathbf{a}^T. \quad (2.5) \]
Further, if $k = 2$, then $c_2 = -(1/2)\mathrm{tr}(\mathbf{T}\mathbf{B}_1)$, where
\[ \mathbf{T}\mathbf{B}_1 = -(\alpha\beta + \alpha\gamma\|\mathbf{a}\|^2)\mathbf{T}_{\mathbf{a}} - (\alpha^2\|\mathbf{a}\|^2 + 2\beta^2 + \beta\gamma\|\mathbf{a}\|^2)\mathbf{I}_3 + (\alpha^2 - \beta\gamma)\mathbf{a}\mathbf{a}^T. \quad (2.6) \]
Since $\mathrm{tr}(\mathbf{T}_{\mathbf{a}}) = 0$, in consequence we get $c_2 = \alpha^2\|\mathbf{a}\|^2 + 3\beta^2 + 2\beta\gamma\|\mathbf{a}\|^2$, leading to
\[ \mathbf{B}_2 = c_2\mathbf{I}_3 + \mathbf{T}\mathbf{B}_1 = -(\alpha\beta + \alpha\gamma\|\mathbf{a}\|^2)\mathbf{T}_{\mathbf{a}} + (\beta^2 + \beta\gamma\|\mathbf{a}\|^2)\mathbf{I}_3 + (\alpha^2 - \beta\gamma)\mathbf{a}\mathbf{a}^T. \quad (2.7) \]
Finally, if $k = 3$, then $c_3 = -(1/3)\mathrm{tr}(\mathbf{T}\mathbf{B}_2)$, where
\[ \mathbf{T}\mathbf{B}_2 = (\alpha^2\beta\|\mathbf{a}\|^2 + \alpha^2\gamma\|\mathbf{a}\|^4 + \beta^3 + \beta^2\gamma\|\mathbf{a}\|^2)\mathbf{I}_3. \quad (2.8) \]
Hence, straightforward calculations yield $c_3 = -\tau(\beta + \gamma\|\mathbf{a}\|^2)$. Combining this result with the property $\det(\mathbf{T}) = (-1)^3 c_3$ completes the proof.
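The determinant formula of Theorem 2.4 can be cross-checked numerically (a sketch added here, not part of the original article; the coefficients are arbitrary).

```python
import numpy as np

def T(v):
    # skew-symmetric matrix of (1.3)
    v1, v2, v3 = v
    return np.array([[0.0, -v3, v2], [v3, 0.0, -v1], [-v2, v1, 0.0]])

a = np.array([1.0, -2.0, 2.0])
n2 = a @ a                               # ||a||^2 = 9
al, be, ga = 0.4 + 0.2j, 1.5, -0.3
M = al * T(a) + be * np.eye(3) + ga * np.outer(a, a)

tau = al**2 * n2 + be**2
# Theorem 2.4: det(T) = tau (beta + gamma ||a||^2)
assert np.isclose(np.linalg.det(M), tau * (be + ga * n2))
```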

By virtue of Theorem 2.4, it is easy to determine the spectrum of matrices belonging to Υ.

Theorem 2.5. Let $\mathbf{T} \in \Upsilon$. Then the eigenvalues of $\mathbf{T}$ are the solutions $\mu$ to the equation $[\alpha^2\|\mathbf{a}\|^2 + (\beta - \mu)^2][\beta - \mu + \gamma\|\mathbf{a}\|^2] = 0$.

Proof. It is clear that
\[ \det(\mathbf{T} - \mu\mathbf{I}_3) = \det[\alpha\mathbf{T}_{\mathbf{a}} + (\beta - \mu)\mathbf{I}_3 + \gamma\mathbf{a}\mathbf{a}^T]. \quad (2.9) \]
Hence, on account of Theorem 2.4, we get
\[ \det(\mathbf{T} - \mu\mathbf{I}_3) = [\alpha^2\|\mathbf{a}\|^2 + (\beta - \mu)^2][\beta - \mu + \gamma\|\mathbf{a}\|^2], \quad (2.10) \]
establishing the assertion.

Theorem 2.5 leads to what follows.

Corollary 2.6. Let $\mathbf{T} \in \Upsilon$. Then the eigenvalues of $\mathbf{T}$ are
\[ \mu_1 = \beta + \gamma\|\mathbf{a}\|^2, \quad \mu_2 = \beta + i\alpha\|\mathbf{a}\|, \quad \mu_3 = \beta - i\alpha\|\mathbf{a}\|. \quad (2.11) \]

An expected result originating from Corollary 2.6 is that $\mu_1\mu_2\mu_3 = \det(\mathbf{T})$, with $\det(\mathbf{T})$ given in Theorem 2.4. Furthermore, when $\alpha = 1$, $\beta = 0$, and $\gamma = 0$, the eigenvalues given in (2.11) reduce to $\mu_1 = 0$, $\mu_2 = i\|\mathbf{a}\|$, and $\mu_3 = -i\|\mathbf{a}\|$, that is, to the eigenvalues of $\mathbf{T}_{\mathbf{a}}$; see [3, Theorem 2].
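Corollary 2.6 may be cross-checked against a numerical eigenvalue routine (an illustrative sketch added here, with arbitrary parameter values).

```python
import numpy as np

def T(v):
    # skew-symmetric matrix of (1.3)
    v1, v2, v3 = v
    return np.array([[0.0, -v3, v2], [v3, 0.0, -v1], [-v2, v1, 0.0]])

a = np.array([2.0, 1.0, -2.0])
n = np.linalg.norm(a)                    # ||a|| = 3
al, be, ga = 0.6, -0.9 + 0.3j, 1.1
M = al * T(a) + be * np.eye(3) + ga * np.outer(a, a)

mus = {be + ga * n**2, be + 1j * al * n, be - 1j * al * n}   # (2.11)
eig = np.linalg.eigvals(M)
# every computed eigenvalue matches one of mu_1, mu_2, mu_3
assert all(min(abs(e - m) for m in mus) < 1e-9 for e in eig)
```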

The following theorem will be useful in the subsequent calculations of the Moore-Penrose inverses of matrices $\mathbf{T} \in \Upsilon$.

Theorem 2.7. Let $\mathbf{A} \in \mathbb{C}_{3,3}$ be such that $\mathbf{A} = \alpha\mathbf{T}_{\mathbf{a}} + \beta\mathbf{I}_3$, with $\alpha, \beta \in \mathbb{C}$ and $\mathbf{T}_{\mathbf{a}}$ of the form (1.3) generated by nonzero $\mathbf{a} \in \mathbb{R}_{3,1}$. Moreover, let $\lambda = 1 + \gamma\mathbf{a}^T\mathbf{A}^\dagger\mathbf{a}$, with $\gamma \in \mathbb{C}$, and $\tau = \alpha^2\|\mathbf{a}\|^2 + \beta^2$. Then,
(i) $\det(\mathbf{A}) = \beta\tau$,
(ii) the eigenvalues of $\mathbf{A}$ are $\sigma_1 = \beta$, $\sigma_2 = \beta + i\alpha\|\mathbf{a}\|$, $\sigma_3 = \beta - i\alpha\|\mathbf{a}\|$,
(iii) if $\beta = 0$, then $\mathbf{A}^\dagger = -(\alpha^\dagger/\|\mathbf{a}\|^2)\mathbf{T}_{\mathbf{a}}$, $\lambda = 1$,
(iv) if $\beta \neq 0$, $\tau = 0$, then $\mathbf{A}^\dagger = (1/4\beta)[(\alpha/\beta)\mathbf{T}_{\mathbf{a}} + \mathbf{I}_3 + 3\mathbf{P}_{\mathbf{a}}]$, $\lambda = 1 + (\gamma/\beta)\|\mathbf{a}\|^2$,
(v) if $\beta \neq 0$, $\tau \neq 0$, then $\mathbf{A}^{-1} = (1/\tau)[-\alpha\mathbf{T}_{\mathbf{a}} + \beta\mathbf{I}_3 + (\alpha^2/\beta)\mathbf{a}\mathbf{a}^T]$, $\lambda = 1 + (\gamma/\beta)\|\mathbf{a}\|^2$.

Proof. Assertion (i) follows from Theorem 2.4 by setting $\gamma = 0$. Statement (ii) is a consequence of (i), whereas the validity of points (iii) and (v) can be confirmed by straightforward calculations; see [3, Theorem 1]. For the proof of statement (iv), note that $\beta \neq 0$ and $\tau = 0$ imply $\alpha/\beta = \pm i/\|\mathbf{a}\|$, that is, $\alpha/\beta$ is purely imaginary. Taking this fact into account, in view of $\mathbf{a}^\dagger = (1/\|\mathbf{a}\|^2)\mathbf{a}^T$, the validity of the formula for $\mathbf{A}^\dagger$ given in point (iv) is seen by direct verification of conditions (1.1). Similarly, the formula for $\mathbf{A}^{-1}$ provided in point (v) can be confirmed by examining the condition $\mathbf{A}\mathbf{A}^{-1} = \mathbf{I}_3$. The proof is thus complete, for the expressions for $\lambda$ given in points (iv) and (v) are easily obtainable.

Note that regardless of whether $\beta$ and/or $\tau$ in Theorem 2.7 are zero or not, the matrix $\mathbf{A}$ is such that $\mathbf{P}_{\mathbf{A}} = \mathbf{P}_{\mathbf{A}^*}$, or, in other words, $\mathcal{R}(\mathbf{A}) = \mathcal{R}(\mathbf{A}^*)$, that is, $\mathbf{A}$ is an EP matrix. Another observation is that setting $\alpha = 1$, $\beta = 1$ in Theorem 2.7 leads to the relationship
\[ (\mathbf{T}_{\mathbf{a}} + \mathbf{I}_3)^{-1} = \frac{1}{\|\mathbf{a}\|^2 + 1}\left(-\mathbf{T}_{\mathbf{a}} + \mathbf{I}_3 + \mathbf{a}\mathbf{a}^T\right). \quad (2.12) \]
Hence, the so-called Cayley transform of $\mathbf{T}_{\mathbf{a}}$ (see [13, p. 219]), being of the form $\mathbf{S} = (\mathbf{I}_3 - \mathbf{T}_{\mathbf{a}})(\mathbf{I}_3 + \mathbf{T}_{\mathbf{a}})^{-1}$, of which it is known that it is orthogonal (see [9, Theorem 8.1.10]), takes the form
\[ \mathbf{S} = \frac{1}{\|\mathbf{a}\|^2 + 1}\left[-2\mathbf{T}_{\mathbf{a}} + (1 - \|\mathbf{a}\|^2)\mathbf{I}_3 + 2\mathbf{a}\mathbf{a}^T\right]. \quad (2.13) \]
Since $\det(\mathbf{S}) = 1$, the matrix $\mathbf{S}$ represents in fact a proper rotation.
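The representation (2.13) of the Cayley transform, together with its orthogonality, can be confirmed numerically (a sketch added here, not part of the original article).

```python
import numpy as np

def T(v):
    # skew-symmetric matrix of (1.3)
    v1, v2, v3 = v
    return np.array([[0.0, -v3, v2], [v3, 0.0, -v1], [-v2, v1, 0.0]])

a = np.array([0.5, -1.0, 2.0])
n2 = a @ a
I3 = np.eye(3)

S = (I3 - T(a)) @ np.linalg.inv(I3 + T(a))                          # Cayley transform
S2 = (-2 * T(a) + (1 - n2) * I3 + 2 * np.outer(a, a)) / (n2 + 1)    # (2.13)
assert np.allclose(S, S2)
# S is a proper rotation
assert np.allclose(S @ S.T, I3) and np.isclose(np.linalg.det(S), 1.0)
```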

We now have the tools necessary to establish formulae for the Moore-Penrose inverses of $\mathbf{T} \in \Upsilon$.

Theorem 2.8. Let $\mathbf{T} \in \Upsilon$ be decomposed as $\mathbf{T} = \mathbf{A} + \gamma\mathbf{a}\mathbf{a}^T$, where $\mathbf{A} = \alpha\mathbf{T}_{\mathbf{a}} + \beta\mathbf{I}_3$. Moreover, let $\lambda = 1 + \gamma\mathbf{a}^T\mathbf{A}^\dagger\mathbf{a}$ and $\tau = \alpha^2\|\mathbf{a}\|^2 + \beta^2$. Then,
(i) if $\beta = 0$, then $\mathbf{T}^\dagger = (1/\|\mathbf{a}\|^2)(-\alpha^\dagger\mathbf{T}_{\mathbf{a}} + \gamma^\dagger\mathbf{P}_{\mathbf{a}})$,
(ii) if $\beta \neq 0$, $\tau = 0$, $\lambda = 0$, then $\mathbf{T}^\dagger = (1/4\beta)[(\alpha/\beta)\mathbf{T}_{\mathbf{a}} + \mathbf{Q}_{\mathbf{a}}]$,
(iii) if $\beta \neq 0$, $\tau = 0$, $\lambda \neq 0$, then $\mathbf{T}^\dagger = (1/4\beta)\left[(\alpha/\beta)\mathbf{T}_{\mathbf{a}} + \mathbf{I}_3 + \dfrac{3\beta - \gamma\|\mathbf{a}\|^2}{\beta + \gamma\|\mathbf{a}\|^2}\mathbf{P}_{\mathbf{a}}\right]$,
(iv) if $\beta \neq 0$, $\tau \neq 0$, $\lambda = 0$, then $\mathbf{T}^\dagger = (1/\tau)(-\alpha\mathbf{T}_{\mathbf{a}} + \beta\mathbf{Q}_{\mathbf{a}})$,
(v) if $\beta \neq 0$, $\tau \neq 0$, $\lambda \neq 0$, then $\mathbf{T}^{-1} = (1/\tau)\left[-\alpha\mathbf{T}_{\mathbf{a}} + \beta\mathbf{I}_3 + \dfrac{\alpha^2 - \beta\gamma}{\beta + \gamma\|\mathbf{a}\|^2}\mathbf{a}\mathbf{a}^T\right]$.

Proof. The first observation is that if $\beta = 0$, then, on account of Theorem 3.1.1 in [14], we have $\mathbf{T}^\dagger = (\alpha\mathbf{T}_{\mathbf{a}})^\dagger + (\gamma\mathbf{a}\mathbf{a}^T)^\dagger$. Hence, by utilizing $\mathbf{T}_{\mathbf{a}}^\dagger = -(1/\|\mathbf{a}\|^2)\mathbf{T}_{\mathbf{a}}$ and $(\mathbf{a}\mathbf{a}^T)^\dagger = (1/\|\mathbf{a}\|^2)\mathbf{P}_{\mathbf{a}}$, the formula for $\mathbf{T}^\dagger$ given in point (i) follows.
Assume now that $\beta \neq 0$, in which case we can still have $\tau = 0$ or $\tau \neq 0$. In the former of these situations, point (iv) of Theorem 2.7 implies $\mathbf{P}_{\mathbf{A}} = (1/2)[(\alpha/\beta)\mathbf{T}_{\mathbf{a}} + \mathbf{I}_3 + \mathbf{P}_{\mathbf{a}}]$, whence $\mathbf{P}_{\mathbf{A}}\mathbf{a} = \mathbf{a}$, or, in other words, $\mathbf{a} \in \mathcal{R}(\mathbf{A})$. This inclusion is clearly satisfied also when $\tau \neq 0$, for then $\det(\mathbf{A}) \neq 0$.
In order to apply the results of Baksalary et al. [15], we introduce $\mathbf{b} = \gamma\mathbf{a}$, $\mathbf{c} = \mathbf{a}$. Then $\mathbf{T} = \mathbf{A} + \mathbf{b}\mathbf{c}^*$, that is, $\mathbf{T}$ is a rank-one modification of $\mathbf{A}$. As in [15], we define also the vectors $\mathbf{d}, \mathbf{e}, \mathbf{f}, \mathbf{g} \in \mathbb{C}_{3,1}$ according to
\[ \mathbf{d} = \mathbf{A}^\dagger\mathbf{b}, \quad \mathbf{e} = (\mathbf{A}^\dagger)^*\mathbf{c}, \quad \mathbf{f} = \mathbf{Q}_{\mathbf{A}}\mathbf{b}, \quad \mathbf{g} = \mathbf{Q}_{\mathbf{A}^*}\mathbf{c}, \quad (2.14) \]
and denote the squares of the norms of the first two of them by $\delta$ and $\eta$, that is, $\delta = \|\mathbf{d}\|^2$, $\eta = \|\mathbf{e}\|^2$. As is seen from Theorem 2.7, the scalar $\lambda$ specified in [15] by $\lambda = 1 + \mathbf{c}^*\mathbf{A}^\dagger\mathbf{b}$ now takes the form $\lambda = 1 + \gamma\mathbf{a}^T\mathbf{A}^\dagger\mathbf{a} = 1 + (\gamma/\beta)\|\mathbf{a}\|^2$.
Let us first consider case (ii) of the theorem, characterized, in addition to $\beta \neq 0$, by $\tau = 0$, $\lambda = 0$. On account of Theorem 1.1 in [15], this case corresponds to $\mathrm{rk}(\mathbf{T}) = \mathrm{rk}(\mathbf{A}) - 1$. As can be directly verified with the use of $\mathbf{A}^\dagger$ given in point (iv) of Theorem 2.7, $\mathbf{d} = (\gamma/\beta)\mathbf{a}$ and $\mathbf{e} = (1/\overline{\beta})\mathbf{a}$, from where $\delta = |\gamma/\beta|^2\|\mathbf{a}\|^2$ and $\eta = (1/|\beta|^2)\|\mathbf{a}\|^2$. Moreover, we get $\mathbf{A}^\dagger\mathbf{e} = (1/|\beta|^2)\mathbf{a}$, implying $\mathbf{d}^*\mathbf{A}^\dagger\mathbf{e} = (\overline{\gamma}/\overline{\beta}|\beta|^2)\|\mathbf{a}\|^2$ and $\mathbf{A}^\dagger\mathbf{e}\mathbf{e}^* = (1/|\beta|^2\beta)\mathbf{a}\mathbf{a}^T$. Furthermore, $\mathbf{d}\mathbf{d}^* = |\gamma/\beta|^2\mathbf{a}\mathbf{a}^T$ and $\mathbf{d}\mathbf{e}^* = (\gamma/\beta^2)\mathbf{a}\mathbf{a}^T$. Thus,
\[ \delta^\dagger\mathbf{d}\mathbf{d}^* = \mathbf{P}_{\mathbf{a}}, \quad \eta^\dagger\mathbf{A}^\dagger\mathbf{e}\mathbf{e}^* = \frac{1}{\beta}\mathbf{P}_{\mathbf{a}}, \quad \delta^\dagger\eta^\dagger(\mathbf{d}^*\mathbf{A}^\dagger\mathbf{e})\mathbf{d}\mathbf{e}^* = \frac{1}{\beta}\mathbf{P}_{\mathbf{a}}. \quad (2.15) \]
In consequence, from formula (2.1) in [15] we get $\mathbf{T}^\dagger = \mathbf{Q}_{\mathbf{a}}\mathbf{A}^\dagger$, whence the expression for $\mathbf{T}^\dagger$ claimed in point (ii) of the theorem follows.
According to Theorem 1.1 in [15], another case which corresponds to $\mathrm{rk}(\mathbf{T}) = \mathrm{rk}(\mathbf{A}) - 1$ is given in point (iv) of the theorem, where $\tau \neq 0$, $\lambda = 0$. Direct calculations with the use of $\mathbf{A}^{-1}$ given in point (v) of Theorem 2.7 show that formulae (2.15) remain valid also in this case. Hence, from relationship (2.1) in [15] we obtain $\mathbf{T}^\dagger = \mathbf{Q}_{\mathbf{a}}\mathbf{A}^{-1}$, leading to the expression claimed in point (iv) of the theorem.
Another conclusion originating from Theorem 1.1 in [15] is that case (iii) of the theorem, in which $\tau = 0$, $\lambda \neq 0$, corresponds to $\mathrm{rk}(\mathbf{T}) = \mathrm{rk}(\mathbf{A})$. Direct calculations with the use of $\mathbf{A}^\dagger$ given in point (iv) of Theorem 2.7 show that
\[ \lambda^{-1}\mathbf{d}\mathbf{e}^* = \frac{\gamma}{\beta(\beta + \gamma\|\mathbf{a}\|^2)}\mathbf{a}\mathbf{a}^T, \quad (2.16) \]
and substituting this relationship into formula (2.2) in [15] leads to the expression for $\mathbf{T}^\dagger$ given in the theorem.
Case (v), in which $\tau \neq 0$, $\lambda \neq 0$, is left to be considered. According to the remark on p. 210 in [15], in such a situation $\mathbf{T}$ is nonsingular. Hence, $\mathbf{T}^\dagger = \mathbf{T}^{-1}$, and the validity of the formula given in the theorem can be confirmed by direct verification of the condition $\mathbf{T}\mathbf{T}^{-1} = \mathbf{I}_3$.
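Formulae (iv) and (v) of Theorem 2.8 can be cross-checked against a general-purpose pseudoinverse routine (an illustrative sketch added here; the parameter values are arbitrary).

```python
import numpy as np

def T(v):
    # skew-symmetric matrix of (1.3)
    v1, v2, v3 = v
    return np.array([[0.0, -v3, v2], [v3, 0.0, -v1], [-v2, v1, 0.0]])

a = np.array([1.0, 1.0, 1.0])
n2 = a @ a                               # ||a||^2 = 3
I3 = np.eye(3)
Pa = np.outer(a, a) / n2
Qa = I3 - Pa

def ups(al, be, ga):
    # a member of the class (1.5)
    return al * T(a) + be * I3 + ga * np.outer(a, a)

al, be = 0.7, 1.3
tau = al**2 * n2 + be**2

# case (v): beta != 0, tau != 0, lambda != 0 -- the closed-form inverse
ga = 0.4
Tinv = (-al * T(a) + be * I3
        + (al**2 - be * ga) / (be + ga * n2) * np.outer(a, a)) / tau
assert np.allclose(ups(al, be, ga) @ Tinv, I3)

# case (iv): lambda = 0 forces gamma = -beta/||a||^2; T is then singular
ga0 = -be / n2
Td = (-al * T(a) + be * Qa) / tau
assert np.allclose(Td, np.linalg.pinv(ups(al, be, ga0)))
```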

A conclusion originating from Theorem 2.8 is that every $\mathbf{T} \in \Upsilon$ satisfies $\mathbf{T}^\dagger \in \Upsilon$, a property which was already mentioned in the remark following Theorem 2.1. Further consequences of Theorem 2.8 are given in what follows.

Corollary 2.9. Let $\mathbf{T} \in \Upsilon$, and let $\lambda = 1 + \gamma\mathbf{a}^T\mathbf{A}^\dagger\mathbf{a}$, $\tau = \alpha^2\|\mathbf{a}\|^2 + \beta^2$. Then $\mathbf{P}_{\mathbf{T}} = \mathbf{P}_{\mathbf{T}^*}$, where
(i) if $\beta = 0$, then $\mathbf{P}_{\mathbf{T}} = \alpha^\dagger\alpha\mathbf{Q}_{\mathbf{a}} + \gamma^\dagger\gamma\mathbf{P}_{\mathbf{a}}$,
(ii) if $\beta \neq 0$, $\tau = 0$, $\lambda = 0$, then $\mathbf{P}_{\mathbf{T}} = \mathbf{P}_2$ provided that $\alpha/\beta = i/\|\mathbf{a}\|$ and $\mathbf{P}_{\mathbf{T}} = \mathbf{P}_4$ provided that $\alpha/\beta = -i/\|\mathbf{a}\|$,
(iii) if $\beta \neq 0$, $\tau = 0$, $\lambda \neq 0$, then $\mathbf{P}_{\mathbf{T}} = \mathbf{P}_1$ provided that $\alpha/\beta = i/\|\mathbf{a}\|$ and $\mathbf{P}_{\mathbf{T}} = \mathbf{P}_3$ provided that $\alpha/\beta = -i/\|\mathbf{a}\|$,
(iv) if $\beta \neq 0$, $\tau \neq 0$, $\lambda = 0$, then $\mathbf{P}_{\mathbf{T}} = \mathbf{Q}_{\mathbf{a}}$,
(v) if $\beta \neq 0$, $\tau \neq 0$, $\lambda \neq 0$, then $\mathbf{P}_{\mathbf{T}} = \mathbf{I}_3$,
with $\mathbf{P}_k$, $k = 1, \ldots, 4$, as specified in (2.3).

Proof. The corollary is established by direct calculations.

Point (v) of Theorem 2.8 makes it possible to formulate necessary and sufficient conditions for $\mathbf{T} \in \Upsilon$ to be orthogonal.

Theorem 2.10. Let $\mathbf{T} \in \Upsilon$ with nonzero $\alpha, \beta, \gamma$. Moreover, let $\tau = \alpha^2\|\mathbf{a}\|^2 + \beta^2$. Then $\mathbf{T}$ is orthogonal if and only if
\[ \tau = 1, \quad (\beta + \gamma\|\mathbf{a}\|^2)^2 = 1. \quad (2.17) \]

Proof. The matrix $\mathbf{T}$ is orthogonal if and only if it is nonsingular and $\mathbf{T}^{-1} = \mathbf{T}^T$. On account of point (v) of Theorem 2.8, it is seen that $\mathbf{T}^{-1} = \mathbf{T}^T$ is equivalent to
\[ \alpha = \tau\alpha, \quad \beta = \tau\beta, \quad \tau\gamma(\beta + \gamma\|\mathbf{a}\|^2) = \alpha^2 - \beta\gamma, \quad (2.18) \]
or, in other words, $\tau = 1$ and $\gamma(\beta + \gamma\|\mathbf{a}\|^2) = \alpha^2 - \beta\gamma$. Taking into account that $\tau = 1$ implies $\alpha^2 = (1/\|\mathbf{a}\|^2)(1 - \beta^2)$, the assertion follows.

Observe that the right-hand side condition in (2.17) admits two possibilities, namely, either $\beta + \gamma\|\mathbf{a}\|^2 = 1$ or $\beta + \gamma\|\mathbf{a}\|^2 = -1$. Since $\tau = 1$, Theorem 2.4 ensures that in the former situation $\mathbf{T}$ is a proper rotation, whereas in the latter one $\mathbf{T}$ is an improper rotation.
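The two possibilities can be illustrated numerically. The sketch below (our addition, not part of the original article) constructs a proper rotation by choosing $\beta \in (-1, 1)$ and solving (2.17) for $\alpha$ and $\gamma$.

```python
import numpy as np

def T(v):
    # skew-symmetric matrix of (1.3)
    v1, v2, v3 = v
    return np.array([[0.0, -v3, v2], [v3, 0.0, -v1], [-v2, v1, 0.0]])

a = np.array([0.0, 0.0, 2.0])
n2 = a @ a                               # ||a||^2 = 4
be = 0.6                                 # any beta with |beta| < 1 works here
al = np.sqrt(1 - be**2) / np.sqrt(n2)    # tau = alpha^2 ||a||^2 + beta^2 = 1
ga = (1 - be) / n2                       # beta + gamma ||a||^2 = +1: proper rotation

M = al * T(a) + be * np.eye(3) + ga * np.outer(a, a)
assert np.allclose(M @ M.T, np.eye(3))   # M is orthogonal
assert np.isclose(np.linalg.det(M), 1.0) # and a proper rotation
```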

The next theorem concerns the eigenspaces attributed to $\mathbf{T} \in \Upsilon$.

Theorem 2.11. Let $\mathbf{T} \in \Upsilon$, and let $\mathcal{E}(\mu_j)$, $j = 1, 2, 3$, be the eigenspaces of $\mathbf{T}$ associated with its eigenvalues $\mu_j$ given in (2.11). Then,
(i) if $\alpha = 0$, then $\mathcal{E}(\mu_2) = \mathcal{R}(\mathbf{I}_3 - \gamma^\dagger\gamma\mathbf{P}_{\mathbf{a}})$, $\mathcal{E}(\mu_3) = \mathcal{R}(\mathbf{I}_3 - \gamma^\dagger\gamma\mathbf{P}_{\mathbf{a}})$,
(ii) if $\alpha \neq 0$, then $\mathcal{E}(\mu_2) = \mathcal{R}(\mathbf{Q}_2)$ provided that $\gamma/\alpha = i/\|\mathbf{a}\|$ and $\mathcal{E}(\mu_2) = \mathcal{R}(\mathbf{Q}_1)$ otherwise and, simultaneously, $\mathcal{E}(\mu_3) = \mathcal{R}(\mathbf{Q}_4)$ provided that $\gamma/\alpha = -i/\|\mathbf{a}\|$ and $\mathcal{E}(\mu_3) = \mathcal{R}(\mathbf{Q}_3)$ otherwise,
(iii) if $\gamma = 0$, then $\mathcal{E}(\mu_1) = \mathcal{R}(\mathbf{I}_3 - \alpha^\dagger\alpha\mathbf{Q}_{\mathbf{a}})$,
(iv) if $\gamma \neq 0$, then $\mathcal{E}(\mu_1) = \mathcal{R}(\mathbf{Q}_2)$ provided that $\alpha/\gamma = -i\|\mathbf{a}\|$, $\mathcal{E}(\mu_1) = \mathcal{R}(\mathbf{Q}_4)$ provided that $\alpha/\gamma = i\|\mathbf{a}\|$, and $\mathcal{E}(\mu_1) = \mathcal{R}(\mathbf{a})$ otherwise,
where $\mathbf{Q}_k = \mathbf{I}_3 - \mathbf{P}_k$, $k = 1, \ldots, 4$, with $\mathbf{P}_k$ as specified in (2.3).

Proof. It is known that $\mathcal{E}(\mu_j) = \mathcal{R}[\mathbf{I}_3 - \mathbf{T}(\mu_j)^\dagger\mathbf{T}(\mu_j)]$, where $\mathbf{T}(\mu_j) = \mathbf{T} - \mu_j\mathbf{I}_3$, $j = 1, 2, 3$; see, for example, [3]. Clearly, for each $\mu_j$, the matrix $\mathbf{T}(\mu_j)$ can be written as
\[ \mathbf{T}(\mu_j) = \alpha\mathbf{T}_{\mathbf{a}} + \widetilde{\beta}\mathbf{I}_3 + \gamma\mathbf{a}\mathbf{a}^T, \quad (2.19) \]
where $\widetilde{\beta} = \beta - \mu_j$. For $\mu_1 = \beta + \gamma\|\mathbf{a}\|^2$, we have $\widetilde{\beta} = -\gamma\|\mathbf{a}\|^2$. By virtue of the equivalence $\widetilde{\beta} = 0 \Leftrightarrow \gamma = 0$, statement (i) of Corollary 2.9 leads to point (iii) of the theorem. If, however, $\widetilde{\beta} \neq 0$, that is, $\gamma \neq 0$, then $\widetilde{\lambda} = 1 + (\gamma/\widetilde{\beta})\|\mathbf{a}\|^2 = 0$, which means that cases (iii) and (v) of Corollary 2.9, characterized by $\lambda \neq 0$, are not attainable in the present situation. Further observations are that $\widetilde{\tau} = \alpha^2\|\mathbf{a}\|^2 + \widetilde{\beta}^2 = (\alpha^2 + \gamma^2\|\mathbf{a}\|^2)\|\mathbf{a}\|^2$ and $\alpha/\widetilde{\beta} = -(\alpha/\gamma)(1/\|\mathbf{a}\|^2)$. In view of these facts, it is seen that statements (ii) and (iv) of Corollary 2.9 lead to the characterizations of $\mathcal{E}(\mu_1)$ given in point (iv) of the theorem.
Next we consider the eigenvalue $\mu_2 = \beta + i\alpha\|\mathbf{a}\|$, which ensures that $\widetilde{\beta}$ occurring in (2.19) is given by $\widetilde{\beta} = -i\alpha\|\mathbf{a}\|$. Since $\widetilde{\beta} = 0$ is equivalent to $\alpha = 0$, on account of statement (i) of Corollary 2.9 we arrive at the characterization of $\mathcal{E}(\mu_2)$ given in point (i) of the theorem. On the other hand, if $\widetilde{\beta} \neq 0$, that is, $\alpha \neq 0$, then $\widetilde{\tau}$ is necessarily equal to zero, which means that cases (iv) and (v) of Corollary 2.9 are to be excluded from the present considerations. Furthermore, it is seen that $\widetilde{\lambda} = 1 + i(\gamma/\alpha)\|\mathbf{a}\|$ and $\alpha/\widetilde{\beta} = i/\|\mathbf{a}\|$. With these facts taken into account, we conclude that statements (ii) and (iii) of Corollary 2.9 lead to the characterizations of $\mathcal{E}(\mu_2)$ provided in point (ii) of the theorem.
The last eigenvalue to be considered is $\mu_3 = \beta - i\alpha\|\mathbf{a}\|$, for which $\widetilde{\beta} = i\alpha\|\mathbf{a}\|$. In this case, arguments analogous to the ones used with respect to $\mu_2$ lead to the eigenspaces $\mathcal{E}(\mu_3)$ in points (i) and (ii) of the theorem. The proof is complete.

Observe that if $\alpha = 1$, $\beta = 0$, and $\gamma = 0$, then from Theorem 2.11 we get $\mathcal{E}(\mu_1) = \mathcal{R}(\mathbf{a})$, $\mathcal{E}(\mu_2) = \mathcal{R}(\mathbf{Q}_1)$, and $\mathcal{E}(\mu_3) = \mathcal{R}(\mathbf{Q}_3)$, that is, the eigenspaces of $\mathbf{T}_{\mathbf{a}}$ identified by Trenkler [3, Sec. 3].

Theorem 2.11 is supplemented with examples demonstrating its applicability. Let $\mathbf{a} = (1, 0, 1)^T$. Then,
\[ \mathbf{T}_{\mathbf{a}} = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix}, \quad \mathbf{a}\mathbf{a}^T = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 1 \end{pmatrix}, \quad (2.20) \]
leading to
\[ \mathbf{T} = \begin{pmatrix} \beta+\gamma & -\alpha & \gamma \\ \alpha & \beta & -\alpha \\ \gamma & \alpha & \beta+\gamma \end{pmatrix} \quad (2.21) \]
of eigenvalues $\mu_1 = \beta + 2\gamma$, $\mu_2 = \beta + i\sqrt{2}\alpha$, $\mu_3 = \beta - i\sqrt{2}\alpha$. From the right-hand side formula in (2.20) we get
\[ \mathbf{P}_{\mathbf{a}} = \frac{1}{2}\begin{pmatrix} 1 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 1 \end{pmatrix}, \quad (2.22) \]
and the projectors $\mathbf{Q}_k$, $k = 1, \ldots, 4$, involved in Theorem 2.11 are of the forms
\[ \mathbf{Q}_1 = \begin{pmatrix} \frac{1}{4} & \frac{i\sqrt{2}}{4} & -\frac{1}{4} \\ -\frac{i\sqrt{2}}{4} & \frac{1}{2} & \frac{i\sqrt{2}}{4} \\ -\frac{1}{4} & -\frac{i\sqrt{2}}{4} & \frac{1}{4} \end{pmatrix}, \quad \mathbf{Q}_2 = \begin{pmatrix} \frac{3}{4} & \frac{i\sqrt{2}}{4} & \frac{1}{4} \\ -\frac{i\sqrt{2}}{4} & \frac{1}{2} & \frac{i\sqrt{2}}{4} \\ \frac{1}{4} & -\frac{i\sqrt{2}}{4} & \frac{3}{4} \end{pmatrix}, \]
\[ \mathbf{Q}_3 = \begin{pmatrix} \frac{1}{4} & -\frac{i\sqrt{2}}{4} & -\frac{1}{4} \\ \frac{i\sqrt{2}}{4} & \frac{1}{2} & -\frac{i\sqrt{2}}{4} \\ -\frac{1}{4} & \frac{i\sqrt{2}}{4} & \frac{1}{4} \end{pmatrix}, \quad \mathbf{Q}_4 = \begin{pmatrix} \frac{3}{4} & -\frac{i\sqrt{2}}{4} & \frac{1}{4} \\ \frac{i\sqrt{2}}{4} & \frac{1}{2} & -\frac{i\sqrt{2}}{4} \\ \frac{1}{4} & \frac{i\sqrt{2}}{4} & \frac{3}{4} \end{pmatrix}. \quad (2.23) \]
Hence, from Theorem 2.11 we obtain what follows.

If $\alpha = 0$, then $\mathcal{E}(\mu_2) = \mathcal{E}(\mu_3) = \mathbb{R}_{3,1}$ provided that $\gamma = 0$ and
\[ \mathcal{E}(\mu_2) = \mathcal{E}(\mu_3) = \mathrm{span}\left\{\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}\right\} \quad (2.24) \]
otherwise.

Next, if $\alpha \neq 0$, then
\[ \mathcal{E}(\mu_2) = \mathrm{span}\left\{\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ -i\sqrt{2} \\ -1 \end{pmatrix}\right\} \quad (2.25) \]
provided that $\gamma/\alpha = i\sqrt{2}/2$ and
\[ \mathcal{E}(\mu_2) = \mathrm{span}\left\{\begin{pmatrix} 1 \\ -i\sqrt{2} \\ -1 \end{pmatrix}\right\} \quad (2.26) \]
otherwise and, simultaneously,
\[ \mathcal{E}(\mu_3) = \mathrm{span}\left\{\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ i\sqrt{2} \\ -1 \end{pmatrix}\right\} \quad (2.27) \]
provided that $\gamma/\alpha = -i\sqrt{2}/2$ and
\[ \mathcal{E}(\mu_3) = \mathrm{span}\left\{\begin{pmatrix} 1 \\ i\sqrt{2} \\ -1 \end{pmatrix}\right\} \quad (2.28) \]
otherwise.

Further, if $\gamma = 0$, then $\mathcal{E}(\mu_1) = \mathbb{R}_{3,1}$ provided that $\alpha = 0$ and $\mathcal{E}(\mu_1) = \mathcal{R}(\mathbf{a})$ otherwise.

Finally, if $\gamma \neq 0$, then
\[ \mathcal{E}(\mu_1) = \mathrm{span}\left\{\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ i\sqrt{2} \\ -1 \end{pmatrix}\right\} \quad (2.29) \]
provided that $\alpha/\gamma = i\sqrt{2}$,
\[ \mathcal{E}(\mu_1) = \mathrm{span}\left\{\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ -i\sqrt{2} \\ -1 \end{pmatrix}\right\} \quad (2.30) \]
provided that $\alpha/\gamma = -i\sqrt{2}$, and $\mathcal{E}(\mu_1) = \mathcal{R}(\mathbf{a})$ otherwise.
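The characterization (2.29) can be verified directly (a numerical sketch added here, not part of the original article): for $\alpha/\gamma = i\sqrt{2}$ the eigenvalues $\mu_1$ and $\mu_3$ coincide, and both listed vectors are eigenvectors of $\mathbf{T}$ associated with $\mu_1 = \beta + 2\gamma$.

```python
import numpy as np

a = np.array([1.0, 0.0, 1.0])
Ta = np.array([[0, -1, 0], [1, 0, -1], [0, 1, 0]], dtype=complex)   # (2.20)
be, ga = 0.3, 0.5
al = ga * 1j * np.sqrt(2)            # the case alpha/gamma = i*sqrt(2) of (2.29)
M = al * Ta + be * np.eye(3) + ga * np.outer(a, a)

mu1 = be + 2 * ga                    # = beta + gamma ||a||^2
for v in (np.array([1.0, 0.0, 1.0]), np.array([1.0, 1j * np.sqrt(2), -1.0])):
    assert np.allclose(M @ v, mu1 * v)
```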

Further consequences of Theorem 2.11 deal with the proper rotation matrices; for a more detailed discussion see [16]. G. Trenkler and D. Trenkler [6, Sec. 3] identified three types of proper rotations, namely, Type I, covering matrices of the form $\mathbf{R} = \mathbf{I}_3$; Type II, covering matrices of the form $\mathbf{R} = 2\mathbf{a}\mathbf{a}^T - \mathbf{I}_3$, where $\|\mathbf{a}\| = 1$; and Type III, covering matrices of the form $\mathbf{R} = \mathbf{I}_3 + f_{\mathbf{a}}(\mathbf{T}_{\mathbf{a}} + \mathbf{T}_{\mathbf{a}}^2)$, where $f_{\mathbf{a}} = 2/(\|\mathbf{a}\|^2 + 1)$. Moreover, it was pointed out in [6] that each proper rotation can be attributed to one of these types. Direct calculations show that rotations of Type I are obtained from the representation (1.5) by taking $\alpha = 0$, $\beta = 1$, $\gamma = 0$; of Type II by taking $\alpha = 0$, $\beta = -1$, $\gamma = 2$, and $\|\mathbf{a}\| = 1$; and of Type III by taking $\alpha = f_{\mathbf{a}}$, $\beta = 1 - f_{\mathbf{a}}\|\mathbf{a}\|^2$, $\gamma = f_{\mathbf{a}}$, where $f_{\mathbf{a}} = 2/(\|\mathbf{a}\|^2 + 1)$. Combining these observations with Corollary 2.6 leads to the conclusion that rotations of Type I have eigenvalues $\mu_1 = 1$, $\mu_2 = 1$, $\mu_3 = 1$; of Type II, $\mu_1 = 1$, $\mu_2 = -1$, $\mu_3 = -1$; and of Type III, $\mu_1 = 1$, $\mu_2 = 1 - f_{\mathbf{a}}\|\mathbf{a}\|(\|\mathbf{a}\| - i)$, $\mu_3 = 1 - f_{\mathbf{a}}\|\mathbf{a}\|(\|\mathbf{a}\| + i)$. Furthermore, from Theorem 2.11 we obtain the following characterizations of the eigenspaces.

Corollary 2.12. Let $\mathbf{R} \in \Upsilon$ be a proper rotation, and let $\mathcal{E}(\mu_j)$, $j = 1, 2, 3$, be the eigenspaces of $\mathbf{R}$ associated with its eigenvalues $\mu_j$. Then,
(i) if $\mathbf{R}$ is of Type I, then $\mathcal{E}(\mu_1) = \mathcal{E}(\mu_2) = \mathcal{E}(\mu_3) = \mathbb{R}_{3,1}$,
(ii) if $\mathbf{R}$ is of Type II, then $\mathcal{E}(\mu_1) = \mathcal{R}(\mathbf{a})$, $\mathcal{E}(\mu_2) = \mathcal{E}(\mu_3) = \mathcal{R}(\mathbf{Q}_{\mathbf{a}})$,
(iii) if $\mathbf{R}$ is of Type III, then $\mathcal{E}(\mu_1) = \mathcal{R}(\mathbf{a})$, $\mathcal{E}(\mu_2) = \mathcal{R}(\mathbf{Q}_1)$, $\mathcal{E}(\mu_3) = \mathcal{R}(\mathbf{Q}_3)$,
where $\mathbf{Q}_k = \mathbf{I}_3 - \mathbf{P}_k$, $k = 1, 3$, with $\mathbf{P}_k$ as specified in (2.3).
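As a numerical cross-check of the Type III parametrization and eigenvalues discussed above (our addition, not part of the original article):

```python
import numpy as np

def T(v):
    # skew-symmetric matrix of (1.3)
    v1, v2, v3 = v
    return np.array([[0.0, -v3, v2], [v3, 0.0, -v1], [-v2, v1, 0.0]])

a = np.array([1.0, 2.0, 2.0])
n = np.linalg.norm(a)                    # ||a|| = 3
fa = 2 / (n**2 + 1)
I3 = np.eye(3)

R = I3 + fa * (T(a) + T(a) @ T(a))       # Type III rotation
R2 = fa * T(a) + (1 - fa * n**2) * I3 + fa * np.outer(a, a)   # same R via (1.5)
assert np.allclose(R, R2)
assert np.allclose(R @ R.T, I3) and np.isclose(np.linalg.det(R), 1.0)

expected = {1.0, 1 - fa * n * (n - 1j), 1 - fa * n * (n + 1j)}
assert all(min(abs(e - m) for m in expected) < 1e-9 for e in np.linalg.eigvals(R))
```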

References

  1. T. G. Room, “The composition of rotations in euclidean three-space,” The American Mathematical Monthly, vol. 59, pp. 688–692, 1952.
  2. G. Trenkler, “Vector equations and their solutions,” International Journal of Mathematical Education in Science and Technology, vol. 29, no. 3, pp. 455–459, 1998.
  3. G. Trenkler, “The vector cross product from an algebraic point of view,” Discussiones Mathematicae. General Algebra and Applications, vol. 21, no. 1, pp. 67–82, 2001.
  4. J. Groß, G. Trenkler, and S.-O. Troschke, “The vector cross product in $\mathbb{C}^3$,” International Journal of Mathematical Education in Science and Technology, vol. 30, no. 4, pp. 549–555, 1999.
  5. D. S. Bernstein, Matrix Mathematics: Theory, Facts, and Formulas, Princeton University Press, Princeton, NJ, USA, 2nd edition, 2009.
  6. G. Trenkler and D. Trenkler, “On the product of rotations,” International Journal of Mathematical Education in Science and Technology, vol. 39, no. 1, pp. 94–104, 2008.
  7. B. Noble, Applied Linear Algebra, Prentice Hall, Englewood Cliffs, NJ, USA, 1969.
  8. R. N. Murray, Z. X. Li, and S. S. Sastry, A Mathematical Introduction to Robotic Manipulation, CRC Press, Boca Raton, Fla, USA, 1994.
  9. L. Mirsky, An Introduction to Linear Algebra, Dover, New York, NY, USA, 1990.
  10. A. S. Householder, The Theory of Matrices in Numerical Analysis, Dover, New York, NY, USA, 1964.
  11. D. Trenkler and G. Trenkler, “Problem 29-12. Matrices commuting with the vector cross product,” IMAGE—The Bulletin of the International Linear Algebra Society, vol. 29, p. 35, 2002.
  12. C. Meyer, Matrix Analysis and Applied Linear Algebra, SIAM, Philadelphia, Pa, USA, 2000.
  13. P. Lancaster and M. Tismenetsky, The Theory of Matrices, Computer Science and Applied Mathematics, Academic Press, Orlando, Fla, USA, 2nd edition, 1985.
  14. S. L. Campbell and C. D. Meyer Jr., Generalized Inverses of Linear Transformations, Pitman, London, UK, 1979.
  15. J. K. Baksalary, O. M. Baksalary, and G. Trenkler, “A revisitation of formulae for the Moore-Penrose inverse of modified matrices,” Linear Algebra and Its Applications, vol. 372, pp. 207–224, 2003.
  16. O. M. Baksalary and G. Trenkler, “Eigenspaces of the proper rotation matrices,” International Journal of Mathematical Education in Science and Technology, vol. 41, no. 6, pp. 827–829, 2010.