
The Scientific World Journal

Volume 2013 (2013), Article ID 292787, 9 pages

http://dx.doi.org/10.1155/2013/292787

## Riemannian Means on Special Euclidean Group and Unipotent Matrices Group

^{1}School of Mathematics, Beijing Institute of Technology, Beijing 100081, China^{2}Department of Mathematics, University of Surrey, Guildford, Surrey GU2 7XH, UK

Received 1 August 2013; Accepted 16 September 2013

Academic Editors: R. Abu-Saris and P. Bracken

Copyright © 2013 Xiaomin Duan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Among the noncompact matrix Lie groups, the special Euclidean group and the unipotent matrix group play important roles in both theoretical and applied studies. The Riemannian means of a finite set of given points on these two matrix groups are investigated, respectively. Based on the left invariant metric on the matrix Lie groups, the geodesic between any two points is obtained, and the sum of the squared geodesic distances is taken as the cost function, whose minimizer is the Riemannian mean. Moreover, a Riemannian gradient algorithm for computing the Riemannian mean on the special Euclidean group and an iterative formula for the mean on the unipotent matrix group are proposed, respectively. Finally, several numerical simulations in the 3-dimensional case are given to illustrate our results.

#### 1. Introduction

A matrix Lie group, which is simultaneously a differentiable manifold, attracts growing attention for both its theoretical interest and its applications [1–5]. The Riemannian mean on matrix Lie groups is widely studied for its varied applications in biomedicine, signal processing, and robotics control [6–9]. Fiori and Tanaka [10] suggested a general-purpose algorithm to compute the average element of a finite set of matrices belonging to any matrix Lie group. In [11], the author investigated the Riemannian mean on compact Lie groups and proposed a globally convergent Riemannian gradient descent algorithm. Different invariant notions of means and averages of rotations on the (compact) rotation group are given in [9]. Recently, Fiori [12] dealt with computing averages over the group of real symplectic matrices, which finds applications in diverse areas such as optics and particle physics.

However, the Riemannian mean on the special Euclidean group and the unipotent matrix group, which are noncompact matrix Lie groups, has not been as well studied. Fletcher et al. [6] proposed an iterative algorithm to obtain an approximate Riemannian mean by use of the Baker–Campbell–Hausdorff formula. In [7], the exponential mapping from the arithmetic mean of points on the Lie algebra to the Lie group was used to define the Riemannian mean in order to obtain a mean filter.

In this paper, the Riemannian means on the special Euclidean group SE(n) and on the unipotent matrix group UP(n), both important noncompact matrix Lie groups [13, 14], are considered, respectively. In particular, SE(3) is the spatial rigid body motion group, and UP(3) is the 3-dimensional Heisenberg group. Based on the left invariant metric on the matrix Lie groups, we obtain the geodesic distance between any two points and take the sum of the squared distances as a cost function, which the Riemannian mean minimizes. Furthermore, the exact Riemannian mean on SE(n), rather than an approximation, is obtained using a Riemannian gradient algorithm, and an iterative formula for computing the Riemannian mean on UP(n) is derived from the Jacobi field. Finally, we give some numerical simulations on SE(3) and UP(3) to illustrate our results.

#### 2. Overview of Matrix Lie Groups

In this section, we briefly introduce the Riemannian framework of the matrix Lie groups [15, 16], which forms the foundation of our study of the Riemannian mean on them.

##### 2.1. The Riemannian Structures of Matrix Lie Groups

A group is called a Lie group if it has a differentiable structure in which the group operations, that is, multiplication (x, y) ↦ xy and inversion x ↦ x⁻¹, are differentiable. A matrix Lie group is a Lie group whose elements are matrices. The tangent space of a matrix Lie group at the identity is its Lie algebra, on which the Lie bracket is defined.

The exponential map, denoted by exp, is a map from the Lie algebra to the group. In general, the exponential map is neither surjective nor injective. Nevertheless, it is a diffeomorphism between a neighborhood of zero in the Lie algebra and a neighborhood of the identity in the group. The (local) inverse of the exponential map is the logarithmic map, denoted by log.

The most general matrix Lie group is the general linear group GL(n, R), consisting of the invertible n × n matrices with real entries. As the inverse image of the open set of nonzero reals under the continuous determinant map, GL(n, R) is an open subset of the set of all real n × n matrices, so it inherits a differentiable manifold structure (as a submanifold). The group multiplication of GL(n, R) is the usual matrix multiplication, the inverse map takes a matrix A to its inverse A⁻¹, and the identity element is the identity matrix I. The Lie algebra of GL(n, R) turns out to be the space of all real n × n matrices, with the Lie bracket defined by the matrix commutator [X, Y] = XY − YX.

All other real matrix Lie groups are subgroups of GL(n, R), and their group operations are the restrictions of those on GL(n, R). The Lie bracket on their Lie algebras is still the matrix commutator.

Let G denote a matrix Lie group and g its Lie algebra. The exponential map for G turns out to be just the matrix exponential; that is, given an element X in g, exp(X) is the sum of the series X^k / k! over k ≥ 0. The inverse map, that is, the logarithmic map, is given by the series log(A) = Σ (−1)^(k+1) (A − I)^k / k over k ≥ 1, for A in a neighborhood of the identity of G. The exponential of a matrix plays a crucial role in the theory of Lie groups: it can be used to obtain the Lie algebra of a matrix Lie group, and it transfers information from the Lie algebra to the Lie group.
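As an illustration of these two series, they can be evaluated directly by truncation; for nilpotent arguments the truncation is exact, a fact exploited later on the unipotent group. A minimal pure-Python sketch (helper names are our own, not from the paper):

```python
import math
from fractions import Fraction

def mat_mul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(A, B):
    n = len(A)
    return [[A[i][j] + B[i][j] for j in range(n)] for i in range(n)]

def mat_scale(A, s):
    return [[s * a for a in row] for row in A]

def identity(n):
    return [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]

def expm_series(X, terms=20):
    """exp(X) = sum_{k>=0} X^k / k!, truncated; exact once X^k = 0."""
    n = len(X)
    result, power = identity(n), identity(n)
    for k in range(1, terms):
        power = mat_mul(power, X)
        result = mat_add(result, mat_scale(power, Fraction(1, math.factorial(k))))
    return result

def logm_series(A, terms=20):
    """log(A) = sum_{k>=1} (-1)^(k+1) (A - I)^k / k, truncated."""
    n = len(A)
    N = mat_add(A, mat_scale(identity(n), -1))   # N = A - I
    result, power = [[Fraction(0)] * n for _ in range(n)], identity(n)
    for k in range(1, terms):
        power = mat_mul(power, N)
        result = mat_add(result, mat_scale(power, Fraction((-1) ** (k + 1), k)))
    return result
```

For a strictly upper triangular (hence nilpotent) matrix, both series terminate, and exact rational arithmetic confirms that log inverts exp.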

The matrix Lie group G also has the structure of a Riemannian manifold. For any A in G and any X in the tangent space of G at A, we have the left translation L_A : B ↦ AB, the right translation R_A : B ↦ BA, and the tangent mappings associated with L_A and R_A, respectively. The adjoint action is Ad_A(X) = A X A⁻¹.

Then, a left invariant metric on G is obtained by left-translating an inner product on the Lie algebra, ⟨X, Y⟩ = tr(XᵀY), with tr denoting the trace of a matrix. Similarly, a right invariant metric on G can be defined as well. It has been shown that left invariant metrics exist on all matrix Lie groups.

##### 2.2. Compact Matrix Lie Group

A Lie group is compact if it is compact as a differentiable manifold. The unitary group, the special unitary group, the orthogonal group, the special orthogonal group, and the compact symplectic group are examples of compact matrix Lie groups [17]. Denote a compact Lie group by G and its Lie algebra by g. There exists an adjoint invariant inner product on g. Note that a left invariant metric induced by an adjoint invariant inner product is also right invariant; namely, it is a bi-invariant metric, so all compact Lie groups admit bi-invariant metrics. Furthermore, the bi-invariant metric on G induces a Riemannian connection with well-known expressions for the connection and for the curvature operator R on the smooth tangent vector fields of the Riemannian manifold G. The resulting sectional curvature is nonnegative on a compact Lie group.
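The formulas referred to above are standard identities for a bi-invariant metric (stated here from the general theory, not reconstructed verbatim from the original display); for left invariant vector fields $X$ and $Y$ they read:

```latex
\nabla_X Y = \tfrac{1}{2}[X,Y], \qquad
\langle R(X,Y)Y,\, X\rangle = \tfrac{1}{4}\bigl\|[X,Y]\bigr\|^{2},
```

and hence the sectional curvature

```latex
K(X,Y) = \frac{\|[X,Y]\|^{2}}{4\bigl(\|X\|^{2}\|Y\|^{2}-\langle X,Y\rangle^{2}\bigr)} \;\ge\; 0 .
```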

In addition, according to the Hopf–Rinow theorem, a compact connected Lie group is geodesically complete: for any two given points, there exists a geodesic curve connecting them, and every geodesic can be extended indefinitely.

##### 2.3. The Riemannian Mean on Matrix Lie Group

Let γ be a sufficiently smooth curve on G. We define the length of γ as the integral of the norm of its velocity, where the norm is induced by the metric (7) and involves the transpose of the matrix. The geodesic distance between two matrices A and B on G, considered as a differentiable manifold, is the infimum of the lengths of the curves connecting them.

According to the Euclidean analogue (the mean on Euclidean space), the mean of matrices is defined as the minimizer of the sum of the squared distances from a matrix to the given matrices on G. We now define the Riemannian mean based on the geodesic distance (12).

*Definition 1.* The mean of the given matrices on G in the Riemannian sense, corresponding to the metric (7), is defined as the minimizer, over G, of the sum of the squared geodesic distances (13).
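The elided display (13) is the usual Karcher (Fréchet) mean; writing the given matrices as $A_1,\dots,A_N$ (labels assumed by us), it reads:

```latex
\mu \;=\; \operatorname*{arg\,min}_{X \in G}\; \sum_{i=1}^{N} d^{2}(X, A_i),
```

where $d$ is the geodesic distance (12).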

#### 3. The Riemannian Mean on SE(n)

In this section, we discuss the Riemannian mean on the special Euclidean group SE(n), which is a subgroup of GL(n + 1, R). Moreover, the spatial rigid body motion group SE(3) is taken as an illustrating example.

##### 3.1. About SE

The special Euclidean group SE(n) is the semidirect product of the special orthogonal group SO(n) with the translation group R^n [18]. Its elements can be represented as block matrices with a rotation block R in SO(n), a translation vector t in R^n, and bottom row (0, 1). Such an element physically represents a displacement, where R corresponds to the orientation, or attitude, of the rigid body and t encodes the translation. The Lie algebra se(n) of SE(n) consists of the corresponding block matrices with a skew-symmetric block and a vector block.
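The elided matrix representations are the standard homogeneous ones; with symbols $R$, $t$, $\Omega$, $v$ chosen by us, they read:

```latex
SE(n) = \left\{ \begin{pmatrix} R & t \\ 0 & 1 \end{pmatrix} : R \in SO(n),\; t \in \mathbb{R}^{n} \right\},
\qquad
\mathfrak{se}(n) = \left\{ \begin{pmatrix} \Omega & v \\ 0 & 0 \end{pmatrix} : \Omega^{\top} = -\Omega,\; v \in \mathbb{R}^{n} \right\}.
```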

Specifically, when n = 3, a skew-symmetric matrix can be uniquely expressed as the image of a vector ω in R³ under the hat map, and the norm of ω gives the amount of rotation about the unit vector along ω. Physically, ω represents the angular velocity of the rigid body, whereas v corresponds to the linear velocity [19]. In [18], the author presents a closed-form expression of the exponential map on se(3); note that it can be regarded as an extension of the well-known Rodrigues formula on SO(3). The logarithmic map is likewise obtained in closed form.
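The Rodrigues formula mentioned above can be written out directly. A pure-Python sketch of the exponential map on so(3) (function names are illustrative, not from the paper):

```python
import math

def hat(omega):
    """Map a vector omega in R^3 to the skew-symmetric matrix omega-hat."""
    wx, wy, wz = omega
    return [[0.0, -wz, wy],
            [wz, 0.0, -wx],
            [-wy, wx, 0.0]]

def mat_mul3(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rodrigues_exp(omega):
    """exp(omega-hat) = I + (sin t / t) W + ((1 - cos t) / t^2) W^2, t = |omega|."""
    t = math.sqrt(sum(w * w for w in omega))
    W = hat(omega)
    W2 = mat_mul3(W, W)
    if t < 1e-12:
        a, b = 1.0, 0.5          # series limits of sin(t)/t and (1-cos t)/t^2 as t -> 0
    else:
        a, b = math.sin(t) / t, (1.0 - math.cos(t)) / (t * t)
    return [[(1.0 if i == j else 0.0) + a * W[i][j] + b * W2[i][j]
             for j in range(3)] for i in range(3)]
```

For example, exponentiating the hat of (0, 0, θ) yields the planar rotation by the angle θ about the z-axis.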

##### 3.2. Algorithm for the Riemannian Mean on SE

Taking the corresponding exponential mappings on SO(n) and R^n into consideration, the geodesic between two points on the Lie group SE(n) is given componentwise by the geodesics of the rotation part on SO(n) and of the translation part on R^n. The midpoint of two points is then defined as the value of this geodesic at the parameter value 1/2.

Before the geodesic distance on SE(n) is given, we first introduce a lemma, which is a known result in linear algebra [20].

Lemma 2. *If the two diagonal blocks of a block triangular matrix are invertible, then the block matrix itself is invertible; furthermore, its inverse admits an explicit block expression.*

Now, we give the geodesic distance on SE(n) as follows.

Lemma 3. *The geodesic distance between two points on SE(n) induced by the scale-dependent left invariant metric (7) is the square root of the sum of a squared rotational distance on SO(n) and a squared translational distance on R^n.*

*Proof.* As mentioned above, the geodesic distance between two matrices on SE(n) is realized by the length of the geodesic connecting them; thus, we compute it by substituting (22) into (11).

From Lemma 2, we get
Then, according to the rules for derivatives of matrix-valued functions, the following formula is valid:
Moreover, we have that

Therefore, the geodesic distance on SE(n) between the two points is given by

This completes the proof of Lemma 3.

In addition, it is worth mentioning that the rotational distance, induced by the standard bi-invariant metric on SO(n), measures the rotational motion from one point to the other, and the translational distance measures the translational motion on R^n. Therefore, a rigid body Euclidean motion of an object can be decomposed into a rotation with respect to the center of mass of the object and a translation of the center of mass of the object.

Theorem 4. *For given points on SE(n), write each point in terms of its rotation component on SO(n) and its translation component on R^n. If the Riemannian mean of the rotation components and the Riemannian mean of the translation components (i.e., their arithmetic mean) are computed separately, then the Riemannian mean on SE(n) is obtained by assembling these two means.*

*Proof.* In the Riemannian sense, by (13), the mean is defined as the minimizer of the sum of the squared geodesic distances. From [9], the geodesic distance between two points on SO(n) is given by the norm of the logarithm of their relative rotation, so we obtain the rotational part of the cost function. On the other hand, for the translation components on R^n, it is easy to see that the corresponding part of the cost is the usual Euclidean one. Therefore, the Riemannian mean of the translation components is precisely the arithmetic mean.

Consequently, we prove that equality (33) is valid.

In addition, consider the cost function of the minimization problem (34) on SE(n); it is the sum of a rotation component and a translation component. The gradient of the rotation component at the current iterate is given in [21]. Consequently, the Riemannian gradient descent algorithm is applied to calculate the mean of the rotation components, taking the geodesic on SO(n) as the trajectory and the negative gradient (39) as the descent direction.

Finally, we arrive at the following algorithm for computing the Riemannian mean on SE(n).

*Algorithm 5.* Given matrices on SE(n), their Riemannian mean is computed by the following iterative method.

(1) Store the given matrices.
(2) Set an initial iterate, and choose a desired tolerance.
(3) If the norm of the gradient is below the tolerance, then stop.
(4) Otherwise, update the iterate along the geodesic in the negative gradient direction, and go to step (3).
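The loop in Algorithm 5 can be sketched numerically for the rotation part on SO(3) (per Theorem 4, the translation part is simply the arithmetic mean). The sketch below is our own minimal pure-Python illustration: it assumes the closed-form exponential (Rodrigues) and logarithm on SO(3), a fixed step size, and a fixed iteration count in place of the tolerance test; all names are illustrative.

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def so3_exp(W):
    """Rodrigues formula: exponential of a 3x3 skew-symmetric matrix W."""
    t = math.sqrt(W[2][1] ** 2 + W[0][2] ** 2 + W[1][0] ** 2)
    if t < 1e-12:
        a, b = 1.0, 0.5                      # limits of sin(t)/t and (1-cos t)/t^2
    else:
        a, b = math.sin(t) / t, (1.0 - math.cos(t)) / (t * t)
    W2 = mat_mul(W, W)
    return [[(1.0 if i == j else 0.0) + a * W[i][j] + b * W2[i][j]
             for j in range(3)] for i in range(3)]

def so3_log(R):
    """Inverse Rodrigues formula: logarithm of a rotation, as a skew matrix."""
    c = max(-1.0, min(1.0, (R[0][0] + R[1][1] + R[2][2] - 1.0) / 2.0))
    t = math.acos(c)
    s = 0.5 if t < 1e-8 else t / (2.0 * math.sin(t))
    Rt = transpose(R)
    return [[s * (R[i][j] - Rt[i][j]) for j in range(3)] for i in range(3)]

def riemannian_mean_so3(rotations, step=0.5, iters=100):
    """Descend along geodesics until the mean of log(R^T R_i) vanishes."""
    R = [row[:] for row in rotations[0]]
    N = len(rotations)
    for _ in range(iters):
        G = [[0.0] * 3 for _ in range(3)]
        for Ri in rotations:
            L = so3_log(mat_mul(transpose(R), Ri))
            G = [[G[i][j] + L[i][j] / N for j in range(3)] for i in range(3)]
        R = mat_mul(R, so3_exp([[step * g for g in row] for row in G]))
    return R

def rot_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
```

For rotations about a common axis, the iteration recovers the rotation by the mean angle, consistent with the compact-group results of [9, 11].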

##### 3.3. Simulations on SE

Let us consider a rigid object in Euclidean space undergoing a rigid body Euclidean motion. Suppose that the coordinates of its center of gravity are given; then, the optimal trajectory from one configuration to another is the geodesic connecting them on SE(3) (see Figure 1). For the configuration of two points shown in Figure 2, specified by the angular velocity and the linear velocity of the rigid body, we obtain their Riemannian mean according to Algorithm 5, which is just the midpoint from (24).

#### 4. The Riemannian Mean on UP(n)

In this section, the Riemannian mean of given points on the unipotent matrix group UP(n) is considered. UP(n) is a noncompact matrix Lie group as well. Moreover, in the special case n = 3, it is the Heisenberg group.

##### 4.1. About UP

The set of all n × n upper triangular matrices whose diagonal elements are all one is called the unipotent matrix group, denoted by UP(n).

In fact, given an invertible matrix, there is a neighborhood of it consisting entirely of invertible matrices, so the general linear group is an open subset of the space of all matrices. Furthermore, the matrix product AB is clearly a smooth function of the entries of A and B, and the inverse A⁻¹ is a smooth function of the entries of A. Thus, UP(n), as a closed subgroup, is a Lie group. On the other hand, it can be verified that UP(n) is of dimension n(n − 1)/2 and is nilpotent. Since we can use the entries above the diagonal directly as global coordinate functions for UP(n), the manifold underlying UP(n) is diffeomorphic to a Euclidean space of that dimension. Therefore, UP(n) is not compact, but it is simply connected.

The Lie algebra of UP(n) consists of the strictly upper triangular matrices, that is, upper triangular matrices whose diagonal elements are all zero. In the 3-dimensional case, it is an indispensable tool which gives a realization of the Heisenberg commutation relations of quantum mechanics [17].

Moreover, for any X in the Lie algebra and any U in UP(n), both X and U − I are nilpotent matrices. Thus, from (2) and (3), the series representations of the exponential mapping and of the logarithm mapping terminate after finitely many terms and can be given explicitly.

Notice that UP(n) is connected, which means that, for any given pair of points A and B, we can find a geodesic curve γ such that γ(0) = A and γ(1) = B, namely, by taking the initial velocity to be log(A⁻¹B). Then, the midpoint of A and B is given by evaluating the geodesic at the parameter value 1/2, and from (11) the geodesic distance can be computed explicitly.
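Because the series (41) and (42) terminate on the Heisenberg group (every relevant matrix power vanishes), the geodesic, its midpoint (44), and the distance (45) can be computed exactly. A small pure-Python sketch for n = 3 (helper names are our own):

```python
import math

def h_mat(a, b, c):
    """Heisenberg element [[1, a, c], [0, 1, b], [0, 0, 1]]."""
    return [[1.0, a, c], [0.0, 1.0, b], [0.0, 0.0, 1.0]]

def h_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def h_inv(A):
    a, b, c = A[0][1], A[1][2], A[0][2]
    return h_mat(-a, -b, a * b - c)

def h_log(A):
    """log(A) = N - N^2/2 with N = A - I; exact because N^3 = 0."""
    a, b, c = A[0][1], A[1][2], A[0][2]
    return (a, b, c - a * b / 2.0)       # Lie algebra coordinates (x, y, z)

def h_exp(x, y, z):
    """exp(X) = I + X + X^2/2; exact because X^3 = 0."""
    return h_mat(x, y, z + x * y / 2.0)

def h_midpoint(A, B):
    """Geodesic midpoint: A * exp((1/2) log(A^{-1} B))."""
    x, y, z = h_log(h_mul(h_inv(A), B))
    return h_mul(A, h_exp(x / 2.0, y / 2.0, z / 2.0))

def h_dist(A, B):
    """Geodesic distance: Frobenius norm of log(A^{-1} B)."""
    x, y, z = h_log(h_mul(h_inv(A), B))
    return math.sqrt(x * x + y * y + z * z)
```

For instance, the midpoint of the identity and h_mat(2, 4, 10) follows the geodesic halfway in the Lie algebra and then exponentiates back to the group.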

##### 4.2. Algorithm for the Riemannian Mean on UP

For given points in UP(n), consider the cost function of the minimization problem (13). Following [22, 23], it has been shown that the Jacobi field is equal to zero at the Riemannian mean; this Jacobi field equals the sum of the tangent vectors to all geodesics from the mean to each given point. Noticing that the geodesic between two points has already been given by (43), we can compute, at a candidate mean, the tangent vector toward each given point. Requiring that the sum of all these vectors be zero, the Riemannian mean of the matrices must satisfy (49). From the logarithm of the matrices on UP(n) given by (41), we can rewrite (49) as (50). Therefore, the Riemannian mean of the given matrices can be obtained explicitly by solving (50).
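The vanishing-Jacobi-field condition can be checked numerically on the Heisenberg group. The sketch below solves "the sum of log(M⁻¹Aᵢ) equals zero" by a Karcher-type fixed-point iteration; this solver is our choice for illustration, not the paper's closed-form solution, and all names are illustrative.

```python
def h_mat(a, b, c):
    """Heisenberg element [[1, a, c], [0, 1, b], [0, 0, 1]]."""
    return [[1.0, a, c], [0.0, 1.0, b], [0.0, 0.0, 1.0]]

def h_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def h_inv(A):
    a, b, c = A[0][1], A[1][2], A[0][2]
    return h_mat(-a, -b, a * b - c)

def h_log(A):
    """Exact logarithm on the Heisenberg group (the series terminates)."""
    a, b, c = A[0][1], A[1][2], A[0][2]
    return (a, b, c - a * b / 2.0)

def h_exp(x, y, z):
    """Exact exponential (the series terminates)."""
    return h_mat(x, y, z + x * y / 2.0)

def tangent_sum(M, points):
    """Sum over i of log(M^{-1} A_i): the vector whose vanishing defines the mean."""
    gx = gy = gz = 0.0
    for A in points:
        x, y, z = h_log(h_mul(h_inv(M), A))
        gx, gy, gz = gx + x, gy + y, gz + z
    return (gx, gy, gz)

def karcher_mean(points, iters=50):
    """Fixed point of M <- M * exp(mean_i log(M^{-1} A_i))."""
    M = [row[:] for row in points[0]]
    n = float(len(points))
    for _ in range(iters):
        gx, gy, gz = tangent_sum(M, points)
        M = h_mul(M, h_exp(gx / n, gy / n, gz / n))
    return M
```

At the returned point the sum of tangent vectors vanishes, and the two off-diagonal coordinates of the mean are the arithmetic means of the corresponding coordinates of the inputs.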

For the case n = 2, it follows from (50) that the Riemannian mean of given matrices in UP(2) is their arithmetic mean.

Next, for n = 3, we obtain the Riemannian mean on UP(3) (the Heisenberg group) as follows.

Theorem 6. *Given matrices on the Heisenberg group in coordinate form, their Riemannian mean on the Heisenberg group is given in closed form by (54), where the barred quantities denote the corresponding arithmetic means.*

*Proof.* First, let us write the Riemannian mean in coordinate form.
Then, note that, for the given matrices on UP(3), their Riemannian mean has to satisfy (50), so we obtain the following solutions:
Introducing the abbreviations above for convenience, we see that (54) is valid.

This completes the proof of Theorem 6.

More generally, for n > 3, we can obtain the Riemannian mean on UP(n) from the following theorem.

Theorem 7. *Take n > 3. For given matrices in UP(n), assume that they are written in block form; then, the Riemannian mean of the matrices has the same block structure, where one block is the Riemannian mean of the corresponding blocks of the given matrices and the other block is given by the formula (58).*

*Proof.* For simplicity of exposition, we suppose that the Riemannian mean is a block matrix of the same form as the given matrices. Since the Riemannian mean of the matrices must satisfy (50), we substitute the block matrix forms (59) and (57) into (50). Then, we obtain a matrix equation for the Riemannian mean, which shows that (58) is valid and that the remaining block satisfies the stated equation. Moreover, from (41), the leading block is seen to be the Riemannian mean of the corresponding blocks of the given matrices. Finally, writing the result in the stated form completes the proof of Theorem 7.

As shown above, we have given an iterative formula for computing the Riemannian mean in any dimension. Either (51) or (54) can be chosen as the initial formula.

##### 4.3. Simulations on UP

In this section, we give two examples to illustrate the results about the Riemannian mean on the Heisenberg group, regarded as a 3-dimensional space.

*Example 8.* Consider the Riemannian mean of three points on the Heisenberg group. Using (43), we can obtain the geodesics between the three points, which form a geodesic triangle. In Figure 3, all of the curves are geodesics. Moreover, as shown in Figure 4, the midpoint of each geodesic is easily obtained by (44). Thus, each centerline connects a vertex to the midpoint of its opposing side. On the Heisenberg group, these centerlines meet in a single point, which coincides with the Riemannian mean computed by (54), denoted by a red dot in Figure 4.

*Example 9.* Given four points on the Heisenberg group, we can obtain a geodesic tetrahedron from (43) (see Figure 5), where all curves are geodesics. Moreover, similarly to Example 8, the Riemannian means of the three vertices on each curved face are obtained, denoted by red circles (see Figure 6). Then, we plot each centerline connecting a vertex to the Riemannian mean of its opposing face. These centerlines still meet in a single point, denoted by a red pentacle. In fact, this point is the Riemannian mean of the four given points, computed by applying (54).

#### 5. Conclusion

In this paper, we have considered the Riemannian means on the special Euclidean group SE(n) and the unipotent matrix group UP(n), respectively. Based on the left invariant metric on the matrix Lie groups, we obtained the geodesic distance between any two points and took the sum of the squared distances as a cost function. Furthermore, we computed the Riemannian mean on SE(n) using a Riemannian gradient algorithm and gave an iterative formula for computing the Riemannian mean on UP(n) based on its Jacobi field. Finally, we presented several numerical simulations on SE(3) and UP(3) to illustrate our results.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

The authors wish to express their appreciation to the reviewers for their helpful suggestions, which greatly improved the presentation of this paper. This work was supported by the National Natural Science Foundation of China (nos. 61179031 and 10932002).

#### References

1. G. Cigler and R. Drnovšek, “From local to global similarity of matrix groups,” *Linear Algebra and Its Applications*, vol. 435, no. 6, pp. 1285–1295, 2011.
2. A. L. Silvestre and T. Takahashi, “On the motion of a rigid body with a cavity filled with a viscous liquid,” *Proceedings of the Royal Society of Edinburgh A*, vol. 142, no. 2, pp. 391–423, 2012.
3. H. A. Ardakani and T. J. Bridges, “Shallow-water sloshing in vessels undergoing prescribed rigid-body motion in three dimensions,” *Journal of Fluid Mechanics*, vol. 667, pp. 474–519, 2011.
4. F. Li, K. Wang, J. Guo, and J. Ma, “Suborbits of a point stabilizer in the orthogonal group on the last subconstituent of orthogonal dual polar graphs,” *Linear Algebra and Its Applications*, vol. 436, no. 5, pp. 1297–1311, 2012.
5. K. Zhu, “Invariance of Fock spaces under the action of the Heisenberg group,” *Bulletin des Sciences Mathematiques*, vol. 135, no. 5, pp. 467–474, 2011.
6. P. T. Fletcher, C. Lu, and S. Joshi, “Statistics of shape via principal geodesic analysis on Lie groups,” in *Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '03)*, vol. 1, pp. 95–101, June 2003.
7. L. Merckel and T. Nishida, “Change-point detection on the Lie group SE(3),” in *Computer Vision, Imaging and Computer Graphics. Theory and Applications*, vol. 229 of *Communications in Computer and Information Science*, pp. 230–245, Springer, Berlin, Germany, 2011.
8. K. O. Johnson, R. K. Robison, and J. G. Pipe, “Rigid body motion compensation for spiral projection imaging,” *IEEE Transactions on Medical Imaging*, vol. 30, no. 3, pp. 655–665, 2011.
9. M. Moakher, “Means and averaging in the group of rotations,” *SIAM Journal on Matrix Analysis and Applications*, vol. 24, no. 1, pp. 1–16, 2002.
10. S. Fiori and T. Tanaka, “An algorithm to compute averages on matrix Lie groups,” *IEEE Transactions on Signal Processing*, vol. 57, no. 12, pp. 4734–4743, 2009.
11. J. H. Manton, “A globally convergent numerical algorithm for computing the centre of mass on compact Lie groups,” in *Proceedings of the 8th International Conference on Control, Automation, Robotics and Vision (ICARCV '04)*, vol. 3, pp. 2211–2216, Kunming, China, December 2004.
12. S. Fiori, “Solving minimal-distance problems over the manifold of real-symplectic matrices,” *SIAM Journal on Matrix Analysis and Applications*, vol. 32, no. 3, pp. 938–968, 2011.
13. E. Marberg, “Superclasses and supercharacters of normal pattern subgroups of the unipotent upper triangular matrix group,” *Journal of Algebraic Combinatorics*, vol. 35, no. 1, pp. 61–92, 2012.
14. N. Thiem, “Branching rules in the ring of superclass functions of unipotent upper-triangular matrices,” *Journal of Algebraic Combinatorics*, vol. 31, no. 2, pp. 267–298, 2010.
15. A. Kirillov, *An Introduction to Lie Groups and Lie Algebras*, Cambridge University Press, Cambridge, UK, 2008.
16. W. Chen and X. Li, *An Introduction to Riemannian Geometry*, Peking University Press, Beijing, China, 2002.
17. B. C. Hall, *Lie Groups, Lie Algebras, and Representations: An Elementary Introduction*, Springer, New York, NY, USA, 2003.
18. J. M. Selig, *Geometric Fundamentals of Robotics*, Springer, New York, NY, USA, 2nd edition, 2005.
19. M. Žefran, V. Kumar, and C. B. Croke, “On the generation of smooth three-dimensional rigid body motions,” *IEEE Transactions on Robotics and Automation*, vol. 14, no. 4, pp. 576–589, 1998.
20. X. Zhang, *Matrix Analysis and Applications*, Tsinghua University Press, Beijing, China, 2004.
21. H. Karcher, “Riemannian center of mass and mollifier smoothing,” *Communications on Pure and Applied Mathematics*, vol. 30, no. 5, pp. 509–541, 1977.
22. M. Arnaudon and X. Li, “Barycenters of measures transported by stochastic flows,” *Annals of Probability*, vol. 33, no. 4, pp. 1509–1543, 2005.
23. F. Barbaresco, “Interactions between symmetric cones and information geometrics: Bruhat-Tits and Siegel spaces models for high resolution autoregressive Doppler imagery,” in *Emerging Trends in Visual Computing*, vol. 5416 of *Lecture Notes in Computer Science*, pp. 124–163, Springer, Berlin, Germany, 2009.