Mathematical Problems in Engineering
Volume 2018, Article ID 6598025, 7 pages
https://doi.org/10.1155/2018/6598025
Research Article

Extrinsic Least Squares Regression with Closed-Form Solution on Product Grassmann Manifold for Video-Based Recognition

1Beijing Key Lab of Multimedia and Intelligent Software Technology, Faculty of Information Technology, Beijing University of Technology, 100 Pingleyuan, Chaoyang District, Beijing 100124, China
2School of Software Technology, Dalian University of Technology, No. 2 Linggong Road, Ganjingzi District, Dalian 116024, China

Correspondence should be addressed to Lichun Wang; wanglc@bjut.edu.cn

Received 21 August 2017; Accepted 30 January 2018; Published 1 March 2018

Academic Editor: Simone Bianco

Copyright © 2018 Yuping Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Least squares regression is a fundamental tool in statistical analysis and is more effective than some complicated models when the number of training samples is small. Representing multidimensional data on the product Grassmann manifold has recently led to notable results in various visual recognition tasks. This paper proposes an extrinsic least squares regression with the Projection Metric on the product Grassmann manifold, obtained by embedding the Grassmann manifold into the space of symmetric matrices via an isometric mapping. The proposed regression has a closed-form solution, which is more accurate than the numerical solution of previous least squares regression based on geodesic distance. Experiments on several recognition tasks show that the proposed method achieves considerable accuracy in comparison with some state-of-the-art methods.

1. Introduction

As an important application of computer vision, video-based recognition, such as action recognition [1], attracts more and more attention. For inferring the correct label of a query against a given database of examples, there are mainly two kinds of methods: one is based on representations built from handcrafted features, and the other is based on deep learning architectures such as Convolutional Neural Networks (CNNs) [2]. Generally speaking, deep learning algorithms have been shown to be successful when a large amount of data is available [3, 4]. However, the databases for many recognition tasks in daily life are small. In this case, deep learning algorithms lose their effectiveness, and it becomes important to analyze the structure of the data and represent it with discriminative features.

Nowadays, the Grassmann manifold has proven to be a powerful representation for video-based applications such as activity classification [5], action recognition [6], age estimation [7], and face recognition [8, 9]. In these applications, the Grassmann manifold is used to characterize the intrinsic geometry of the data. Taking one representative work as an example, Lui [10] factorized a data tensor using Higher Order Singular Value Decomposition (HOSVD) and placed each factorized element on a Grassmann manifold. This representation yields a very discriminative structure for action recognition.

Inference on manifolds can be achieved extrinsically by embedding the manifold into a Euclidean space, which can be viewed as flattening the manifold. In the literature, the most popular choice of embedding is via tangent spaces [11, 12]. For example, Lui [10] presented a least squares regression on the product Grassmann manifold, in which a weighted combination of the training samples is computed in the tangent space and projected back to the Grassmann manifold by the standard logarithmic and exponential maps. In a tangent space, only distances from points to the tangent pole equal the geodesic distance, which is restrictive and may lead to inaccurate modeling. An alternative method embeds the Grassmann manifold into the space of symmetric matrices by a diffeomorphism [13] and uses the Projection Metric [14], which is equivalent to the true Grassmann geodesic distance up to a scale of $\sqrt{2}$.

In this paper, representing multidimensional data on the product Grassmann manifold in the same form as Lui [10], we propose an extrinsic least squares regression on the product Grassmann manifold using the Projection Metric and give a closed-form solution, which is more accurate. Least squares regression, as a simple statistical model, has many advantages: it is easy to compute and is more effective than some complicated models when the number of training samples is small [15]. We evaluate the proposed method on three small-scale datasets covering hand gestures, ballet actions, and traffic scenes; the high recognition rates show that our method is competitive with several state-of-the-art methods.

The rest of this paper is organized as follows. Section 2 introduces the mathematical background; Section 3 gives the product Grassmann manifold representation for video; Section 4 presents distances on the product Grassmann manifold; Section 5 proposes the extrinsic least squares regression on the product Grassmann manifold; Section 6 describes classification based on the extrinsic least squares regression; Section 7 reports experiments on different datasets, showing that the proposed method achieves considerable accuracy; Section 8 analyzes the time complexity of the proposed method; and Section 9 concludes.

2. Mathematical Background

In this section, we introduce the mathematical background used in this paper.

2.1. Grassmann Manifold

The Stiefel manifold $\mathcal{ST}(p, d)$ is the set of all $d \times p$ matrices with orthonormal columns; that is, $\mathcal{ST}(p, d) = \{X \in \mathbb{R}^{d \times p} : X^TX = I_p\}$, where $I_p$ is the $p \times p$ identity matrix. The Grassmann manifold $\mathcal{G}(p, d)$ can be defined as a quotient manifold of $\mathcal{ST}(p, d)$ under the equivalence relation $X_1 \sim X_2 \Leftrightarrow \operatorname{span}(X_1) = \operatorname{span}(X_2)$, where $\operatorname{span}(X)$ is the subspace spanned by the columns of $X$. In other words, the Grassmann manifold is the space of $p$-dimensional linear subspaces of $\mathbb{R}^d$ for $0 < p \le d$ [16], and a point on it may be specified by an arbitrary orthonormal matrix of dimension $d \times p$. Notice that the choice of matrix for a point on the Grassmann manifold is not unique; that is, the same point can be spanned by different matrices $X$ and $XQ$ for any $p \times p$ orthogonal matrix $Q$.
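The following minimal sketch illustrates this representation numerically; the dimensions $d = 10$, $p = 3$ and all variable names are our own illustrative choices, not from the paper.

```python
import numpy as np

# A Grassmann point represented by an orthonormal basis (a Stiefel point).
d, p = 10, 3
rng = np.random.default_rng(0)
X, _ = np.linalg.qr(rng.standard_normal((d, p)))  # orthonormal columns
assert np.allclose(X.T @ X, np.eye(p))            # X^T X = I_p

# Non-uniqueness of the representative: X and XQ span the same subspace,
# and the projection matrix XX^T is invariant to the choice of Q.
Q, _ = np.linalg.qr(rng.standard_normal((p, p)))  # random p x p orthogonal matrix
assert np.allclose((X @ Q) @ (X @ Q).T, X @ X.T)
```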

2.2. Higher Order Singular Value Decomposition (HOSVD)

HOSVD is a multilinear SVD operating on tensors. Let $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ be a tensor of order $N$. The process of reordering the elements of an $N$-mode tensor into a matrix is called matricization; the mode-$k$ matricization of $\mathcal{A}$ is denoted by $A_{(k)}$ (see details in [17]). Each $A_{(k)}$ is then factored using the SVD as
$$A_{(k)} = U_k \Sigma_k V_k^T,$$
where $\Sigma_k$ is a diagonal matrix of singular values, $U_k$ is an orthogonal matrix spanning the column space of $A_{(k)}$, and $V_k$ is an orthogonal matrix spanning the row space of $A_{(k)}$. Using HOSVD, an order-$N$ tensor can be decomposed as
$$\mathcal{A} = \mathcal{S} \times_1 U_1 \times_2 U_2 \cdots \times_N U_N,$$
where $\mathcal{S}$ is the core tensor, the $U_k$ are the orthogonal matrices given by the SVDs above, and $\times_k$ denotes mode-$k$ multiplication.
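A minimal NumPy sketch of this decomposition follows; the helper names `unfold`, `fold`, and `mode_multiply` and the toy tensor sizes are our own.

```python
import numpy as np

def unfold(T, k):
    """Mode-k matricization: move mode k to the front, flatten the rest."""
    return np.moveaxis(T, k, 0).reshape(T.shape[k], -1)

def fold(M, k, shape):
    """Inverse of unfold for a tensor of the given shape."""
    rest = [s for i, s in enumerate(shape) if i != k]
    return np.moveaxis(M.reshape([shape[k]] + rest), 0, k)

def mode_multiply(T, M, k):
    """Mode-k product T x_k M, via the matricized identity M @ T_(k)."""
    shape = list(T.shape)
    shape[k] = M.shape[0]
    return fold(M @ unfold(T, k), k, shape)

# HOSVD of a toy 3rd-order tensor (sizes are arbitrary for illustration).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 5, 6))
U = [np.linalg.svd(unfold(A, k))[0] for k in range(3)]  # mode-k left singular vectors
S = A
for k in range(3):
    S = mode_multiply(S, U[k].T, k)   # core: S = A x_1 U1^T x_2 U2^T x_3 U3^T
B = S
for k in range(3):
    B = mode_multiply(B, U[k], k)     # reconstruction A = S x_1 U1 x_2 U2 x_3 U3
assert np.allclose(B, A)
```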

2.3. Product Manifold

Let $\mathcal{M}_1, \mathcal{M}_2, \ldots, \mathcal{M}_n$ be manifolds; the product manifold of these manifolds is defined as
$$\mathcal{M} = \mathcal{M}_1 \times \mathcal{M}_2 \times \cdots \times \mathcal{M}_n,$$
where $\times$ denotes the Cartesian product and each $\mathcal{M}_i$ is called a factor manifold.

3. Product Grassmann Manifold Representation for Video

Video is a kind of multidimensional data and can be represented as a tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$, where $I_1$, $I_2$, and $I_3$ represent the height, width, and length of the video, respectively. The variation of each mode can be captured by HOSVD. Lui et al. [18] found that the traditional HOSVD is not appropriate for forming a product manifold, so they redefined it to factorize the tensor using the orthogonal matrices $V_1$, $V_2$, and $V_3$ spanning the row spaces of the unfoldings, as described in Section 2.2. That is,
$$\mathcal{A} = \mathcal{S} \times_1 V_1 \times_2 V_2 \times_3 V_3,$$
where $\mathcal{S}$ is the core tensor.

Since each $V_k$ is a tall orthogonal matrix, it is a point on a Stiefel manifold, and hence $\operatorname{span}(V_k)$ is a point on a Grassmann manifold. Therefore $(\operatorname{span}(V_1), \operatorname{span}(V_2), \operatorname{span}(V_3))$ is a point on a product Grassmann manifold, which serves as the representation for videos.
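Below is a hedged sketch of this mapping: each factor is the tall orthonormal matrix spanning the row space of one mode unfolding (the $V$ matrix of an economy SVD). The subspace dimension `rank`, the toy video size, and all names are illustrative assumptions.

```python
import numpy as np

def unfold(T, k):
    return np.moveaxis(T, k, 0).reshape(T.shape[k], -1)

def video_to_pgm(video, rank=10):
    factors = []
    for k in range(3):
        # rows of Vt are orthonormal and span the row space of the unfolding
        _, _, Vt = np.linalg.svd(unfold(video, k), full_matrices=False)
        factors.append(Vt[:rank].T)  # tall orthonormal matrix: a Grassmann point
    return factors

rng = np.random.default_rng(0)
clip = rng.standard_normal((20, 20, 32))  # toy grayscale video (H, W, frames)
V1, V2, V3 = video_to_pgm(clip)
assert np.allclose(V1.T @ V1, np.eye(V1.shape[1]))
```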

4. Distance on Product Grassmann Manifold

The natural metric on the Grassmann manifold is the geodesic distance, the length of the shortest curve between two $p$-dimensional subspaces $\operatorname{span}(X_1)$ and $\operatorname{span}(X_2)$; that is, $d_g(X_1, X_2) = \|\Theta\|_2$, with $\Theta = (\theta_1, \ldots, \theta_p)$ representing the principal angles between the subspaces [16]. Recently, Chikuse [13] introduced a projection embedding $\Pi: \mathcal{G}(p, d) \to \operatorname{Sym}(d)$, $\Pi(X) = XX^T$, where $\operatorname{Sym}(d)$ denotes the space of $d \times d$ symmetric matrices. And Hamm and Lee [19] defined a distance called the Projection Metric on the Grassmann manifold as follows.

Definition 1. Given two points $X_1$ and $X_2$ on the Grassmann manifold $\mathcal{G}(p, d)$, the distance between $X_1$ and $X_2$ is defined as
$$d_P(X_1, X_2) = \frac{1}{\sqrt{2}} \|X_1X_1^T - X_2X_2^T\|_F.$$

Remark 2. In fact, for any matrix $X' = XQ$ with $Q$ a $p \times p$ orthogonal matrix, we have $X'X'^T = XQQ^TX^T = XX^T$; that is, the embedded element $\Pi(X')$ is equal to $\Pi(X)$. In this case, $d_P(X', X_2) = d_P(X, X_2)$. Hence it is feasible to use the matrix $XX^T$ to represent the point $\operatorname{span}(X)$. Moreover, $d_P$ is equivalent to the geodesic distance between two points on the Grassmann manifold up to a scale of $\sqrt{2}$ [14].
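A short sketch of the Projection Metric of Definition 1 (including the $1/\sqrt{2}$ factor); the dimensions and names are illustrative.

```python
import numpy as np

def proj_metric(X1, X2):
    """Projection distance between subspaces given by orthonormal bases."""
    return np.linalg.norm(X1 @ X1.T - X2 @ X2.T, 'fro') / np.sqrt(2)

rng = np.random.default_rng(0)
X1, _ = np.linalg.qr(rng.standard_normal((10, 3)))
X2, _ = np.linalg.qr(rng.standard_normal((10, 3)))
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
# Well defined on the quotient: the distance ignores the choice of basis.
assert np.isclose(proj_metric(X1 @ Q, X2), proj_metric(X1, X2))
```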

Based on Definition 1, we define a distance on the product Grassmann manifold that sums the distances over the factor Grassmann manifolds.

Definition 3. Given two points $\mathcal{X} = (X_1, \ldots, X_M)$ and $\mathcal{Y} = (Y_1, \ldots, Y_M)$ on the product Grassmann manifold $\mathcal{G}(p_1, d_1) \times \cdots \times \mathcal{G}(p_M, d_M)$, the distance between $\mathcal{X}$ and $\mathcal{Y}$ is defined as
$$d_{\mathcal{PG}}(\mathcal{X}, \mathcal{Y}) = \sum_{m=1}^{M} d_P(X_m, Y_m).$$
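A one-function sketch of Definition 3, reusing `proj_metric` from the previous snippet:

```python
# The product-manifold distance sums the factor-wise projection distances.
def pgm_distance(Xs, Ys):
    """Xs, Ys: tuples of orthonormal factor matrices, one per factor manifold."""
    return sum(proj_metric(X, Y) for X, Y in zip(Xs, Ys))
```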

5. Extrinsic Least Squares Regression on Product Grassmann Manifold

Least squares regression is a simple and efficient technique in statistical analysis. In Euclidean space, the parameter $\beta$ is estimated by minimizing the residual sum-of-squares error
$$\varepsilon(\beta) = \|y - X\beta\|_2^2,$$
where $X = [x_1, x_2, \ldots, x_n]$ is the training set (one sample per column) and $y$ is the regression value. The estimated parameter has the closed-form solution
$$\hat\beta = (X^TX)^{-1}X^Ty.$$
Hence the corresponding error is
$$\varepsilon = \|y - X(X^TX)^{-1}X^Ty\|_2^2.$$

In the Grassmann manifold setting, Lui [10] extended linear least squares regression to a nonlinear form. In detail, the estimated parameter is
$$\hat\beta = (\mathcal{X} \circledast \mathcal{X})^{-1}(\mathcal{X} \circledast \mathcal{Y}),$$
where $\circledast$ is a nonlinear similarity operator, $\mathcal{X} = \{X_1, \ldots, X_n\}$ is a set of training samples on the manifold, and $\mathcal{Y}$ is an element on the manifold. The corresponding error is
$$\varepsilon = d^2(\mathcal{Y}, \mathcal{X} \diamond \hat\beta),$$
where $\diamond$ is an operator mapping points from the vector space back to the manifold. Since the Grassmann manifold is not closed under ordinary matrix addition and subtraction, this mapping is realized by the exponential map and its inverse and has no closed-form solution; to realize the composition map, an improved Karcher mean computation algorithm is employed. To avoid the cost and approximation error of this iterative algorithm, we introduce an extrinsic least squares regression on the Grassmann manifold by embedding its elements into the space of symmetric matrices. Because the distance on the product Grassmann manifold in Definition 3 is additive over the factors, the extrinsic least squares regression on the product Grassmann manifold decomposes into three independent subregression problems, one per factor manifold. Taking one factor as an example, we give the details in the following.

Let $\{X_i\}_{i=1}^{n} \subset \mathcal{G}(p, d)$ be the training set, where $n$ is the number of samples, let $\beta = (\beta_1, \ldots, \beta_n)^T \in \mathbb{R}^n$ be the fitting parameter, and let $Y \in \mathcal{G}(p, d)$ be the regression value. Similar to the idea of least squares regression in Euclidean space, we give a regression on the Grassmann manifold, defined in the embedded space of symmetric matrices. The residual is measured as
$$\varepsilon(\beta) = \Big\| YY^T - \sum_{i=1}^{n} \beta_i X_iX_i^T \Big\|_F^2,$$
where $\beta_i$ is the $i$th element of the vector $\beta$. Next we show how to solve this optimization. Expanding the Frobenius norm, we have
$$\varepsilon(\beta) = \operatorname{tr}(YY^TYY^T) - 2\sum_{i=1}^{n}\beta_i \operatorname{tr}(X_iX_i^TYY^T) + \sum_{i=1}^{n}\sum_{j=1}^{n}\beta_i\beta_j \operatorname{tr}(X_iX_i^TX_jX_j^T),$$
and we define $b \in \mathbb{R}^n$ and $A \in \mathbb{R}^{n \times n}$ by
$$b_i = \operatorname{tr}(X_iX_i^TYY^T) = \|X_i^TY\|_F^2, \qquad A_{ij} = \operatorname{tr}(X_iX_i^TX_jX_j^T) = \|X_i^TX_j\|_F^2.$$
Hence the model becomes
$$\varepsilon(\beta) = \beta^TA\beta - 2\beta^Tb + \operatorname{tr}(YY^TYY^T),$$
where the last term is a constant (indeed $\operatorname{tr}(YY^TYY^T) = \operatorname{tr}(I_p) = p$). Setting the derivative with respect to $\beta$ equal to 0, we have
$$2A\beta - 2b = 0.$$
So the solution of the optimization is
$$\hat\beta = A^{-1}b.$$
Hence the corresponding error becomes
$$\varepsilon = \Big\| YY^T - \sum_{i=1}^{n} \hat\beta_i X_iX_i^T \Big\|_F^2.$$
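A hedged sketch of this closed-form subregression on one factor manifold follows. Using $\operatorname{tr}(X_iX_i^TYY^T) = \|X_i^TY\|_F^2$, only small $p \times p$ products are formed; the ridge term `eps` is our own safeguard against a singular $A$, not part of the derivation above.

```python
import numpy as np

def elsr_fit(train, Y, eps=1e-8):
    """train: list of n orthonormal d x p bases; Y: query basis.
    Returns the coefficients beta and the residual error."""
    n = len(train)
    A = np.array([[np.sum((Xi.T @ Xj) ** 2) for Xj in train] for Xi in train])
    b = np.array([np.sum((Xi.T @ Y) ** 2) for Xi in train])
    beta = np.linalg.solve(A + eps * np.eye(n), b)
    # residual = ||YY^T - sum_i beta_i X_i X_i^T||_F^2, expanded using
    # tr(YY^T YY^T) = p, so no d x d matrix is ever formed
    residual = Y.shape[1] - 2.0 * b @ beta + beta @ A @ beta
    return beta, residual
```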

6. Recognition Based on Extrinsic Least Squares Regression

In this section, we consider the 3-order product Grassmann manifold for videos; the situation for higher orders is similar. Suppose $C$ classes are defined for the data. We denote the training set of the $c$th class by $\{\mathcal{X}_i^c\}_{i=1}^{n_c}$, where $n_c$ is the number of samples. Our objective is to infer to which class a test sample belongs.

The residual error of a query sample $\mathcal{Y} = (Y_1, Y_2, Y_3)$ for class $c$ is defined as
$$\varepsilon_c = \sum_{m=1}^{3} \Big\| Y_mY_m^T - \sum_{i=1}^{n_c} \hat\beta_{m,i}^{c}\, X_{m,i}^{c} (X_{m,i}^{c})^T \Big\|_F^2,$$
where $\hat\beta_1^{c}$, $\hat\beta_2^{c}$, and $\hat\beta_3^{c}$ are the solutions of the subregressions on each factor Grassmann manifold, respectively. The category of the query sample is determined by
$$c^{*} = \arg\min_{c} \varepsilon_c.$$
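A sketch of this decision rule, reusing `elsr_fit` from the previous snippet; the dictionary layout is our own illustrative convention.

```python
def classify(train_by_class, query):
    """train_by_class: {label: [samples_factor1, samples_factor2, samples_factor3]},
    where each samples_factor_m is a list of orthonormal bases; query: (Y1, Y2, Y3)."""
    errors = {
        label: sum(elsr_fit(factors[m], query[m])[1] for m in range(3))
        for label, factors in train_by_class.items()
    }
    return min(errors, key=errors.get)  # class with the smallest summed residual
```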

7. Experiments on Different Datasets

In this section, we compare the performance of the proposed method with several state-of-the-art methods on two kinds of tasks: action recognition and scene analysis.

7.1. Action Recognition
7.1.1. Cambridge Hand Gesture Dataset

The Cambridge hand gesture dataset [20] contains 900 video sequences of nine kinds of hand gestures, divided into 5 sets according to illumination. Figure 1 shows some hand gesture samples. Set 5 (normal illumination) is used for training, while the remaining sequences (with different illumination characteristics) are used for testing. The original sequences are converted to grayscale and resized to a common resolution. We denote our method as ELSR and report the correct recognition rate (CRR) on the four test illumination sets in Table 1. Compared with product manifold (PM) [10], Grassmann Sparse Coding (gSC) [14], Grassmann Locality-Constrained Coding (gLC) [14], kernel Grassmann Sparse Coding (kgSC) [14], and kernel Grassmann Locality-Constrained Coding (kgLC) [14], our method is competitive with these state-of-the-art methods.

Table 1: Recognition results on the Cambridge hand gesture dataset.
Figure 1: Cambridge hand gesture samples: (a) from left to right (flat-leftward; flat-rightward; flat-contract; spread-leftward; spread-rightward); (b) from left to right (spread-contract; V-shape-leftward; V-shape-rightward; V-shape-contract).
7.1.2. Ballet Dataset

The Ballet dataset [21] consists of 44 videos collected from a ballet instruction DVD. It contains 8 complex motion patterns performed by 3 persons: “right-to-left hand opening,” “left-to-right hand opening,” “standing hand opening,” “jumping,” “leg swinging,” “hopping,” “turning,” and “standing still.” The main challenge of this dataset is the large intra-class variation in speed, clothing, and movement paths. Figure 2 shows some examples from the dataset. Table 2 shows that ELSR has superior performance compared with gSC-dic, gLC-dic, kgSC-dic, and kgLC-dic [14].

Table 2: Correct recognition rate on the Ballet dataset.
Figure 2: Examples from the Ballet dataset.
7.2. Scene Analysis

For scene analysis, we use the UCSD traffic dataset [22], which contains 254 videos of highway traffic under different weather conditions, with 42 to 52 frames per video. The dataset is divided into three classes (“heavy,” “medium,” and “light”) according to the traffic congestion level: 44 sequences are labeled as heavy traffic, 45 as medium traffic, and 165 as light traffic. Figure 3 shows some typical examples. In the experiments, we use the first 40 frames of each video, normalized to grayscale at a resolution of 48 × 48. We adopt the four pairs of training and testing sets provided in [23]. The classification results are shown in Table 3; the average correct recognition rate of ELSR is higher than those of gSC and gLC but lower than those of kgSC and kgLC.

Table 3: Average correct recognition rate on the UCSD video traffic dataset.
Figure 3: Examples from the UCSD dataset: “light”; “medium”; “heavy.”
7.3. Discussion

From the above experiments, we conclude that the proposed method is more effective for action recognition than for scene analysis. The product Grassmann manifold captures appearance, horizontal motion, and vertical motion through its three factor manifolds. To visualize the product manifold representation, the overlaid appearance, horizontal motion, and vertical motion of examples from the three datasets are shown in Figure 4. Note that the hand gesture examples exhibit obvious variation along the horizontal motion, and the Ballet examples along both horizontal and vertical motion. The curves in the last two columns characterize the motion and are the key factors for recognition, which explains the higher CRR of ELSR on the Ballet dataset. Meanwhile, for samples from UCSD, the horizontal and vertical motion features are not distinctive because all cars run along the same path; the critical factor is appearance, which characterizes the number of cars. Hence, for the UCSD dataset, the CRR of ELSR is only slightly higher than those of gSC and gLC and lower than those of kgSC and kgLC, which map the data to higher-dimensional spaces using kernel functions to reduce nonlinearity.

Figure 4: Examples of the overlay appearance, vertical and horizontal motion on three datasets.

8. Performance Analysis

We analyze the time complexity of inferring the label of a query given $n$ training samples per class. The main computational steps are solving the subregressions of Section 5 and evaluating the classification rule of Section 6. Take one factor manifold $\mathcal{G}(p, d)$ as an example. Terms that depend only on the training samples, such as $A$ and $A^{-1}$, can be computed offline. For computing $\hat\beta = A^{-1}b$, the time complexity of computing one element $b_i = \|X_i^TY\|_F^2$ of the vector $b$ is $O(p^2d)$; the complexity of the vector $b$ is then $O(np^2d)$, and hence the complexity of obtaining the solution is $O(np^2d + n^2)$. Computing the error via $\varepsilon = p - 2b^T\hat\beta + \hat\beta^TA\hat\beta$ needs $O(n^2)$. Therefore the whole online time complexity of our approach, per class and per factor manifold, is $O(np^2d + n^2)$. For small-scale datasets, where $n$, $p$, and $d$ are all small, the proposed method is effective and the time complexity is moderate, as in the Cambridge hand gesture experiment.
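A sketch of the offline/online split suggested by this analysis: $A$ depends only on the training set, so it can be factorized once offline, and each query then costs only the vector $b$ and two triangular solves. The function names and the ridge term are our own.

```python
import numpy as np

def offline(train, eps=1e-8):
    n = len(train)
    A = np.array([[np.sum((Xi.T @ Xj) ** 2) for Xj in train] for Xi in train])
    # A is a Gram matrix of projection matrices, hence positive semidefinite
    return np.linalg.cholesky(A + eps * np.eye(n))

def online(train, L, Y):
    b = np.array([np.sum((Xi.T @ Y) ** 2) for Xi in train])
    return np.linalg.solve(L.T, np.linalg.solve(L, b))  # beta = A^{-1} b
```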

9. Conclusion

In this paper, we propose an extrinsic least squares regression on the product Grassmann manifold. A video is viewed as a third-order tensor and transformed into a point on the product Grassmann manifold through HOSVD factorization. One advantage of this method is that the regression has a closed-form solution, which leads to more accurate recognition; moreover, the proposed method remains efficient when the number of training samples is small. Experiments on different recognition tasks (hand gesture recognition, action recognition, and scene analysis) show that our method performs very well on three small-scale public datasets.

In future work, we would like to devise a kernel version of the extrinsic least squares regression on the product manifold.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research is supported by the National Natural Science Foundation of China (nos. 61390510, 61632006, and 61772049), the Beijing Natural Science Foundation (no. 4162009), Funding Project for Academic Human Resources Development in Institutions of Higher Learning under the Jurisdiction of Beijing Municipality and Jing-Hua Talents Project of Beijing University of Technology, and Funding Project of Beijing Municipal Human Resources and Social Security Bureau (no. 2017-ZZ-031).

References

  1. S. Herath, M. Harandi, and F. Porikli, “Going deeper into action recognition: a survey,” Image and Vision Computing, vol. 60, pp. 4–21, 2017.
  2. C. Ding and D. Tao, “Trunk-branch ensemble convolutional neural networks for video-based face recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PP, no. 99, 2017.
  3. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '16), pp. 770–778, Las Vegas, Nev, USA, June 2016.
  4. R. K. Srivastava, K. Greff, and J. Schmidhuber, “Training very deep networks,” in Proceedings of the 29th Annual Conference on Neural Information Processing Systems (NIPS 2015), pp. 2377–2385, Montreal, Canada, December 2015.
  5. P. Turaga and R. Chellappa, “Locally time-invariant models of human activities using trajectories on the Grassmannian,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops 2009), pp. 2435–2441, USA, June 2009.
  6. Y. M. Lui, “Tangent bundles on special manifolds for action recognition,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 22, no. 6, pp. 930–942, 2012.
  7. P. Turaga, S. Biswas, and R. Chellappa, “The role of geometry in age estimation,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2010), pp. 946–949, USA, March 2010.
  8. Y. M. Lui and J. R. Beveridge, “Grassmann registration manifolds for face recognition,” in Computer Vision – ECCV 2008, vol. 5303 of Lecture Notes in Computer Science, pp. 44–57, Springer, Berlin, Heidelberg, 2008.
  9. Z. Huang, R. Wang, S. Shan, and X. Chen, “Projection metric learning on Grassmann manifold with application to video based face recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), pp. 140–149, USA, June 2015.
  10. Y. M. Lui, “Human gesture recognition on product manifolds,” Journal of Machine Learning Research, vol. 13, pp. 3297–3321, 2012.
  11. O. Tuzel, F. Porikli, and P. Meer, “Pedestrian detection via classification on Riemannian manifolds,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 10, pp. 1713–1727, 2008.
  12. M. Faraki, M. T. Harandi, A. Wiliem, and B. C. Lovell, “Fisher tensors for classifying human epithelial cells,” Pattern Recognition, vol. 47, no. 7, pp. 2348–2359, 2014.
  13. Y. Chikuse, Statistics on Special Manifolds, vol. 174 of Lecture Notes in Statistics, Springer-Verlag, New York, 2003.
  14. M. Harandi, R. Hartley, C. Shen, B. Lovell, and C. Sanderson, “Extrinsic methods for coding and dictionary learning on Grassmann manifolds,” International Journal of Computer Vision, vol. 114, no. 2-3, pp. 113–136, 2015.
  15. T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning, Springer, New York, NY, USA, 2001.
  16. P.-A. Absil, R. Mahony, and R. Sepulchre, Optimization Algorithms on Matrix Manifolds, Princeton University Press, Princeton, NJ, 2008.
  17. T. G. Kolda and B. W. Bader, “Tensor decompositions and applications,” SIAM Review, vol. 51, no. 3, pp. 455–500, 2009.
  18. Y. M. Lui, J. R. Beveridge, and M. Kirby, “Action classification on product manifolds,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 833–839, San Francisco, Calif, USA, June 2010.
  19. J. Hamm and D. D. Lee, “Grassmann discriminant analysis,” in Proceedings of the 25th International Conference on Machine Learning (ICML '08), pp. 376–383, Helsinki, Finland, July 2008.
  20. T.-K. Kim and R. Cipolla, “Canonical correlation analysis of video volume tensors for action categorization and detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 8, pp. 1415–1428, 2009.
  21. Y. Wang and G. Mori, “Human action recognition by semilatent topic models,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 10, pp. 1762–1774, 2009.
  22. A. B. Chan and N. Vasconcelos, “Probabilistic kernels for the classification of auto-regressive visual processes,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), pp. 846–851, USA, June 2005.
  23. A. B. Chan and N. Vasconcelos, “Classification and retrieval of traffic video using auto-regressive stochastic processes,” in Proceedings of the 2005 IEEE Intelligent Vehicles Symposium, pp. 771–776, USA, June 2005.