Computational Intelligence and Neuroscience
Volume 2016, Article ID 1760172, 7 pages
Research Article

Extract the Relational Information of Static Features and Motion Features for Human Activities Recognition in Videos

Li Yao1,2

1Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, Nanjing, Jiangsu Province, China
2State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, Jiangsu Province, China

Received 29 March 2016; Revised 14 July 2016; Accepted 7 August 2016

Academic Editor: Hong Man

Copyright © 2016 Li Yao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Both static features and motion features have shown promising performance on the human activity recognition task. However, the information contained in these features is insufficient for complex human activities. In this paper, we propose extracting the relational information between static features and motion features for human activity recognition. Videos are represented by the classical Bag-of-Words (BoW) model, which has proven useful in many prior works. To obtain a compact and discriminative codebook of small dimension, we employ a divisive algorithm based on KL-divergence to reconstruct the codebook. After that, to further capture strong relational information, we construct a bipartite graph to model the relationship between words from the two feature sets. We then apply a k-way partition to create a new codebook in which similar words are grouped together. With this new codebook, videos can be represented by a new BoW vector carrying strong relational information. Moreover, we propose a method to compute the new clusters from the divisive algorithm's projective function. We test our approach on several datasets and obtain very promising results.
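The divisive, KL-divergence-based codebook compression described above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes each codebook word is summarized by its class-conditional distribution p(class | word) and groups words whose distributions are close in KL divergence, using a simple k-means-style loop; the function name and all parameters are illustrative.

```python
import numpy as np

def kl(p, q):
    # KL divergence KL(p || q), with smoothing to avoid log(0)
    p = p + 1e-12
    q = q + 1e-12
    return np.sum(p * np.log(p / q))

def divisive_word_clustering(word_class_probs, k, iters=20, seed=0):
    """Sketch of KL-based divisive word clustering.

    word_class_probs: (n_words, n_classes) array, row i is p(class | word i).
    Returns an integer label in [0, k) for each word; words with similar
    class-conditional distributions end up in the same cluster.
    """
    rng = np.random.default_rng(seed)
    n = word_class_probs.shape[0]
    labels = rng.integers(0, k, size=n)
    for _ in range(iters):
        # Centroid of each cluster = mean distribution of its member words;
        # an empty cluster is re-seeded from a random word.
        cents = np.stack([
            word_class_probs[labels == j].mean(axis=0)
            if np.any(labels == j)
            else word_class_probs[rng.integers(n)]
            for j in range(k)
        ])
        # Reassign each word to the centroid with minimal KL divergence.
        new = np.array([
            np.argmin([kl(w, c) for c in cents])
            for w in word_class_probs
        ])
        if np.array_equal(new, labels):
            break
        labels = new
    return labels
```

A tiny usage example: words whose rows are identical distributions always receive the same cluster label, so the compressed codebook merges them into one entry.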