Abstract

Human action recognition is an important and challenging task. Projecting depth images onto three depth motion maps (DMMs) and extracting deep convolutional neural network (DCNN) features yield discriminant descriptors that characterize the spatiotemporal information of a specific action from a sequence of depth images. In this paper, a unified improved collaborative representation framework is proposed in which the probability that a test sample belongs to the collaborative subspace of all classes can be well defined and calculated. The improved collaborative representation classifier (ICRC), based on $\ell_2$-regularization, is presented for human action recognition to maximize the likelihood that a test sample belongs to each class; a theoretical investigation into ICRC shows that it obtains the final classification by computing the likelihood for each class. Coupled with the DMM and DCNN features, experiments on depth image-based action recognition with the MSRAction3D and MSRGesture3D datasets demonstrate that the proposed distance-based representation classifier achieves superior performance over state-of-the-art methods, including SRC, CRC, and SVM.

1. Introduction

Human action recognition has been studied in the computer vision community for decades, owing to its applications in video surveillance [1], human-computer interaction [2], and motion analysis [3]. Prior to the Microsoft Kinect, conventional research focused on human action recognition from RGB images, but Kinect sensors provide an affordable technology to capture RGB and depth (D) images in real time, which offers better geometric cues and less sensitivity to illumination changes for action recognition. In [1], a bag of 3D points and a graphical model are used to characterize the spatial and temporal information of depth images. In [3], depth maps are projected onto three depth motion maps (DMMs) to capture body shape and motion, yielding a discriminant feature that describes the spatiotemporal information of a specific action from a sequence of depth images. As seen from the literature review, although depth-based methods appear compelling for practical applications, and a few deep-learned features have been proposed for depth-based action recognition, the performance is still far from satisfactory due to the large variations of motion. In this paper, we focus on leveraging the structure of a representative model to improve performance in multiclass classification with the handcrafted DMM descriptor. In [4], three-channel deep convolutional neural networks are trained to extract features of depth map sequences after projecting weighted DMMs onto three orthogonal planes at several temporal scales. It was verified that the method using DCNN features achieves state-of-the-art results on the MSRAction3D and MSRGesture3D datasets. DCNNs have been demonstrated to be an effective kind of model for achieving state-of-the-art results in image recognition, segmentation, detection, and retrieval tasks. Following the success of DCNNs, we also adopt them for feature extraction and apply them in our classifier model.

As for representative models, many achievements based on sparse representation include image restoration [5], compressive sensing [6, 7], morphological component analysis [8], and super-resolution [9, 10]. With the advances of representation-based classifiers, several pattern recognition problems in the field of computer vision have been effectively solved by sparse coding or sparse representation methods in recent decades. In particular, the linear model can be represented as $y = Ax$ [11], where $y$, $x$, and $A$ represent the data, a sparse vector, and a given matrix with the overcomplete sample set, respectively. Because of the great success of sparse coding algorithms in image processing, sparse representation based classifiers, such as sparse representation classification (SRC) and collaborative representation classification (CRC), have gained much attention nowadays.

The basic idea of SRC/CRC is to code the test sample over a set of samples with sparsity constraints, which can be computed by $\ell_1$-minimization. In [12], Wright proposed the basic SRC model for classification based on the discriminative nature of sparse representation, building on the theory that new signals are recognized as linear combinations of previously observed ones. Based on SRC, Yang and Zhang proposed a Gabor occlusion dictionary based SRC, which significantly reduces the computational cost [13]. In [14], the authors combined sparse representation with linear pyramid matching for image classification. Rather than using the entire training set, Zhang and Li [15] proposed a learned dictionary for SRC. In [16], an $\ell_1$-graph is constructed by sparsely representing each sample over the other samples. Yang et al. [14] also proposed a method that preserves the $\ell_1$-graph by using a subspace to solve misalignment problems in the image classification task. Besides, SRC has been used for robust illumination [17], image-plane transformation [18], and so on. However, Zhang argued that the good performance of SRC should be largely attributed to the collaborative representation of a test sample by training samples across all classes and proposed the more effective CRC. In summary, SRC/CRC simply uses the reconstruction error or residual of each class-specific subspace to determine the class label, and many modified models and solution algorithms for SRC/CRC have also been proposed for visual recognition tasks, including the Augmented Lagrange Multiplier, Proximal Gradient, Gradient Projection, Iterative Shrinkage-Thresholding, and Homotopy methods [19]. Recently, some researchers [20, 21] have questioned the role of $\ell_1$-regularized sparsity in pattern classification. On the contrary, using an $\ell_2$-regularized representation for classification can do a similar job to $\ell_1$-regularization while greatly reducing the computational cost.

Motivated by these modifications of CRC, in this paper we mainly present the improved collaborative representation classifier (ICRC) based on $\ell_2$-regularization for human action recognition. Based on the three-DMM descriptor feature, the ICRC approach jointly maximizes the likelihood that a test sample belongs to each of the multiple classes, and the final classification is performed by computing the likelihood for each class. Experiments on human action classification tasks, including the MSRAction3D and MSRGesture3D datasets, demonstrate the superior performance of this algorithm over state-of-the-art methods, including SRC, CRC, and SVM. The rest of the paper is organized as follows. In Section 2, we introduce the related feature descriptors using DMMs. Section 3 details the action classifier based on ICRC, and Section 4 presents the experimental results of our approach on the relevant datasets. The conclusion is drawn in Section 5.

2. Feature Descriptors

2.1. Using Depth Motion Maps

In this section, we explain the feature descriptor extracted from depth images using depth motion maps (DMMs), which are generated by selecting and stacking the motion energy of depth maps projected onto three orthogonal Cartesian planes, aligned with the front ($f$), side ($s$), and top ($t$) views (i.e., $DMM_f$, $DMM_s$, and $DMM_t$, resp.). For each projected map, the motion energy is computed by thresholding the difference between consecutive maps. The binary map of motion energy provides a strong clue to the action category being performed and indicates the motion regions, that is, where movement happens in each temporal interval. We suggest that all frames should be used to calculate the motion information instead of selecting a subset of frames. Considering both the discriminability and robustness of the feature descriptors, we use the $L_1$-norm of the absolute difference between frames to define the salient information of a depth sequence. Because the $L_1$-norm is invariant to the length of a depth sequence and contains more salient information than other norms (e.g., $L_2$), we have

$$DMM_v = \sum_{i=1}^{N-\varepsilon} \left| \mathrm{map}_v^{\,i+\varepsilon} - \mathrm{map}_v^{\,i} \right|, \quad v \in \{f, s, t\}, \tag{1}$$

where $\varepsilon$ is the frame interval, $i$ represents the frame index, and $N$ is the total number of frames in a depth sequence. In the case that the sum operation in (1) is only applied where a given threshold is satisfied, the scale of $\varepsilon$ has little effect on the local pattern histogram of the DMMs.
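As a concrete illustration, the following minimal Python sketch accumulates thresholded absolute frame differences into a DMM for one view, following (1); the function name, the `delta` threshold parameter, and the assumption that the input maps are already projected onto one view are ours, not from the original implementation.

```python
import numpy as np

def depth_motion_map(maps, epsilon=1, delta=0.0):
    """Accumulate thresholded absolute differences of maps that are
    `epsilon` frames apart, as in (1); `maps` is an (N, H, W) array of
    depth maps already projected onto one view (f, s, or t)."""
    N = maps.shape[0]
    dmm = np.zeros(maps.shape[1:], dtype=np.float64)
    for i in range(N - epsilon):
        diff = np.abs(maps[i + epsilon].astype(np.float64)
                      - maps[i].astype(np.float64))
        diff[diff < delta] = 0.0  # keep only salient motion energy
        dmm += diff
    return dmm

# usage: one DMM per projection view
# dmm_f, dmm_s, dmm_t = (depth_motion_map(m) for m in (maps_f, maps_s, maps_t))
```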

2.2. Using Deep Convolutional Neural Networks

In this section, we introduce three deep convolutional neural networks (DCNNs) that are trained on the three projected planes of the DMMs, and we fuse the three nets by combining the softmax outputs of their fully connected layers. The layer configuration of our three CNNs is schematically shown in Figure 1; each net contains five convolutional layers and three fully connected layers. The details of our implementation are given in Section 4.1.2.
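The text does not fix the exact fusion rule, so the following minimal Python sketch assumes simple averaging of the three per-view class-posterior vectors, one plausible late-fusion choice; the function and argument names are illustrative.

```python
import numpy as np

def fuse_softmax(scores_f, scores_s, scores_t):
    """Late fusion of the three per-view nets: average the class
    posteriors produced by each net's softmax layer and return the
    arg-max class. Averaging is an assumption of this sketch; product
    or max rules fit the same interface."""
    fused = (scores_f + scores_s + scores_t) / 3.0
    return int(np.argmax(fused))

# usage: label = fuse_softmax(p_front, p_side, p_top)
```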

3. Action Classifier Based on ICRC

Based on depth motion maps, an improved collaborative representation classifier (ICRC) is presented to incorporate the feature descriptors into a powerful classifier for human action recognition.

3.1. $\ell_2$-Regularized Collaborative Representation Classifier

The basic idea of SRC is to represent a test sample by sparsely choosing a small number of atoms from an overcomplete dictionary that contains all training samples [12]. Denote by $A_k \in \mathbb{R}^{d \times n_k}$ the set of training samples from class $k$, and suppose we have $K$ classes of subjects. Then $A = [A_1, A_2, \ldots, A_K] \in \mathbb{R}^{d \times n}$ contains the samples from all classes, where $A_k$ is the individual class of training samples, $n = \sum_{k=1}^{K} n_k$ is the total number of training samples, and $d$ is the dimension of the training samples. A query sample can be represented as $y = Ax$, where $y$, $x$, and $A$ represent the data, a sparse vector, and the given matrix of overcomplete training samples, respectively.

To be specific, in the mechanism of the collaborative representation classifier (CRC), each data point in the collaborative subspace can be represented as a linear combination of the samples in $A$, where $x \in \mathbb{R}^{n}$ is the representation vector associated with the training samples and $x_k$ is the subvector corresponding to $A_k$. Generally, it is formulated as an $\ell_1$-norm minimization problem with a convex objective and solved by

$$\hat{x} = \arg\min_{x} \left\{ \| y - Ax \|_2^2 + \lambda \| x \|_1 \right\}, \tag{2}$$

where $\lambda$ is a positive scalar that balances the sparsity term and the residual. The residual can be computed as

$$r_k(y) = \| y - A_k \hat{x}_k \|_2, \tag{3}$$

where $\hat{x}_k$ is the coefficient subvector corresponding to class $k$. Then the identity of $y$ is given by the lowest residual as

$$\operatorname{identity}(y) = \arg\min_{k} r_k(y). \tag{4}$$

For more details of SRC/CRC, one can refer to [12]. Because the $\ell_1$-regularized minimization in (2) is computationally expensive, it is approximated as

$$\hat{x} = \arg\min_{x} \left\{ \| y - Ax \|_2^2 + \lambda \| \Gamma x \|_2^2 \right\}, \tag{5}$$

so that $y$ is better represented by class $k$ when the $\ell_2$-norm of the corresponding residual is smaller. In (5), $\Gamma$ is the Tikhonov regularization matrix [27] used to calculate the coefficient vector, and $\lambda$ is the regularization parameter. $\| \Gamma x \|_2^2$ is the $\ell_2$-regularization term, which adds a certain amount of sparsity to $\hat{x}$, although weaker than that of $\ell_1$-norm minimization. The diagonal matrix $\Gamma$ and the coefficient vector $\hat{x}$ are calculated as follows [21]:

$$\hat{x} = \left( A^{T} A + \lambda \Gamma^{T} \Gamma \right)^{-1} A^{T} y = P y, \tag{6}$$

where $P = (A^{T} A + \lambda \Gamma^{T} \Gamma)^{-1} A^{T}$ is independent of $y$ and can be precalculated. With (3) and (4), the data $y$ is assigned an identity based on $\hat{x}$.
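The following minimal Python sketch implements the $\ell_2$-regularized CRC pipeline in (5)-(6) together with the residual rule (3)-(4); the Tikhonov matrix defaults to the identity (plain CRC) for simplicity, and all function and variable names are illustrative rather than taken from the original implementation.

```python
import numpy as np

def crc_fit(A, lam, Gamma=None):
    """Precompute P = (A^T A + lam * G^T G)^{-1} A^T from (6);
    A is d x n with one training sample per column, and Gamma
    defaults to the identity matrix (plain CRC)."""
    n = A.shape[1]
    G = np.eye(n) if Gamma is None else Gamma
    return np.linalg.solve(A.T @ A + lam * (G.T @ G), A.T)

def crc_classify(y, A, P, labels):
    """Code y as x = P y, then assign the class with the smallest
    class-wise residual ||y - A_k x_k||_2, following (3)-(4)."""
    x = P @ y
    classes = np.unique(labels)
    residuals = [np.linalg.norm(y - A[:, labels == k] @ x[labels == k])
                 for k in classes]
    return classes[int(np.argmin(residuals))]
```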

3.2. The Proposed ICRC Method

Based on the training sample set, we propose an improved collaborative representation classifier based on an $\ell_2$-regularized term, which assigns data points different probabilities based on $\hat{x}$ by adding a term that attempts to find a point close to the common point inside each class subspace. The first two terms still form an $\ell_2$-regularized collaborative representation term, which encourages finding a point close to $y$ in the collaborative subspace. Therefore, (5) is rewritten as

$$\hat{x} = \arg\min_{x} \left\{ \| y - Ax \|_2^2 + \lambda \| \Gamma x \|_2^2 + \frac{\gamma}{K} \sum_{k=1}^{K} \| Ax - A_k x_k \|_2^2 \right\}. \tag{7}$$

Obviously, the parameters $\lambda$ and $\gamma$ balance the three terms and can be set from the training data. Accordingly, a new solution of the representation vector $\hat{x}$ is obtained from (7).

In the condition of $\gamma = 0$, (7) degenerates to CRC with the first two terms, and $\lambda$ plays an important role in determining $\hat{x}$. Once $\hat{x}$ is computed, these two terms take the same value for all classes, and thus the term $\| A\hat{x} - A_k \hat{x}_k \|_2^2$ becomes dominant in further fine-tuning the decision, yielding a precise $\hat{x}$. That is, the last newly added term is introduced to further adjust $\hat{x}$ by the class subspaces, resulting in a more stable solution for the representation vector $\hat{x}$.

We can omit the first two terms, which are the same for all classes, base the classification rule on the last term, and formulate it as a probability exponent:

$$\operatorname{identity}(y) = \arg\max_{k} \exp\left( - \| A\hat{x} - A_k \hat{x}_k \|_2^2 \right). \tag{8}$$

The proposed $\ell_2$-regularized method for human action recognition is presented to maximize the likelihood that a test sample belongs to each class, and the experiments in the following section show that it obtains the final classification by checking which class has the maximum likelihood. The classifier model defined by (7) and (8) is named the improved collaborative representation classifier (ICRC).
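Since (7) is quadratic in $x$ (writing $A_k x_k = \bar{A}_k x$, where $\bar{A}_k$ keeps only the class-$k$ columns of $A$ and zeros out the rest), ICRC admits a closed-form solution analogous to (6). The Python sketch below implements it under the simplifying assumption $\Gamma = I$; names are illustrative, not from the original implementation.

```python
import numpy as np

def icrc_fit(A, labels, lam, gamma):
    """Precompute the ICRC projection matrix for (7), assuming the
    Tikhonov matrix is the identity. Setting the gradient of (7) to
    zero gives x_hat = (A^T A + lam I
                        + (gamma/K) * sum_k D_k^T D_k)^{-1} A^T y,
    with D_k = A - Abar_k."""
    n = A.shape[1]
    classes = np.unique(labels)
    K = len(classes)
    M = A.T @ A + lam * np.eye(n)
    for k in classes:
        D = A.copy()
        D[:, labels == k] = 0.0           # D = A - Abar_k
        M += (gamma / K) * (D.T @ D)
    return np.linalg.solve(M, A.T)        # P such that x_hat = P y

def icrc_classify(y, A, P, labels):
    """Rule (8): the likelihood of class k is proportional to
    exp(-||A x_hat - A_k x_hat_k||_2^2), so its arg max is the class
    whose reconstruction is nearest to A x_hat."""
    x = P @ y
    Ax = A @ x
    classes = np.unique(labels)
    dists = [np.linalg.norm(Ax - A[:, labels == k] @ x[labels == k])
             for k in classes]
    return classes[int(np.argmin(dists))]
```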

4. Experimental Results

To verify the effectiveness of the proposed ICRC algorithm in action recognition applications using DMM descriptors of depth sequences, we carry out experiments on the challenging depth-based action datasets MSRAction3D [1] and MSRGesture3D [1] for human action recognition.

4.1. Feature Descriptors
4.1.1. DMMs

The MSRAction3D [1] dataset is composed of depth image sequences captured by a Microsoft Kinect-V1 camera. It includes 20 actions performed by 10 subjects facing the camera; each subject performed each action 2 or 3 times. The 20 action types are high arm wave, horizontal arm wave, hammer, hand catch, forward punch, high throw, draw X, draw tick, draw circle, hand clap, two-hand wave, side boxing, bend, forward kick, side kick, jogging, tennis swing, tennis serve, golf swing, and pick up and throw. The size of each depth image is 240 × 320 pixels, and the background information has been removed from the depth data.

The MSRGesture3D [1] dataset targets continuous online human action recognition from a Kinect device. It consists of 12 gestures defined by American Sign Language (ASL); each subject performed each gesture 2 or 3 times, giving 333 depth sequences in total. For action recognition on the MSRAction3D and MSRGesture3D datasets, we use features computed from the DMMs, and each depth action sequence generates three DMMs corresponding to the three projection views. The DMMs of the high arm wave class from the MSRAction3D dataset are shown in Figure 2, and the DMMs of the ASL Z class from the MSRGesture3D dataset are shown in Figure 3.

4.1.2. DCNNs

Furthermore, our implementation of the DCNN features is based on the publicly available MatConvNet toolbox [28] using one Nvidia Titan X card. The network weights are learned by mini-batch stochastic gradient descent. Similar to [4], the momentum is set to 0.9, the weight decay is set to 0.0005, and all hidden weight layers use the rectification (ReLU) activation function. At each iteration, a batch of 256 samples is constructed and resized to 256 × 256, and then 224 × 224 patches are randomly cropped from the center region of each selected image for artificial data augmentation. The dropout regularization ratio is 0.5 in the nets. Besides, the initial learning rate is set to 0.01, a model pretrained on ILSVRC-2012 is used to fine-tune our model, and the learning rate decreases every 20 epochs. Finally, we concatenate the three 4096-dimensional feature vectors from the 7th (fully connected) layer as the input to the subsequent classifier.
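The final concatenation step can be written compactly; the numpy sketch below shows it with illustrative names, and the trailing $\ell_2$-normalization is an assumption of this sketch rather than a detail stated in the text.

```python
import numpy as np

def concat_dcnn_features(feat_f, feat_s, feat_t):
    """Concatenate the three per-view fc7 activations (4096-d each)
    into one 3 * 4096 = 12288-d descriptor for the classifier."""
    feature = np.concatenate([feat_f, feat_s, feat_t])   # shape (12288,)
    return feature / (np.linalg.norm(feature) + 1e-12)   # l2-normalize (our assumption)
```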

4.2. Experiment Setting

The same experimental setup as in [1] was adopted, and the actions in the MSRAction3D dataset were divided into three subsets as follows: AS1: horizontal wave, hammer, forward punch, high throw, hand clap, bend, tennis serve, and pickup throw; AS2: high wave, hand catch, draw X, draw tick, draw circle, two-hand wave, forward kick, and side boxing; AS3: high throw, forward kick, side kick, jogging, tennis swing, tennis serve, golf swing, and pickup throw. We performed three experiments with 2/3 of the samples for training and 1/3 for testing on AS1, AS2, and AS3, respectively. The performance on MSRAction3D is thus evaluated by the average accuracy (Accu., unit: %) over the three subsets. For MSRGesture3D, the same experimental setting reported in [26, 29, 30] was followed: the 12 gestures were tested by leave-one-subject-out cross-validation to evaluate the performance of the proposed method, as sketched below.
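A minimal sketch of the leave-one-subject-out protocol, assuming per-sequence feature vectors, labels, and subject IDs as numpy arrays; `classify_fn` stands in for any classifier (e.g., ICRC), and all names are illustrative.

```python
import numpy as np

def leave_one_subject_out(features, labels, subjects, classify_fn):
    """Hold out all sequences of one subject, train on the rest, and
    report the mean per-fold accuracy; classify_fn(train_X, train_y,
    test_X) returns predicted labels for test_X."""
    accs = []
    for s in np.unique(subjects):
        test = subjects == s
        pred = classify_fn(features[~test], labels[~test], features[test])
        accs.append(np.mean(pred == labels[test]))
    return float(np.mean(accs))
```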

4.3. Recognition Results with DMMs and ICRC

We concatenate the sign, magnitude, and center features to form the DMM-based feature as the final representation. The compared methods are similar to those in [29, 30], and the same parameters reported in [26] were used here for the sizes of the SI and blocks. A total of 20 actions are employed; one half of the subjects (1, 3, 5, 7, and 9) are used for training and the remaining subjects for testing. The recognition performance of our method and of existing approaches is listed in Table 1. It is clear that our method achieves better performance than the other competing methods.

To show the outcome of our method, Figures 4 and 5 illustrate the recognition rate of each class on the two datasets. Note that 14 classes obtain 100% recognition rates on the MSRAction3D dataset, and 3 classes reach the best recognition performance on the MSRGesture3D dataset. All experiments are carried out using MATLAB 2016b on an Intel i7-6500U desktop with 8 GB RAM, and the average video processing speed is about 26 frames per second, which basically meets real-time processing demands.

4.4. Comparison with DCNN Features and ICRC

Furthermore, in order to evaluate the proposed classifier, we also extract deep features with the abovementioned conventional CNN model and then input the 12288-dimensional vectors to the proposed ICRC for action recognition. Table 2 shows that the DCNN features indeed bring gains comparable to those observed in other popular tasks such as image classification and object detection, improving the accuracy by up to 6% on MSRAction3D and MSRGesture3D. This also illustrates the importance of effective features to the ICRC classifier.

5. Conclusion

In this paper, we propose an improved collaborative representation classifier (ICRC) based on $\ell_2$-regularization for human action recognition. The DMM and DCNN feature descriptors are involved as an effective action representation. For the action classifier, ICRC is proposed based on collaborative representation with an additional regularization term; the new insight focuses on a subspace constraint on the solution. The experimental results on MSRAction3D and MSRGesture3D show that the proposed algorithm performs favorably against state-of-the-art methods, including SRC, CRC, and SVM. Future work will focus on involving deep-learned networks in the depth image representation and on evaluating more complex datasets, such as MSR3DActivity, UTKinect-Action, and NTU RGB+D, for the action recognition task.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The work was supported by the State Key Laboratory of Coal Resources and Safe Mining under Contracts SKLCRSM16KFD04 and SKLCRSM16KFD03, in part by the Natural Science Foundation of China under Contract 61601466, and in part by the Fundamental Research Funds for the Central Universities under Contract 2016QJ04.