Journal of Electrical and Computer Engineering
Volume 2015, Article ID 140820, 9 pages
http://dx.doi.org/10.1155/2015/140820
Research Article

Human Activity Recognition Based on the Hierarchical Feature Selection and Classification Framework

Department of Physics, Guangdong University of Education, Guangzhou 510303, China

Received 31 January 2015; Revised 17 April 2015; Accepted 19 May 2015

Academic Editor: Sos Agaian

Copyright © 2015 Yuhuang Zheng. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Human activity recognition via triaxial accelerometers can provide valuable information for evaluating functional abilities. In this paper, we present an accelerometer sensor-based approach for human activity recognition. Our proposed recognition method used a hierarchical scheme, where the recognition of ten activity classes was divided into five distinct classification problems. Every classifier used the Least Squares Support Vector Machine (LS-SVM) and Naive Bayes (NB) algorithm to distinguish different activity classes. The activity class was recognized based on the mean, variance, entropy of magnitude, and angle of triaxial accelerometer signal features. Our proposed activity recognition method recognized ten activities with an average accuracy of 95.6% using only a single triaxial accelerometer.

1. Introduction

Recently, activity recognition has become an emerging field of research and one of the challenges for pervasive computing. A typical application for activity recognition is in health care. Activity recognition is also an important research issue in building a pervasive and smart environment to provide personalized support.

Computer vision-based techniques and body-fixed accelerometers are the main methodologies used for activity recognition. Computer vision-based techniques must be conducted in a well-controlled environment and are subject to that environment's limitations; they may fail badly in environments with clutter and variable lighting [1–3]. Body-fixed accelerometers offer a practical and relatively low-cost method to measure human motion.

The existing literature demonstrates many studies on activity recognition that use accelerometers. However, there are three primary challenges in these studies.

(1) The large muscles of the body drive walking, running, sitting, and other activities. The glutes are the primary muscles driving lower-body movement because of their natural strength and leverage advantage on the legs; lower-body movement includes activities such as running, jumping, and walking. Sleeping, sitting, standing, walking, running, and jumping must therefore be recognized as typical physical activities. The activity recognition algorithm in Khan et al. [4] did not consider jumping. Running and jumping were excluded from the experiments of Trabelsi et al. [5], Tang and Sazonov [6], Lee et al. [7], and Deng et al. [8]. Gupta and Dallas [9] did not report how to recognize standing and sleeping, and Tao et al. [10] did not describe tests for recognizing sitting and sleeping. Alshurafa et al. [11] studied only walking and running recognition. These studies were therefore incomplete in recognizing typical physical activities [12].

(2) Some studies [6, 9, 13, 14] required a combination of multiple sensors to increase recognition performance. However, a user is unlikely to wear a complex multisensor system at all times, and people may not feel comfortable wearing multiple sensors. Moreover, a multisensor system holds no great accuracy advantage over a single-sensor system if the single-sensor system uses a sufficiently high sampling rate, suitable features, a sophisticated classifier, and the sensor position that performs best for recognizing activities. A single sensor mounted at the right position can also obtain good recognition performance. For typical physical activities, multiple sensors do not significantly improve recognition performance [15–17].

(3) A number of studies [18–20] have addressed recognizing so-called ADL (activities of daily living), which is not physical-activity recognition. "Activities of daily living" is a term used in healthcare to refer to daily self-care activities, such as cooking and hair drying, within an individual's place of residence or in outdoor environments. Physical activity includes any body movement that works the muscles and requires more energy than resting; it simply implies a movement of the body that uses energy, such as running or walking [21–23]. Physical-activity recognition is the topic of this paper.

Many researchers have used particular devices to collect raw accelerometer data for a set of movements and have applied various activity recognition algorithms, including Artificial Neural Networks (ANN) [4, 7, 13], k-Nearest Neighbor (kNN) [8, 10, 11, 19], Support Vector Machines (SVM) [6, 14, 18], and Hidden Markov Models (HMM) [5, 20]. In our study, we addressed the activity recognition algorithm using SVM for three reasons.

(1) SVM and ANN have been broadly used in human activity recognition, although neither produces a set of rules understandable by humans [24]. As two different algorithms, SVM and ANN share the concept of using a linear learning model for pattern recognition; the difference lies mainly in how nonlinear data are classified. SVM models tend to have better prediction performance than ANN models, and SVMs have demonstrated superior classification accuracy to neural classifiers in many experiments. The generalization performance of neural classifiers depends on the structure size, and selecting an appropriate structure relies on cross validation [25]. The performance of SVMs depends on the choice of kernel function type and parameters, but this dependence is weaker [26].

(2) kNN does not perform well as the dataset size increases; it is suitable for small datasets. SVM is a more sophisticated classifier; here, we implement the linear kernel function. Previous work concludes that accuracy and other performance criteria do not significantly depend on the dataset size but rather on the number of training cycles, and that, among the compared algorithms, SVM is the best classifier for activity recognition [27].

(3) A continuous HMM approach to activity recognition operates on sequential data, selecting the event-sequence length that gives the best predictions. An HMM is used to model the sequential information in multiaspect target signatures. The parameter-learning task in HMMs is to determine the best set of state-transition and emission probabilities given an output sequence or a set of such sequences, usually by deriving the maximum-likelihood estimate of the HMM parameters for that set. Typical physical activities are nonsequential, so it is not easy to use an HMM to recognize a single physical activity [28].

The traditional SVM [29] is formulated for binary nonlinear classification problems; how to effectively extend the SVM to multiclassification remains a hot topic. The Least Squares Support Vector Machine (LS-SVM) is an advanced version of the standard SVM: it defines a different cost function from the classical SVM and replaces the inequality constraints with equality constraints. Recently, there have been relatively few studies that use the LS-SVM to recognize activities using a triaxial accelerometer. Nasiri et al. [30] addressed the Energy-Based Least Square Twin Support Vector Machine (ELS-TSVM) algorithm, an extended LS-SVM classifier that performs classification using two nonparallel hyperplanes instead of the single hyperplane used in the conventional SVM; however, ELS-TSVM was used to recognize activities using computer vision instead of a triaxial accelerometer. Altun et al. [31] compared the performance of the least squares method (LSM) and the SVM but did not include the LS-SVM. The LS-SVM for multiclassification is decomposed into multiple binary classification tasks. It reduces the computational complexity by using a small number of classifiers and effectively eliminates the unclassifiable regions that can degrade classification performance [32–34].
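For reference, the standard (Suykens-style) LS-SVM formulation replaces the SVM's inequality constraints with equality constraints and a squared-error cost:

$$\min_{w,b,e}\; \frac{1}{2}\|w\|^{2} + \frac{\gamma}{2}\sum_{i=1}^{N} e_i^{2} \quad\text{subject to}\quad y_i\bigl(w^{T}\phi(x_i)+b\bigr) = 1 - e_i,\quad i=1,\dots,N.$$

Eliminating $w$ and $e$ via the KKT conditions reduces training to a single linear system rather than a quadratic program:

$$\begin{bmatrix} 0 & y^{T} \\ y & \Omega + \gamma^{-1}I \end{bmatrix}\begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ \mathbf{1}_{N} \end{bmatrix}, \qquad \Omega_{kl} = y_k y_l\, K(x_k, x_l).$$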

In this paper, we aimed to overcome the limitations of existing physical-activity recognition systems and to develop a new method that recognizes a set of typical physical activities using only a single triaxial accelerometer. The method consists of three parts: six features for activity recognition, the hierarchical recognition scheme, and the activity estimator based on the LS-SVM and NB algorithms. The method can recognize ten physical activities with a high recognition rate.

The remainder of the paper is organized as follows. Section 2 describes the experimental dataset and hierarchical classification framework in this paper. Section 3 involves feature extraction to improve the classification accuracy using feature data over raw sensor data. Section 4 focuses on an activity estimator for multiclassification to estimate the human activity from the feature data. The experimental results and conclusion are presented in Sections 5 and 6, respectively.

2. Hierarchical Classification Framework

2.1. Activities Dataset

For this work, the dataset used was the University of Southern California Human Activity Dataset (USC-HAD). The USC-HAD was specifically designed to include the most basic and common human activities in daily life from a large and diverse group of human subjects, and the activities in the dataset are applicable to many scenarios. The activity data were captured using a high-performance inertial sensing device, MotionNode [35], which integrates a 3-axis accelerometer, a 3-axis gyroscope, and a 3-axis magnetometer; the measurement ranges of each accelerometer and gyroscope axis are ±6 g and ±500 dps, respectively. MotionNode was firmly attached to the participant's right front hip. The sampling rate of this dataset for both the accelerometer and the gyroscope was set to 100 Hz. The dataset includes 10 activities: walking (forward, left, and right), walking (upstairs and downstairs), jumping, running, standing, sitting, and sleeping [36–38].

The main goal of this paper was to identify ten activities, which were divided into four groups: 2D walking (walking forward, left, and right), 3D walking (walking upstairs and downstairs), jumping and running, and static activities (standing, sitting, and sleeping). The division was performed using a single triaxial accelerometer. The activities are listed in Table 1.

Table 1: Classified states and activities recognized in this study.
2.2. Hierarchical Classification Framework

To achieve higher scalability than a single-layer framework, a multilayer classification framework was adopted. In the first layer, to differentiate the walking-related activities (walking forward, walking left, walking right, walking upstairs, and walking downstairs), jumping, running, and static activities from one another, we classified samples into two subsets (walking and all static activities) and two activities (jumping and running) based on feature selection. In the second layer, the walking-related subset was split into plane motion and 3D motion, and the static-activity subset was classified into standing, sitting, and sleeping. In the third layer, all detailed activities of 2D and 3D walking were recognized [39, 40].

Figure 1 illustrates the structure of the hierarchical classification framework. The yellow boxes represent activity sets, the green boxes represent the ten types of activities to recognize, and the red boxes represent the classifiers. The problem of recognizing ten activity classes is thus broken down into five distinct classification problems. A preliminary investigation of classifier selection is reported in Table 2. The four-class classifier was the best choice in this hierarchical classification framework because of its small number of classifiers and the high average accuracy rate of each classifier; it is the configuration used in this paper.

Table 2: A preliminary investigation of selection.
Figure 1: Structure of the hierarchical classification framework.

In the hierarchical classification framework of the four-class classifier, classifier 1, at the top layer, distinguishes walking-related activities, jumping, running, and static activities. Walking-related activities include walking forward, walking left, walking right, walking upstairs, and walking downstairs. Static activities include standing, sitting, and sleeping [37]. Classifier 2, at the second layer, distinguishes plane motions and 3D motions. Classifier 3 recognizes activities from plane motion, and classifier 4 distinguishes walking upstairs and downstairs from 3D motions. Finally, classifier 5 focuses on recognizing different static activities.
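The five-classifier routing just described can be sketched as a small dispatch function. The classifier arguments below are hypothetical stand-ins for the five trained models (the paper uses LS-SVM classifiers); each takes a feature vector and returns a label string.

```python
def recognize(features, c1, c2, c3, c4, c5):
    """Route one feature vector through the three-layer hierarchy.

    c1..c5 correspond to classifiers 1-5 in Figure 1; each is any
    callable mapping a feature vector to a label string.
    """
    top = c1(features)  # 'walking' | 'jumping' | 'running' | 'static'
    if top in ('jumping', 'running'):
        return top                   # recognized directly at the top layer
    if top == 'static':
        return c5(features)          # 'standing' | 'sitting' | 'sleeping'
    # Walking-related: classifier 2 separates plane (2D) from 3D motion.
    if c2(features) == 'plane':
        return c3(features)          # walking forward | left | right
    return c4(features)              # walking upstairs | downstairs
```

With real models, each classifier argument would wrap a trained LS-SVM's prediction call.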

3. Feature Design and Selection

Recent related work in feature selection used a filter-based approach (Relief-F) and a wrapper-based approach (a variant of sequential forward floating search). Because different features lie on different scales, all features were normalized, both to ensure equal weight for all potential features and to obtain the best results from the kNN or Naive Bayes classifiers used for error estimation [1–6, 8–10, 13, 18, 24, 29].

In our approach, motivated by the elementary mechanics of walking, running, jumping, and sleeping, we used the means, variances, and entropies of the magnitudes and angles produced by the triaxial acceleration vectors as the activity features. The reasons are as follows. First, according to [41–43], the muscles produce different forces when people walk, run, jump, and sleep; normally, the forces increase in the order of sleeping, walking, running, and jumping, so by Newton's second law the resultant accelerations of these activities also increase in that order. Second, as in [44], a model of persistent 2D random walks can be represented by drawing turning angles. Third, Shannon entropy in the time domain can measure the acceleration signal's uncertainty and describe the information-related properties needed for an accurate representation of a given acceleration signal. The detailed features are described below.

The magnitude of the triaxial acceleration vector is

$m_i = \sqrt{a_{x,i}^2 + a_{y,i}^2 + a_{z,i}^2}$,

where $a_{x,i}$, $a_{y,i}$, and $a_{z,i}$ represent the acceleration samples of the $x$, $y$, and $z$ axes. This feature is independent of the orientation of the sensing device and measures the instantaneous intensity of human movements at index $i$.

We computed the mean, variance, and entropy of the magnitude $m$ and of the angle $\theta$ of the acceleration vector $a_i$ over a window of length $w$ and used them as six features: $\mu_m$, $\sigma_m^2$, $E_m$, $\mu_\theta$, $\sigma_\theta^2$, and $E_\theta$. Here $\theta_i$ is the angle between the vectors $a_i$ and $a_{i+1}$, as shown in the following. Let $v = a_i \cdot a_{i+1}$; then

$\theta_i = \arccos \dfrac{v}{\|a_i\|\,\|a_{i+1}\|}$.
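The six per-window features can be sketched as follows. The histogram binning used for the entropy estimate is our assumption; the paper does not specify its entropy estimator.

```python
import numpy as np

def six_features(acc, bins=16):
    """Mean, variance, and Shannon entropy of the magnitude and of the
    angle between consecutive acceleration vectors, over one window.

    acc: (w, 3) array of triaxial samples for a window of length w.
    The histogram-based entropy estimate is an assumption, not the
    paper's exact estimator.
    """
    m = np.linalg.norm(acc, axis=1)                 # magnitudes m_i
    dots = np.sum(acc[:-1] * acc[1:], axis=1)       # a_i . a_{i+1}
    cos = dots / (m[:-1] * m[1:])
    theta = np.arccos(np.clip(cos, -1.0, 1.0))      # angles theta_i

    def entropy(x):
        # Shannon entropy of the binned value distribution (bits).
        counts, _ = np.histogram(x, bins=bins)
        p = counts[counts > 0] / counts.sum()
        return float(-np.sum(p * np.log2(p)))

    return (m.mean(), m.var(), entropy(m),
            theta.mean(), theta.var(), entropy(theta))
```

Applied to each sliding window of the normalized accelerometer stream, this yields the feature sequences used by the classifiers.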

To explore the performance of and correlation among these six features, a series of scatter plots in a 2D feature space is shown in Figure 2. In each plot, the horizontal and vertical axes represent two different features, and points in different colors represent different activities. Figure 2(a) shows that the running, jumping, walking, and static activities form distinct clusters. In Figure 2(b), a straight line separates 2D walking (forward, left, and right) from 3D walking (upstairs and downstairs), showing that the plotted feature is usable for this distinction. Figure 2(c) illustrates that one feature pair partitions the triaxial acceleration samples of walking forward, walking left, and walking right into three isolated clusters, each containing samples roughly from a single activity class. Figure 2(d) demonstrates the discriminative power of another feature pair for differentiating walking upstairs from walking downstairs, and Figure 2(e) shows that the triaxial acceleration signal can be classified into standing, sitting, and sleeping using a third feature pair.

Figure 2: Scatter plots in the 2D feature space.

In this study, we used the mean, variance, and entropy of both the magnitude and the angle as the features for the classifiers in each layer [45].

4. Activity Estimation for Multiclassification

We presented an activity estimator for multiclassification to estimate the human activity from the feature data. Each activity estimator for the multiclassification included one LS-SVM classifier and a maximum Act_Label frequency estimator (Figure 3).

Figure 3: Activity estimator for multiclassification.

We used the LS-SVM [34] method to classify the feature data. After loading the training data into Matlab, we built an activity-recognition model from the data. Once the model parameters were calculated, we estimated the activity by inputting test feature data [46]. The function trainlssvm() was used to train an LS-SVM on the support features for classification, and the function simlssvm() was used to evaluate the LS-SVM on test feature data.
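trainlssvm() and simlssvm() are Matlab functions from the LS-SVMlab toolbox. As an illustration of what they solve, here is a minimal linear-kernel binary LS-SVM in Python that trains by solving the KKT linear system; this is a sketch of the standard formulation, not the toolbox's actual implementation.

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0):
    """Train a linear-kernel LS-SVM by solving the KKT linear system
    [0 y'; y Omega + I/gamma] [b; alpha] = [0; 1]."""
    n = len(y)
    K = X @ X.T                           # linear kernel matrix
    Omega = np.outer(y, y) * K
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]                # alpha, b

def lssvm_predict(X_train, y_train, alpha, b, X_new):
    """Decision: sign( sum_i alpha_i y_i K(x_i, x) + b )."""
    return np.sign((alpha * y_train) @ (X_train @ X_new.T) + b)
```

Replacing the linear kernel with an RBF kernel only changes the computation of K.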

Because the six feature sequences each contain $n$ elements, the LS-SVM multiclassifier outputs an activity set that includes $n$ elements of Act_Label. The activity set may contain different Act_Labels, and we must estimate the maximum-likelihood Act_Label in this set. We used the Naive Bayes algorithm to compute the likelihood of every Act_Label and obtained the human activity from the Act_Label with the maximum likelihood:

$\widehat{c} = \arg\max_{c} P(c) \prod_{j=1}^{n} P(\ell_j \mid c)$,

where $\ell_j$ denotes the $j$th Act_Label in the activity set.
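The final step can be sketched as follows: with a uniform prior and identical per-label likelihoods, the maximum Act_Label frequency estimator of Figure 3 reduces to picking the most frequent label in the activity set (the function name is ours).

```python
from collections import Counter

def estimate_activity(act_labels):
    """Return the Act_Label with the highest frequency in the activity
    set output by the LS-SVM multiclassifier (majority vote)."""
    return Counter(act_labels).most_common(1)[0][0]
```

For example, an activity set of mostly 'running' labels with a few 'walking' misclassifications still resolves to 'running'.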

Figure 4 shows the activity estimator working process, which includes the training stage and testing stage (online activity recognition). In the training stage, the labeled data of triaxial acceleration were normalized and the statistical features were extracted from those synthesized-acceleration data. Then, the multiclassification estimator was used to build the classification model. In the testing stage, unlabeled raw data of the triaxial accelerometer were processed with the method that was used in the training stage. These synthesized data were classified using the multiclassification estimator, and the recognized result was obtained [47, 48].

Figure 4: Activity estimator working process.

5. Experiment

The activity recognition dataset was the USC Human Activity Dataset, which includes ten activities with data collected from 14 subjects. To capture day-to-day activity variations, each subject was asked to perform 5 trials of each activity on different days at various indoor and outdoor locations. Although the duration of each trial varies across activities, it was sufficiently long to capture all information for each performed activity [37]. In this section, we evaluated the performance of the five activity classifiers in this activity recognition scheme. Table 3 shows the results of the five activity recognition classifiers: each had over 95% accuracy [24], which is acceptable.

Table 3: Activity classifier accuracy test.

The cross-validation results are summarized in Table 4. The average recognition accuracy of 95.6% indicates that our proposed human activity recognition scheme can achieve high recognition rates for a specific subject. Because 2D walking and 3D walking are similar, the recognition accuracy of the five walking activities is lower. We will attempt to obtain higher recognition accuracy using a larger amount of training data in future research.
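The per-activity and average accuracies summarized in a confusion matrix such as Table 4's can be computed as the row-normalized diagonal and its mean; a small sketch (the helper name is ours, and the matrix below uses illustrative numbers only):

```python
import numpy as np

def per_class_accuracy(cm):
    """Per-class recognition accuracy and the class-averaged accuracy
    from a confusion matrix whose rows are true activities and whose
    columns are predicted activities."""
    cm = np.asarray(cm, dtype=float)
    per_class = np.diag(cm) / cm.sum(axis=1)   # correct / total per row
    return per_class, per_class.mean()
```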

Table 4: Confusion matrix for average recognition accuracy for all activities.

We compared the accuracy rates and running times of common multiclassification methods. All algorithms were run on a computer with an i7-2670QM 2.2 GHz CPU, 8 GB RAM, and Matlab 2013a. The LS-SVM performed notably well in the tests: the average running time for the hierarchical classification framework with the LS-SVM recognizing activities was 0.021 seconds, less than that of the ANN (Artificial Neural Network), DT (Decision Tree), and kNN (k-Nearest Neighbor) algorithms. We performed the ANN, DT, and kNN classifier tests with the built-in functions of Matlab. The LS-SVM method was also better than ANN, DT, and kNN in terms of the average recognition accuracy rate for the ten activities. Table 5 shows the results.

Table 5: Accuracy rates and running times of the classification methods.

6. Conclusion and Future Work

This paper aims to provide an accurate and robust human activity recognition scheme. The scheme used triaxial acceleration data, a hierarchical recognition scheme, and activity classifiers based on the LS-SVM and the NB algorithm. The mean, variance, and entropy of the magnitude and the angle of the triaxial acceleration data were used as the features of the activity classifiers. The scheme effectively recognized a typical set of daily physical activities with an average accuracy of 95.6%. It could distinguish walking (forward, left, right, upstairs, and downstairs), running, jumping, standing, sitting, and sleeping activities using only a single triaxial accelerometer. The experimental results of the hierarchical recognition scheme show significant potential in its ability to accurately differentiate activities using triaxial acceleration data. Although the scheme has so far been tested only on the USC-HAD dataset, its core does not depend on features specific to that dataset; therefore, it should be applicable to other datasets.

The novelty of the proposed human activity recognition scheme is the introduction of the LS-SVM method as the classifier algorithm. The LS-SVM is an advanced version of the standard SVM, and relatively few studies have used the LS-SVM to recognize activities with only one triaxial accelerometer. The human activity recognition scheme with LS-SVM classifiers simplifies the construction of the hierarchical classification framework and has a lower running time than other common multiclassification algorithms. Accuracy is the basic requirement of any activity recognition system, and this recognition scheme has a high success rate: it can recognize ten different types of activities with an average accuracy of 95.6%.

The next stage of our research has two parts. First, the algorithms are improved to recognize these activities, and the user will not have to worry about placing the sensors at the correct positions to correctly detect the activities. Second, an unsupervised approach for automatic activity recognition is considered. An unsupervised learning framework of human activity recognition will automatically cluster a large amount of unlabeled acceleration data into discrete groups of activity, which implies that the human activity recognition can be naturally performed.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was partially supported by Appropriative Researching Fund for Professors and Doctors, Guangdong University of Education, under Grant 11ARF04, and Guangdong Provincial Department of Education under Grants 2013LYM_0063 and 2014GXJK161.

References

  1. J. K. Aggarwal and L. Xia, “Human activity recognition from 3D data: a review,” Pattern Recognition Letters, vol. 48, pp. 70–80, 2014.
  2. J. Hernández, R. Cabido, A. S. Montemayor, and J. J. Pantrigo, “Human activity recognition based on kinematic features,” Expert Systems, vol. 31, no. 4, pp. 345–353, 2014.
  3. J. Yin, G. Tian, Z. Feng, and J. Li, “Human activity recognition based on multiple order temporal information,” Computers & Electrical Engineering, vol. 40, no. 5, pp. 1538–1551, 2014.
  4. A. M. Khan, Y.-K. Lee, S. Y. Lee, and T.-S. Kim, “A triaxial accelerometer-based physical-activity recognition via augmented-signal features and a hierarchical recognizer,” IEEE Transactions on Information Technology in Biomedicine, vol. 14, no. 5, pp. 1166–1172, 2010.
  5. D. Trabelsi, S. Mohammed, F. Chamroukhi, L. Oukhellou, and Y. Amirat, “An unsupervised approach for automatic activity recognition based on hidden Markov model regression,” IEEE Transactions on Automation Science and Engineering, vol. 10, no. 3, pp. 829–835, 2013.
  6. W. L. Tang and E. S. Sazonov, “Highly accurate recognition of human postures and activities through classification with rejection,” IEEE Journal of Biomedical and Health Informatics, vol. 18, no. 1, pp. 309–315, 2014.
  7. M.-W. Lee, A. M. Khan, and T.-S. Kim, “A single tri-axial accelerometer-based real-time personal life log system capable of human activity recognition and exercise information generation,” Personal & Ubiquitous Computing, vol. 15, no. 8, pp. 887–898, 2011.
  8. W.-Y. Deng, Q.-H. Zheng, and Z.-M. Wang, “Cross-person activity recognition using reduced kernel extreme learning machine,” Neural Networks, vol. 53, pp. 1–7, 2014.
  9. P. Gupta and T. Dallas, “Feature selection and activity recognition system using a single triaxial accelerometer,” IEEE Transactions on Biomedical Engineering, vol. 61, no. 6, pp. 1780–1786, 2014.
  10. D. P. Tao, L. Jin, Y. Wang, and X. Li, “Rank preserving discriminant analysis for human behavior recognition on wireless sensor networks,” IEEE Transactions on Industrial Informatics, vol. 10, no. 1, pp. 813–823, 2014.
  11. N. Alshurafa, W. Xu, J. J. Liu et al., “Designing a robust activity recognition framework for health and exergaming using wearable sensors,” IEEE Journal of Biomedical and Health Informatics, vol. 18, no. 5, pp. 1636–1646, 2014.
  12. A. Chang, S. Mota, and H. Lieberman, “GestureNet: a common sense approach to physical activity similarity,” in Proceedings of the Conference on Electronic Visualisation and the Arts, London, UK, July 2014.
  13. O. Banos, M. Damas, H. Pomares, F. Rojas, B. Delgado-Marquez, and O. Valenzuela, “Human activity recognition based on a sensor weighting hierarchical classifier,” Soft Computing, vol. 17, no. 2, pp. 333–343, 2013.
  14. J. Cheng, X. Chen, and M. Shen, “A framework for daily activity monitoring and fall detection based on surface electromyography and accelerometer signals,” IEEE Journal of Biomedical and Health Informatics, vol. 17, no. 1, pp. 38–45, 2013.
  15. N. Kern, B. Schiele, and A. Schmidt, “Multi-sensor activity context detection for wearable computing,” in Ambient Intelligence, vol. 2875 of Lecture Notes in Computer Science, pp. 220–232, Springer, Berlin, Germany, 2003.
  16. L. Gao, A. K. Bourke, and J. Nelson, “Sensor positioning for activity recognition using multiple accelerometer-based sensors,” in Proceedings of the 21st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, pp. 425–430, April 2013.
  17. L. Gao, A. K. Bourke, and J. Nelson, “Evaluation of accelerometer based multi-sensor versus single-sensor activity recognition systems,” Medical Engineering and Physics, vol. 36, no. 6, pp. 779–785, 2014.
  18. S. Liu, R. X. Gao, D. John, J. W. Staudenmayer, and P. S. Freedson, “Multisensor data fusion for physical activity assessment,” IEEE Transactions on Biomedical Engineering, vol. 59, no. 3, pp. 687–696, 2012.
  19. J. Wan, M. J. O’Grady, and G. M. P. O’Hare, “Dynamic sensor event segmentation for real-time activity recognition in a smart home context,” Personal & Ubiquitous Computing, vol. 19, no. 2, pp. 287–301, 2015.
  20. Y. Zhan and T. Kuroda, “Wearable sensor-based human activity recognition from environmental background sounds,” Journal of Ambient Intelligence & Humanized Computing, vol. 5, no. 1, pp. 77–89, 2014.
  21. http://en.wikipedia.org/wiki/Physical_exercise.
  22. http://www.nhlbi.nih.gov/health/health-topics/topics/phys.
  23. http://en.wikipedia.org/wiki/Activities_of_daily_living.
  24. Ó. D. Lara and M. A. Labrador, “A survey on human activity recognition using wearable sensors,” IEEE Communications Surveys & Tutorials, vol. 15, no. 3, pp. 1192–1209, 2013.
  25. S. Arora, D. Bhattacharjee, M. Nasipuri, L. Malik, M. Kundu, and D. K. Basu, “Performance comparison of SVM and ANN for handwritten devnagari character recognition,” International Journal of Computer Science Issues, vol. 18, pp. 63–72, 2010.
  26. J. Ren, “ANN vs. SVM: which one performs better in classification of MCCs in mammogram imaging,” Knowledge-Based Systems, vol. 26, pp. 144–153, 2012.
  27. J. S. Raikwal and K. Saxena, “Performance evaluation of SVM and k-nearest neighbor algorithm over medical data set,” International Journal of Computer Applications, vol. 50, no. 14, pp. 12–24, 2012.
  28. M. Eastwood and B. Gabrys, “A non-sequential representation of sequential data for churn prediction,” in Knowledge-Based and Intelligent Information and Engineering Systems, pp. 209–218, Springer, Berlin, Germany, 2009.
  29. Y. Nam and J. W. Park, “Child activity recognition based on cooperative fusion model of a triaxial accelerometer and a barometric pressure sensor,” IEEE Journal of Biomedical and Health Informatics, vol. 17, no. 2, pp. 420–426, 2013.
  30. J. A. Nasiri, N. Moghadam Charkari, and K. Mozafari, “Energy-based model of least squares twin Support Vector Machines for human action recognition,” Signal Processing, vol. 104, pp. 248–257, 2014.
  31. K. Altun, B. Barshan, and O. Tunçel, “Comparative study on classifying human activities with miniature inertial and magnetic sensors,” Pattern Recognition, vol. 43, no. 10, pp. 3605–3620, 2010.
  32. R. Wang, S. Kwong, D. Chen, and J. Cao, “A vector-valued support vector machine model for multiclass problem,” Information Sciences, vol. 235, pp. 174–194, 2013.
  33. N. Zhang and C. Williams, “Water quantity prediction using least squares support vector machines (LSSVM) method,” Journal of Systemics, Cybernetics and Informatics, vol. 2, no. 4, pp. 53–58, 2014.
  34. K. D. Brabanter and P. Karsmakers, “LS-SVMlab Toolbox User's Guide,” 2011, http://www.esat.kuleuven.be/sista/lssvmlab/downloads/tutorialv1_8.pdf.
  35. http://www.motionnode.com/.
  36. M. Zhang and A. A. Sawchuk, “A feature selection-based framework for human activity recognition using wearable multimodal sensors,” in Proceedings of the International Conference on Body Area Networks (BodyNets '11), Beijing, China, November 2011.
  37. M. Zhang and A. A. Sawchuk, “USC-HAD: a daily activity dataset for ubiquitous activity recognition using wearable sensors,” in Proceedings of the ACM International Conference on Ubiquitous Computing (UbiComp '12), International Workshop on Situation, Activity and Goal Awareness, Pittsburgh, Pa, USA, September 2012.
  38. http://sipi.usc.edu/HAD/.
  39. O. D. Incel, M. Kose, and C. Ersoy, “A review and taxonomy of activity recognition on mobile phones,” BioNanoScience, vol. 3, no. 2, pp. 145–171, 2013.
  40. Y. Liang, X. Zhou, Z. Yu, and B. Guo, “Energy-efficient motion related activity recognition on mobile devices for pervasive healthcare,” Mobile Networks and Applications, vol. 19, no. 3, pp. 303–317, 2014.
  41. R. Cross, “Standing, walking, running, and jumping on a force plate,” American Journal of Physics, vol. 67, no. 4, pp. 304–309, 1999.
  42. A. L. Hof, J. P. Van Zandwijk, and M. F. Bobbert, “Mechanics of human triceps surae muscle in walking, running and jumping,” Acta Physiologica Scandinavica, vol. 174, no. 1, pp. 17–30, 2002.
  43. G. Cola, A. Vecchio, and M. Avvenuti, “Improving the performance of fall detection systems through walk recognition,” Journal of Ambient Intelligence & Humanized Computing, vol. 5, no. 6, pp. 843–855, 2014.
  44. H.-I. Wu, B.-L. Li, T. A. Springer, and W. H. Neill, “Modelling animal movement as a persistent random walk in two dimensions: expected magnitude of net displacement,” Ecological Modelling, vol. 132, no. 1-2, pp. 115–124, 2000.
  45. C. Li, M. Lin, L. T. Yang, and C. Ding, “Integrating the enriched feature with machine learning algorithms for human movement and fall detection,” Journal of Supercomputing, vol. 67, no. 3, pp. 854–865, 2014.
  46. N. Zhang, C. Williams, E. Ososanya, and W. Mahmoud, “Streamflow Prediction Based on Least Squares Support Vector Machines,” 2013, http://www.asee.org/documents/sections/middle-atlantic/fall-2013/11-ASEE2013_Final%20Zhang.pdf.
  47. D. Rodriguez-Martin, A. Samà, C. Perez-Lopez, A. Català, J. Cabestany, and A. Rodriguez-Molinero, “SVM-based posture identification with a single waist-located triaxial accelerometer,” Expert Systems with Applications, vol. 40, no. 18, pp. 7203–7211, 2013.
  48. J. P. Varkey, D. Pompili, and T. A. Walls, “Human motion recognition using a wireless sensor-based wearable system,” Personal & Ubiquitous Computing, vol. 16, no. 7, pp. 897–910, 2012.