Computational and Mathematical Methods in Medicine
Volume 2016, Article ID 4073584, 17 pages
http://dx.doi.org/10.1155/2016/4073584
Research Article

Human Activity Recognition in AAL Environments Using Random Projections

1Department of Software Engineering, Kaunas University of Technology, LT-51368 Kaunas, Lithuania
2Institute of Mathematics, Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland

Received 8 February 2016; Revised 29 April 2016; Accepted 19 May 2016

Academic Editor: Ezequiel López-Rubio

Copyright © 2016 Robertas Damaševičius et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Automatic human activity recognition systems aim to capture the state of the user and his or her environment by exploiting heterogeneous sensors attached to the subject's body, and they permit continuous monitoring of numerous physiological signals reflecting the state of human actions. Successful identification of human activities can be immensely useful in healthcare applications for Ambient Assisted Living (AAL) and for automatic and intelligent activity monitoring systems developed for elderly and disabled people. In this paper, we propose a method for activity recognition and subject identification based on random projections from a high-dimensional feature space to a low-dimensional projection space, in which the classes are separated using the Jaccard distance between the probability density functions of the projected data. Two HAR domain tasks are considered: activity identification and subject identification. Experimental results obtained by applying the proposed method to the USC Human Activity Dataset (USC-HAD) are presented.

1. Introduction

The societies of developed countries are rapidly aging. In 2006, almost 500 million people worldwide were 65 years of age or older. By 2030, the total number of aged people is projected to increase to 1 billion. The most rapid increase of the aging population is occurring in the developing countries, which will see a jump of 140% by 2030 [1]. Moreover, the world's population is expected to reach 9.3 billion by 2050 [2], and people above 60 years of age will make up 28% of the population. Dealing with this situation will require huge financial resources to support the ever-increasing cost of living, especially as human life expectancy is expected to reach 81 years by 2100.

As older people may have disorders of body functions or suffer from age-related diseases, the need for smart health assistance systems increases every year. A common method of monitoring geriatric patients is physical observation, which is costly, requires a large staff, and is increasingly infeasible in view of the massive population aging expected in the coming years. Many Ambient Assisted Living (AAL) applications, such as care-providing robots, video surveillance systems, and assistive human-computer interaction technologies, require human activity recognition. While the primary users of AAL systems are of course senior (elderly) people, the concept also applies to mentally and physically impaired people, to people suffering from diabetes and obesity who may need assistance at home, and to people of any age interested in personal fitness monitoring. As a result, sensor-based real-time monitoring systems to support independent living at home have been the subject of many recent research studies in the human activity recognition (HAR) domain [3–10].

Activity recognition can be defined as the process of interpreting sensor data to classify a set of human activities [11]. HAR is a rapidly growing area of research that can provide valuable information on the health, wellbeing, and fitness of monitored persons outside a hospital setting. Daily activity recognition using wearable technology plays a central role in the field of pervasive healthcare [12]. HAR has gained increased attention in the last decade due to the arrival of affordable and minimally invasive mobile sensing platforms such as smartphones. Smartphones are attractive platforms for HAR because of the availability of different wireless interfaces, their unobtrusiveness and ease of use, their high computing power and storage, and the availability of sensors, such as the accelerometer, compass, and gyroscope, which meet the technical and practical hardware requirements for HAR tasks [13–15]. Moreover, technological development possibilities for other applications, including virtual reality systems, are still arising. Smartphones therefore present a great opportunity for the development of innovative technology dedicated to AAL systems.

One of the key motivating factors for using mobile phone-based human activity recognition in AAL systems is the relationship and correlation between the level of physical activity and the level of wellbeing of a person. Recording and analysing precise information on a person's activities is beneficial for tracking the progress and status of a disease (or mental condition), can potentially improve the treatment of the person's conditions and diseases, and can decrease the cost of care. Recognizing indoor and outdoor activities such as walking, running, or cycling can be useful for providing feedback to the caregiver about the patient's behaviour. By following the daily habits and routines of users, one can easily identify deviations from those routines, which can assist doctors in diagnosing conditions that would not be observed during a routine medical examination. Another key enabler of HAR technology is the possibility of providing independent living for the elderly as well as for patients with dementia and other mental pathologies, who could be monitored to prevent undesirable consequences of abnormal activities. Furthermore, by using persuasive techniques and gamification, HAR systems can be designed to interact with users to change their behaviour and lifestyles towards more active and healthier ones [16].

Recently, various intelligent systems based on mobile technologies have been constructed. HAR using smartphones or other types of portable or wearable sensor platforms has been used for assessing movement quality after stroke [17], such as upper extremity motion [18], for assessing gait characteristics of human locomotion for rehabilitation and diagnosis of medical conditions [19], for postoperative mobilization [20], for detecting Parkinson’s disease, back pain, and hemiparesis [21], for cardiac rehabilitation [22], for physical therapy, for example, if a user is correctly doing the exercises recommended by a physician [23, 24], for detecting abnormal activities arising due to memory loss for dementia care [25, 26], for dealing with Alzheimer’s [27] and neurodegenerative diseases such as epilepsy [28], for assessment of physical activity for children and adolescents suffering from hyperlipidaemia, hypertension, cardiovascular disease, and type 2 diabetes [29], for detecting falls [30, 31], for addressing physical inactivity when dealing with obesity [32], for analysing sleeping patterns [33], for estimating energy expenditures of a person to assess his/her healthy daily lifestyle [34], and for recognizing the user’s intent in the domain of rehabilitation engineering such as smart walking support systems to assist motor-impaired persons and the elderly [35].

In this paper, we propose a new method for offline recognition of daily human activities, based on feature dimensionality reduction using random projections [36] onto a low-dimensional feature space and on using the Jaccard distance between kernel density estimates as a decision function for classification of human activities.

The structure of the remaining parts of the paper is as follows. Section 2 presents the overview of related work in the smartphone-based HAR domain with a particular emphasis on the features extracted from the sensor data. Section 3 describes the proposed method. Section 4 evaluates and discusses the results. Finally, Section 5 presents the conclusions and discusses future work.

2. Overview of HAR Features and Related Work

All tasks of the HAR domain require correct identification of human activities from sensor data, which, in turn, requires that the features derived from sensor data be properly categorized and described. Next, we present an overview of features used in the HAR domain.

2.1. Features

While numerous features can be extracted from physical activity signals, increasing the number of features does not necessarily increase classification accuracy, since the features may be redundant or may not be class-specific:

(i) Time domain features (such as mean, median, variance, standard deviation, minimum, maximum, and root mean square, applied to the amplitude and time dimensions of a signal) are typically used in many practical HAR systems because they are computationally inexpensive; thus, they can be easily extracted in real time (see the sketch after this list).

(ii) Frequency-domain features require a higher computational cost to distinguish between different human activities. Thus, they may not be suitable for real-time AAL applications.

(iii) Physical features are derived from a fundamental understanding of how a certain human movement would produce a specific sensor signal. Physical features are usually extracted from multiple sensor axes, based on the physical parameters of human movements.
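To make the first category concrete, here is a minimal sketch of computing standard time-domain features over a fixed-length window of one sensor axis (the 100-sample window and the synthetic data are illustrative assumptions, not the paper's exact configuration):

```python
import numpy as np

def time_domain_features(window: np.ndarray) -> dict:
    """Compute common time-domain features for one sensor axis.

    window: 1-D array of samples (e.g., 100 accelerometer readings).
    """
    return {
        "mean": np.mean(window),
        "median": np.median(window),
        "variance": np.var(window),
        "std": np.std(window),
        "min": np.min(window),
        "max": np.max(window),
        "rms": np.sqrt(np.mean(window ** 2)),  # root mean square
    }

# Example: features of a 100-sample window of synthetic acceleration data.
rng = np.random.default_rng(0)
acc_x = rng.normal(loc=9.8, scale=0.5, size=100)
print(time_domain_features(acc_x))
```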

Based on an extensive analysis of the literature and the features used by other authors (especially by Capela et al. [17], Mathie et al. [37], and Zhang and Sawchuk [38]), we have extracted 99 features from the data, which are detailed in Table 1.

Table 1: Catalogue of features.
2.2. Feature Selection

Feature selection is the process of selecting a subset of relevant features for use in constructing the classification model. Successful selection of features simplifies the models to make them easier to interpret, decreases model training times, and helps one better understand the differences between classes. Feature selection allows removing redundant or irrelevant features without an adverse effect on classification accuracy. There are four basic steps in a typical feature selection method [58]: generation of a candidate feature subset, an evaluation function for the candidate feature subset, a generation stopping criterion, and a validation procedure.

Further, we analyse several feature selection methods used in the HAR domain.

ReliefF [59] is a commonly used filter method that ranks features by weighting them based on their relevance. Feature relevance is based on how well data instances are separated. For each data instance, the algorithm finds the nearest data point from the same class (hit) and nearest data points from different classes (misses).
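As a sketch of the idea (a basic binary Relief step with a single nearest hit and miss; the full ReliefF of [59] averages over k neighbours and handles multiple classes), the weight update might be implemented as follows:

```python
import numpy as np

def relief_weights(X: np.ndarray, y: np.ndarray, n_iter: int = 100) -> np.ndarray:
    """Basic Relief feature weighting for a binary classification problem.

    X: (n_samples, n_features) feature matrix, scaled to [0, 1].
    y: (n_samples,) binary labels.
    """
    rng = np.random.default_rng(0)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        dist = np.abs(X - X[i]).sum(axis=1)
        dist[i] = np.inf                       # exclude the instance itself
        same, diff = y == y[i], y != y[i]
        hit = np.where(same)[0][np.argmin(dist[same])]    # nearest same-class point
        miss = np.where(diff)[0][np.argmin(dist[diff])]   # nearest other-class point
        # Features that differ on the hit are penalized; on the miss, rewarded.
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / n_iter
    return w
```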

Matlab’s Rankfeatures ranks features by a given class separability criterion. Class separability measures include the absolute value of the statistic of a two-sample t-test, the Kullback-Leibler distance, the minimum attainable classification error, the area between the empirical Receiver Operating Characteristic (ROC) curve and the random classifier slope, and the absolute value of the u-statistic of a two-sample unpaired Wilcoxon test. The measures are based on distributional characteristics of the classes (e.g., mean, variance) for a feature.

Principal component analysis (PCA) is the simplest method to reduce data dimensionality. The reduced dimensional data can be used directly as features for classification. Given a set of features, PCA produces new variables (principal components) as linear combinations of the original features, each component having the highest variance possible in the subspace orthogonal to the preceding components. As most of the variability of the data can often be captured by a relatively small number of principal components, PCA can achieve a high level of dimensionality reduction. Several extensions of the PCA method are known, such as kernel PCA, sparse PCA, and multilinear PCA.
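For illustration, reducing a feature matrix with scikit-learn's PCA might look as follows (a sketch; the synthetic matrix and the number of retained components are illustrative assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA

# X: (n_windows, 99) matrix of extracted features (synthetic stand-in here).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 99))

pca = PCA(n_components=10)      # keep the 10 highest-variance components
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)                        # (500, 10)
print(pca.explained_variance_ratio_.sum())    # fraction of variance retained
```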

Correlation-based Feature Selection (CFS) [60] is a filter algorithm that ranks subsets of features by a correlation-based heuristic evaluation function. A feature is considered good if it is relevant to the target concept but not redundant with any of the other relevant features. Goodness is expressed as a correlation between features, and CFS chooses the subset of features with the highest measure. The chosen subset holds the property that the features inside it have a high correlation with the class and are uncorrelated with each other.

Table 2 summarizes the feature selection/dimensionality reduction methods in HAR.

Table 2: Summary of feature selection/dimensionality reduction methods in HAR.

A comprehensive review of feature selection algorithms in general as well as in the HAR domain can be found in [58, 6163].

2.3. Summary

Related work in the HAR domain is summarized in Table 3. For each paper, the activities analysed, the types of sensor data used, the features extracted, the classification method applied, and the accuracy achieved (as reported by the referenced papers) are listed.

Table 3: Summary of related works in the HAR domain.

3. Method

3.1. General Scheme

The typical steps for activity recognition are preprocessing, segmentation, feature extraction, dimensionality reduction (feature selection), and classification [24]. The main steps of activity recognition thus include (a) preprocessing of sensor data (e.g., denoising), (b) feature extraction, (c) dimensionality reduction, and (d) classification. The preprocessing step includes noise removal and representation of the raw data. The feature extraction step reduces large input sensor data to a smaller set of features (a feature vector) that preserves the information contained in the original data. The dimensionality reduction step can be applied to remove irrelevant (or less relevant) features, reducing the computational complexity and increasing the performance of the activity recognition process. The classification step maps the feature set to a set of activities.

In this paper, we do not focus on data preprocessing and feature extraction but rather on dimensionality reduction and classification steps, since these two are crucial for further efficiency of AAL systems. The proposed method for human activity recognition is based on feature dimensionality reduction using random projections [36] and classification using kernel density function estimate as a decision function (Figure 1).

Figure 1: General scheme of the proposed method.
3.2. Description of the Method

During random projection, the original $d$-dimensional data is projected onto a $k$-dimensional ($k \ll d$) subspace using a random $k \times d$ matrix $R$. The projection of the data onto the lower $k$-dimensional subspace is $X^{\mathrm{RP}}_{k \times N} = R_{k \times d} X_{d \times N}$, where $X_{d \times N}$ is the original set of $N$ $d$-dimensional observations. In the derived projection, the distances between the points are approximately preserved if points in a vector space are projected onto a randomly selected subspace of suitably high dimension (see the Johnson-Lindenstrauss lemma [64]). The random matrix $R$ is selected as proposed by Achlioptas [36] as follows:

$$r_{ij} = \sqrt{3} \cdot \begin{cases} +1 & \text{with probability } 1/6, \\ 0 & \text{with probability } 2/3, \\ -1 & \text{with probability } 1/6. \end{cases}$$
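A minimal NumPy sketch of constructing this matrix and projecting the data (the dimensions chosen here are illustrative):

```python
import numpy as np

def achlioptas_matrix(k: int, d: int, seed: int = 0) -> np.ndarray:
    """Sparse random projection matrix of Achlioptas [36]: entries are
    sqrt(3) * {+1, 0, -1} with probabilities 1/6, 2/3, 1/6."""
    rng = np.random.default_rng(seed)
    entries = rng.choice([1.0, 0.0, -1.0], size=(k, d), p=[1 / 6, 2 / 3, 1 / 6])
    return np.sqrt(3) * entries

d, k, n = 99, 2, 500           # 99 features projected to 2 dimensions
X = np.random.default_rng(1).normal(size=(d, n))   # d x N data matrix
R = achlioptas_matrix(k, d)
X_rp = R @ X                   # k x N projected data
print(X_rp.shape)              # (2, 500)
```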

Given the low dimensionality of the target space, we can treat the projections of the observations onto each dimension as a set of random variables for which the probability density function (PDF) can be estimated using the kernel density estimation (KDE, or Parzen window) method [65].

If $x_1, x_2, \ldots, x_n$ is a sample of a random variable, then the kernel density approximation of its probability density function is

$$\hat{f}_h(x) = \frac{1}{nh} \sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right),$$

where $K$ is some kernel and $h$ is the bandwidth (smoothing parameter). Here $K$ is taken to be a standard Gaussian function with mean zero and variance 1:

$$K(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}.$$

For the two-dimensional case, the bivariate probability density function is calculated as a product of the univariate probability density functions:

$$\hat{f}(x, y) = \hat{f}_h(x) \cdot \hat{f}_h(y),$$

where $x$ and $y$ are the data in each dimension, respectively.
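A sketch of these density estimates using SciPy's gaussian_kde (the data are synthetic, and the bandwidth is SciPy's default rather than necessarily what the authors used):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Projected 2-D data for one activity class (synthetic stand-in).
rng = np.random.default_rng(0)
x, y = rng.normal(0, 1, 200), rng.normal(2, 0.5, 200)

f_x = gaussian_kde(x)          # univariate KDE with a Gaussian kernel
f_y = gaussian_kde(y)

def f_xy(px, py):
    """Bivariate density as the product of the univariate estimates."""
    return f_x(px) * f_y(py)

print(f_xy(np.array([0.0]), np.array([2.0])))
```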

However, each random projection produces a different mapping of the original data points, which reveals only a part of the data manifold in the higher-dimensional space. In the case of a binary classification problem, we are interested in the mapping that best separates the data points belonging to the two classes.

As a criterion for evaluating a mapping, we use the Jaccard distance between the two probability density estimates of the data points representing each class. The advantage of the Jaccard distance over other distance measures, such as the Kullback-Leibler (KL) divergence and the Hellinger distance, is its adaptability to multidimensional spaces in which the compared points show relations to different subsets. It is therefore well suited to the model of human activity features developed here, where, as described in the previous section, the features are divided into several sets of actions. Furthermore, the computational complexity of the Hellinger distance is very high, while the KL divergence might be unbounded.

The Jaccard distance, which measures dissimilarity between sample sets, is obtained by subtracting the Jaccard coefficient from 1 or, equivalently, by dividing the difference of the sizes of the union and the intersection of two sets by the size of the union:

$$d_J(A, B) = 1 - J(A, B) = \frac{|A \cup B| - |A \cap B|}{|A \cup B|}.$$
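Applied to two density estimates rather than discrete sets, one plausible discretization (our reading, not spelled out in the text) evaluates both PDFs on a common grid and treats the pointwise minima and maxima as intersection and union:

```python
import numpy as np
from scipy.stats import gaussian_kde

def jaccard_distance(sample_a: np.ndarray, sample_b: np.ndarray,
                     grid_size: int = 512) -> float:
    """Jaccard distance between the KDEs of two 1-D samples,
    discretized on a shared grid."""
    lo = min(sample_a.min(), sample_b.min())
    hi = max(sample_a.max(), sample_b.max())
    grid = np.linspace(lo, hi, grid_size)
    p, q = gaussian_kde(sample_a)(grid), gaussian_kde(sample_b)(grid)
    intersection = np.minimum(p, q).sum()   # overlapping area of the two PDFs
    union = np.maximum(p, q).sum()
    return 1.0 - intersection / union

rng = np.random.default_rng(0)
a, b = rng.normal(0, 1, 300), rng.normal(3, 1, 300)
print(jaccard_distance(a, b))   # close to 1 for well-separated classes
```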

In the proposed model, the best random projection with the smallest overlapping area is selected (see an example in Figure 2).

Figure 2: Graphical illustration of good separation versus bad separation of kernel density estimation functions (Subject 1, Trial 1, Walking Forward versus Walking Upstairs; 2nd dimension).

To explore the performance and correlation among features visually, a series of scatter plots in a 2D feature space is shown in Figure 3. The horizontal and vertical axes represent two different features. The points in different colours represent different human activities.

Figure 3: Example of classification: walking versus running (Subject 1, Trial 1) classes randomly projected in a bidimensional feature subspace.

In the case of multiple classes, the method works as a one-class classifier: it recognizes instances of a positive class, while all instances of the other classes are treated as outliers of the positive class.

3.3. Algorithm

The pseudocode of the algorithms for finding the best projection and using it for classification in low-dimensional space is presented in Pseudocodes 1 and 2, respectively.

Pseudocode 1: Pseudocode of FindBestProjection.
Pseudocode 2: Pseudocode of binary classification.
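The pseudocodes themselves appear in the corresponding figures; as a rough Python reconstruction from the description above (reusing achlioptas_matrix and jaccard_distance from the earlier sketches; the trial count and the per-dimension averaging are our assumptions), FindBestProjection might look like this:

```python
import numpy as np

def find_best_projection(X_pos, X_neg, k=2, n_trials=100):
    """Search for the random projection that best separates two classes.

    X_pos, X_neg: (d, n) matrices of d-dimensional observations per class.
    Returns the projection matrix with the largest mean Jaccard distance
    between the per-dimension density estimates of the two classes,
    i.e., the projection with the smallest overlapping area.
    """
    d = X_pos.shape[0]
    best_R, best_dist = None, -np.inf
    for trial in range(n_trials):
        R = achlioptas_matrix(k, d, seed=trial)   # from the earlier sketch
        P, Q = R @ X_pos, R @ X_neg               # k x n projected classes
        dist = np.mean([jaccard_distance(P[i], Q[i]) for i in range(k)])
        if dist > best_dist:
            best_R, best_dist = R, dist
    return best_R, best_dist
```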

4. Experiments

4.1. Dataset

To evaluate the performance of the proposed approach for HAR from smartphone data, we used part of the USC Human Activity Dataset (USC-HAD) [38], recorded using the MotionNode device (sampling rate: 100 Hz; 3-axis accelerometer range: ±6 g; 3-axis gyroscope range: ±500 dps). The dataset consists of recordings of 14 subjects (7 male, 7 female; ages 21–49) performing 12 activities, with 5 trials each. During data acquisition, the MotionNode was attached to the front right hip of the subjects.

The recorded low-level activities are as follows: Walking Forward (WF), Walking Left (WL), Walking Right (WR), Walking Upstairs (WU), Walking Downstairs (WD), Running Forward (RF), Jumping Up (JU), Sitting (Si), Standing (St), Sleeping (Sl), Elevator Up (EU), and Elevator Down (ED). Each record consists of the following attributes: date, subject number, age, height, weight, activity name, activity number, trial number, sensor location, orientation, and readings. The sensor readings consist of 6 channels: acceleration along the x-, y-, and z-axes and gyroscope readings along the x-, y-, and z-axes. Each trial was performed on different days at various indoor and outdoor locations.

4.2. Results

In Table 4, we describe the three best features from Table 1 (see column Feature number) ranked by Matlab's Rankfeatures function using the entropy criterion.

Table 4: Top features for binary classification of human activities.

The results of feature ranking presented in Table 5 can be summarized as follows:

(i) For Walking Forward, Walking Left, and Walking Right, the important features are moving variance of acceleration and gyroscope data, movement intensity of gyroscope data, moving variance of movement intensity of acceleration data, first eigenvalue of moving covariance between acceleration data, and polar angle of moving cumulative sum of gyroscope data.

(ii) For Walking Upstairs and Walking Downstairs, moving variance of the gyroscope signal along a single axis, movement intensity of gyroscope data, and moving variance of movement intensity are the most important.

(iii) For Running Forward, moving variance of 100 samples of acceleration along a single axis, moving variance of 100 samples of the gyroscope signal along a single axis, and moving energy of acceleration are distinguishing features.

(iv) For Jumping Up, the most important features are moving variance of acceleration, moving variance of movement intensity, and moving energy of acceleration.

(v) For Sitting, movement intensity of gyroscope data and movement intensity of the difference between acceleration and gyroscope data are the most important.

(vi) For Standing, moving variance of movement intensity of acceleration data, moving variance of acceleration along a single axis, and first eigenvalue of moving covariance of the difference between acceleration and gyroscope data are the most distinctive.

(vii) For Sleeping, the most prominent features are first eigenvalue of moving covariance between acceleration data and moving variance of movement intensity of acceleration data.

(viii) For Elevator Up and Elevator Down, the most commonly selected feature is moving variance of a single gyroscope axis. Other prominent features are first eigenvalue of moving covariance of the difference between acceleration and gyroscope data and moving energy of a single gyroscope axis.

Table 5: The confusion matrix of within-subject activity classification using Rankfeatures.

These results are consistent with what can be expected from a physical analysis of the human motions in the analysed dataset.

The evaluation of HAR classification algorithms is usually made through statistical analysis of the models using the available experimental data. The most common tool is the confusion matrix, which represents algorithm performance by clearly identifying the types of errors (false positives and false negatives) and the correctly predicted samples over the test data (a sketch of how the per-class metrics reported below are derived from it follows).
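For reference, here is a minimal sketch of reading the one-versus-rest metrics off a confusion matrix (the 3×3 matrix is synthetic, purely for illustration):

```python
import numpy as np

C = np.array([[50, 3, 2],      # rows: true class, columns: predicted class
              [4, 45, 6],
              [1, 5, 49]])

for c in range(C.shape[0]):
    tp = C[c, c]
    fn = C[c].sum() - tp        # true class c, predicted as something else
    fp = C[:, c].sum() - tp     # other classes predicted as c
    tn = C.sum() - tp - fn - fp
    accuracy = (tp + tn) / C.sum()
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)     # also called sensitivity
    specificity = tn / (tn + fp)
    f_score = 2 * precision * recall / (precision + recall)
    print(c, accuracy, precision, recall, specificity, f_score)
```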

The confusion matrix for within-subject activity recognition using Matlab's Rankfeatures is detailed in Table 5. The classification was performed using 5-fold cross-validation, with 80% of the data used for training and 20% for testing. Grand mean accuracy is 0.9552; grand mean precision is 0.9670; grand mean sensitivity is 0.9482; grand mean specificity is 0.9569; grand mean recall is 0.9482; grand mean F-score is 0.9482. The baseline accuracy was calculated using only the top 2 features selected by Rankfeatures, but without using random projections. The results show that features derived using random projections are significantly better than features derived using a common feature selection algorithm.

To take a closer look at the classification result, Table 5 shows the confusion table for classification of activities. The overall averaged recognition accuracy across all activities is 95.52%, with 11 out of 12 activities having accuracy values higher than 90%. If we examine the recognition performance for each activity individually, Running Forward, Jumping Up, and Sleeping have very high accuracy values. For Running Forward, an accuracy of 99.0% is achieved. Interestingly, low accuracy is observed for the Elevator Up activity, only 84.0%, which is most often misclassified as Sitting or Standing. Elevator Down is misclassified as Elevator Up (only 69.7% accuracy). This result makes sense, since sitting on a chair, standing, and standing in a moving elevator are static activities, and we expect difficulty in differentiating between static activities. There is also some misclassification when deciding on the specific direction of an activity; for example, Walking Left is confused with Walking Forward (77.4% accuracy) and Walking Upstairs (87.4% accuracy). Walking Upstairs is also confused with Walking Right (79.8% accuracy) and Walking Downstairs (70.8% accuracy). This is due to the similarity of all walk-related activities.

For comparison, the confusion matrix for within-subject activity recognition obtained using the proposed method with ReliefF feature selection is detailed in Table 6. The classification was performed using 5-fold cross-validation, with 80% of the data used for training and 20% for testing. Grand mean accuracy is 0.932; grand mean precision is 0.944; grand mean sensitivity is 0.939; grand mean specificity is 0.933; grand mean recall is 0.939; grand mean F-score is 0.922.

Table 6: The confusion matrix of within-subject activity classification using ReliefF.

The baseline accuracy was calculated using only the top 2 features selected using ReliefF, but without using random projections. Again, the results show that features derived using random projections are significantly better than features derived using the ReliefF method only.

Surprisingly, although the classification accuracies for specific activities differed, the mean accuracy results are quite similar (but still worse when grand mean values are considered). The features identified using ReliefF feature selection were better at separating Walking Forward from Walking Left and Standing from Elevator Up but proved worse at separating other activities, such as Sitting from Standing.

For subject identification, the data from all physical actions is used to train the classifier. Here, we consider the one-versus-all subject identification problem: the data of one subject is defined as the positive class, and the data of all other subjects is defined as the negative class. In this case, 5-fold cross-validation was also performed, using 80% of the data for training and 20% for testing. The results of one-versus-all subject identification using all activities for training and testing are presented in Table 7. While the results are not very good, they are still better than random baselines: grand mean accuracy is 0.477; precision is 0.125; recall is 0.832; and F-score is 0.210.

Table 7: Results of one-versus-all subject identification (all activities).

If the activity of a subject has been established, separate classifiers for each activity can be used for subject identification. In this case, 5-fold cross-validation was also performed, using 80% of the data for training and 20% for testing, and the results are presented in Table 8. The grand mean accuracy is 0.720, which is better than the random baseline. However, if we consider only the top three walking-related activities (Walking Forward, Walking Left, or Walking Right), the mean accuracy is 0.944.

Table 8: Results of one-versus-all subject identification for specific activities.

Finally, we can simplify the classification problem to binary classification (i.e., recognizing one subject against another). This simplification can be motivated by the assumption that only a few people live in an AAL home (far fewer than the 14 subjects in the analysed dataset). The data from a pair of subjects performing a specific activity is then used for training and classification. Separate classifiers are built for each pair of subjects, the results are evaluated using 5-fold cross-validation, and the results are averaged. The results are presented in Table 9. Note that the grand mean accuracy has increased to 0.947, while, for the top three walking-related activities (Walking Forward, Walking Left, or Walking Right), the grand mean accuracy is 0.992.

Table 9: Accuracy of binary subject identification using separate activities.

5. Evaluation and Discussion

Random projections have been used in the HAR domain for data dimensionality reduction in activity recognition from noisy videos [69], feature compression for head pose estimation [70], and feature selection for activity motif discovery [71]. The advantages of random projections are the simplicity of their implementation, their scalability, their robustness to noise, and their low computational complexity: constructing the random matrix $R$ and projecting the $d \times N$ data matrix into $k$ dimensions are of order $O(dkN)$.

The USC-HAD dataset has been used in HAR research by other authors, too. Using the same dataset, Zheng [66] achieved 95.6% accuracy. He used the means and variances of the magnitude and angles produced by the triaxial acceleration vector as activity features, and the Least Squares Support Vector Machine (LS-SVM) and Naïve Bayes (NB) algorithms to distinguish different activity classes. Sivakumar [67] achieved 84.3% overall accuracy using symbolic approximation of the time series of the accelerometer and gyroscope signals. Vaka [68] achieved 90.7% accuracy for within-person classification and 88.6% accuracy for interperson classification using Random Forest; the features used for recognition were time domain features: mean, standard deviation, pairwise correlations between the acceleration axes, and root mean square of the signal. Our results (95.52% accuracy), obtained using the proposed method, are very similar to the best results of Zheng for the activity recognition task.

The results obtained by different authors using the USC-HAD dataset are summarized in Table 10.

Table 10: Summary of HAR results using USC-HAD dataset.

We think that it would be difficult to achieve even higher results due to some problems with the analysed dataset, which include a set of problems inherent to many human activity datasets:

(i) Accurate Labelling of All Activities. Existing activity recognition algorithms are usually based on supervised learning, where the training data depends upon accurate labelling of all human activities. Collecting consistent and reliable data is a very difficult task, since some activities may have been marked by users with wrong labels.

(ii) Transitionary/Overlapping Activities. People often do several activities at the same time. The transition states (such as walking-standing, lying-standing) can be treated as additional states, and the recognition model can be trained with respect to these states to increase the accuracy.

(iii) Context Problem. It occurs when the sensors are placed at an inappropriate position relative to the activity being measured. For example, with accelerometer-based HAR, the location where the phone is carried, such as in the pocket or in the bag, impacts the classification performance.

(iv) Subject Sensitivity. It measures the dependency of the trained classification model upon the specifics of the user.

(v) Weak Link between Basic Activities and More Complex Activities. For example, it is rather straightforward to detect whether the user is running, but inferring whether the user is running away from danger or jogging in a park is a different matter.

(vi) Spurious Data. Most published studies handle the problem of fuzzy borders between activities by manual data cropping.

6. Conclusion

Monitoring and recognizing human activities are important for assessing changes in the physical and behavioural profiles of the population over time, particularly for the elderly, the impaired, and patients with chronic diseases. Although a wide variety of sensors are being used in various devices for activity monitoring, the positioning of the sensors, the selection of relevant features for different activity groups, and providing context to sensor measurements still pose significant research challenges.

In this paper, we have reviewed the stages needed to implement a human activity recognition method for automatic classification of human physical activity from on-body sensors. A major contribution of the paper lies in pursuing the random projections based approach for feature dimensionality reduction. The results of extensive testing performed on the USC-HAD dataset (we have achieved an overall within-person classification accuracy of 95.52% and an interperson identification accuracy of 94.75%) reveal the advantages of the proposed approach. Gait-related activities (Walking Forward, Walking Left, and Walking Right) allowed the best identification of subjects, opening the way for a multitude of applications in the area of gait-based identification and verification.

Future work will concern the validation of the proposed method using other datasets of human activity data as well as integration of the proposed method in the wearable sensor system we are currently developing for applications in indoor human monitoring.

Competing Interests

The authors declare that there are no competing interests regarding the publication of this paper.

Acknowledgments

The authors would like to acknowledge the contribution of the COST Action IC1303 AAPELE: Architectures, Algorithms and Platforms for Enhanced Living Environments.

References

  1. U. S. State Department and National Institute on Aging (NIA), Why Population Aging Matters: A Global Perspective, 2007.
  2. Department of Economic and Social Affairs and Population Division, World Population to 2300, United Nations, New York, NY, USA, 2004.
  3. P. Turaga, R. Chellappa, V. S. Subrahmanian, and O. Udrea, “Machine recognition of human activities: a survey,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 18, no. 11, pp. 1473–1488, 2008.
  4. R. Poppe, “A survey on vision-based human action recognition,” Image and Vision Computing, vol. 28, no. 6, pp. 976–990, 2010.
  5. J. K. Aggarwal and M. S. Ryoo, “Human activity analysis: a review,” ACM Computing Surveys, vol. 43, no. 3, article 16, 2011.
  6. S.-R. Ke, H. L. U. Thuc, Y.-J. Lee, J.-N. Hwang, J.-H. Yoo, and K.-H. Choi, “A review on video-based human activity recognition,” Computers, vol. 2, no. 2, pp. 88–131, 2013.
  7. O. C. Ann, “Human activity recognition: a review,” in Proceedings of the IEEE International Conference on Control System, Computing and Engineering (ICCSCE '14), pp. 389–393, Batu Ferringhi, Malaysia, November 2014.
  8. J. K. Aggarwal and L. Xia, “Human activity recognition from 3D data: a review,” Pattern Recognition Letters, vol. 48, pp. 70–80, 2014.
  9. M. Vrigkas, C. Nikou, and I. A. Kakadiaris, “A review of human activity recognition methods,” Frontiers in Robotics and AI, vol. 2, article 28, 2015.
  10. M. Ziaeefard and R. Bergevin, “Semantic human activity recognition: a literature review,” Pattern Recognition, vol. 48, no. 8, pp. 2329–2345, 2015.
  11. O. D. Incel, M. Kose, and C. Ersoy, “A review and taxonomy of activity recognition on mobile phones,” BioNanoScience, vol. 3, no. 2, pp. 145–171, 2013.
  12. V. Osmani, S. Balasubramaniam, and D. Botvich, “Human activity recognition in pervasive health-care: supporting efficient remote collaboration,” Journal of Network and Computer Applications, vol. 31, no. 4, pp. 628–655, 2008.
  13. Ó. D. Lara and M. A. Labrador, “A survey on human activity recognition using wearable sensors,” IEEE Communications Surveys and Tutorials, vol. 15, no. 3, pp. 1192–1209, 2013.
  14. M. Shoaib, S. Bosch, O. D. Incel, H. Scholten, and P. J. M. Havinga, “A survey of online activity recognition using mobile phones,” Sensors, vol. 15, no. 1, pp. 2059–2085, 2015.
  15. J. L. R. Ortiz, Smartphone-Based Human Activity Recognition, Springer Theses, 2015.
  16. S. Purpura, V. Schwanda, K. Williams, W. Stubler, and P. Sengers, “Fit4Life: the design of a persuasive technology promoting healthy behavior and ideal weight,” in Proceedings of the 29th Annual CHI Conference on Human Factors in Computing Systems (CHI '11), pp. 423–432, ACM, New York, NY, USA, May 2011.
  17. N. A. Capela, E. D. Lemaire, and N. Baddour, “Feature selection for wearable smartphone-based human activity recognition with able bodied, elderly, and stroke patients,” PLoS ONE, vol. 10, no. 4, Article ID e0124414, 2015.
  18. S. Patel, R. Hughes, T. Hester et al., “A novel approach to monitor rehabilitation outcomes in stroke survivors using wearable technology,” Proceedings of the IEEE, vol. 98, no. 3, pp. 450–461, 2010.
  19. W. Tao, T. Liu, R. Zheng, and H. Feng, “Gait analysis using wearable sensors,” Sensors, vol. 12, no. 2, pp. 2255–2283, 2012.
  20. G. Appelboom, B. E. Taylor, E. Bruce et al., “Mobile phone-connected wearable motion sensors to assess postoperative mobilization,” JMIR mHealth and uHealth, vol. 3, no. 3, article e78, 2015.
  21. V. H. Cheung, L. Gray, and M. Karunanithi, “Review of accelerometry for determining daily activity among elderly patients,” Archives of Physical Medicine and Rehabilitation, vol. 92, no. 6, pp. 998–1014, 2011.
  22. N. Bidargaddi, A. Sarela, L. Klingbeil, and M. Karunanithi, “Detecting walking activity in cardiac rehabilitation by using accelerometer,” in Proceedings of the International Conference on Intelligent Sensors, Sensor Networks and Information (ISSNIP '07), pp. 555–560, December 2007.
  23. L. Bao and S. S. Intille, “Activity recognition from user-annotated acceleration data,” in Pervasive Computing, A. Ferscha and F. Mattern, Eds., vol. 3001 of Lecture Notes in Computer Science, pp. 1–17, Springer, Berlin, Germany, 2004.
  24. A. Avci, S. Bosch, M. Marin-Perianu, R. Marin-Perianu, and P. Havinga, “Activity recognition using inertial sensing for healthcare, wellbeing and sports applications: a survey,” in Proceedings of the 23rd International Conference on Architecture of Computing Systems (ARCS '10), pp. 1–10, Hannover, Germany, February 2010.
  25. K. S. Gayathri, S. Elias, and B. Ravindran, “Hierarchical activity recognition for dementia care using Markov Logic Network,” Personal and Ubiquitous Computing, vol. 19, no. 2, pp. 271–285, 2015.
  26. C. Phua, P. C. Roy, H. Aloulou et al., “State-of-the-art assistive technology for people with dementia,” in Handbook of Research on Ambient Intelligence and Smart Environments: Trends and Perspectives, N.-Y. Chong and F. Mastrogiovanni, Eds., chapter 16, pp. 300–319, IGI Global, 2011.
  27. P. C. Roy, S. Giroux, B. Bouchard et al., “A possibilistic approach for activity recognition in smart homes for cognitive assistance to Alzheimer's patients,” in Activity Recognition in Pervasive Intelligent Environments, vol. 4 of Atlantis Ambient and Pervasive Intelligence, pp. 33–58, Springer, Berlin, Germany, 2011.
  28. A. Hildeman, Classification of epileptic seizures using accelerometers [Ph.D. dissertation], Chalmers University of Technology, Gothenburg, Sweden, 2011.
  29. M. R. Puyau, A. L. Adolph, F. A. Vohra, I. Zakeri, and N. F. Butte, “Prediction of activity energy expenditure using accelerometers in children,” Medicine and Science in Sports and Exercise, vol. 36, no. 9, pp. 1625–1631, 2004.
  30. P. Gupta and T. Dallas, “Feature selection and activity recognition system using a single triaxial accelerometer,” IEEE Transactions on Biomedical Engineering, vol. 61, no. 6, pp. 1780–1786, 2014.
  31. A. T. Özdemir and B. Barshan, “Detecting falls with wearable sensors using machine learning techniques,” Sensors, vol. 14, no. 6, pp. 10691–10708, 2014.
  32. M. Arif and A. Kattan, “Physical activities monitoring using wearable acceleration sensors attached to the body,” PLoS ONE, vol. 10, no. 7, Article ID e0130851, 2015.
  33. G. G. Alvarez and N. T. Ayas, “The impact of daily sleep duration on health: a review of the literature,” Progress in Cardiovascular Nursing, vol. 19, no. 2, pp. 56–59, 2004.
  34. S. E. Crouter, J. R. Churilla, and D. R. Bassett Jr., “Estimating energy expenditure using accelerometers,” European Journal of Applied Physiology, vol. 98, no. 6, pp. 601–612, 2006.
  35. H. Yu, M. Spenko, and S. Dubowsky, “An adaptive shared control system for an intelligent mobility aid for the elderly,” Autonomous Robots, vol. 15, no. 1, pp. 53–66, 2003.
  36. D. Achlioptas, “Database-friendly random projections,” in Proceedings of the 20th ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems (PODS '01), pp. 274–281, Santa Barbara, Calif, USA, May 2001.
  37. M. J. Mathie, B. G. Celler, N. H. Lovell, and A. C. F. Coster, “Classification of basic daily movements using a triaxial accelerometer,” Medical and Biological Engineering and Computing, vol. 42, no. 5, pp. 679–687, 2004.
  38. M. Zhang and A. A. Sawchuk, “USC-HAD: a daily activity dataset for ubiquitous activity recognition using wearable sensors,” in Proceedings of the International Conference on Ubiquitous Computing (UbiComp '12), pp. 1036–1043, ACM, September 2012.
  39. L. Atallah, B. Lo, R. King, and G.-Z. Yang, “Sensor positioning for activity recognition using wearable accelerometers,” IEEE Transactions on Biomedical Circuits and Systems, vol. 5, no. 4, pp. 320–329, 2011.
  40. A. Bayat, M. Pomplun, and D. A. Tran, “A study on human activity recognition using accelerometer data from smartphones,” Procedia Computer Science, vol. 34, pp. 450–457, 2014.
  41. M. Berchtold, M. Budde, D. Gordon, H. R. Schmidtke, and M. Beigl, “ActiServ: activity recognition service for mobile phones,” in Proceedings of the 14th IEEE International Symposium on Wearable Computers (ISWC '10), pp. 1–8, IEEE, Seoul, South Korea, October 2010.
  42. A. Henpraserttae, S. Thiemjarus, and S. Marukatat, “Accurate activity recognition using a mobile phone regardless of device orientation and location,” in Proceedings of the International Conference on Body Sensor Networks (BSN '11), pp. 41–46, IEEE, Dallas, Tex, USA, May 2011.
  43. E. Hoque and J. Stankovic, “AALO: activity recognition in smart homes using Active Learning in the presence of Overlapped activities,” in Proceedings of the 6th International Conference on Pervasive Computing Technologies for Healthcare and Workshops (PervasiveHealth '12), pp. 139–146, May 2012.
  44. T. Iso and K. Yamazaki, “Gait analyzer based on a cell phone with a single three-axis accelerometer,” in Proceedings of the 8th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI '06), pp. 141–144, Espoo, Finland, September 2006.
  45. M. Kose, O. D. Incel, and C. Ersoy, “Online human activity recognition on smart phones,” in Proceedings of the Workshop on Mobile Sensing: From Smartphones and Wearables to Big Data (colocated with IPSN), pp. 11–15, Beijing, China, April 2012.
  46. J. R. Kwapisz, G. M. Weiss, and S. A. Moore, “Activity recognition using cell phone accelerometers,” ACM SIGKDD Explorations Newsletter, vol. 12, no. 2, pp. 74–82, 2011.
  47. N. Lane, M. Mohammod, M. Lin et al., “BeWell: a smartphone application to monitor, model and promote wellbeing,” in Proceedings of the 5th International ICST Conference on Pervasive Computing Technologies for Healthcare, pp. 23–26, IEEE, 2012.
  48. Y. S. Lee and S. Cho, “Activity recognition using hierarchical hidden Markov models on a smartphone with 3D accelerometer,” in Hybrid Artificial Intelligent Systems: 6th International Conference, HAIS 2011, Wroclaw, Poland, May 23–25, 2011, Proceedings, Part I, vol. 6678 of Lecture Notes in Computer Science, pp. 460–467, Springer, Berlin, Germany, 2011.
  49. A. Mannini and A. M. Sabatini, “Machine learning methods for classifying human physical activity from on-body accelerometers,” Sensors, vol. 10, no. 2, pp. 1154–1175, 2010.
  50. U. Maurer, A. Smailagic, D. P. Siewiorek, and M. Deisher, “Activity recognition and monitoring using multiple sensors on different body positions,” in Proceedings of the International Workshop on Wearable and Implantable Body Sensor Networks (BSN '06), pp. 113–116, IEEE, Cambridge, Mass, USA, April 2006.
  51. E. Miluzzo, N. D. Lane, K. Fodor et al., “Sensing meets mobile social networks: the design, implementation and evaluation of the CenceMe application,” in Proceedings of the 6th ACM Conference on Embedded Networked Sensor Systems (SenSys '08), pp. 337–350, Raleigh, NC, USA, November 2008.
  52. J. Pärkkä, M. Ermes, P. Korpipää, J. Mäntyjärvi, J. Peltola, and I. Korhonen, “Activity classification using realistic data from wearable sensors,” IEEE Transactions on Information Technology in Biomedicine, vol. 10, no. 1, pp. 119–128, 2006.
  53. T. S. Saponas, J. Lester, J. Froehlich, J. Fogarty, and J. Landay, iLearn on the iPhone: Real-Time Human Activity Classification on Commodity Mobile Phones, 2008.
  54. P. Siirtola and J. Röning, “Recognizing human activities user-independently on smartphones based on accelerometer data,” International Journal of Interactive Multimedia and Artificial Intelligence, vol. 1, no. 5, pp. 38–45, 2012.
  55. T. Sohn, A. Varshavsky, A. LaMarca et al., “Mobility detection using everyday GSM traces,” in UbiComp 2006: Ubiquitous Computing: 8th International Conference, UbiComp 2006, Orange County, CA, USA, September 17–21, 2006, Proceedings, vol. 4206 of Lecture Notes in Computer Science, pp. 212–224, Springer, Berlin, Germany, 2006.
  56. J. Yang, “Toward physical activity diary: motion recognition using simple acceleration features with mobile phones,” in Proceedings of the 1st International Workshop on Interactive Multimedia for Consumer Electronics (IMCE '09), 2009.
  57. C. Zhu and W. Sheng, “Motion- and location-based online human daily activity recognition,” Pervasive and Mobile Computing, vol. 7, no. 2, pp. 256–269, 2011.
  58. H. Liu and L. Yu, “Toward integrating feature selection algorithms for classification and clustering,” IEEE Transactions on Knowledge and Data Engineering, vol. 17, no. 4, pp. 491–502, 2005.
  59. M. Robnik-Šikonja and I. Kononenko, “Theoretical and empirical analysis of ReliefF and RReliefF,” Machine Learning, vol. 53, no. 1-2, pp. 23–69, 2003.
  60. M. A. Hall, Correlation-based feature selection for machine learning [Ph.D. thesis], The University of Waikato, 1999.
  61. A. Jain and D. Zongker, “Feature selection: evaluation, application, and small sample performance,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 2, pp. 153–158, 1997.
  62. I. Guyon and A. Elisseeff, “An introduction to variable and feature selection,” Journal of Machine Learning Research, vol. 3, pp. 1157–1182, 2003.
  63. S. Pirttikangas, K. Fujinami, and T. Nakajima, “Feature selection and activity recognition from wearable sensors,” in Ubiquitous Computing Systems, H. Y. Youn, M. Kim, and H. Morikawa, Eds., vol. 4239 of Lecture Notes in Computer Science, pp. 516–527, Springer, Berlin, Germany, 2006.
  64. W. B. Johnson and J. Lindenstrauss, “Extensions of Lipschitz mappings into a Hilbert space,” in Conference in Modern Analysis and Probability, vol. 26 of Contemporary Mathematics, pp. 189–206, American Mathematical Society, 1984.
  65. E. Parzen, “On estimation of a probability density function and mode,” The Annals of Mathematical Statistics, vol. 33, no. 3, pp. 1065–1076, 1962.
  66. Y. Zheng, “Human activity recognition based on the hierarchical feature selection and classification framework,” Journal of Electrical and Computer Engineering, vol. 2015, Article ID 140820, 9 pages, 2015.
  67. A. Sivakumar, Geometry aware compressive analysis of human activities: application in a smart phone platform [M.S. thesis], Arizona State University, Tempe, Ariz, USA, 2014.
  68. P. R. Vaka, A pervasive middleware for activity recognition with smartphones [M.S. thesis], University of Missouri-Kansas, 2015.
  69. D. Tran and A. Sorokin, “Human activity recognition with metric learning,” in Computer Vision—ECCV 2008, D. Forsyth, P. Torr, and A. Zisserman, Eds., vol. 5302 of Lecture Notes in Computer Science, pp. 548–561, Springer, Berlin, Germany, 2008.
  70. D. Lee, M.-H. Yang, and S. Oh, “Fast and accurate head pose estimation via random projection forests,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV '15), pp. 1958–1966, IEEE, Santiago, Chile, December 2015.
  71. L. Zhao, X. Wang, G. Sukthankar, and R. Sukthankar, “Motif discovery and feature selection for CRF-based activity recognition,” in Proceedings of the 20th International Conference on Pattern Recognition (ICPR '10), pp. 3826–3829, Istanbul, Turkey, August 2010.