International Journal of Distributed Sensor Networks
Volume 2013 (2013), Article ID 385276, 9 pages
http://dx.doi.org/10.1155/2013/385276
Research Article

A Dynamic Approach to Recognize Activities in WSN

Department of Computer Science, Yonsei University, 134 Shinchon-dong, Seodaemun-gu, Seoul 120-749, Republic of Korea

Received 11 November 2012; Accepted 1 April 2013

Academic Editor: Wan-Young Chung

Copyright © 2013 Muhammad Arshad Awan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Context awareness is at the heart of the ubiquitous computing revolution, and wireless sensor network technologies have paved the way towards many applications. Activity recognition is a key component in identifying the context of a user in order to provide services based on the application. In this study, we propose a context management model that is based on activity recognition. The model is composed of four components: a set of sensors, a set of activities, a backend server with machine learning algorithms, and a GUI application for interaction with the user. A prototype is developed to show the usability of the proposed model. As a pilot test, only the accelerometer data of an Android phone is used to identify the activities of daily living (ADLs): sitting, standing, walking, and jogging. A good accuracy of about 96% on average is achieved across all activities.

1. Introduction

Context awareness is an essential part of ubiquitous and pervasive computing, and activity recognition is the key component in context management. With the advancement of wireless sensor network technology, the task of context awareness has become achievable to some extent, but much more effort is still needed to design and develop generic models that fit many applications. Advances in wireless sensors and sensor networks, pervasive computing, and artificial intelligence have contributed a great deal to overcoming the challenges we face in our daily life [1]. By using these technologies, we can design and develop systems that meet the requirements of users according to their needs and current situation.

The design and development of generic context-aware systems is not an easy task because of the diverse nature of applications and their demands. As mentioned, activity recognition is a key to identifying the context; therefore, various systems, including wearable computing technology, have been developed to recognize users’ activities and context [2–6]. The most widely accepted definition of context in the research community is given by Abowd et al. [7] as “any information that can be used to characterize the situation of entities (i.e., whether a person, place or object) that are considered relevant to the interaction between a user and an application, including the user and the application themselves.” Each entity is categorized into identity, location, time, and activity. Identity, time, and location can be determined fairly easily, with acceptable accuracy, using GPS, compasses, and other technologies. The major focus of researchers in context management is therefore activity recognition, which is more complex and challenging.

Although a number of applications using different techniques and methods have been developed to identify the context of a user through activity recognition, they lack the flexibility to identify new activities dynamically, which leads to application-specific approaches. In our study, we propose an approach through which new activities can be added to the system as desired for future identification. The idea is to develop a generic (though not fully automatic) system that could be deployed in wireless sensor network environments depending upon the available resources, infrastructure, and technology.

For the recognition of activities, the system has a defined set of data models (the terms data model and algorithm are used interchangeably in some places) trained to identify specific activities using the available sensor data. To cope with a new requirement, a new activity and a set of sensors categorized for that activity are first added to the system; the activity is then performed, the data is collected, the data model is trained on it, and the algorithm that provides the best results is implemented for future recognition of the same activity. The training and development of the data model through machine learning algorithms is performed on the backend server because of the extensive resource utilization of supervised learning approaches. Once the data model is developed, it can be implemented on a smartphone or any other device for interaction with the user.

To demonstrate the usability of the approach, a prototype is developed. The ADLs sitting, standing, walking, and jogging are recognized on real data after the data model is developed in a training phase. Only the smartphone’s accelerometer is used to recognize these activities. The J48, logistic regression, and Naïve Bayes algorithms are compared, based on previous findings in the literature.

The remainder of the paper is organized as follows. Some related works and comparative contribution points are presented in Section 2. Section 3 describes the design and features of the proposed model. In Section 4, implementation of the prototype is described, that is, data acquisition, data classification, and so forth. Section 5 describes experiments and results. Finally, the conclusion and future work are presented in the last section.

2. Related Work

Activity recognition, as the basic component of context acquisition, has become a hot research topic, as it provides personalized support in many applications, especially in healthcare. The data obtained from sensors is processed by machine learning algorithms to infer the user’s current activity. Activity recognition systems can be divided into two broad categories [8]: video-sensor-based activity recognition (VSAR) and physical-sensor-based activity recognition (PSAR). Physical-sensor-based activity recognition can be further divided into wearable-sensor-based activity recognition (WSAR) and object-usage-based activity recognition (OUAR). We are concerned here with physical-sensor-based activity recognition, so a brief overview of the related work in this area is presented.

As mentioned, healthcare monitoring applications have become prominent in this field of research. These applications can be divided into five categories [9]: ADL, fall and movement detection, location tracking, medication intake, and medical status monitoring. Jafari et al. [10] proposed a methodology to distinguish a patient’s fall from other movement activities using a wearable 3-axis accelerometer and a mobile platform. Postural orientation techniques [11] have been used to identify sit-stand, stand-sit, lie-stand, and stand-lie movements, with neural network and k-nearest neighbor classifiers achieving accuracy of up to 84%. Similarly, a study [12] was conducted using a triaxial accelerometer to distinguish between falls and ADLs.

The debate in the research community over sensor usage for better results (wearable versus deployed in the environment, and fewer versus more sensors) continues, but there is still no optimal solution; different approaches have their own advantages and limitations. Bao and Intille [4] developed and evaluated algorithms to determine the daily activities of a human using five biaxial accelerometers worn on different parts of the subject’s body. They advocate the use of more sensors on different parts of the body for better results and achieved an average accuracy of 84% in determining the different activities. A work on identifying user activities from cell phone accelerometers [5], conducted under the Wireless Sensor Data Mining (WISDM) project [13], identified daily activities including sitting, standing, walking, jogging, ascending stairs, and descending stairs. A supervised learning approach was used on data from 29 subjects, and an accuracy of above 90% was achieved. The authors of [5] claimed that better accuracy can be achieved even with fewer sensors by accumulating a large amount of data, in contrast to the study in [4].

Much more related work can be found in the literature. A survey of context-aware systems presented in [14] shows the common architecture principles of such systems. A conceptual layered framework common to many context-aware systems is presented, from bottom to top, as: sensors → raw data retrieval → preprocessing → storage/management → application. A review of sensor-based activity recognition systems and a survey of wireless sensor networks for healthcare are presented in [8, 9], respectively. The papers referenced in these studies can be consulted for more in-depth knowledge of the area.

Almost all of these systems have a predefined set of activities and an infrastructure with a fixed set of sensors for predicting users’ activities, and they lack the flexibility to add new activities: new activities cannot be predicted without major changes to the system. We have designed, and have set out to develop, a full working model that caters for all the basic characteristics of these systems while adapting to new requirements. The idea is to categorize all available sensors into a defined classification based on the activities that can be predicted through them. New sensors can be added to the system through the universal interfaces available in wireless technology, and new activities can be added dynamically to handle new situations. Through the system, we can add a new activity, select sensors for data collection, perform the activity to train the data model, and deploy the data model for prediction of the newly added activity in the future.

3. Proposed Model

The present study aims to develop a context management system based on activity recognition in the wireless sensor network environment. The proposed system is designed to achieve the following basic functionalities:
(i) predict an activity that is already registered in the system;
(ii) update the context of the user based on the identified activity;
(iii) add a new activity to the system through a software application;
(iv) select a set of sensors for getting the data for the specific activity;
(v) store the data locally on the device (smartphone, PDA, etc.) while performing the activity;
(vi) send the data to the server for training based on supervised learning;
(vii) implement the algorithm on the device for prediction of the new activity in the future;
(viii) show the status of the user while running the application on the device.
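To make the intended behavior concrete, the sketch below expresses these functionalities as a minimal programming interface. This is an illustration only: the type and method names (ContextManager, predictActivity, etc.) are hypothetical and are not part of the implemented prototype.

    import java.util.List;

    // Hypothetical interface illustrating the eight functionalities listed above.
    // All type and method names are illustrative, not taken from the prototype.
    public interface ContextManager {
        String predictActivity(double[] sensorReadings);   // (i) recognize a registered activity
        void updateContext(String activity);               // (ii) update the user's context
        void addActivity(String activityName);             // (iii) register a new activity
        void selectSensors(String activityName,
                           List<String> sensorIds);        // (iv) choose sensors for the activity
        void recordActivityData(String activityName);      // (v) store sensor data locally
        void uploadTrainingData(String activityName,
                                String serverUrl);         // (vi) send data for supervised training
        void deployModel(byte[] serializedModel);          // (vii) install the trained model
        String getUserStatus();                            // (viii) report the user's status
    }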

The basic architecture of the system is presented in Figure 1. It is composed of four major components: a set of sensors, a set of activities, a backend server for processing the machine learning algorithms, and an application for interacting with the system. A brief description of these components is presented in the following subsections. When the application is started, it predicts the user’s activity, updates the context, and shows the user’s status if the activity is identified. If the activity is new, it can be added, and sensors can be selected to predict that activity. The new activity is performed while the sensors capture data and store it on a storage device. The data is processed by machine learning algorithms, and the best-performing algorithm is implemented for future prediction of the activity. The flow of information is presented as a flowchart in Figure 2.

Figure 1: System overview.
Figure 2: System flowchart.
3.1. A Set of Sensors

In order to cover the maximum number of activities in daily life, we need to define a comprehensive list of sensors, along with their characteristics, to relate them to the activities. This is not very straightforward, since the usage of the sensors depends on many factors, especially the type of application. A comprehensive classification of sensors, presented by White [15], is shown in Table 1; it is based on measurands, technological aspects, detection means, conversion phenomena, materials, and field of application.

Table 1: Sensors’ classification [15].

It is not easy to relate ADLs to sensors on the basis of this classification alone. Interdisciplinary knowledge is required to associate some of the activities with sensors based on their characteristics. Dishongh and McGrath [16] presented, as in Table 2, the use of wireless sensor network technology in healthcare applications. They described a comprehensive list of sensors, their signal types, sample data rates, and the behavioral biomarkers used in daily life activities and particularly in health applications.

Table 2: Usage of sensors [16].

The idea behind the design of the generic model is to start from the identification of daily living activities through basic sensors and to provide the flexibility to add new sensors by using wireless technology such as ZigBee [17]. Adding new activities and introducing new sensors over wireless technology makes the implementation of new applications flexible, but there is overlap between activities and their associated sensors. A single activity can be identified by more than one sensor (using one at a time), and a single sensor can be used to identify different activities (one at a time). Similarly, some complex activities cannot be determined by a single sensor, so the exact mapping of sensors to activities is a big question. As a starting point, we consider basic sensing functions, for example, movement, force, light, temperature, humidity, audio, and proximity, to identify the common daily living activities.
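To illustrate this many-to-many relationship, the following minimal sketch (with hypothetical sensor and activity names) shows the kind of sensor-activity registry the model assumes; it is an example, not a definitive mapping.

    import java.util.*;

    // Illustrative many-to-many registry between sensing functions and activities.
    // The entries are hypothetical; the exact mapping is an open question, as noted above.
    public class SensorActivityRegistry {
        private final Map<String, Set<String>> sensorToActivities = new HashMap<>();

        public void register(String sensor, String activity) {
            sensorToActivities.computeIfAbsent(sensor, k -> new HashSet<>()).add(activity);
        }

        public Set<String> activitiesFor(String sensor) {
            return sensorToActivities.getOrDefault(sensor, Collections.<String>emptySet());
        }

        public static void main(String[] args) {
            SensorActivityRegistry registry = new SensorActivityRegistry();
            registry.register("accelerometer", "walking");   // one sensor, several activities
            registry.register("accelerometer", "jogging");
            registry.register("accelerometer", "sitting");
            registry.register("proximity", "sitting");       // one activity, several sensors
            System.out.println(registry.activitiesFor("accelerometer"));
        }
    }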

3.2. A Set of Activities

As mentioned in the previous section, there is no exact relationship between daily living activities and the types of sensors used to measure or identify them. A number of articles have been written on the identification of daily living activities using sensors that are wearable and/or placed in the environment [18]. Different activities in the smart home environment have been detected, for example, preparing a snack, breakfast, lunch, or dinner; listening to music; taking medication; toileting; cleaning; and so forth.

In our approach, we are trying to develop a system through which we can add new activities dynamically to meet the desired requirements. First, we consider the basic daily living activities of sitting, standing, walking, and jogging. Through experiments, we can better judge how to associate a particular set of activities with the required sensors.

3.3. Backend Server

Our system uses a supervised learning approach, which requires heavy resources for the execution of machine learning algorithms; this is why a backend server is required. Once the data is collected through sensors for a particular activity, it is transferred to the server for the training of the data model. WEKA [19], a collection of machine learning algorithms for data mining, is used to train and test the data model. Once the data model is developed, it is transferred to the device (smartphone, PDA, etc.) for prediction of the activity in the future.
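As a sketch of this server-side step, assuming the uploaded sensor data has already been converted to an ARFF file with the activity label as the last attribute, a WEKA data model can be trained and serialized for transfer to the device roughly as follows (the file names are placeholders):

    import weka.classifiers.Classifier;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.SerializationHelper;
    import weka.core.converters.ConverterUtils.DataSource;

    // Server-side sketch: train a classifier on the uploaded activity data and
    // serialize it so it can be shipped back to the device.
    public class TrainModel {
        public static void main(String[] args) throws Exception {
            Instances data = new DataSource("activity_data.arff").getDataSet();
            data.setClassIndex(data.numAttributes() - 1);   // last attribute = activity label

            Classifier model = new J48();                   // or Logistic / NaiveBayes
            model.buildClassifier(data);

            // Persist the trained model; the device can later load it with
            // (Classifier) SerializationHelper.read("activity.model")
            SerializationHelper.write("activity.model", model);
        }
    }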

3.4. GUI Application

An application is required through which the user can interact with the system. The main features of the application are as follows:
(i) show the status of the user based on activity identification;
(ii) add a new activity and select sensors if required;
(iii) collect the data while performing the activity and store it locally;
(iv) send the data to the backend server for training and development of the data model;
(v) implement the algorithm for future prediction of the activity.

4. Prototype

This is an initial effort to identify the user’s activity as part of our context-awareness research system; a full-fledged implementation of the model will take time. To show the usability of the system, we have developed an application for the Android platform with the following basic functionalities:
(i) add a new activity that annotates the sensor data;
(ii) select a sensor (currently only from the Android device);
(iii) collect and store the data locally on the device (in CSV format) while performing the activity;
(iv) send the data to the backend server for training and developing the data model;
(v) implement the data model on the server and send it to the device for future prediction of the activity.

Figure 3 shows the procedure of the method applied in this application.

Figure 3: Procedure of the applied method.
4.1. Data Acquisition

The accelerometer is a common sensor in most mobile devices on the market. We used a Google Nexus S to collect the x, y, and z values of the accelerometer. The Eclipse IDE [20] and the Android APIs were used to develop a simple software application that collects the data and stores it in a simple text file in CSV format. As an initial step in our study, we looked at only four basic ADLs: sitting, standing, walking, and jogging.

The activities were performed by two persons, and each activity was carried out for 6–10 minutes. The activities were repeated for the reliability of the data. How a phone is carried varies from person to person; for these initial experiments, we carried the Android phone in the front pants leg pocket with the screen facing outwards while performing the activities. Later on, we could perform experiments with the phone placed in different positions. For all activities, we collected accelerometer data every 20 ms, that is, 50 samples per second, and computed the mean value over each second for each activity. The data was then sent to the backend server for further processing.
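A minimal sketch of this acquisition step on the Android platform is shown below; the activity label, the file name, and the error handling are placeholders, and the actual prototype may differ in detail.

    import android.app.Activity;
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;
    import android.os.Bundle;
    import java.io.FileWriter;
    import java.io.IOException;

    // Sketch: sample the accelerometer every 20 ms (50 Hz), accumulate the
    // per-second mean of x, y, and z, and append it to a CSV file with a label.
    public class AccelLogger extends Activity implements SensorEventListener {
        private static final String LABEL = "walking";  // placeholder activity label
        private SensorManager sensorManager;
        private double sumX, sumY, sumZ;
        private int count;
        private long windowStart;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
            Sensor accel = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
            sensorManager.registerListener(this, accel, 20000);  // 20,000 us = 20 ms = 50 Hz
            windowStart = System.currentTimeMillis();
        }

        @Override
        public void onSensorChanged(SensorEvent event) {
            sumX += event.values[0];  // x
            sumY += event.values[1];  // y
            sumZ += event.values[2];  // z
            count++;
            long now = System.currentTimeMillis();
            if (now - windowStart >= 1000) {  // one-second window elapsed
                try {
                    FileWriter out = new FileWriter(getFileStreamPath("accel.csv"), true);
                    out.write(sumX / count + "," + sumY / count + ","
                            + sumZ / count + "," + LABEL + "\n");
                    out.close();
                } catch (IOException e) {
                    // ignored in this sketch
                }
                sumX = sumY = sumZ = 0;
                count = 0;
                windowStart = now;
            }
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }

        @Override
        protected void onPause() {
            super.onPause();
            sensorManager.unregisterListener(this);  // stop sampling when not visible
        }
    }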

4.2. Data Classification

The WEKA tool [19] is used for the classification of the data, with three algorithms: J48, logistic regression, and Naïve Bayes. The selection of these algorithms is based on previous work on accelerometer data [5, 6, 21], which reported good accuracy in activity recognition. The acquired data is used to train the data model for activity recognition, and the data model is then deployed for future prediction of the activity. The detailed experiments using these algorithms are explained in the next section. While the application is running, accelerometer data is continuously obtained, and the corresponding activity is recognized. Figure 4 shows the data acquisition and classification procedure.

Figure 4: Data acquisition and classification procedure.
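For on-device recognition, the trained data model can be deserialized and applied to each incoming one-second feature vector. The sketch below assumes a WEKA 3.7-style API; the attribute names and the sample values are placeholders.

    import java.util.ArrayList;
    import weka.classifiers.Classifier;
    import weka.core.Attribute;
    import weka.core.DenseInstance;
    import weka.core.Instances;
    import weka.core.SerializationHelper;

    // Sketch: load the trained model and classify one second of accelerometer
    // data (the mean x, y, z values). Attribute names are placeholders.
    public class Predictor {
        public static void main(String[] args) throws Exception {
            ArrayList<Attribute> attrs = new ArrayList<Attribute>();
            attrs.add(new Attribute("meanX"));
            attrs.add(new Attribute("meanY"));
            attrs.add(new Attribute("meanZ"));
            ArrayList<String> labels = new ArrayList<String>();
            labels.add("sitting"); labels.add("standing");
            labels.add("walking"); labels.add("jogging");
            attrs.add(new Attribute("activity", labels));

            Instances schema = new Instances("adl", attrs, 0);
            schema.setClassIndex(schema.numAttributes() - 1);

            Classifier model = (Classifier) SerializationHelper.read("activity.model");

            // One-second feature vector; the trailing class slot is ignored here.
            DenseInstance sample = new DenseInstance(1.0, new double[] {0.4, 9.6, 1.1, 0});
            sample.setDataset(schema);
            int predicted = (int) model.classifyInstance(sample);
            System.out.println("Predicted: " + schema.classAttribute().value(predicted));
        }
    }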

5. Experiments and Results

This section describes the experiments we performed, followed by a discussion of the activity recognition results. As described in the previous section, we collected the data through the smartphone accelerometer and sent it to the server for the classification process. Graphs were plotted from the data obtained in the experiments; samples are shown in Figure 5. The horizontal axis represents time, and the vertical axis represents the accelerometer values of x, y, and z.

Figure 5: Data behavior for the four activities.

The behavior of the data in the different activities can be seen in Figure 5. The data pattern is fairly steady for the sitting, standing, and walking activities in all x, y, and z values, whereas it shows large variations for the jogging activity. This behavior can be understood from the x-, y-, and z-axes of the smartphone, which correspond to the x-, y-, and z-axes of the accelerometer data, as in Figure 6 [22]. As mentioned earlier, the x-, y-, and z-axis values depend on the position of the smartphone on the user; the Android phone was carried in the front pants leg pocket with the screen facing outwards while performing the activities. As Figure 6 shows, if the phone lies stable on a table, the z-axis value should equal earth gravity and the x- and y-axis values should be zero; however, optimizing the results depends on filtering processes and on integrating gyroscope and compass data to minimize ambient effects on the acceleration. We use only the accelerometer data and apply the classification rules learned by the machine learning algorithms.
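A simple way to verify this behavior, although it is not part of our classification pipeline, is to check that the acceleration magnitude of a stationary phone is close to earth gravity regardless of the phone's orientation:

    import android.hardware.SensorManager;

    // Sanity check (not part of the classification pipeline): for a stationary
    // phone, sqrt(x^2 + y^2 + z^2) should be close to earth gravity (~9.81 m/s^2),
    // whichever axis the gravity component falls on.
    public class GravityCheck {
        static boolean isRoughlyStationary(float x, float y, float z) {
            double magnitude = Math.sqrt(x * x + y * y + z * z);
            return Math.abs(magnitude - SensorManager.GRAVITY_EARTH) < 0.5;
        }
    }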

Figure 6: The x-, y-, and z-axes on a smartphone [22].

In the case of the sitting activity, we can see that the x-, y-, and z-axis values are stable, with the x- and y-axis values close to zero and the z-axis approaching the earth gravity value, given the position of the smartphone in the front pants leg pocket with the screen facing outwards. The x-, y-, and z-axes are also stable in the standing position because there is no linear acceleration, and the y- and z-axis values change compared with the sitting activity because of the changed orientation of the phone. In the case of walking, the values are stable because there is no large linear acceleration along any axis, but they may vary from person to person depending on the person’s height and walking speed. There is more fluctuation in the jogging activity because of the rapid changes in acceleration.

The results obtained with the WEKA tool using the J48, logistic regression, and Naïve Bayes algorithms for the classification process are given in Tables 3, 4, and 5. Tenfold cross-validation was used for all experiments.
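A sketch of this evaluation step with the WEKA API follows; the file name is a placeholder, and J48 or Logistic can be substituted for NaiveBayes.

    import java.util.Random;
    import weka.classifiers.Evaluation;
    import weka.classifiers.bayes.NaiveBayes;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    // Sketch: tenfold cross-validation of the kind used for Tables 3-5.
    public class CrossValidate {
        public static void main(String[] args) throws Exception {
            Instances data = new DataSource("activity_data.arff").getDataSet();
            data.setClassIndex(data.numAttributes() - 1);

            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(new NaiveBayes(), data, 10, new Random(1));

            System.out.println(eval.toSummaryString());  // overall accuracy
            System.out.println(eval.toMatrixString());   // confusion matrix
        }
    }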

Table 3: Confusion matrix for J48.
Table 4: Confusion matrix for logistic regression.
Table 5: Confusion matrix for Naïve Bayes.

J48 is WEKA’s implementation of the C4.5 decision tree learner [23]. Table 3 presents the confusion matrix for J48: 305 instances were classified correctly and 9 incorrectly, for an overall accuracy of 97.13%. An accuracy of 100% was achieved for the walking activity, whereas accuracies of 94.73%, 98.01%, and 96.15% were achieved for the sitting, standing, and jogging activities, respectively.

Table 4 presents the confusion matrix for the logistic regression algorithm. An overall accuracy of 93.31% was achieved, with 293 of the 314 instances correctly classified. The standing and walking activities achieved 100% accuracy with this method, whereas accuracies of 96.49% for sitting and 81.73% for jogging were achieved.

With the Naïve Bayes algorithm, we achieved an overall accuracy of 98.72%; the confusion matrix for this approach is given in Table 5. Of the 314 instances in total, 310 were classified correctly. With this approach, 100% accuracy was achieved for the jogging activity. As with the logistic regression approach, an accuracy of 96.49% was achieved for the sitting activity, along with 99% for standing and 98% for walking.

Summarizing the results of all three algorithms, there is no large difference in identifying the activities with regular data patterns, as shown in the graphs of the sitting, standing, and walking activities. However, the Naïve Bayes algorithm provides better accuracy for the jogging activity, which does not have steady patterns in its data instances. The percentage of instances correctly identified by each of the three algorithms is presented in Table 6.

Table 6: Accuracies of activity recognition.

6. Conclusion and Future Work

In this paper, we proposed a context management model based on activity recognition in wireless sensor network technology. The model is composed of four basic components: a set of sensors, a set of activities, a backend server with machine learning algorithms, and a GUI application for interacting with the system. The usability of the model was tested with a simple smartphone prototype that predicts the user’s basic daily living activities. As an initial study, we used only the accelerometer of an Android phone to predict the sitting, standing, walking, and jogging activities, and an average accuracy of about 96% was achieved in these initial experiments. A prominent advantage of the proposed model is its ability to predict new activities for context management systems through combinations of sensors, adapting the system to the most suitable set of sensors for a particular set of activities.

We have initiated the research process on context awareness, and this study is the starting point. Later on, we will perform more experiments to predict complex ADLs, using not only the sensors in mobile devices but also sensors worn on the user’s body and deployed in the environment.

Acknowledgment

This work was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (MEST) (2012081659).

References

  1. D. J. Cook, J. C. Augusto, and V. R. Jakkula, “Ambient intelligence: technologies, applications, and opportunities,” Pervasive and Mobile Computing, vol. 5, no. 4, pp. 277–298, 2009.
  2. T. L. M. van Kasteren, G. Englebienne, and B. J. A. Kröse, “Human activity recognition from wireless sensor network data: benchmark and software,” Activity Recognition in Pervasive Intelligent Environments, vol. 4, pp. 165–186, 2011.
  3. S. Wang, J. Yang, N. Chen, X. Chen, and Q. Zhang, “Human activity recognition with user-free accelerometers in the sensor networks,” in Proceedings of the International Conference on Neural Networks and Brain (ICNNB '05), pp. 1212–1217, October 2005.
  4. L. Bao and S. S. Intille, “Activity recognition from user-annotated acceleration data,” in Pervasive Computing, vol. 3001 of Lecture Notes in Computer Science, pp. 1–17, 2004.
  5. J. R. Kwapisz, G. M. Weiss, and S. A. Moore, “Activity recognition using cell phone accelerometers,” ACM SIGKDD Explorations Newsletter, vol. 12, pp. 74–82, 2010.
  6. K. Oh, H. S. Park, and S. B. Cho, “A mobile context sharing system using activity and emotion recognition with Bayesian networks,” in Proceedings of the 7th IEEE International Conference on Ubiquitous Intelligence and Computing (UIC '10) and the 7th International Conference on Autonomic and Trusted Computing (ATC '10), pp. 244–249, October 2010.
  7. G. D. Abowd, A. K. Dey, P. J. Brown, N. Davies, M. Smith, and P. Steggles, “Towards a better understanding of context and context-awareness,” Handheld and Ubiquitous Computing, vol. 1707, pp. 304–307, 1999.
  8. D. Guan, T. Ma, W. Yuan, Y. K. Lee, and A. M. Jehad Sarkar, “Review of sensor-based activity recognition systems,” IETE Technical Review, vol. 28, pp. 418–433, 2011.
  9. H. Alemdar and C. Ersoy, “Wireless sensor networks for healthcare: a survey,” Computer Networks, vol. 54, no. 15, pp. 2688–2710, 2010.
  10. R. Jafari, W. Li, R. Bajcsy, S. Glaser, and S. Sastry, “Physical activity monitoring for assisted living at home,” in Proceedings of the International Conference on Body Sensor Networks (BSN '07), pp. 213–219, 2007.
  11. D. M. Karantonis, M. R. Narayanan, M. Mathie, N. H. Lovell, and B. G. Celler, “Implementation of a real-time human movement classifier using a triaxial accelerometer for ambulatory monitoring,” IEEE Transactions on Information Technology in Biomedicine, vol. 10, no. 1, pp. 156–167, 2006.
  12. A. K. Bourke, J. V. O'Brien, and G. M. Lyons, “Evaluation of a threshold-based tri-axial accelerometer fall detection algorithm,” Gait and Posture, vol. 26, no. 2, pp. 194–199, 2007.
  13. WISDM (Wireless Sensor Data Mining) Project, Department of Computer and Information Science, Fordham University, http://www.cis.fordham.edu/wisdm/.
  14. M. Baldauf, S. Dustdar, and F. Rosenberg, “A survey on context-aware systems,” International Journal of Ad Hoc and Ubiquitous Computing, vol. 2, no. 4, pp. 263–277, 2007.
  15. R. M. White, “A sensor classification scheme,” IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 34, no. 2, pp. 125–127, 1987.
  16. T. J. Dishongh and M. McGrath, Wireless Sensor Networks for Healthcare Applications, Artech House, London, UK, 2010.
  17. H. K. Kim, “Design and implementation of sensor framework for U-Healthcare Services,” Computer and Information Science, vol. 364, pp. 253–261, 2011.
  18. E. M. Tapia, S. S. Intille, and K. Larson, “Activity recognition in the home using simple and ubiquitous sensors,” Pervasive Computing, vol. 3001, pp. 158–175, 2004.
  19. WEKA (Waikato Environment for Knowledge Analysis), http://www.cs.waikato.ac.nz/ml/weka/.
  20. Eclipse (IDE), http://www.eclipse.org/.
  21. M. A. Ayu, T. Mantoro, A. F. Matin, and A. S. S. Basamh, “Recognizing user activity based on accelerometer data from a mobile phone,” in Proceedings of the IEEE Symposium on Computers and Informatics, pp. 617–621, 2011.
  22. Mobilizing, http://fdm.ensad.fr/wiki/doku.php/en/reference.
  23. I. H. Witten and E. Frank, Data Mining: Practical Machine Learning Tools and Techniques, Morgan Kaufmann, San Francisco, Calif, USA, 2005.