Journal of Sensors, vol. 2016, Article ID 6931789
Special Issue: Healthcare Sensors for Daily Life
Review Article | Open Access

Gregory Koshmak, Amy Loutfi, Maria Linden, "Challenges and Issues in Multisensor Fusion Approach for Fall Detection: Review Paper", Journal of Sensors, vol. 2016, Article ID 6931789, 12 pages, 2016.

Challenges and Issues in Multisensor Fusion Approach for Fall Detection: Review Paper

Academic Editor: Toshiyo Tamura
Received: 14 May 2015
Revised: 23 Jul 2015
Accepted: 05 Aug 2015
Published: 06 Dec 2015


Emergency situations associated with falls are a serious concern for an aging society. Following recent developments in ICT, a significant number of solutions have been proposed to track body movement and detect falls using various sensor technologies, thereby facilitating fall detection and, in some cases, prevention. A number of recent reviews of fall detection methods using ICT have appeared in the literature, and an increasingly popular approach combines information from several sensor sources to assess falls. The aim of this paper is to review in detail the subfield of fall detection techniques that explicitly use multisensor-fusion-based methods to assess and determine falls. The paper highlights key differences between the single-sensor approach and a multisensor fusion one. It also describes and categorizes the various systems used, discusses the challenges of a multisensor fusion approach, and finally outlines trends for future work.

1. Introduction

According to the latest United Nations statistical reports, the mean age of the population is expected to grow rapidly in developed countries within the next several decades [1]. This will subsequently increase healthcare costs and place a significant burden on national budgets. At the same time, fall injury is considered one of the most common risks among the elderly population. The estimated fall incidence for both hospitalized and independently living people over the age of 75 is at least 30% every year. Close to half of nursing home residents experience falls each year, with 40% falling more than once [2]. These accidents often have both physical [3] (often head and hip injury) and psychological [4] (fear of falling) consequences. Other serious issues associated with falls include unconsciousness after falling, long recovery times due to fall-related injury, and death; many of these issues can be mitigated by improving the medical response level and rescue time.

Recent developments in information and communication technology have triggered an intensive research effort towards the detection and prevention of emergency situations associated with falls. This area is commonly considered part of the Ambient Assisted Living (AAL) community, a multidisciplinary field exploiting ICT in personal healthcare to counter the effects of an aging population [5]. Modern AAL systems can also help to promote an independent lifestyle for elderly people with multiple chronic diseases in a situation of rapidly increasing healthcare costs [6] and assist in the task of prevention.

Commonly, fall detection systems are categorized into three different classes depending on the deployed sensor technology: wearable devices, ambient sensors, and vision-based sensors. The article by Noury et al. [7] from 2007, which contains descriptions of systems, algorithms, and sensors for automatic fall detection, can be considered one of the first surveys in the field. The more recent state of the field is described in publications by Mubashir et al. [8] and Igual et al. [9], providing valuable knowledge about principles, trends, issues, and challenges in the fall detection area. As the number of contributions continued to expand, some authors preferred to review a specific category within the field, for example, the article by Bagalà et al. [10], which specifically evaluates worn-sensor-based fall detection methods. In the publication by Otanasap and Boonbrahm [11], the focus is on computer vision, exploiting various processing techniques to analyze the critical phase and postfall phase. However, a recent trend is characterized by the combination of different data sources, which are processed by a single multisensor fusion algorithm [12]. This novel approach can potentially provide a significant improvement in the reliability and specificity of fall detection systems but has never been reviewed before.

In this paper we present a systematic survey of fall detection research with a focus on multisensor fusion as the main method. Our aim is to provide general insight into this novel approach and show its benefits compared to other methods in this emerging area. Unlike techniques with a single source channel, a multisensor fusion approach exploits a combination of unrelated devices whose outputs are later fused at the data processing level. A search was conducted using major databases including Google Scholar, IEEE, PubMed, and Mendeley with keywords including multisensor fusion, sensor combination, context-aware fall detection, and wearable fall detection. In total, 299 related publications were found; 68 of them were selected based on relevance and representativeness and included in the final version of the paper. All the reviewed studies are categorized depending on the sensors used for monitoring: a combination of wearable/ambient devices, only wearable, or only ambient. Cases where monitoring is based on sensors from the same category and of the same type (i.e., multiple cameras) are not considered. We also give an assessment of this novel approach and discuss its perspective in the near future.

The rest of the paper is organized as follows. In Section 2 we provide a general definition of the fall and describe its major characteristics. Section 3 gives a brief overview of the popular trends and approaches in modern fall detection together with their major benefits and challenges. We proceed with a detailed survey of publications deploying multisensor fusion, followed by a discussion of the presented approach and its future perspective in Section 4.

2. Fall Definition and Characteristics

In this section we demonstrate the complexity of the falling process, define various types of falls, and discuss several main characteristics that constitute a fall. A fall is commonly defined as "unintentionally coming to rest on the ground, floor, or other lower level." Losing balance and subsequently falling with the help of an assistant is also considered a fall [13]. Based on possible scenarios, four main types of falls can be distinguished: (1) falls from sleeping, (2) falls from sitting, (3) falls from walking/standing, and (4) falls from standing on support tools such as ladders. Each type has its own unique characteristics, which can help developers adapt fall detector platforms to a wider spectrum of user requirements. According to recent studies, falls are more likely to occur inside the patient's room and in the bathroom or toilet during activities such as moving/transferring and showering/toileting [13, 14]. The weight, size, and corpulence of the person also have a substantial impact on the kinematics of falling. The majority of patients in the risk group usually fall in the evening or at night; partly as a result, fall databases are very limited due to the lack of records made in real-life settings [15].

Fall detection techniques can be categorized into three generations: first-generation systems that rely on the user to report the fall, second-generation systems that build on the first generation but have an embedded level of intelligence, and third-generation systems that use data, often via ambient monitoring, to detect changes (e.g., changes in activity levels) which may increase the risk of falling (or risk factors for other negative events). Third-generation systems are thus preemptive rather than reactive [16]. We will mostly focus on fall detectors from the second and third categories, which are discussed in terms of sensor fusion applicability in Section 3. Typically, all modern fall detection systems can be split into three main classes (see Figure 1) depending on the sensor technology deployed for monitoring: wearable sensors, ambient sensors, and vision-based sensors [17].

The vast majority of recently developed fall detection systems operate based on one general framework comprising (1) data acquisition, (2) data processing/feature extraction, (3) fall detection, and (4) caregiver notification. This framework can vary depending on the number of devices involved in the monitoring, the communication protocols for alarm delivery, and the end user responsible for taking action in case of emergency. In wearable sensor based systems, data acquisition is often performed using an accelerometer (sensing changes in the orientation of the wearable device), a gyroscope (which detects angular momentum), and/or other types of sensors such as barometers, magnetometers, or microphones. An ambient sensor approach often includes infrared, vibration, or acoustic sensors. The first type can locate and track a thermal target within the sensor's field of view [18]. Vibration sensors are able to differentiate vibration patterns acquired from Activities of Daily Living (ADL) and falls, while acoustic sensors use the loudness and pitch of sound to recognize a fall. Unlike wearable sensor techniques, this approach is considered the least obtrusive as it implies minimal interaction with the patient [19]. The last fall detection method performs data acquisition via a set of video cameras embedded into the monitored environment [20]. Vision-based systems can carry out inactivity detection and analyze body shape changes or 3D head motion; they provide an unobtrusive way to monitor the person of interest and are rapidly decreasing in price [21]. Several studies have achieved significant results in reducing false positive alarms while using a single sensor technology from each of the categories. However, the performance of the comprehensive approach indicates a significant rise in efficiency, keeping reliability above 90% (see Table 1).
Therefore, a combination approach is among the latest trends in fall detection/posture recognition studies listed by Augustyniak et al. [22]:
(i) building sensor networks instead of focusing on a sensor set for a particular disease;
(ii) promoting multipurpose health prediction and prevention instead of monitoring patients with known medical records;
(iii) designing the monitoring process based on the patient's health conditions, habits, and lifestyle;
(iv) unconstrained mobility of the monitored person;
(v) real-time fusion and cooperation of ambient and wearable sensor networks.


Table 1 (fragment):
3D vision: 80.0% / 97.3%
Integrated system: 94.3% / 90.9%
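The generic four-stage framework described above (data acquisition, feature extraction, fall detection, caregiver notification) can be sketched as a minimal threshold-based detector. All thresholds, sample values, and function names below are illustrative assumptions, not taken from any reviewed study.

```python
import math

FALL_THRESHOLD_G = 2.5  # assumed impact threshold, in units of g

def extract_features(samples):
    """Stage 2: reduce raw triaxial samples to a signal magnitude vector (SMV)."""
    return [math.sqrt(x * x + y * y + z * z) for (x, y, z) in samples]

def detect_fall(smv_series, threshold=FALL_THRESHOLD_G):
    """Stage 3: simple peak-threshold rule on the SMV."""
    return any(v > threshold for v in smv_series)

def notify_caregiver(fallen):
    """Stage 4: in a real system this would trigger an alarm message."""
    return "ALERT: possible fall" if fallen else "status: normal activity"

# Stage 1: acquisition, here replaced by synthetic accelerometer data (in g).
walking = [(0.1, 0.2, 1.0), (0.0, 0.1, 1.1), (0.2, 0.0, 0.9)]
falling = [(0.1, 0.2, 1.0), (1.9, 2.0, 2.2), (0.0, 0.0, 0.1)]

walking_msg = notify_caregiver(detect_fall(extract_features(walking)))
falling_msg = notify_caregiver(detect_fall(extract_features(falling)))
```

A single fixed threshold like this is exactly what the reviewed multisensor systems try to improve upon, since many ADL (sitting down hard, dropping objects) also produce acceleration peaks.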

Preliminary results [23] demonstrate a significant improvement in fall detection system performance when several sensor functionalities are deployed in one system, in particular a substantial reduction in the false positive alarm rate.

Given the overall complexity of fall kinematics and the diversity of fall characteristics described in Section 2, we believe that a multisensor fusion approach is likely to become widely used in the fall detection area. Moreover, there is a strong demand for a high standard of independent living for elderly people [30], and therefore particular focus should be placed on the unobtrusiveness of such systems. Single-sensor-based systems are sometimes characterized by a low reliability rate or can only detect particular types of falls in specific environments or circumstances. In the following section we give a brief description of the multisensor data fusion method, describe its adaptation to the fall detection area, and suggest a possible classification of the main approaches.

3. Sensor Fusion in Fall Detection

Multisensor data fusion is a technology that combines information from several sources in order to form a unified picture [31]. Systems based on data fusion are now successfully exploited in various areas including sensor networks [32], image processing, and healthcare [33], where they demonstrate enhanced performance in terms of accuracy and reliability compared to single-source systems [34]. Modern healthcare systems commonly deploy data fusion algorithms to avoid intrinsic ambiguities caused by the exploitation of unrelated types of sensors. In a study by Medjahed and Istrate [35], a telemonitoring system is proposed to integrate physiological and behavioral data, the acoustic environment of the patient, and medical knowledge. In this case the data fusion approach is based on fuzzy logic with a set of rules corresponding to medical recommendations and proved to increase the reliability of the whole system by detecting several distress situations. Another example of data fusion in healthcare is proposed by Yang and Huang [36], where Kinect and color cameras are combined to perform human tracking and identification. Begum et al. [37] make an attempt to classify "stressed" and "relaxed" individuals by fusing data from various physiological sensors, namely, heart rate, finger temperature, respiration rate, carbon dioxide, and oxygen saturation. In this case the fusion algorithm, performed at the decision and data levels, is additionally combined with Case-Based Reasoning for further classification. The experimental results demonstrate increased accuracy in comparison with a domain expert.
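As a minimal illustration of decision-level fusion of the kind described above, each sensor channel can report a confidence that a distress event occurred, and a weighted vote forms the unified decision. The channel names, weights, and threshold below are invented for illustration and do not come from any cited system.

```python
# Decision-level fusion by weighted voting: each channel's confidence is in [0, 1].
def fuse_decisions(confidences, weights, threshold=0.5):
    """Return the fused score and the resulting alarm decision."""
    total = sum(weights.values())
    score = sum(weights[s] * confidences[s] for s in confidences) / total
    return score, score >= threshold

# Hypothetical per-channel outputs for one monitoring window.
confidences = {"acoustic": 0.8, "physiological": 0.6, "behavioral": 0.2}
weights = {"acoustic": 0.5, "physiological": 0.3, "behavioral": 0.2}

score, alarm = fuse_decisions(confidences, weights)
```

In practice the reviewed systems replace this simple weighted vote with fuzzy rules, classifiers, or evidential reasoning, but the structure, that is, independent per-channel confidences combined into one decision, is the same.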

As mentioned in Section 2, fall detection systems based on a single sensor technology often lack sufficient accuracy and require additional work to improve reliability. For instance, both ambient and video frameworks commonly have a constrained monitoring area and require installation, adjustment, and maintenance, which can result in higher costs. At the same time, wearable sensors can be obtrusive if users have to wear them for long periods of time. Additionally, information collected during the monitoring process is communicated via wireless channels, which are not completely reliable. Taking these observations into account, we believe the main benefit of the fusion approach is its flexibility with respect to a changing environment and the potential demands of the patient/user. Multisensor-based systems can be easily adjusted to the current monitoring instance (indoor/outdoor scenario), provide better insight into the elderly falling problem (additional data sources), and initiate fall prevention analyses. In this case, continuous data collected from multiple sources can be analyzed for recurring patterns, as suggested in our previous publication [38]. Multisensor fusion has proved its efficiency in various areas of the healthcare domain [37] and has subsequently gained popularity in the fall detection domain. Moreover, with recent developments on the ICT market, more sensors are now available and can be combined to perform advanced activity tracking, which will likely increase the number of publications.

In Section 2 fall detection methods were classified into three main categories based on sensor technology: wearable, ambient, and vision-based (see Figure 2). According to the vast majority of recent publications within the fall detection domain, the same types of sensors are involved in the multisensor fusion process, with two major exceptions: (1) ambient and vision-based sensors are both integrated into the environment and can be considered a unified context-aware category, and (2) wearable devices can be combined with context-aware sensors, comprising an additional category. With these corrections we propose an alternative approach to classifying fusion systems operating in the fall detection domain. Unlike single-sensor methods, the choice of category does not depend on the utilized sensor technology but corresponds to the type of sensors being fused: context-aware sensors, wearable sensors, and a combination of context-aware and wearable sensors. In the rest of the paper we review each category, present the most significant studies in the multisensor fusion fall detection domain, and discuss possible challenges and limitations.

3.1. Context-Aware Sensors Fusion

According to a recent review of fall detection methods, most systems that use a multimodal approach are wearable-sensor oriented and exploit triaxial accelerometers as part of the process [39]. However, there are a number of works providing solutions that exclude wearable sensors from monitoring and fall detection in particular. These types of systems are effective when unobtrusiveness is the main requirement and the patient refuses to wear any external devices. They can detect a person's movements, collect information regarding the usage of furniture or household items, and answer questions regarding the patient's activity, for example, "is the patient eating/exercising regularly?" [40]. At the same time, their operational capabilities are highly limited by the covered area.

Typically, the sensors involved in context-aware monitoring include cameras [24], vibration sensors [27], sound detectors [26], pressure mats [29], and floor or infrared sensors [28]. Table 2 gives an overview of the most significant studies in this area. All the works are compared based on publication year, sensors involved in the monitoring process, multisensor fusion algorithm, experimental setup, and evaluation results, depending on their availability.

Article | Year | Basis | Deployed sensors | Algorithm deployed | Evaluation | Performance

Brulin and Courtial [24] | 2010 | Fusion system architecture for fall detection | PIR, camera, thermopiles | Fuzzy logic + combination of location/posture duration | 15 video sequences recorded in a health smart home | Motion detection: 84%

Huang et al. [25] | 2008 | Intelligent cane fall detection based on sensor fusion | Laser range finder, CCD camera | Probability distribution function with relevant parameter, rule-based approach | Normal walking/fall detection experiments with cane robots | Effectiveness confirmed through experiments

Zigel et al. [26] | 2009 | Fall detection based on vibration and sound signals | Accelerometer, microphone | Feature extraction, Bayes decision rule classifier | Mimicking doll "Rescue Randy," 40 drops; other objects, 80 drops | SE (sensitivity): 97.5%; SP (specificity): 98.6%

Yazar et al. [27] | 2014 | Multisensor system for fall detection | Vibration sensor, PIR sensors | Winner-takes-all (WTA) decision fusion algorithm | Demo including falling person, human footstep, human motion, unusual inactivity detection | No data provided

Toreyin et al. [28] | 2008 | Fall detection using multisensor signal processing | Infrared, sound sensors | Hidden Markov Models | 2 minutes of walking, falling, and speech sound generation | All falls detected correctly

Ariani et al. [29] | 2012 | Unobtrusive falls detection with multiple persons | PIR and motion detector, pressure mats | Decision tree algorithm | 3 ADL scenarios, 12 types of falls | SE: 100%; SP: 77.14%; accuracy: 89.33%

Li et al. [21] | 2013 | Improvement of acoustic fall detection using Kinect depth sensing | FADE (acoustic), Kinect | Segmentation, thresholding | Recorded video data | Error reduction by 80%

Unlike the single-sensor approach, where feature extraction is followed by data classification, multisensor systems perform independent data analyses for each sensor technology, with a fusion method as the final step in fall detection [41]. The variation of multisensor fusion techniques in each category, including context-aware systems, is highly dependent on the sensors deployed in each study. In an early study from 2008, Huang et al. [25] propose a human fall detection method based on fusing sensory information from a vision system and a laser range finder (LRF). In order to fuse the data from the two sensors, the unrelated types of measurements are integrated into the image coordinate frame, with a focus on the distance between the head and the center of the two legs. The actual fall detection is then based on a probability distribution function (PDF) and a simple rule-based approach. Another interesting solution is suggested by Zigel et al. [26], where a microphone used to track sound is combined with an accelerometer capturing floor vibrations after the patient falls; in this case feature extraction and a Bayes decision rule classifier provide the information fusion. Camera systems are used in several studies and accompanied either by acoustic sensors [21] or by PIR sensors together with thermopiles [24]; thresholding after preliminary segmentation and fuzzy logic combined with location/posture duration are used as the respective fusion techniques. Ariani et al. [29] use wireless ambient sensors (motion detectors and pressure mats) to track the movement of multiple persons and later apply a decision tree algorithm to unobtrusively detect falls when they occur. Another fall detection system, consisting of a vibration sensor and two PIR sensors, is primarily based on a winner-takes-all (WTA) decision fusion algorithm [27], which is activated after preliminary processing of the measurements collected from both sources.
Finally, a Hidden Markov Model (HMM) is deployed in [28] to combine infrared and sound sensors. The most popular sensor deployed by context-aware systems is the PIR type, mentioned in three different publications. At the same time, no preference was given to any specific fusion algorithm.
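As an illustration of the winner-takes-all idea in the spirit of [27], each channel can report a binary decision with a confidence, and the most confident channel's decision is taken as the fused output. The channel names and values below are invented for illustration.

```python
# Winner-takes-all (WTA) decision fusion: the most confident channel wins.
def wta_fuse(channel_outputs):
    """channel_outputs: dict mapping sensor name -> (fall_detected, confidence)."""
    winner = max(channel_outputs, key=lambda s: channel_outputs[s][1])
    return winner, channel_outputs[winner][0]

# Hypothetical per-channel outputs for one event window.
outputs = {
    "vibration": (True, 0.9),   # strong floor-impact signature
    "pir_upper": (False, 0.4),  # motion still seen at standing height
    "pir_lower": (True, 0.7),   # motion detected near the floor
}
winner, decision = wta_fuse(outputs)
```

Unlike the weighted vote, WTA discards all but one channel per decision, which makes it robust when one modality is clearly dominant for a given event type but sensitive to miscalibrated confidences.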

The initial sensor setup and preliminary processing play an essential role in the subsequent evaluation of a system. Brulin and Courtial [24] deployed a health smart home and recorded 15 video sequences illustrating everyday-life or emergency situations performed by two subjects. In another example the experimental part is split into two related steps: first, the possibility distribution of "normal walking" is investigated, and then the fall detection method is validated. Other examples include dropping a "Rescue Randy" doll, fall and speech sound generation, and ADL and fall simulations. At this point, the variability of trial approaches indicates the absence of a common strategy for evaluating context-aware multisensor fusion systems. Due to the high variability of devices deployed for multisensor fusion, it becomes complicated to analyze and unify all the methods involved in the process or to determine the most reliable one; further investigation and experimental work are required.

Moreover, it is important to mention that none of the analyzed research works managed to perform experiments with elderly people, the group at the highest risk of falling. This can be explained by patient privacy, which remains a sensitive issue for ambient and especially vision-based fall detection systems. As a historical reference, one of the first projects in the area was forced to shift from image processing to body-placed sensors due to privacy concerns [42]. We believe this problem can be partly solved by alternating deployment of camera-based and ambient sensors depending on location, changes in the environment, or the current emergency situation. In this way, monitoring that violates patient privacy is only performed when the calculated risk of falling is significantly higher.

3.2. Wearable Sensors Fusion

With recent developments on the ICT market, wearable devices have started to play an essential role in modern healthcare systems. Most of them have already been utilized for automatic fall detection and have demonstrated their efficiency compared to other methods [57–59]. Unlike context-aware sensor technology, wearables attached to a patient's body do not affect their privacy and can therefore perform monitoring over extended periods of time. In most cases, fall detection systems based on fusing wearable sensors include an accelerometer as the main source of data, often complemented by other types of wearables, for example, gyroscopes and magnetometers [42], location tags [45], or barometric pressure sensors [44, 47, 48] (see Table 3). Moreover, physiological devices combined with accelerometers can be considered a separate subgroup due to specific synchronization requirements and processing of the collected measurements. For example, Yi et al. [46] deploy a temperature sensor and ECG together with an accelerometer and perform individual data processing for each device, later fused into a unified alert message for medical staff.

Article | Year | Basis | Deployed sensors | Deployed algorithm | Evaluation | Performance

Felisberto et al. [42] | 2014 | Movement monitoring, accident detection based on sensor fusion | Accelerometer, gyroscope, magnetometer | Fuzzy logic + extended Kalman filter, direct cosine matrix (DCM), control algorithm | Movement-state/orientation-state experiment with precollected data | Passing average: 84%

Doukas and Maglogiannis [43] | 2008 | Fall detection based on movement/sound data | Accelerometer, microphones | Support Vector Machine (SVM) | 2 volunteers: (a) simple walk, (b) walk and fall, (c) walk and run | All fall events successfully detected; run events: 96.72%

Bianchi et al. [44] | 2010 | Fall event detection with barometric pressure and triaxial accelerometer | Accelerometer, air pressure sensor | Heuristically trained decision tree classifier | 20 healthy volunteers, falls/ADL simulation | Accuracy: 96.9%; sensitivity: 97.5%; specificity: 96.5%

Lustrek et al. [45] | 2011 | Fall detection with accelerometer and location sensor | Accelerometer, location tags | Rule-based reasoning | 10 healthy volunteers, specific scenario | Methods utilized both context/accelerometer; accuracy increase: 40%

Yi et al. [46] | 2014 | Wearable sensor data fusion for fall detection | Temperature, accelerometer, ECG sensor | Data processed individually and combined into alert message | No evaluation provided | Human postures successfully recognized; full evaluation not performed

Tolkiehn et al. [47] | 2011 | Fall detection with accelerometer and barometric pressure sensor | Accelerometer, barometric pressure sensor | Feature extraction, thresholding combination | 12 healthy volunteers, ADL/fall simulation, 297 data sequences | Fall identification accuracy: 94.12%

Greene et al. [48] | 2012 | Falls risk estimation through multisensor assessment of standing balance | Pressure sensor (platform), body-worn inertial sensor | SVM | 120 community-dwelling older adults | Classification accuracy: 71.52%

Similar to the previous category described in Section 3.1, the multisensor fusion algorithms applied to wearables can vary depending on the specific device or the authors' choice. Three different techniques were deployed to combine body-worn inertial sensors and air pressure sensors: a heuristically trained decision tree classifier, feature extraction with thresholding, and SVM. According to the evaluation results, the decision tree and feature extraction/thresholding are more efficient, with 96.9% and 94.12% accuracy, respectively. However, unlike the studies in [44, 47], Greene et al. [48] performed the experimental part with older adults, which significantly affects the final result (see Table 3). In the study by Felisberto et al. [42], a combination of methods including fuzzy logic, an extended Kalman filter (EKF), a direct cosine matrix (DCM), and a control algorithm is applied in order to fuse accelerometer, gyroscope, and magnetometer data. Fall detection based on movement and sound data is performed by Doukas et al. [43, 56], where an accelerometer is deployed together with microphones and the collected data are fused by a Support Vector Machine. An accuracy increase of 40% was demonstrated in [45] after an accelerometer was combined with a location tag by rule-based reasoning. However, for a final judgment regarding these positive results, the experimental conditions should be taken into account.
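The accelerometer/barometer thresholding combination can be sketched as follows, in the spirit of [44, 47]: a fall is flagged only when an impact peak coincides with a drop in estimated height. All constants below are illustrative assumptions, not values from the cited studies.

```python
# Hedged sketch of thresholding-based fusion of an accelerometer and a
# barometric pressure sensor: impact peak AND altitude drop are both required.
IMPACT_G = 2.5        # assumed acceleration peak threshold (g)
HEIGHT_DROP_M = 0.4   # assumed minimum descent for a fall (m)

def pressure_to_height_change(p_before_hpa, p_after_hpa):
    """Barometric formula linearized near sea level: roughly 8.3 m per hPa.
    Pressure increases as the sensor descends, so a positive delta means a drop."""
    return (p_after_hpa - p_before_hpa) * 8.3

def fused_fall_decision(acc_peak_g, p_before_hpa, p_after_hpa):
    impact = acc_peak_g > IMPACT_G
    dropped = pressure_to_height_change(p_before_hpa, p_after_hpa) > HEIGHT_DROP_M
    return impact and dropped

# Sitting down hard: impact peak but no meaningful height change at the sensor.
sit = fused_fall_decision(2.8, 1013.25, 1013.26)
# Fall: impact peak together with roughly 0.8 m of descent (~0.1 hPa increase).
fall = fused_fall_decision(3.1, 1013.25, 1013.35)
```

Requiring both conditions is what suppresses the ADL false positives (e.g., sitting down abruptly) that a pure acceleration threshold cannot distinguish from falls.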

In the vast majority of fall-related studies, the evaluation process is mainly performed with healthy volunteers or based on simulation [38, 60, 61]. This makes it almost impossible to accurately assess the operational capabilities of a developed system or the reliability of a deployed algorithm; more experimental data from the elderly population should be analyzed in order to improve the sufficiency of developed fall detection systems. In a study from 2012, Greene et al. [48] estimate the risk of falling through multisensor assessment of standing balance. A pressure-sensitive platform and a body-worn inertial sensor are utilized during the evaluation, which is based on monitoring 120 community-dwelling older adults. It is one of the few research studies where trials with the elderly population are included in the evaluation. As a result, the overall performance of the system was significantly affected, demonstrating only 71.52% classification accuracy, whereas the other methods reach 95%–97% for specificity, sensitivity, and accuracy. Fall detection based on fusing wearable devices is still a novel method and therefore lacks a unified approach to effectively combining sensors, owing to the different formats of the collected data.
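The sensitivity (SE), specificity (SP), and accuracy figures quoted throughout this review follow the standard confusion-matrix definitions; the counts below are invented for illustration only.

```python
# Standard evaluation metrics for a binary fall/no-fall classifier.
def se_sp_accuracy(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)           # detected falls among actual falls
    specificity = tn / (tn + fp)           # rejected non-falls among actual non-falls
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Example: 39 of 40 simulated falls detected, 6 false alarms across 110 ADL trials.
se, sp, acc = se_sp_accuracy(tp=39, fn=1, tn=104, fp=6)
```

Note that with heavily imbalanced trial sets (many ADL, few falls) a high accuracy can mask a poor sensitivity, which is one reason the reviewed studies report all three figures.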

At the same time, attaching additional digital devices to the patient's body is inconvenient for users and can potentially lead to a low acceptance rate for this method. This issue can be solved if different types of wearable sensors are incorporated into a single device that collects unrelated types of data simultaneously. This would help reduce data loss and improve processing time while maintaining the patient's independent lifestyle without affecting privacy. Modern mobile phones are already equipped with advanced sensor functionality and can be suggested as a tool for synchronization and processing of the collected measurements. However, modern smartphones and gadgets are still poorly distributed among elderly people [62], which complicates deployment and further progress of the proposed methodology.

3.3. Wearable/Ambient Sensor Fusion

The last category is characterized by a combination of the previously presented approaches and can potentially help to detect a wider spectrum of possible emergency situations connected with falls. Context-aware fall systems can provide long-term trend analysis, describing the patient's behavior and recognizing abnormal patterns, but are often limited by their coverage area. Wearable fall detection is becoming increasingly available due to cheap embedded sensors included in smartphones and demonstrates relatively high performance, but it still produces a significant number of false alarms [63, 64] and has mainly been tested in laboratory environments. As a result, research studies that attempt to merge the major benefits of both approaches into a self-complementing system surpass the other categories in number of publications (see Tables 2 and 3). In Table 4 we review the most significant studies to demonstrate the latest trends in multisensor fusion for context-aware and wearable sensors.

Article | Year | Basis | Deployed sensors | Deployed algorithm | Evaluation | Performance

Aguilar et al. [49] | 2014 | Sensor fusion via evidential network for fall detection | RFPAT [49], GARDIEN [49] | Evidential network, Dempster-Shafer theory formalism | Data recorded at Telecom SudParis | SE: 94%

Cavalcante et al. [50] | 2012 | Evidential network for medical data fusion in remote monitoring | Wearable sensor, infrared sensors, sound analyzer | Dempster-Shafer theory | Data recorded at Telecom SudParis | SE: 93.94%

Della Toffola et al. [51] | 2011 | Combining sensor networks and a home robot to improve fall detection | Body-worn sensors, ambient sensors, home robot | Future work: flooding time synchronization protocol for nodes | Packet transmission delays, power consumption | Built system suitable for fall detection

McIlwraith et al. [52] | 2010 | Wearable and ambient sensor fusion for human motion detection | Accelerometer (e-AR), video sensors, gyroscope | Spatial/temporal HMM | 5 activities performed by volunteers in a constrained manner | Accuracy increase: 6.4% over vision system, 17.2% over gyroscope

Kepski et al. [53] | 2012 | Fall detection using Kinect and accelerometer | Kinect, accelerometer, gyroscope | Fuzzy inference system | Intentional falls and ADLs performed by 3 volunteers | Fused sensors proved sufficient for a reliable fall detection system

Leone and Diraco [41] | 2008 | Multisensor approach for fall detection in home environment | 3D camera, wearable accelerometer, microphone | Multithreading approach with fuzzy logic technique under development | 13 volunteers performing 450 events including 210 falls | 3D vision alarm: 81.3%; accelerometer alarm: 98%; audio alarm: 83%

Alemdar et al. [54] | 2010 | Multimodal fall detection within WeCare framework | Accelerometer, embedded cameras, RFID tags | Decision fusion mechanism | Volunteer performing ADL | Falls successfully distinguished from ADL

Cagnoni et al. [55] | 2009 | Fall detection for assistive technology applications | Accelerometer, video camera | PSO for visual data, HTM for acceleration; fusion via multiple classifier sets, fuzzy logic, decision trees | Accelerometer: continuous flow of real-life event simulation; video sensor: limited set of image sequences | Joint system guaranteed to provide a good level of fault tolerance

Doukas and Maglogiannis [56] | 2011 | Fall detection utilizing motion, sound, and visual perceptual components | Tracking camera, accelerometer, microphones | Semantic rules based on Semantic Web Rule Language (SWRL) | 2 male volunteers performing experimental protocol | Rule-based evaluation minimizes false positives to zero

This approach is considered relatively new and therefore requires thorough investigation and experimental work. As a result, the choice of sensors varies significantly from one study to another. Most of the systems deploy accelerometers as the main device, additionally combined with either ambient sensors or 3D cameras [65, 66]. Other wearable devices include gyroscopes, microphones, physiological sensors, sound analyzers, infrared sensors, and RFID tags. Della Toffola et al. [51], in their study from 2011, take a different approach and complement a set of ambient and body-worn sensors with a home robot in order to improve fall detection. A slightly different concept is presented by McIlwraith et al. [52] and Kepski et al. [53], where the accelerometer and gyroscope are accompanied by vision-based sensors: in the first publication surrounding vision sensors are deployed for accurate characterization of motion, while in the second case the authors instead used a commercially available Microsoft Kinect camera and performed reliable fall detection. In some cases the gyroscope can be replaced by microphones [43] or alternatively by RFID tags [63], with an embedded tracking camera and accelerometer still being part of the framework.

Due to the high diversity of sensor technology deployed for fall detection, the choice of algorithms performing the fusion function is still unique to each research work. The most common approach to combining wearable and context-aware systems includes individual low-complexity algorithms for every sensor technology, followed by a more advanced fusion algorithm. None of the reviewed studies deployed a thresholding technique on the individual or fusion level, and the only example of a rule-based approach was complicated by the Semantic Web Rule Language. The most popular algorithm is fuzzy logic, utilized as a fuzzy inference system [53] or a fuzzy logic decision tree [55]. Other methods include evidential networks, Dempster-Shafer theory, and Hidden Markov Models. As with the previous categories, it is not possible to identify a common approach or justify the choice of fusion methods, since there is not enough experimental evidence to work with.
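To make the Dempster-Shafer option concrete, the sketch below combines two mass functions over the frame {fall, ADL} with Dempster's rule of combination. The mass values are invented for illustration and are not taken from any reviewed study.

```python
# Hedged sketch: Dempster's rule of combination for two fall-detection
# sensors over the frame {fall, adl}. Mass values are illustrative only.

def combine_ds(m1, m2):
    """Combine two mass functions (dicts over frozensets) with Dempster's rule."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    # Normalize by the non-conflicting mass
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

FALL, ADL = frozenset({"fall"}), frozenset({"adl"})
THETA = FALL | ADL  # ignorance: mass assigned to the whole frame

# Wearable sensor strongly suggests a fall; ambient sensor is less certain.
m_acc = {FALL: 0.7, ADL: 0.1, THETA: 0.2}
m_amb = {FALL: 0.4, ADL: 0.2, THETA: 0.4}

fused = combine_ds(m_acc, m_amb)
print(fused[FALL])  # belief committed exactly to "fall" after fusion
```

Combining the two sources raises the mass on "fall" above either input alone, which is the qualitative behavior the evidential-network studies [49, 50] rely on.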

As in the previous categories, the variation in sensors and methods deployed for multimodal fusion has a significant effect on the experimental part of the research. The evaluation process can be characterized by two different scenarios: (1) online testing with volunteers subsequently performing ADL or falls and (2) offline evaluation utilizing previously collected measurements. In both cases, the combination of wearable and context-aware approaches had a positive impact and resulted in increased specificity, sensitivity, and accuracy. Doukas and Maglogiannis [56], in their attempt to merge a tracking camera, accelerometer, and microphones, managed to reduce the number of false positive alarms to zero. Evaluation based on elderly patients in real home-like environments is still a sensitive issue, given the complexity of the sensor setup in this case.

In our previous studies we proposed a multisensor fusion system based on Dynamic Bayesian Networks and combined a wearable device with a context-aware sensor framework [67]. All accelerometer measurements were obtained from an Android-based smartphone and analyzed for possible falls. Context-aware information was obtained from an environmental sensor network consisting of PIR motion, door contact, pressure mat, and power usage detectors embedded into a smart home, deploying a special context recognition algorithm to deliver user activities. Physiological data was then fused with the ambient measurements and processed in a Dynamic Bayesian Network performing fall detection. The evaluation process contained both a simulation part (MATLAB tools) and a demonstration part (healthy volunteer). With the proposed technique we managed to complement two different fall detection approaches and improve the reliability of the fall detection system. However, we are still far from the deployment of the developed or similar systems in everyday geriatric practice, or from explicit examples of commercially successful applications. Moreover, the vast majority of similar systems obtain high experimental results in unrealistic or restricted conditions with little reference to real-life environments, which is among the issues of this approach. Other challenges and limitations of the multisensor fusion method in fall detection are discussed in Section 4 of the review.
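The probabilistic fusion step in such a system can be illustrated with a single time-slice Bayesian update rather than a full Dynamic Bayesian Network. All priors and likelihoods below are invented for illustration and do not reproduce the model in [67].

```python
# Hedged sketch: single time-slice Bayesian fusion of a wearable impact
# flag and an ambient inactivity flag. All probabilities are invented;
# the actual study used a Dynamic Bayesian Network over time.

def fall_posterior(impact, inactive,
                   p_fall=0.01,           # prior probability of a fall event
                   p_impact=(0.9, 0.05),  # P(impact | fall), P(impact | no fall)
                   p_inact=(0.8, 0.1)):   # P(inactive | fall), P(inactive | no fall)
    """P(fall | evidence), assuming sensors are independent given the state."""
    def lik(obs, probs, fall):
        p = probs[0] if fall else probs[1]
        return p if obs else 1.0 - p
    num = p_fall * lik(impact, p_impact, True) * lik(inactive, p_inact, True)
    den = num + (1 - p_fall) * lik(impact, p_impact, False) * lik(inactive, p_inact, False)
    return num / den

# Both sensors agree that something happened:
print(round(fall_posterior(True, True), 3))
```

Even with agreeing sensors the posterior stays well below 1 because falls are rare a priori, which is one reason single-sensor detectors produce false alarms and why corroborating ambient evidence helps.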

4. Discussion

4.1. Challenges

Most of the challenges specific to modern single sensor-based fall detection systems remain valid for the multimodal approach. Igual et al. [9] provide a number of typical problems that can affect final results, including (1) lack of performance under real-life conditions, (2) limited usability (which mostly applies to wearable and smartphone-based fall detectors), and (3) lack of publications regarding the practicality and acceptability of modern fall detection technologies. Other suggested issues concern privacy, lack of human contact, and limited experimental conditions.

After additional sensor functionality is included, a single sensor-based fall detector becomes a multimodal system, inheriting the challenges typical of other frameworks with data fusion requirements. Khaleghi et al. [31] introduced these issues in their study, starting with the imperfection of the collected data and the diversity or low reliability of sensor technologies. Based on the reviewed material, we can complement the list of challenges in data fusion for fall detection with the issues listed below. All these items should be analyzed and taken into consideration before developing a fall detection framework.

4.1.1. Cost Efficiency

As previously mentioned in Section 3.3, multisensor fusion helps to improve the reliability of a fall detection system. At the same time, additional medical devices can significantly increase the final cost of the monitoring framework. In this case, cost efficiency assessment becomes an essential part of the evaluation process. Our recommendation is to create a flexible structure that allows the number of components to be adjusted depending on each component's individual contribution to the overall performance of the system.

4.1.2. Conflicting Output

During the monitoring process, similar activities can be interpreted in different ways by unrelated sensor platforms. The number of false alarms among modern fall detection systems is still relatively high. Therefore, it is essential to give priority to the technology that is more reliable and can minimize unclassified falls or ADLs.
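One simple way to encode such a priority is a reliability-weighted vote over the conflicting per-sensor decisions. The weights below are illustrative; in practice they would be estimated from each detector's measured sensitivity and specificity.

```python
# Hedged sketch: resolving conflicting per-sensor fall decisions with
# reliability weights. Sensor names and weights are illustrative only.

def weighted_vote(decisions, weights):
    """decisions: {sensor: True/False fall flag}; weights: {sensor: reliability}.

    Raises an alarm when the reliability-weighted share of sensors
    reporting a fall reaches one half.
    """
    score = sum(weights[s] for s, flag in decisions.items() if flag)
    total = sum(weights[s] for s in decisions)
    return score / total >= 0.5

# The (more reliable) accelerometer disagrees with the other two sensors:
decisions = {"accelerometer": True, "camera": False, "microphone": False}
weights = {"accelerometer": 0.6, "camera": 0.3, "microphone": 0.1}
print(weighted_vote(decisions, weights))
```

Here the accelerometer's higher weight lets it override the two less reliable sensors, whereas an unweighted majority vote would have suppressed the alarm.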

4.1.3. Data Correlation

Measurements collected during the monitoring process in multisensor fall detection typically come from different backgrounds and are unrelated to each other. These data should not only be merged in the most efficient way, but also be analyzed for possible common trends and similarities.
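A minimal check for such common trends is the Pearson correlation between two windowed sensor streams; the sample data below are synthetic and only illustrate two streams that spike in the same window.

```python
# Hedged sketch: checking two unrelated sensor streams for a common trend
# with Pearson correlation (pure-stdlib implementation). Synthetic data.
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Acceleration-magnitude peaks and floor-vibration energy over 6 windows;
# both streams spike in the third window (a plausible fall signature):
acc = [1.0, 1.1, 3.5, 1.0, 0.9, 1.0]
vib = [0.2, 0.3, 2.9, 0.2, 0.2, 0.3]
r = pearson(acc, vib)
print(r > 0.9)  # the streams share a strong common trend
```

A high coefficient across modalities can corroborate a suspected fall, while persistently uncorrelated streams may indicate that one sensor contributes little and could be dropped.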

4.1.4. Processing Framework

The majority of systems analyze data from each component independently and deploy a fusion algorithm as a final step to combine the acquired results [33]. However, in some cases raw data collected from each sensor unit can be delivered to a common framework without preliminary processing. Alternatively, in the case of wearable and context-aware fusion, particular categories can be processed in conjunction (i.e., various types of ambient sensors) and later fused with sensors from an unrelated category. The latter options, however, lead to unnecessary complication of the fusion algorithm and a subsequent increase in computational time.

4.1.5. Computational Power

Multiple monitoring devices result in an additional amount of data collected by the system and subsequently increase computational costs. This issue can be mitigated by separating data analysis into several stages, including preprocessing, data filtering, and feature extraction. Each particular type of processing can be performed by a separate component connected to a processing center, where the final decision is made.
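The staged split can be sketched as below for a single accelerometer channel: each stage reduces the data volume, so only a compact feature, not the raw stream, reaches the decision step. Window size and threshold values are illustrative assumptions.

```python
# Hedged sketch: staged processing (preprocess -> filter -> feature
# extraction -> decision) so only compact features reach the fusion
# center. Thresholds and window sizes are illustrative only.
from math import sqrt

def magnitude(sample):
    """Preprocess: triaxial sample (x, y, z) -> scalar magnitude."""
    x, y, z = sample
    return sqrt(x * x + y * y + z * z)

def moving_average(values, k=3):
    """Filter: simple k-point moving average to suppress jitter."""
    return [sum(values[i:i + k]) / k for i in range(len(values) - k + 1)]

def peak_feature(values):
    """Feature extraction: one number per window instead of the raw stream."""
    return max(values)

def detect_fall(window, threshold=2.5):
    """Decision at the processing center, on the extracted feature only."""
    return peak_feature(moving_average([magnitude(s) for s in window])) > threshold

# Quiet activity followed by a sharp impact (units of g, synthetic):
window = [(0, 0, 1.0)] * 4 + [(3.0, 4.0, 5.0)] + [(0, 0, 1.0)] * 4
print(detect_fall(window))
```

In a multisensor system each modality would run such a local pipeline on its own node and forward only the per-window feature, keeping the central fusion step cheap.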

Another drawback, particularly specific to modern multisensor fusion systems, is the lack of a simplified evaluation procedure. In the vast majority of articles the evaluation method is based on simulations, or this information is not available at all. This is partly caused by the complexity of the monitoring setup in a real environment: sensor functionality should be embedded into a regular apartment or a specially designed test environment. Moreover, similarly to regular fall detection systems, fusion-based methods are evaluated on simulated falls performed by healthy volunteers, which is far from the real-life scenario. Testing with real patients who suffer from falls could help to improve the process; however, it requires ethical consent, entails additional complications, and is commonly not available in fall detection studies. Additional complexity is caused by the distinct technological backgrounds of the sensor technologies involved in the monitoring process. This issue is specific to any fusion-based system and becomes essential when developing a multisensor fall detection mechanism.

4.2. Future Trends

Based on the majority of reviewed papers, the main trend in multisensor fall detection can be characterized as merging sensor technologies from different categories and unrelated platforms. Systems developed with this approach have interchangeable components and can maintain monitoring even when one of the components is inactive.

4.2.1. Physiological Sensors

Most elderly patients suffer from various health problems, including heart disease or Alzheimer's disease, which increase the probability of falling in daily life. Therefore, it is important to track patients' activity in conjunction with significant physiological parameters. Physiological sensors combined with fall detectors can help to understand the correlation between a patient's activity and health condition and make the monitoring process more detailed.

4.2.2. Long-Term Analyses

Monitoring people with a high risk of falling on a regular basis over a long period of time will improve data analysis and help to detect interesting patterns. In the long term, this could enable an algorithm that prevents a fall when a dangerous measurement sequence repeats itself over time.

4.2.3. Integration into Smart Home Environments

Long-term analysis is almost impossible without an appropriate sensor setup. In many cases sensors are already integrated into the everyday routine in the form of smart home environments, collecting valuable information regarding the user's presence in the house. They can be further adapted for patient medical tracking and reliable fall detection without additional installation costs.

4.2.4. Patient-Oriented Systems

Given the individual approach to patient treatment, most multimodal healthcare systems should be more patient-oriented. The choice of sensors and processing techniques should correspond to the actual patient's demands and the major health problems they are suffering from. Otherwise, developed platforms should cover a wide spectrum of healthcare problems or be as universal as possible.

Due to the complexity of falls and the variation in falling circumstances, the most effective approach implies fusing information from sensors of different categories. As a step towards a full-scale remote monitoring framework, fall detection components can be deployed in conjunction with other healthcare systems to check patients' well-being on a long-term basis. Following the recent trend, we suggest building a special environment with wearable, ambient, and vision sensors, where fusion techniques can be effectively evaluated. At the same time, it is recommended to complement these types of smart environments with additional sensor technology only based on the current patient's demand or the particular monitoring case, in order to avoid data overload and unnecessary privacy violations.

5. Conclusion

Fall detection systems play an essential role in modern healthcare. The latest sensor technologies are deployed in order to distinguish between falls and regular ADLs, with a recent trend towards combining unrelated data sources. In the presented study we conducted a search among the latest works on multisensor fall detection systems and attempted to classify the systems into various categories. The analyzed material allowed us to start a useful discussion regarding the major challenges faced by the multifusion approach, its issues, and its limitations. Based on this discussion, we can suggest core topics that should be considered in fusion methodology in the future. Among other things, we would like to place special focus on (1) developing a multifunctional monitoring platform, where each component/sensor can be easily adjusted or removed depending on user demand or monitoring circumstances, and (2) organizing continuous monitoring/experimental sessions involving the elderly population in order to improve the acceptability of fall detection systems. Both suggestions will introduce a certain level of structure to this novel but rapidly evolving approach and help to unify the choice of algorithm in each particular monitoring case.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


References

1. United Nations, Department of Economic and Social Affairs, Population Division, World Population Ageing, 2009.
2. J. Dai, X. Bai, Z. Yang, Z. Shen, and D. Xuan, “Mobile phone-based pervasive fall detection,” Personal and Ubiquitous Computing, vol. 14, no. 7, pp. 633–643, 2010.
3. S. Sadigh, A. Reimers, R. Andersson, and L. Laflamme, “Falls and fall-related injuries among the elderly: a survey of residential-care facilities in a Swedish municipality,” Journal of Community Health, vol. 29, no. 2, pp. 129–140, 2004.
4. B. J. Vellas, S. J. Wayne, L. J. Romero, R. N. Baumgartner, and P. J. Garry, “Fear of falling and restriction of mobility in elderly fallers,” Age and Ageing, vol. 26, no. 3, pp. 189–193, 1997.
5. M. Memon, S. R. Wagner, C. F. Pedersen, F. H. Aysha Beevi, and F. O. Hansen, “Ambient Assisted Living healthcare frameworks, platforms, standards, and quality attributes,” Sensors, vol. 14, no. 3, pp. 4312–4341, 2014.
6. P. Rashidi and A. Mihailidis, “A survey on ambient-assisted living tools for older adults,” IEEE Journal of Biomedical and Health Informatics, vol. 17, no. 3, pp. 579–590, 2013.
7. N. Noury, A. Fleury, P. Rumeau et al., “Fall detection—principles and methods,” in Proceedings of the 29th Annual International Conference of IEEE-EMBS Engineering in Medicine and Biology Society (EMBS '07), pp. 1663–1666, August 2007.
8. M. Mubashir, L. Shao, and L. Seed, “A survey on fall detection: principles and approaches,” Neurocomputing, vol. 100, pp. 144–152, 2013.
9. R. Igual, C. Medrano, and I. Plaza, “Challenges, issues and trends in fall detection systems,” BioMedical Engineering Online, vol. 12, no. 1, article 66, 2013.
10. F. Bagalà, C. Becker, A. Cappello et al., “Evaluation of accelerometer-based fall detection algorithms on real-world falls,” PLoS ONE, vol. 7, no. 5, Article ID e37062, 2012.
11. N. Otanasap and P. Boonbrahm, “Fall prevention using head velocity extracted from visual based VDO sequences,” in Proceedings of the 5th Augmented Human International Conference, 2012.
12. I. Iliev, S. Tabakov, and V. Spasova, “Multipoint video control and fall detection system applicable in assistance of the elderly and people with disabilities,” International Journal of Reasoning-based Intelligent Systems, vol. 6, no. 1-2, pp. 34–39, 2014.
13. X.-L. Chen, Y.-H. Liu, D. K. Y. Chan, Q. Shen, and H. van Nguyen, “Characteristics associated with falls among the elderly within aged care wards in a tertiary hospital: a retrospective case-control study,” Chinese Medical Journal, vol. 123, no. 13, pp. 1668–1672, 2010.
14. B. Ni, C. D. Nguyen, and P. Moulin, “RGBD-camera based get-up event detection for hospital fall prevention,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '12), pp. 1405–1408, March 2012.
15. H. J. B. Dan Istrate, J. M. B. Dorizzi, M. Cesar, T. M. J. L. Baldinger, A. G. I. B. Paulo, and A. Cavalcante, “Evidential network-based multimodal fusion for fall detection,” in Proceedings of the 8th International Conference on Wearable Micro and Nano Technologies for Personalised Health, 2012.
16. G. Ward, N. Holliday, S. Fielden, and S. Williams, “Fall detectors: a review of the literature,” Journal of Assistive Technologies, vol. 6, no. 3, pp. 202–215, 2012.
17. X. Yu, “Approaches and principles of fall detection for elderly and patient,” in Proceedings of the 10th International Conference on e-Health Networking, Applications and Services (HealthCom '08), pp. 42–47, July 2008.
18. F. Hijaz, N. Afzal, T. Ahmad, and O. Hasan, “Survey of fall detection and daily activity monitoring techniques,” in Proceedings of the 2nd International Conference on Information and Emerging Technologies (ICIET '10), pp. 1–6, June 2010.
19. M. A. Habib, M. S. Mohktar, S. B. Kamaruzzaman, K. S. Lim, T. M. Pin, and F. Ibrahim, “Smartphone-based solutions for fall detection and prevention: challenges and open issues,” Sensors, vol. 14, no. 4, pp. 7181–7208, 2014.
20. S. K. Tasoulis, C. N. Doukas, V. P. Plagianakos, and I. Maglogiannis, “Statistical data mining of streaming motion data for activity and fall recognition in assistive environments,” Neurocomputing, vol. 107, pp. 87–96, 2013.
21. Y. Li, T. Banerjee, M. Popescu, and M. Skubic, “Improvement of acoustic fall detection using Kinect depth sensing,” in Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '13), pp. 6736–6739, IEEE, Osaka, Japan, July 2013.
22. P. Augustyniak, M. Smoleń, Z. Mikrut, and E. Kańtoch, “Seamless tracing of human behavior using complementary wearable and house-embedded sensors,” Sensors, vol. 14, no. 5, pp. 7831–7856, 2014.
23. M. Grassi, A. Lombardi, G. Rescio et al., “A hardware-software framework for high-reliability people fall detection,” in Proceedings of the IEEE Sensors (SENSORS '08), pp. 1328–1331, October 2009.
24. D. Brulin and E. Courtial, “Multi-sensors data fusion system for fall detection,” in Proceedings of the 10th International Conference on Information Technology and Applications in Biomedicine (ITAB '10), pp. 1–4, IEEE, Corfu, Greece, November 2010.
25. J. Huang, P. Di, K. Wakita, T. Fukuda, and K. Sekiyama, “Study of fall detection using intelligent cane based on sensor fusion,” in Proceedings of the International Symposium on Micro-NanoMechatronics and Human Science (MHS '08), pp. 495–500, 2008.
26. Y. Zigel, D. Litvak, and I. Gannot, “A method for automatic fall detection of elderly people using floor vibrations and sound: proof of concept on human mimicking doll falls,” IEEE Transactions on Biomedical Engineering, vol. 56, no. 12, pp. 2858–2867, 2009.
27. A. Yazar, F. Erden, and A. E. Cetin, “Multi-sensor ambient assisted living system for fall detection,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '14), pp. 1–3, 2014.
28. B. U. Toreyin, E. B. Soyer, I. Onaran, and E. E. Cetin, “Falling person detection using multisensor signal processing,” EURASIP Journal on Advances in Signal Processing, vol. 2008, Article ID 149304, 2008.
29. A. Ariani, S. J. Redmond, D. Chang, and N. H. Lovell, “Simulated unobtrusive falls detection with multiple persons,” IEEE Transactions on Biomedical Engineering, vol. 59, no. 12, pp. 3185–3196, 2012.
30. R. Bragg, P. Catlin, J. Carlier, S. Brownsell, and D. A. Bradley, “Do users want telecare and can it be cost-effective,” in Proceedings of the 1st Joint BMES/EMBS Conference Serving Humanity, p. 714, October 1999.
31. B. Khaleghi, A. Khamis, F. O. Karray, and S. N. Razavi, “Multisensor data fusion: a review of the state-of-the-art,” Information Fusion, vol. 14, no. 1, pp. 28–44, 2013.
32. D. L. Hall and J. Llinas, “An introduction to multisensor data fusion,” Proceedings of the IEEE, vol. 85, no. 1, pp. 6–23, 1997.
33. K. Park and B. Lee, “Issues in data fusion for healthcare monitoring,” ACM, 2008.
34. G. Villarrubia, J. F. De Paz, J. Bajo, and J. M. Corchado, “Ambient agents: embedded agents for remote control and monitoring using the PANGEA platform,” Sensors, vol. 14, no. 8, pp. 13955–13979, 2014.
35. H. Medjahed and D. Istrate, “Human activities of daily living recognition using fuzzy logic for elderly home monitoring,” in Proceedings of the IEEE International Conference on Fuzzy Systems, vol. 33, pp. 1466–1473, 2009.
36. M.-T. Yang and S.-Y. Huang, “Appearance-based multimodal human tracking and identification for healthcare in the digital home,” Sensors, vol. 14, no. 8, pp. 14253–14277, 2014.
37. S. Begum, S. Barua, and M. U. Ahmed, “Physiological sensor signals classification for healthcare using sensor data fusion and case-based reasoning,” Sensors, vol. 14, no. 7, pp. 11770–11785, 2014.
38. G. A. Koshmak, M. Linden, and A. Loutfi, “Evaluation of the android-based fall detection system with physiological data monitoring,” in Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 1164–1168, Osaka, Japan, July 2013.
39. J. T. Perry, S. Kellog, S. M. Vaidya, J.-H. Youn, H. Ali, and H. Sharif, “Survey and evaluation of real-time fall detection approaches,” in Proceedings of the 6th International Symposium on High Capacity Optical Networks and Enabling Technologies (HONET '09), pp. 158–164, December 2009.
40. F. Pecora, M. Cirillo, F. Dell'Osa, J. Ullberg, and A. Saffiotti, “A constraint-based approach for proactive, context-aware human support,” Journal of Ambient Intelligence and Smart Environments, vol. 4, no. 4, pp. 347–367, 2012.
41. A. Leone and G. Diraco, “A multi-sensor approach for people fall detection in home environment,” in Proceedings of the Workshop on Multi-Camera and Multi-Modal Sensor Fusion Algorithms and Applications, pp. 1–12, 2008.
42. F. Felisberto, F. Fdez-Riverola, and A. Pereira, “A ubiquitous and low-cost solution for movement monitoring and accident detection based on sensor fusion,” Sensors, vol. 14, no. 5, pp. 8961–8983, 2014.
43. C. Doukas and I. Maglogiannis, “Advanced patient or elder fall detection based on movement and sound data,” in Proceedings of the 2nd International Conference on Pervasive Computing Technologies for Healthcare, pp. 103–107, IEEE, Tampere, Finland, February 2008.
44. F. Bianchi, S. J. Redmond, M. R. Narayanan, S. Cerutti, and N. H. Lovell, “Barometric pressure and triaxial accelerometry-based falls event detection,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 18, no. 6, pp. 619–627, 2010.
45. M. Lustrek, H. Gjoreski, S. Kozina, B. Cvetković, V. Mirchevska, and M. Gams, “Detecting falls with location sensors and accelerometers,” in Proceedings of the 23rd Innovative Applications of Artificial Intelligence Conference, pp. 1662–1667, August 2011.
46. W.-J. Yi, O. Sarkar, S. Mathavan, and J. Saniie, “Wearable sensor data fusion for remote health assessment and fall detection,” in Proceedings of the IEEE International Conference on Electro/Information Technology (EIT '14), pp. 303–307, June 2014.
47. M. Tolkiehn, L. Atallah, B. Lo, and G.-Z. Yang, “Direction sensitive fall detection using a triaxial accelerometer and a barometric pressure sensor,” in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 369–372, Boston, Mass, USA, August 2011.
48. B. R. Greene, D. McGrath, L. Walsh et al., “Quantitative falls risk estimation through multi-sensor assessment of standing balance,” Physiological Measurement, vol. 33, no. 12, pp. 2049–2063, 2012.
49. P. A. C. Aguilar, J. Boudy, D. Istrate, B. Dorizzi, and J. C. M. Mota, “A dynamic evidential network for fall detection,” IEEE Journal of Biomedical and Health Informatics, vol. 18, no. 4, pp. 1103–1113, 2014.
50. P. A. Cavalcante, M. A. Sehili, M. Herbin et al., “First steps in adaptation of an evidential network for data fusion in the framework of medical remote monitoring,” in Proceedings of the 34th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC '12), pp. 2044–2047, San Diego, Calif, USA, September 2012.
51. L. Della Toffola, S. Patel, B.-R. Chen, Y. M. Ozsecen, A. Puiatti, and P. Bonato, “Development of a platform to combine sensor networks and home robots to improve fall detection in the home environment,” in Proceedings of the 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS '11), pp. 5331–5334, September 2011.
52. D. McIlwraith, J. Pansiot, and G.-Z. Yang, “Wearable and ambient sensor fusion for the characterisation of human motion,” in Proceedings of the 23rd IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '10), pp. 5505–5510, IEEE, Taipei, Taiwan, October 2010.
53. M. Kepski, B. Kwolek, and I. Austvoll, “Fuzzy inference-based reliable fall detection using kinect and accelerometer,” in Artificial Intelligence and Soft Computing, vol. 7267 of Lecture Notes in Computer Science, pp. 266–273, Springer, Berlin, Germany, 2012.
54. H. Ö. Alemdar, G. R. Yavuz, M. O. Özen et al., “Multi-modal fall detection within the wecare framework,” in Proceedings of the 9th ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN '10), pp. 436–437, April 2010.
55. S. Cagnoni, G. Matrella, M. Mordonini, F. Sassi, and L. Ascari, “Sensor fusion-oriented fall detection for assistive technologies applications,” in Proceedings of the 9th International Conference on Intelligent Systems Design and Applications, pp. 673–678, December 2009.
56. C. N. Doukas and I. Maglogiannis, “Emergency fall incidents detection in assisted living environments utilizing motion, sound, and visual perceptual components,” IEEE Transactions on Information Technology in Biomedicine, vol. 15, no. 2, pp. 277–289, 2011.
57. J. W. Zheng, Z. B. Zhang, T. H. Wu, and Y. Zhang, “A wearable mobihealth care system supporting real-time diagnosis and alarm,” Medical and Biological Engineering and Computing, vol. 45, no. 9, pp. 877–885, 2007.
58. K. Arai, “Wearable physical and psychological health monitoring system,” in Proceedings of the Science and Information Conference (SAI '13), pp. 133–138, October 2013.
59. A. Lombardi, M. Ferri, G. Rescio, M. Grassi, and P. Malcovati, “Wearable wireless accelerometer with embedded fall-detection logic for multi-sensor ambient assisted living applications,” in Proceedings of the IEEE Sensors, pp. 1967–1970, October 2009.
60. D. Naranjo-Hernández, L. M. Roa, J. Reina-Tosina, and M. A. Estudillo-Valderrama, “Personalization and adaptation to the medium and context in a fall detection system,” IEEE Transactions on Information Technology in Biomedicine, vol. 16, no. 2, pp. 264–271, 2012.
61. L. Schwickert, C. Becker, U. Lindemann et al., “Fall detection with body-worn sensors,” Zeitschrift für Gerontologie und Geriatrie, vol. 46, no. 8, pp. 706–719, 2013.
62. S. K. Goel, N. Haryani, P. Tiwari, A. Jain, and P. Kuvalekar, “Smart phone for elderly populace,” International Journal of Research in Engineering and Technology, vol. 2, no. 10, pp. 33–37, 2013.
63. R. Luque, E. Casilari, M.-J. Morón, and G. Redondo, “Comparison and characterization of android-based fall detection systems,” Sensors, vol. 14, no. 10, pp. 18543–18574, 2014.
64. A. K. Bourke, P. van de Ven, M. Gamble et al., “Evaluation of waist-mounted tri-axial accelerometer based fall-detection algorithms during scripted and continuous unscripted activities,” Journal of Biomechanics, vol. 43, no. 15, pp. 3051–3057, 2010.
65. S. Wang, Z. Xu, Y. Yang, X. Li, C. Pang, and A. G. Haumptmann, “Fall detection in multi-camera surveillance videos: experimentations and observations,” in Proceedings of the 1st ACM International Workshop on Multimedia Indexing and Information Retrieval for Heathcare (MIIRH '13), pp. 33–38, ACM, October 2013.
66. M. Grassi, A. Lombardi, G. Rescio et al., “An integrated system for people fall-detection with data fusion capabilities based on 3D ToF camera and wireless accelerometer,” in Proceedings of the 9th IEEE Sensors Conference (SENSORS '10), pp. 1016–1019, November 2010.
67. G. Koshmak, M. Linden, and A. Loutfi, “Dynamic Bayesian networks for context-aware fall risk assessment,” Sensors, vol. 14, no. 5, pp. 9330–9348, 2014.

Copyright © 2016 Gregory Koshmak et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
