Abstract

Emergency situations associated with falls are a serious concern for an aging society. Following recent developments in ICT, a significant number of solutions have been proposed to track body movement and detect falls using various sensor technologies, thereby facilitating fall detection and, in some cases, prevention. Several recent reviews of ICT-based fall detection methods have appeared in the literature, and an increasingly popular approach combines information from several sensor sources to assess falls. The aim of this paper is to review in detail the subfield of fall detection techniques that explicitly use multisensor fusion to assess and determine falls. The paper highlights key differences between the single sensor-based approach and a multisensor fusion one. The paper also describes and categorizes the various systems used, discusses the challenges of a multisensor fusion approach, and finally outlines trends for future work.

1. Introduction

According to the latest United Nations statistical reports, the mean age of the population is expected to increase rapidly in developed countries within the next several decades [1]. This will increase healthcare costs and place a significant burden on national budgets. At the same time, fall injury is considered one of the most common risks among the elderly population. The estimated fall incidence for both hospitalized and independently living people over the age of 75 is at least 30% every year. Close to half of nursing home residents experience falls each year, with 40% falling more than once [2]. These accidents often have both physical consequences [3] (typically head and hip injuries) and psychological ones [4] (fear of falling). Other serious issues associated with falls include unconsciousness after falling, prolonged recovery from fall-related injuries, and death; many of these issues can be mitigated by improving the medical response and rescue time.

Recent developments in information and communication technology (ICT) have triggered intensive research efforts towards the detection and prevention of emergency situations associated with falls. This area is commonly considered part of the Ambient Assisted Living (AAL) community, a multidisciplinary field exploiting ICT in personal healthcare to counter the effects of an aging population [5]. Modern AAL systems can also help promote an independent lifestyle for elderly people with multiple chronic diseases in the context of rapidly increasing healthcare costs [6] and assist in the task of prevention.

Commonly, fall detection systems are categorized into three different classes depending on the deployed sensor technology: wearable devices, ambient sensors, and vision-based sensors. The 2007 article by Noury et al. [7], which describes systems, algorithms, and sensors for automatic fall detection, can be considered one of the first surveys in the field. The more recent state of the art is described by Mubashir et al. [8] and Igual et al. [9], who provide valuable knowledge about principles, trends, issues, and challenges in the fall detection area. As the number of contributions continued to expand, some authors chose to review a specific category within the field; for example, Bagalà et al. [10] specifically evaluate fall detection methods based on worn sensors. Otanasap and Boonbrahm [11] focus on computer vision, exploiting various processing techniques to analyze the critical and postfall phases. However, a recent trend is the combination of different data sources processed by a single multisensor fusion algorithm [12]. This novel approach can potentially provide a significant improvement in the reliability and specificity of fall detection systems but has not been reviewed before.

In this paper we present a systematic survey of fall detection research with a focus on multisensor fusion as the main method. Our aim is to provide general insight into this novel approach and show its benefits compared to other methods in this emerging area. Unlike techniques with a single source channel, a multisensor fusion approach exploits a combination of unrelated devices whose outputs are fused at the data processing level. An explicit search was conducted across major databases, including Google Scholar, IEEE, PubMed, and Mendeley, with keywords such as multisensor fusion, sensor combination, context-aware fall detection, and wearable fall detection. In total 299 related publications were found; 68 of them were selected based on relevance and representativeness and included in the final version of the paper. All the reviewed studies are categorized according to the sensors used for monitoring: a combination of wearable and ambient devices, wearable only, or ambient only. Cases where monitoring is based on sensors of the same category and type (e.g., multiple cameras) are not considered. We also give an assessment of this novel approach and discuss its prospects in the near future.

The rest of the paper is organized as follows. In Section 2 we provide a general definition of the fall and describe its major characteristics. Section 3 gives a brief overview of the popular trends and approaches in modern fall detection together with their major benefits and challenges. We then proceed with a detailed survey of publications deploying multisensor fusion, followed by a discussion of the presented approach and its future prospects in Section 4.

In the following section we demonstrate the complexity of the falling process, define various types of falls, and discuss the main characteristics which constitute a fall.

2. Falls and Fall Detection

A fall is commonly defined as “unintentionally coming to rest on the ground, floor, or other lower level.” Losing balance and subsequently falling, even with the help of an assistant, is also considered a fall [13]. Based on possible scenarios, four main types of falls can be distinguished: (1) fall from sleeping, (2) fall from sitting, (3) fall from walking/standing, and (4) fall from standing on support tools such as a ladder. Each type has its own unique characteristics, which can help developers adapt fall detector platforms to a wider spectrum of user requirements. According to recent studies, falls are most likely to occur inside the patient’s room, bathroom, or toilet during activities such as moving/transferring and showering/toileting [13, 14]. The weight, size, and corpulence of the person also have a substantial impact on the kinematics of falling. The majority of patients in the risk group fall in the evening or at night; partly for this reason, fall databases are very limited, owing to the lack of records from real-life settings [15].

Fall detection techniques can be categorized into three different generations: first-generation systems that rely on the user to report the fall, second-generation systems that build on first-generation systems but have an embedded level of intelligence, and third-generation systems that use data, often via ambient monitoring systems, to detect changes (e.g., changes in activity levels) which may increase the risk of falling (or risk factors for other negative events). Third-generation systems are thus preemptive rather than reactive [16]. We mostly focus on fall detectors from the second and third categories, which are discussed in terms of sensor fusion applicability in Section 3. Typically, all modern fall detection systems can be split into three main classes (see Figure 1) depending on the sensor technology deployed for monitoring: wearable sensors, ambient sensors, and vision-based sensors [17].

The vast majority of recently developed fall detection systems operate according to one general framework: (1) data acquisition, (2) data processing/feature extraction, (3) fall detection, and (4) caregiver notification. This framework can vary depending on the number of devices involved in monitoring, the communication protocols for alarm delivery, and the end user responsible for taking action in an emergency. In wearable sensor based systems, data acquisition is often performed using an accelerometer (sensing acceleration and, via gravity, changes in the device’s orientation), a gyroscope (measuring angular velocity), and/or other types of sensors such as barometers, magnetometers, or microphones. An ambient sensor approach often includes infrared, vibration, or acoustic sensors. The first type can locate and track a thermal target within the sensor’s field of view [18]. Vibration sensors are able to differentiate the vibration patterns of Activities of Daily Living (ADL) from those of falls, while acoustic sensors use the loudness and pitch of the sound to recognize a fall. Unlike wearable sensor techniques, this approach is considered the least obtrusive as it implies minimal interaction with the patient [19]. The last fall detection method performs data acquisition via a set of video cameras embedded in the monitored environment [20]. Vision-based systems can carry out inactivity detection and analyze body shape changes or 3D head motion. They provide an unobtrusive way to monitor the person of interest and are rapidly decreasing in price [21]. Several studies managed to achieve significant results in reducing false positive alarms using a single sensor technology from each of these categories. However, the performance of a combined approach indicates a significant rise in efficiency, keeping reliability above 90% (see Table 1). A combination approach is therefore in line with the latest trends in fall detection/posture recognition studies listed by Augustyniak et al. [22]:

(i) building sensor networks instead of focusing on a sensor set for a particular disease,
(ii) promoting multipurpose health prediction and prevention instead of monitoring patients with known medical records,
(iii) designing the monitoring process based on the patient’s health conditions, habits, and lifestyle,
(iv) unconstrained mobility of the monitored person,
(v) real-time fusion and cooperation of ambient and wearable sensor networks.
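
To make the four-stage framework described at the start of this section concrete, the following minimal Python sketch walks through acquisition, feature extraction, detection, and notification for a single wearable accelerometer channel. The threshold, sampling rate, and function names are illustrative assumptions and do not reproduce any of the reviewed systems.

import math
from typing import List, Tuple

FALL_THRESHOLD_G = 2.5   # assumed impact threshold in g (illustrative)
SAMPLING_RATE_HZ = 50    # assumed sampling rate of the wearable device

def acquire_window() -> List[Tuple[float, float, float]]:
    """Stage 1: return one window of (ax, ay, az) samples in g.
    Stand-in for reading from a real accelerometer driver."""
    return [(0.0, 0.0, 1.0)] * SAMPLING_RATE_HZ  # one second of 'standing still'

def extract_features(window):
    """Stage 2: compute the signal magnitude vector for each sample."""
    return [math.sqrt(ax**2 + ay**2 + az**2) for ax, ay, az in window]

def detect_fall(magnitudes) -> bool:
    """Stage 3: flag a fall when the peak magnitude exceeds the threshold."""
    return max(magnitudes) > FALL_THRESHOLD_G

def notify_caregiver():
    """Stage 4: placeholder for an SMS/app alert to the caregiver."""
    print("ALERT: possible fall detected, notifying caregiver")

if __name__ == "__main__":
    window = acquire_window()
    if detect_fall(extract_features(window)):
        notify_caregiver()

In a multisensor fusion system the detection stage of this pipeline is replaced or complemented by a fusion step combining several such channels, as discussed in Section 3.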

Preliminary results [23] demonstrate a significant improvement in fall detection performance when several sensor functionalities are deployed in one system, in particular a significant reduction in the false positive alarm rate.

Given the overall complexity of fall kinematics and the diversity of fall characteristics described in Section 2, we believe that a multisensor fusion approach is likely to become widely used in the fall detection area. Moreover, there is a strong demand for a high standard of independent living for elderly people [30], and therefore particular focus should be placed on the unobtrusiveness of such systems. Single sensor-based systems are sometimes characterized by a low reliability rate or can only detect particular types of falls in specific environments or circumstances. In the following section we give a brief description of the multisensor data fusion method, describe its adaptation to the fall detection area, and suggest a possible classification of the main approaches.

3. Sensor Fusion in Fall Detection

Multisensor data fusion is a technology that enables combining information from several sources in order to form a unified picture [31]. Systems based on data fusion are now successfully exploited in various areas, including sensor networks [32], image processing, and healthcare [33], where they demonstrate enhanced performance in terms of accuracy and reliability compared to single source based systems [34]. Modern healthcare systems commonly deploy data fusion algorithms to avoid the intrinsic ambiguities caused by the exploitation of unrelated types of sensors. Medjahed and Istrate [35] propose a telemonitoring system that integrates physiological and behavioral data, the acoustic environment of the patient, and medical knowledge. In this case the data fusion approach is based on fuzzy logic with a set of rules corresponding to medical recommendations and proved to increase the reliability of the whole system by detecting several distress situations. Another example of data fusion in healthcare is given by Yang and Huang [36], where a Kinect and color cameras are combined to perform human tracking and identification. Begum et al. [37] attempt to classify “stressed” and “relaxed” individuals by fusing data from various physiological sensors, namely, heart rate, finger temperature, respiration rate, carbon dioxide, and oxygen saturation. In this case the fusion algorithm, performed at the decision and data levels, is additionally combined with Case-Based Reasoning for further classification. The experimental results demonstrate increased accuracy in comparison with an expert in the domain.

As mentioned in Section 2, fall detection systems based on a single sensor technology often lack sufficient accuracy and require additional work to improve reliability. For instance, both ambient and video-based frameworks commonly have a constrained monitoring area and require installation, adjustment, and maintenance, which can result in higher costs. At the same time, wearable sensors have issues including a certain level of obtrusiveness if users have to wear them for long periods of time. Additionally, information collected during the monitoring process is communicated via wireless channels, which are not completely reliable. Taking these observations into account, we believe the main benefit of the fusion approach is its flexibility with respect to a changing environment and the potential demands of the patient/user. Multisensor-based systems can be easily adjusted to the current monitoring setting (indoor/outdoor scenario), provide better insight into the falling problem of the elderly (additional data sources), and initiate fall prevention analyses. In this case, continuous data collected from multiple sources can be analyzed for recurring patterns, as suggested in our previous publication [38]. Multisensor fusion has proved its efficiency in various areas of the healthcare domain [37] and has subsequently gained popularity in the fall detection domain. Moreover, with the recent development of the ICT market, more sensors are now available and can be combined to perform advanced activity tracking, which will increase the number of publications.

In Section 2 fall detection methods were classified into three main categories based on different types of sensor technology: wearable, ambient, and vision-based (see Figure 2). According to the vast majority of recent publications within the fall detection domain, the same types of sensors are involved in the multisensor fusion process, with two major exceptions: (1) ambient and vision-based sensors are both integrated into the environment and can be considered a unified context-aware category, and (2) wearable devices can be combined with context-aware sensors, comprising an additional category. Given these adjustments, we propose an alternative approach to classifying all fusion systems operating in the fall detection domain. Unlike single sensor-based methods, the choice of category does not depend on the sensor technology utilized but corresponds to the sensor types that are being fused: context-aware sensors, wearable sensors, and a combination of context-aware and wearable sensors. In the rest of the paper we review each category, present the most significant studies in the multisensor fusion fall detection domain, and discuss possible challenges and limitations.

3.1. Context-Aware Sensors Fusion

According to a recent review of fall detection methods, most systems using a multimodal approach are wearable sensor oriented and exploit triaxial accelerometers as part of the process [39]. However, there are a number of works providing solutions that exclude wearable sensors from monitoring and fall detection altogether. These types of systems are effective when unobtrusiveness is the main requirement and the patient refuses to wear any external devices on his/her body. They can detect a person’s movements, collect information regarding the usage of furniture or household items, and answer questions regarding the patient’s activity, for example, “is the patient eating/exercising regularly?” [40]. At the same time, their operational capabilities are highly limited by the area in which they are distributed.

Typically, the sensors involved in context-aware monitoring include cameras [24], vibration sensors [27], sound detectors [26], pressure mats [29], and floor or infrared sensors [28]. Table 2 gives an overview of the most significant studies in this area. All the works are compared based on publication year, sensors involved in the monitoring process, multisensor fusion algorithm, experimental setup, and evaluation results, depending on their availability.

Unlike the single sensor-based approach, where feature extraction is followed by data classification, multisensor systems perform independent data analysis for each sensor technology, with the fusion method as a final step in fall detection [41]. The variation of multisensor fusion techniques in each category, including context-aware systems, is highly dependent on the sensors deployed in each study. In an early study from 2008, Huang et al. [25] propose a human fall detection method based on fusing sensory information from a vision system and a laser range finder (LRF). In order to fuse the data from the two sensors, the unrelated types of measurements are integrated into the image coordinate system, with a focus on the distance between the head and the center of the two legs. The actual fall detection is then based on a probability distribution function (PDF) and a simple rule approach. Another interesting solution is suggested by Zigel et al. [26], where a microphone used to track sound is combined with an accelerometer capturing floor vibrations after the patient falls. In this case feature extraction and a Bayes decision rule classifier provide the information fusion. Camera systems are used in several studies and accompanied either by acoustic sensors [21] or by PIR sensors together with thermopiles [24]; a thresholding approach after preliminary segmentation and fuzzy logic combined with location/posture direction are used as fusion techniques, respectively. Ariani et al. [29] use wireless ambient sensors (motion detectors and pressure mats) to track the movement of multiple persons and later apply a decision tree algorithm to unobtrusively detect falls when they occur. Another fall detection system, consisting of a vibration sensor and two PIR sensors, is primarily based on a winner-take-all (WTA) decision fusion algorithm [27], which is activated after preliminary processing of the measurements collected from both sources. Finally, a Hidden Markov Model (HMM) is deployed in [28] to combine infrared and sound sensors. The most popular sensor deployed by context-aware systems is the PIR type, mentioned in three different publications. At the same time, no preference was given to any specific algorithm for fusion analysis.
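
To illustrate the decision-level pattern shared by several of the systems above, the sketch below combines the per-channel confidences of independent detectors using a weighted vote and a winner-take-all rule. The channels, weights, and thresholds are illustrative assumptions rather than a reproduction of any specific reviewed system.

from typing import Dict

def fuse_weighted(confidences: Dict[str, float],
                  weights: Dict[str, float],
                  threshold: float = 0.5) -> bool:
    """Weighted-sum fusion: declare a fall when the weighted average
    of per-sensor fall confidences exceeds the threshold."""
    total_w = sum(weights[s] for s in confidences)
    score = sum(confidences[s] * weights[s] for s in confidences) / total_w
    return score > threshold

def fuse_winner_take_all(confidences: Dict[str, float],
                         threshold: float = 0.8) -> bool:
    """Winner-take-all fusion: trust the single most confident channel."""
    return max(confidences.values()) > threshold

# Example: outputs of hypothetical vibration, PIR, and acoustic detectors.
per_sensor = {"vibration": 0.7, "pir": 0.4, "acoustic": 0.9}
weights = {"vibration": 0.5, "pir": 0.2, "acoustic": 0.3}

print(fuse_weighted(per_sensor, weights))       # True: weighted score 0.70 > 0.5
print(fuse_winner_take_all(per_sensor))         # True: acoustic channel dominates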

The initial sensor setup and preliminary processing play an essential role in the subsequent evaluation of the system. Brulin and Courtial [24] deployed a Health Smart Home and recorded 15 video sequences illustrating situations of everyday life or an emergency performed by two subjects. In another example, the experimental part is split into two related steps: first, the possibility distribution of “normal walking” is investigated, and then the fall detection method is validated. Other examples include dropping a “Rescue Randy” doll, generating falling and speech sounds, and simulating ADL and falls. The variability of trial approaches indicates the absence of a common strategy for evaluating context-aware multisensor fusion systems. Due to the high variability of the devices deployed for multisensor fusion, it becomes complicated to analyze and unify all the methods involved in the process or to determine the most reliable one. Further investigation and experimental work are required.

Moreover, it is important to mention that none of the analyzed research works performed experiments with elderly people, a group potentially at high risk of falling. This can be explained by patient privacy, which is still a sensitive issue for ambient and especially vision-based fall detection systems. For historical reference, one of the first projects in the area was forced to shift from image processing to body-placed sensors due to privacy concerns [42]. We believe this problem can be partly solved by alternating the deployment of camera-based and ambient sensors depending on location, changes in the environment, or the current emergency situation. In this way, monitoring that violates the patient’s privacy is only performed when the calculated risk of falling is significantly higher.

3.2. Wearable Sensors Fusion

With the recent development of the ICT market, wearable devices have started to play an essential role in modern healthcare systems. Most of them have already been utilized for automatic fall detection and have shown their efficiency compared to other methods [57–59]. Unlike context-aware sensor technology, wearables attached to a patient’s body do not affect their privacy and can therefore perform monitoring over extended periods of time. In most cases, fall detection systems based on fusing wearable sensors include an accelerometer as the main source of data. It is often complemented by other types of wearables, for example, gyroscopes and magnetometers [42], location tags [45], or barometric pressure sensors [44, 47, 48] (see Table 3). Moreover, physiological devices combined with accelerometers can be considered a separate subgroup due to specific synchronization requirements and the processing of the collected measurements. For example, Yi et al. [46] deploy a temperature sensor and ECG together with an accelerometer and perform individual data processing for each device, later fused into a unified alert message for medical staff.

Similar to the previous category described in Section 3.1, the multisensor fusion algorithms applied to wearables can vary depending on the specific device or the authors’ choice. Three different techniques were deployed to combine body-worn inertial sensors and air pressure sensors: a heuristically trained decision tree classifier, feature extraction/thresholding, and SVM. According to the evaluation results, the decision tree and feature extraction/thresholding are more efficient, with 96.9% and 94.12% accuracy, respectively. However, unlike the studies in [44, 47], Greene et al. [48] perform their experiments with older adults, which significantly affects the final result (see Table 3). In the study by Felisberto et al. [42] a mash-up of various methods, including fuzzy logic, an extended Kalman filter (EKF), a direction cosine matrix (DCM), and a control algorithm, is applied in order to fuse accelerometer, gyroscope, and magnetometer data. Fall detection based on movement and sound data is performed by Doukas et al. [43, 56], where an accelerometer is deployed together with microphones and the collected data is fused using a Support Vector Machine. An accuracy increase of 40% was demonstrated in [45] after an accelerometer was combined with a location tag using rule-based reasoning. However, for a final conclusion regarding these positive results, the experimental conditions should be taken into account.
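
As a minimal sketch of the accelerometer/barometer combination discussed above, the code below derives an impact feature from the accelerometer and a height-change feature from the pressure sensor and joins them with a simple conjunctive rule in place of the trained classifiers used in the reviewed work. All constants and function names are illustrative assumptions.

IMPACT_THRESHOLD_G = 2.5      # assumed peak acceleration indicating an impact
HEIGHT_DROP_M = 0.5           # assumed altitude drop consistent with a fall

def altitude_from_pressure(p_hpa: float, p0_hpa: float = 1013.25) -> float:
    """International barometric formula: approximate altitude (m) from pressure (hPa)."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))

def detect_fall(acc_peak_g: float, p_before_hpa: float, p_after_hpa: float) -> bool:
    """Flag a fall when an impact coincides with a downward height change."""
    drop = altitude_from_pressure(p_before_hpa) - altitude_from_pressure(p_after_hpa)
    return acc_peak_g > IMPACT_THRESHOLD_G and drop > HEIGHT_DROP_M

# Example: a 3.1 g impact together with a ~0.07 hPa pressure increase (~0.6 m drop).
print(detect_fall(3.1, 1000.00, 1000.07))   # True

Requiring both features to agree is what reduces false alarms from either channel alone, which is the core argument for fusing the two sensors.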

In the vast majority of fall-related studies the evaluation is performed with healthy volunteers or based on simulation [38, 60, 61]. This makes it almost impossible to give an accurate assessment of the operational capabilities of the developed system or the reliability of the deployed algorithm. More experimental data from the elderly population should be analyzed in order to improve the sufficiency of developed fall detection systems. In a study from 2012, Greene et al. [48] estimate the risk of falling through multisensor assessment of standing balance. A pressure-sensitive platform and a body-worn inertial sensor are utilized during the evaluation, which is based on monitoring 120 community-dwelling older adults. It is one of the few research studies where trials with the elderly population are included in the evaluation. As a result, the overall performance of the system was significantly affected, demonstrating only 71.52% classification accuracy, whereas the other methods reach 95%–97% for specificity, sensitivity, and accuracy. Fall detection based on fusing wearable devices is still a novel method and therefore lacks a unified approach to effectively combining sensors due to the different formats of the collected data.
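
The metrics quoted throughout the reviewed studies are derived from the confusion matrix of detected versus actual falls. The short helper below makes the definitions explicit; the trial counts are invented purely to show the calculation.

def fall_detection_metrics(tp: int, fn: int, tn: int, fp: int):
    sensitivity = tp / (tp + fn)            # detected falls / actual falls
    specificity = tn / (tn + fp)            # correctly ignored ADLs / actual ADLs
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical trial: 48 of 50 simulated falls detected, 6 false alarms in 200 ADLs.
sens, spec, acc = fall_detection_metrics(tp=48, fn=2, tn=194, fp=6)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} accuracy={acc:.1%}")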

At the same time, an additional number of digital devices attached to the patient’s body is inconvenient for users and can potentially lead to a low acceptance rate for this method. This issue can be solved if different types of wearable sensors are incorporated into a single device that collects unrelated types of data simultaneously. This will help reduce data loss, improve processing time, and at the same time maintain the patient’s independent lifestyle without affecting their privacy. Modern mobile phones are already equipped with advanced sensor functionality and can be suggested as a tool for synchronization and processing of the collected measurements. However, modern smartphones and gadgets are still poorly distributed among elderly people [62], which complicates the deployment and further progress of the proposed methodology.

3.3. Wearable/Ambient Sensor Fusion

The last category is characterized by a combination of the previously presented approaches and can potentially help to detect a wider spectrum of possible emergency situations connected with falls. Context-aware fall systems can provide long-term trend analysis describing the patient’s behavior and recognizing abnormal patterns but are often limited by the area in which they can be used and their distribution. Wearable fall detection is becoming increasingly available due to the cheap embedded sensors included in smartphones and demonstrates relatively high performance, but it still produces a significant number of false fall alarms [63, 64] and has mainly been tested in laboratory environments. As a result, research studies that attempt to merge the major benefits of both approaches into a self-complementing system surpass the other categories in number of publications (see Tables 2 and 3). In Table 4 we review the most significant studies to demonstrate the latest trends in multisensor fusion of context-aware and wearable sensors.

This approach is considered relatively new and therefore requires thorough investigation and experimental work. As a result, the choice of sensors can vary significantly from one study to another. Most of the systems deploy accelerometers as the main device, additionally combined with either ambient sensors or 3D cameras [65, 66]. Other wearable devices include gyroscopes, microphones, physiological sensors, sound analyzers, infrared sensors, or RFID tags. Della Toffola et al. [51], in their study from 2011, take a different approach and complement a set of ambient and body-worn sensors with a home robot in order to improve fall detection. A slightly different concept is presented by McIlwraith et al. [52] and Kepski et al. [53], where an accelerometer and gyroscope are accompanied by vision-based sensors. In the first publication surrounding vision sensors are deployed for accurate characterization of motion, while in the second case the authors instead used a commercially available Microsoft Kinect camera and performed reliable fall detection. In some cases the gyroscope can be replaced by microphones [43] or alternatively by RFID tags [63], with an embedded tracking camera and accelerometer still being part of the framework.

Due to the high diversity of sensor technologies deployed for fall detection, the choice of algorithms performing the fusion function is still unique to each research work. The most common approach to combining wearable and context-aware systems includes individual low-complexity algorithms for every sensor technology, followed by a more advanced fusion algorithm. None of the reviewed studies deployed a thresholding technique at the individual or fusion level, and the only example of a rule-based approach was complicated by the Semantic Web Rule Language. The most popular algorithm is fuzzy logic, utilized as a fuzzy inference system [53] or a fuzzy logic decision tree [55]. Other methods include evidential networks, Dempster-Shafer theory, and Hidden Markov Models. Similar to the previous categories, it is not possible to determine a common approach or justify the choice of fusion methods, since there is not enough experimental evidence to build on.
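
As a hedged illustration of one of the methods named above, the sketch below applies Dempster’s rule of combination to two sources reporting on the frame {fall, adl}; mass assigned to the full frame ("theta") represents ignorance. The sources and numbers are illustrative, not taken from any reviewed study.

def combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule of combination over the frame {fall, adl}."""
    hypotheses = ["fall", "adl", "theta"]
    combined = {h: 0.0 for h in hypotheses}
    conflict = 0.0
    for a in hypotheses:
        for b in hypotheses:
            product = m1[a] * m2[b]
            if a == b:
                combined[a] += product
            elif "theta" in (a, b):                 # theta intersects everything
                combined[a if b == "theta" else b] += product
            else:                                   # fall vs. adl: empty intersection
                conflict += product
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Wearable accelerometer strongly suggests a fall; ambient PIR is less certain.
m_acc = {"fall": 0.7, "adl": 0.1, "theta": 0.2}
m_pir = {"fall": 0.4, "adl": 0.2, "theta": 0.4}
print(combine(m_acc, m_pir))   # belief in "fall" rises to about 0.78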

Similar to the previous categories, the variation in sensors and methods deployed for multimodal fusion has a significant effect on the experimental part of the research. The evaluation process can be characterized by two different scenarios: (1) online testing with volunteers performing ADL or falls and (2) offline evaluation utilizing previously collected measurements. In both cases, the combination of wearable and context-aware approaches had a positive impact and resulted in increased specificity, sensitivity, and accuracy. Doukas and Maglogiannis [56], in their attempt to merge a tracking camera, accelerometer, and microphones, managed to reduce the number of false positive alarms to zero. Evaluation with elderly patients in real home-like environments is still a sensitive issue, given the complexity of the sensor setup in this case.

In our previous studies we proposed a multisensor fusion system based on Dynamic Bayesian Networks that combined a wearable device with a context-aware sensor framework [67]. All the accelerometer measurements were obtained from an Android-based smartphone and analyzed for possible falls. Context-aware information was obtained from an environmental sensor network consisting of PIR motion, door contact, pressure mat, and power usage detectors embedded in a smart home, deploying a special context recognition algorithm to infer user activities. The wearable data was later fused with the ambient measurements and processed in a Dynamic Bayesian Network performing fall detection. The evaluation process contained both a simulation (MATLAB tools) and a demonstration part (healthy volunteer). With the proposed technique we managed to complement two different fall detection approaches and improve the reliability of the fall detection system. However, the deployment of the developed or similar systems in everyday geriatric practice is still far off, and there are no explicit examples of commercially successful applications. Moreover, the vast majority of similar systems obtain high experimental results in unrealistic or restricted conditions with poor reference to real-life environments, which is among the issues of this approach. Other challenges and limitations of the multisensor fusion method in fall detection are discussed in Section 4 of the review.
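
The sketch below is a deliberately simplified, static Bayesian stand-in for the Dynamic Bayesian Network used in the system above: ambient context sets a prior on falling, and the accelerometer evidence updates it via Bayes’ rule. The context rule and all probabilities are hypothetical, chosen only to show how the ambient channel can reweight the wearable evidence.

def posterior_fall(prior_fall: float,
                   p_evidence_given_fall: float,
                   p_evidence_given_adl: float) -> float:
    """P(fall | accelerometer evidence) under a two-hypothesis model."""
    num = p_evidence_given_fall * prior_fall
    den = num + p_evidence_given_adl * (1.0 - prior_fall)
    return num / den

def context_prior(location: str, hour: int) -> float:
    """Hypothetical context rule: higher prior in the bathroom and at night."""
    prior = 0.02
    if location == "bathroom":
        prior += 0.05
    if hour >= 22 or hour <= 6:
        prior += 0.03
    return prior

prior = context_prior("bathroom", 23)       # 0.10 in a high-risk situation
post = posterior_fall(prior, p_evidence_given_fall=0.9, p_evidence_given_adl=0.05)
print(f"P(fall | impact-like signal, context) = {post:.2f}")   # about 0.67

A true Dynamic Bayesian Network additionally propagates this belief over time, so that a sequence of consistent observations, rather than a single impact-like sample, drives the alarm decision.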

4. Discussion

4.1. Challenges

Most of the challenges specific to modern single sensor-based fall detection systems are still valid for the multimodal approach. Igual et al. [9] list a number of typical problems which can affect final results, including (1) lack of performance under real-life conditions, (2) limited usability (which mostly applies to wearable and smartphone-based fall detectors), and (3) lack of publications regarding the practicality and acceptability of modern fall detection technologies. Other issues relate to privacy concerns, lack of human contact, and limited experimental conditions.

After including additional sensor functionality, a single sensor-based fall detector becomes a multimodal system inheriting challenges typical of other frameworks with a data fusion requirement. Khaleghi et al. [31] introduce these issues in their study, starting with the imperfection of the collected data and the diversity or low reliability of sensor technologies. Based on the reviewed material, we can complement the list of challenges in data fusion for fall detection with the issues listed below. All these items should be analyzed and taken into consideration before developing a fall detection framework.

4.1.1. Cost Efficiency

As previously mentioned in Section 3.3, multisensor fusion helps to improve the reliability of a fall detection system. At the same time, additional medical devices can significantly increase the final cost of the monitoring framework. In this case, a cost efficiency assessment becomes an essential part of the evaluation process. Our recommendation is to create a flexible structure which allows the number of components to be adjusted depending on their individual contribution to the overall performance of the system.

4.1.2. Conflicting Output

During the monitoring process, similar activities can be interpreted in different ways by unrelated sensor platforms. The number of false alarms among modern fall detectors is still relatively high. Therefore, it is essential to give priority to the technology which is more reliable and can minimize unclassified falls or ADLs.

4.1.3. Data Correlation

Measurements collected during the monitoring process in multisensor fall detection typically come from different backgrounds and are unrelated to each other. These data should not only be merged in the most efficient way but also be analyzed for possible common trends and similarities.

4.1.4. Processing Framework

The majority of systems analyze data for each component independently and deploy the fusion algorithm as a final step to combine the acquired results [33]. However, in some cases raw data collected from each sensor unit can be delivered to a common framework without preliminary processing. Alternatively, in the case of wearable and context-aware fusion, particular categories can be processed in conjunction (e.g., various types of ambient sensors) and later fused with sensors from an unrelated category. This can lead to unnecessary complication of the fusion algorithm and a subsequent increase in computational time.
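
A brief sketch of the two processing layouts contrasted above: per-sensor ("decision-level") processing with a light fusion step versus shipping raw samples to one central ("data-level") framework. The class and method names are hypothetical, chosen only to make the architectural difference explicit.

from typing import List, Callable

class DecisionLevelPipeline:
    """Each sensor node reduces its own stream to a fall score in [0, 1];
    the fusion centre only combines scores, keeping the central step simple."""
    def __init__(self, node_detectors: List[Callable[[list], float]]):
        self.node_detectors = node_detectors
    def run(self, streams: List[list]) -> bool:
        scores = [det(s) for det, s in zip(self.node_detectors, streams)]
        return sum(scores) / len(scores) > 0.5

class DataLevelPipeline:
    """All raw streams are forwarded to one central model, which must cope
    with every data format itself (the complication noted above)."""
    def __init__(self, central_model: Callable[[List[list]], bool]):
        self.central_model = central_model
    def run(self, streams: List[list]) -> bool:
        return self.central_model(streams)

# Usage: two hypothetical nodes reducing their streams to a score in [0, 1].
acc_node = lambda samples: 1.0 if max(samples) > 2.5 else 0.0     # impact detector
pir_node = lambda samples: 0.6 if sum(samples) == 0 else 0.1      # no motion after impact
pipeline = DecisionLevelPipeline([acc_node, pir_node])
print(pipeline.run([[0.9, 3.1, 1.0], [0, 0, 0]]))                 # True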

4.1.5. Computational Power

Multiple monitoring devices result in additional data being collected by the system and subsequently increase computational costs. This issue can be mitigated by separating the data analysis into several stages, including preprocessing, data filtering, and feature extraction. Each particular type of processing can be performed by a separate component, with a processing center where the final decision is made.

Another drawback, which is particularly specific to modern multisensor fusion systems, is the lack of a simplified evaluation procedure. In the vast majority of articles the evaluation is based on simulations, or this information is not available at all. This is partly caused by the complexity of the monitoring setup in a real environment: sensor functionality has to be embedded into a regular apartment or a specially designed test environment. Moreover, similarly to regular fall detection systems, fusion-based methods are evaluated on simulated falls performed by healthy volunteers, which is far from the real-life scenario. Testing with real patients who suffer from falls could improve the process; however, it requires ethical consent, entails additional complications, and is commonly not available in fall detection studies. Additional complexity is caused by the distinct technological backgrounds of the sensor technologies involved in the monitoring process. This issue is common to any fusion-based system and becomes essential when developing a multisensor fall detection mechanism.

4.2. Future Trends

Based on the majority of reviewed papers, the main trend in multisensor fall detection can be characterized as the merging of sensor technologies from different categories and unrelated platforms. Systems developed with this approach are fully interchangeable and can maintain monitoring even when one of the components is inactive.

4.2.1. Physiological Sensors

Many elderly patients suffer from various health problems, including heart conditions or Alzheimer’s disease, which increase the probability of falling in their daily life. Therefore, it is important to track the patient’s activity in conjunction with significant physiological parameters. Physiological sensors combined with fall detectors can help in understanding the correlation between the patient’s activity and health conditions and make the monitoring process more detailed.

4.2.2. Long-Term Analyses

Monitoring people with a high risk of falling on a regular basis over a long period of time will improve data analysis and help to detect interesting patterns. In perspective, this will enable algorithms which can prevent a fall when a dangerous measurement sequence repeats itself over time.

4.2.3. Integration into Smart Home Environments

Long-term analysis is almost impossible without an appropriate sensor setup. In many cases sensors are already integrated into everyday routines in the form of smart home environments collecting valuable information regarding the user’s presence in the house. They can be further adapted for medical tracking of the patient and reliable fall detection without additional installation costs.

4.2.4. Patient-Oriented Systems

Given the individual approach to patient treatment, most multimodal healthcare systems should be more patient-oriented. The choice of sensors and processing techniques should correspond to the actual demands of the patient and the major health problems they suffer from. Otherwise, the developed platforms should cover a wide spectrum of healthcare problems or be as universal as possible.

Due to the complexity of falls and the variation in falling circumstances, the most effective approach implies fusing information from sensors belonging to different categories. As a step towards a full-scale remote monitoring framework, fall detection components can be deployed in conjunction with other healthcare systems to check the patient’s well-being on a long-term basis. Following the recent trend, we suggest building a dedicated environment with wearable, ambient, and vision sensors, where fusion techniques can be effectively evaluated. At the same time, it is recommended to complement these types of smart environments with additional sensor technology only based on the current patient’s demand or the particular monitoring case, in order to avoid data overload and unnecessary privacy violations.

5. Conclusion

Fall detection systems play an essential role in modern healthcare. The latest sensor technologies are deployed to distinguish between falls and regular ADLs, with a recent trend towards combining unrelated data sources. In the presented study we conducted a search among the latest works on multisensor fall detection systems and made an attempt to classify all systems into categories. The analyzed material allowed us to start a useful discussion regarding the major challenges faced by the multisensor fusion approach, its issues, and its limitations. Based on this discussion we suggest core topics that should be considered in fusion methodology in the future. Among other things, we would like to place special focus on (1) developing a multifunctional monitoring platform, where each component/sensor can be easily adjusted or removed depending on user demand or monitoring circumstances, and (2) organizing continuous monitoring/experimental sessions involving the elderly population in order to improve the acceptability of fall detection systems. Both suggestions will introduce a certain level of structure to this novel but rapidly evolving approach and help to unify the choice of algorithm in each particular monitoring case.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.