Journal of Sensors

Special Issue

Sensor Systems for Personal Wellbeing and Healthcare


Research Article | Open Access


Waranrach Viriyavit, Virach Sornlertlamvanich, "Bed Position Classification by a Neural Network and Bayesian Network Using Noninvasive Sensors for Fall Prevention", Journal of Sensors, vol. 2020, Article ID 5689860, 14 pages, 2020. https://doi.org/10.1155/2020/5689860

Bed Position Classification by a Neural Network and Bayesian Network Using Noninvasive Sensors for Fall Prevention

Guest Editor: Andrej Kosir
Received: 16 Aug 2019
Revised: 09 Dec 2019
Accepted: 03 Jan 2020
Published: 31 Jan 2020

Abstract

Falls from a bed often occur when an elderly patient attempts to get out of bed or comes close to the edge of the bed. Such mishaps carry a high risk of serious injuries, such as bruises, soreness, and bone fractures. Moreover, failure to regularly reposition a bedridden elderly person may cause bedsores. To avoid these risks, a continuous activity monitoring system is needed for elderly care. In this study, we propose a bed position classification method based on the signals from only four sensors embedded in a panel (two piezoelectric sensors and two pressure sensors) installed under the mattress. The bed positions are classified into five classes, i.e., off-bed, sitting, lying center, lying left, and lying right. To collect the training dataset, three elderly patients consented to participate in the experiment. In our approach, a neural network is combined with a Bayesian network to classify the bed positions while constraining the possible sequences of positions. The results of the two networks are combined by the weighted arithmetic mean. The experiments show a maximum classification accuracy of 97.06% when the coefficients for the neural network and the Bayesian network are 0.3 and 0.7, respectively.

1. Introduction

Due to the significant growth of the elderly population in today’s demography, the need for geriatric care has increased. A survey by the National Statistical Office of Thailand showed that the proportion of elderly people living alone rose to 8.7% in 2014, and 18.76% of the elderly population live with their spouse only [1]. This is one result of gradual changes in the Thai social structure. Most young people have to spend their time earning money, so elderly people are increasingly left alone at home or placed in a nursing home during working hours.

Without careful supervision, elderly patients can fall, which is a major cause of trouble in nursing care [2]. Such accidents can cause severe injuries, such as bruises, soreness, and bone fractures. The National Statistical Office of Thailand reported that 11.6% of elderly people have experienced a fall; 46.3% of them were treated and 7.8% were hospitalized as inpatients [1]. Moreover, falls are the leading cause of death and disability in elderly people, accounting for as much as 40.4% [3]. An injury at a higher age carries a higher risk of death because of health weaknesses [1]. The Department of Disease Control, Ministry of Public Health of Thailand, reported that 1,049 elderly people died from falls in 2015 [4]. In 2014, Tsai et al. studied the factors behind fall injuries in elderly patients at a medical center in Taiwan [5]. They found that 8.7% of the elderly patients who participated in the study had fallen repeatedly in the previous year. Some of them (28.6%) fell at the bedside, most often during an unassisted bed exit [5]. There is also a risk of rolling out of bed when an elderly patient lies too close to the edge of the bed. In addition, bedridden elderly patients are often unable to reposition themselves, which is a cause of bedsores. Appropriate bodily movement can alleviate prolonged pressure on the body; the most widely accepted way of preventing bedsores is to turn the elderly person’s body every two hours. Continuous monitoring is therefore indispensable for preventing falls and bedsores in the elderly, but it requires a large number of caregivers relative to the growth of the elderly population. Geriatric care is highly costly and faces a shortage of caregivers, whose number amounts to only 11.1% of the elderly population [1]. This will lead to inadequate nursing care services in the near future.

A monitoring system for bed fall and bedsore prevention can be a complementary utility that supports caregivers and diminishes their workloads. The system must be able to detect the position of an elderly patient on the bed and detect movement toward a fall early enough for a caregiver to assist the patient.

Wearable devices are widely used for elderly activity monitoring [6, 7]. However, in most cases, the elderly feel that such devices interfere with their daily living activities, which leads to discontinuous monitoring [8, 9]. A video camera is also unacceptable to many elderly people because of privacy concerns. As a result, noncontact sensing is a suitable approach for continuously monitoring elderly activity [10–29]. There are reports of using ultrasonic sensors, air pressure sensors, and vibration sensors [10–12]. Although these studies can determine whether the patient is in bed or not, that alone is not enough to prevent a fall. To prevent falls, the system needs to detect the lying position with respect to the edge of the bed; to prevent bedsores, the duration spent in the same lying position can be observed by monitoring movement. Some previous studies used commercial pressure mat systems to detect the bed position [13–21]. However, these pressure mat systems need a large number of sensors, which is impractical and costly in actual use. Some studies proposed approaches to reduce the number of sensors in the sensing array; the minimum among the aforementioned studies is 16 sensors, as reported by Hsia et al. [21]. Although these studies have shown promising results in bed position classification, they still require quite a large number of sensors. To address this concern, we reduce the number of sensors to only four in our study while maintaining high accuracy in bed position classification.

In this paper, we propose a bed position classification based on a neural network combined with a Bayesian network, with signals from only four sensors. The results of our study can be applied to prevent elderly bedsores and bed falls. We classify the bed positions into five classes, namely, off-bed, sitting, lying center, lying left, and lying right. The off-bed and sitting positions are highly important for detecting the bed exit activity because they normally are the positions just before or after the bed exit according to our statistical analysis. With the bed position, the system will alert the caregiver to assist an elderly patient to prevent falls when the elderly patient moves towards the edge of the bed. The system will also alert the caregiver to turn the elderly patient’s body when staying in the same position for almost the allowed time period (normally two hours) to prevent bedsores.

2. Materials

2.1. Sensor Panel

The sensor panel is a ready-made set of sensors provided by AIVS Co., Ltd. The size of the panel is . The panel is equipped with two types of sensors, i.e., two piezoelectric sensors and two pressure sensors. Each pair of sensors is embedded symmetrically, one on each side of the panel, as shown in Figure 1. The sensors are sandwiched between two ABS boards, which makes the structure stiff, firm, and difficult to bend. The sandwich structure holds the sensors firmly, avoiding signal distortion, as shown in Figure 2.

To detect the weight applied to the bed, we use two low-cost force-sensing resistors (FSRs), the Interlink FSR 402, one installed on the left side and one on the right side of the panel. The FSR consists of two membranes separated by a thin air gap: its resistance decreases when force is applied and is effectively infinite when no force is applied. The force sensitivity range is ~0.2 to 20 N, and the operating temperature range is -30 to 70°C. The sensor is low-cost and can detect physical pressure, squeezing, and weight, though it is not highly accurate. Its sensitivity is nevertheless sufficient for detecting a weighted object on the bed.

The piezoelectric sensor, a Murata 7BB-15-6L0 piezoelectric diaphragm, converts kinetic energy into electrical energy: when a vibration force is applied, a voltage is produced. The resonance frequency is 2.8 kHz. In the sensor panel, it is installed to detect the vibration transmitted from the patient’s activities.

The combination of the two pairs of different sensor types is used to detect the position of each side of the body on the bed. The panel is simply placed under the mattress in the thoracic area, as shown in Figure 3, and is fixed to the bed board to keep its position constant relative to the patient’s body. It is designed for continuous 24-hour use under ordinary conditions, avoiding wet environments; there is little temperature variation in a hospital ward or at home, and the sensor is not designed for severe conditions.

Placing the panel in this position makes it possible to distinguish between sitting and lying positions on the bed. Figure 4 shows the correlation between the four sensor signals and the positions. For example, Figure 4(a) shows the four sensor signals for the off-bed position: the activation of all sensor signals is low. In the sitting position, by contrast, the pressure signals show low activation while the piezoelectric signals are still detected. In general, the piezoelectric signals show high activation in any position on the bed and very low activation in the off-bed position. The pair of pressure sensors can be used to distinguish between lying positions. In the lying center position, the body weight rests on both pressure sensors, while in the lying left or lying right positions only one side is activated, as shown in Figures 4(c)–4(e). In lying positions, the activation of the pressure sensors is quite high, in contrast to the sitting position, in which it is low.

2.2. Data Structure

The control device outputs a data package at a rate of 30 samples per second. Each package contains 45 bytes, divided into 3 parts: 8 bytes for the header, 34 bytes for the data from the four sensors, and 3 bytes for the ender. Within the 34-byte data part, the first two bytes contain the sensor ID, and the next 32 bytes contain four 8-byte blocks (one per sensor), i.e., the left piezoelectric signal, left pressure signal, right piezoelectric signal, and right pressure signal. Each sensor value has a magnitude of 256 levels: the piezoelectric signal ranges from -128 to 127, and the pressure signal from 0 to 255. The sampling rate of each sensor is 30 Hz. Table 1 shows the structure of the signal data package.


Header  | Sensor ID | Piezo right | Weight right | Piezo left | Weight left | Ender
8 bytes | 2 bytes   | 8 bytes     | 8 bytes      | 8 bytes    | 8 bytes     | 3 bytes

(The Sensor ID through Weight left fields constitute the 34-byte “data from four sensors” part.)
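As an illustration of the package layout above, a parser for a single 45-byte package can be sketched as follows. The function and field names are our own (hypothetical), and the order of the sensor blocks follows Table 1; the actual encoding should be verified against the vendor's documentation.

```python
def parse_package(pkg: bytes) -> dict:
    """Split one 45-byte data package into its fields.

    Assumed layout (from Table 1):
      bytes  0-7  : header
      bytes  8-9  : sensor ID
      bytes 10-41 : four 8-byte sensor blocks
                    (piezo right, weight right, piezo left, weight left)
      bytes 42-44 : ender
    """
    if len(pkg) != 45:
        raise ValueError("expected a 45-byte package")
    return {
        "header": pkg[0:8],
        "sensor_id": pkg[8:10],
        "piezo_right": pkg[10:18],
        "weight_right": pkg[18:26],
        "piezo_left": pkg[26:34],
        "weight_left": pkg[34:42],
        "ender": pkg[42:45],
    }
```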

2.3. Data Collection

The collected data include the sensor signal data and the corresponding videos. Three elderly patients, aged between 60 and 85, consented to participate in the experiment. To evaluate the effects of the environment, data were collected in two different rooms with different sets of sensors: two patients in one room and the third patient in another room. The total collected data are 459 hours long. The position labels were annotated by observing the corresponding video and are defined in five classes, i.e., off-bed (O), sitting (S), lying center (C), lying left (L), and lying right (R):
(i) Off-bed (O): nobody is on the bed
(ii) Sitting (S): a subject is sitting on the bed
(iii) Lying center (C): a subject is lying in the center of the bed
(iv) Lying left (L): a subject is lying on the left-hand side of the bed
(v) Lying right (R): a subject is lying on the right-hand side of the bed

The lying left (L) and lying right (R) positions are defined by which side of the bed the subject is lying on, regardless of the subject’s lateral posture. Ambiguous positions and transitional movements are not considered in this experiment.

The structure of the dataset is shown in Figure 5. Each set of accumulated data comprises the signal values collected in one second, which is called a time slot. The holding period of one position is called the interval time. One position normally lasts more than one second, so an interval contains many time slots, and the duration of each position can be measured by counting them. A change of interval indicates a change of position, so the sequence of position changes can be detected from the sequence of time intervals.
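The time-slot and interval bookkeeping described above amounts to run-length encoding the per-second position labels. A minimal sketch (the function name is illustrative; the labels follow the paper's O/S/C/L/R codes):

```python
from itertools import groupby

def to_intervals(slots):
    """Collapse a sequence of one-second position labels into
    (position, duration_in_seconds) intervals."""
    return [(pos, sum(1 for _ in run)) for pos, run in groupby(slots)]
```

For example, the slot sequence O, O, S, S, S, C collapses into three intervals: off-bed for 2 s, sitting for 3 s, lying center for 1 s.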

3. Position Detection

3.1. Position Classification by the Neural Network

To classify a position on the bed, the signal data from the control device output, i.e., the left piezoelectric signal, right piezoelectric signal, left pressure signal, and right pressure signal, are used as the input for the neural network. These four inputs are passed through the neural network as defined in (1) and depicted in Figure 6.

Since the initial weight and the scale of the signal differ between the piezoelectric sensor and the pressure sensor, instead of using the raw sensor values we apply unity-based normalization (feature scaling) to eliminate the biases from different body weights and different sensor types. All sensor data are normalized into the range 0 to 1 by [30]

z_i = (x_i - min) / (max - min)

where z_i is the normalized value, x_i is the sensor datum at the i-th time position of the sequence, and min and max are the minimum and maximum values of the collection.
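The unity-based normalization can be sketched as follows; this is a generic min-max implementation consistent with the description above, not the authors' code, and the guard for a constant signal is our own assumption.

```python
def minmax_normalize(values):
    """Unity-based (min-max) normalization into the range [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # Constant signal: the formula is undefined, map everything to 0
        # (an assumption; the paper does not cover this case).
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]
```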

Given the sensors’ sampling rate of 30 Hz, the signal data are accumulated over one second: one set of data is composed of 30 samples of the 4 sensor types, making 120 signal values per time slot.
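Concatenating one second of data into the 120-element input vector might look like the following sketch (the function name and the per-sample list representation are illustrative assumptions):

```python
def make_time_slot(samples):
    """Flatten 30 samples of 4 sensor values (one second at 30 Hz)
    into a single 120-element feature vector for the neural network."""
    if len(samples) != 30 or any(len(s) != 4 for s in samples):
        raise ValueError("expected 30 samples of 4 sensor values each")
    return [value for sample in samples for value in sample]
```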

3.2. Estimation of Consecutive Position by the Bayesian Network

In normal practice, not all positions are equally likely to transition to a given position. For example, a subject is likely to sit on the bed before lying down to a sleeping position, while it is rare for a subject to jump directly to lying on the opposite side of the bed. To estimate the next possible transition positions, a Bayesian network [31] is applied. This method can suppress signal noise caused by other activities in an uncontrolled environment. The probability of a consecutive position is estimated from the two former positions and the current signal using the trigram model

P(p_t | p_{t-2}, p_{t-1}, S_t)

where p_{t-2}, p_{t-1}, and p_t are the positions at the (t-2)-th, (t-1)-th, and t-th time positions of the sequence, and S_t is the current set of signals consisting of the four sensor signals (left piezoelectric, right piezoelectric, left pressure, and right pressure). The normalized signal is discretized into three levels, i.e., low, middle, and high, by converting the continuous signal values to nominal values. For piezoelectric signals, 0-0.25, 0.26-0.50, and 0.51-1 are defined as low, middle, and high, respectively. For pressure signals, the corresponding ranges are 0-0.35, 0.36-0.70, and 0.71-1.
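The three-level discretization of the normalized signals can be sketched as below, using the cut points quoted above; the function name and the "piezo"/"pressure" tags are our own.

```python
def discretize(value, kind):
    """Map a normalized signal value in [0, 1] to a nominal level.

    Thresholds follow the paper: piezoelectric signals use cut points
    0.25 and 0.50; pressure signals use 0.35 and 0.70.
    """
    low_cut, mid_cut = (0.25, 0.50) if kind == "piezo" else (0.35, 0.70)
    if value <= low_cut:
        return "low"
    if value <= mid_cut:
        return "middle"
    return "high"
```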

3.3. Combination of the Neural Network and Bayesian Network

We apply the weighted arithmetic mean to combine the results from the neural network and the Bayesian network, as shown in Figure 7:

P(c) = α · P_NN(c) + β · P_BN(c)

where P_NN(c) is the neural network probability, P_BN(c) is the Bayesian network probability, c ranges over the classes, and α and β are coefficients with α + β = 1.
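A sketch of the weighted arithmetic mean combination, with the best-performing coefficients from Section 4 as defaults; the function names and the dictionary representation of per-class probabilities are our own assumptions.

```python
def combine(p_nn, p_bn, alpha=0.3, beta=0.7):
    """Weighted arithmetic mean of two per-class probability
    distributions (dicts class -> probability); alpha + beta = 1."""
    if abs(alpha + beta - 1.0) > 1e-9:
        raise ValueError("coefficients must sum to 1")
    return {c: alpha * p_nn[c] + beta * p_bn[c] for c in p_nn}

def classify(p_nn, p_bn, alpha=0.3, beta=0.7):
    """Return the class with the highest combined score."""
    scores = combine(p_nn, p_bn, alpha, beta)
    return max(scores, key=scores.get)
```

For instance, if the neural network slightly favors sitting but the Bayesian network strongly favors lying right given the preceding positions, the 0.3/0.7 weighting lets the Bayesian network override the neural network.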

4. Experiment and Result

4.1. Input Feature Evaluation

To evaluate the coverage of the trained model, clean datasets are prepared by eliminating possible signal noise. The evaluation set is defined in five categories, i.e., subject A, subject B, subject C, the combination of subjects A and B in the same room, and the combination of data from the two different rooms (subjects A, B, and C). The input features are evaluated in four configurations: 4 inputs, 120 inputs, 4 inputs with normalized signals, and 120 inputs with normalized signals.

The selected datasets are tabulated in Table 2. The dataset of subject A consists of 2,000 samples, the same as subject B, so the combination of subjects A and B in the same room includes 4,000 samples. For subject C, collected in the other room, the dataset includes 1,335 samples. In total, there are 5,335 samples, as shown in the “Total” row of Table 2. The datasets are selected from four different time intervals for each subject and randomly divided into 70% for training and 30% for testing. The combined two-room dataset covers 12 time intervals for training and 12 for testing.


Clean dataset:

Subject | Position     | Training samples | Training intervals | Test samples | Test intervals
A       | Off-bed      | 280 | 4  | 120 | 4
A       | Sitting      | 280 | 4  | 120 | 4
A       | Lying center | 280 | 4  | 120 | 4
A       | Lying left   | 280 | 4  | 120 | 4
A       | Lying right  | 280 | 4  | 120 | 4
B       | Off-bed      | 280 | 4  | 120 | 4
B       | Sitting      | 280 | 4  | 120 | 4
B       | Lying center | 280 | 4  | 120 | 4
B       | Lying left   | 280 | 4  | 120 | 4
B       | Lying right  | 280 | 4  | 120 | 4
C       | Off-bed      | 187 | 4  | 80  | 4
C       | Sitting      | 187 | 4  | 80  | 4
C       | Lying center | 187 | 4  | 80  | 4
C       | Lying left   | 187 | 4  | 80  | 4
C       | Lying right  | 187 | 4  | 80  | 4
Total   | Off-bed      | 747 | 12 | 320 | 12
Total   | Sitting      | 747 | 12 | 320 | 12
Total   | Lying center | 747 | 12 | 320 | 12
Total   | Lying left   | 747 | 12 | 320 | 12
Total   | Lying right  | 747 | 12 | 320 | 12

Table 3 shows the results of the feature evaluation test with the small clean dataset. The performance of the 120 inputs with normalized signals reaches up to 100% accuracy. Overall, the models based on normalized signal data and on the accumulated 120-input signal data provide better results than the 4-input model.


Accuracy (%) by input configuration:

Dataset         | Raw: 4 inputs | Raw: 120 inputs | Normalized: 4 inputs | Normalized: 120 inputs
A (room 1)      | 99.3 | 99.8 | 99.6 | 99.9
B (room 1)      | 99.5 | 100  | 100  | 100
C (room 2)      | 99.9 | 99.9 | 100  | 99.9
A+B (room 1)    | 97.6 | 98.2 | 98.2 | 98.8
Room 1 + room 2 | 97.2 | 98.1 | 98.5 | 100

In the off-bed and sitting positions, the signals are quite similar. In the sitting position, for example, the activation of the pressure sensors is low, as in the off-bed position, although the piezoelectric signals differ. Therefore, at some points the signals of the two positions look the same, as shown in Figures 8 and 9. In the 4-input case, the accuracy of the off-bed position is 99.2% and that of the sitting position is 93.2%, with 0.8% of off-bed samples misclassified as sitting and 6.8% of sitting samples misclassified as off-bed, as shown in Figure 10.

The accumulation of the signal data over a one-second time slot (the 120-input set) resolves the confusion between the sitting and off-bed positions, as shown in Figure 11. With 120 inputs, the neural network can capture more contextual features and thus distinguish the off-bed position from the sitting position.

Expanding the dataset for the single subject A from 2,000 to 394,113 samples, we evaluated the input features in the same four configurations. This larger dataset includes many signal errors and unexpected noise. It is likewise divided randomly into 70% for training and 30% for testing.

Table 4 shows the number of time intervals, sampled from the position data. The total size of the unclean dataset of the single subject A consists of 394,113 samples. The sizes of the 5 positions are 44,172, 32,012, 90,486, 4,820, and 222,643 for off-bed (O), sitting (S), lying center (C), lying left (L), and lying right (R), respectively. The total number of time intervals for each position is 42, 160, 111, 26, and 173, respectively.


Unclean dataset from one subject:

Position     | Training samples | Training intervals | Test samples | Test intervals
Off-bed      | 30,650  | 42  | 13,522  | 42
Sitting      | 22,408  | 160 | 9,604   | 160
Lying center | 64,340  | 111 | 26,146  | 111
Lying left   | 3,674   | 26  | 1,146   | 26
Lying right  | 155,850 | 173 | 66,793  | 173
Total        | 276,922 | 512 | 117,211 | 512

The result of the feature evaluation test on subject A is shown in Table 5. The best result is 96.64% accuracy, obtained with the 120 inputs with normalized signals. Compared to the small, clean dataset, where the best result for subject A is 99.9% (Table 3), the accuracy on the large, unclean dataset decreases because it contains much unexpected noise and signal ambiguity.


Accuracy (%) by input configuration:

Raw: 4 inputs | Raw: 120 inputs | Normalized: 4 inputs | Normalized: 120 inputs
95.17 | 95.45 | 96.54 | 96.64

4.2. Position Classification by the Combination of the Neural Network and Bayesian Network

The very large, unclean dataset includes many signal errors and much unexpected noise. Figures 12 and 13 show some examples of errors. In Figure 12, the signals of the sitting position are similar to those of the lying right position. This is because the subject gets on and off the bed on its right side: before getting out of bed, the subject usually moves to sit on the right side of the bed, applying force on the right pressure sensor. Sitting just before a bed exit can therefore produce a signal that looks similar to lying right.

Similarly, the signals in Figure 13 show the similarity of signal patterns between the lying center position and the lying right position, because the subject tends to stay on the right side of the bed.

To resolve these signal ambiguities, we introduce the Bayesian network to estimate the likelihood of the consecutive position and eliminate implausible output positions from the neural network model. We build the Bayesian network from the large unclean dataset of subject A shown in Table 4. All possible position transitions are calculated to form the transition network shown in Figure 14. We estimate the Bayesian network using the position trigram model described in Section 3.2.
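Maximum-likelihood estimation of the position trigram probabilities from a labeled position sequence can be sketched as follows. This is a plain count-based estimate; the paper additionally conditions on the discretized current signals, which is omitted here, and the function name is our own.

```python
from collections import Counter

def trigram_probs(positions):
    """Estimate P(p_t | p_{t-2}, p_{t-1}) from a labeled position
    sequence by maximum-likelihood counting."""
    tri = Counter(zip(positions, positions[1:], positions[2:]))
    bi = Counter(zip(positions, positions[1:]))
    return {(a, b, c): n / bi[(a, b)] for (a, b, c), n in tri.items()}
```

Unseen trigrams simply get no entry here; in practice one would smooth the counts so the combined model never assigns zero probability to a physically possible transition.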

The results from both the neural network and the Bayesian network estimations are combined by the weighted arithmetic mean. To evaluate its coefficients, the values of α (the coefficient of the neural network probability) and β (the coefficient of the Bayesian network probability) are varied on the dataset of subject A, as shown in Table 6. The accuracy reaches 97.06% when the coefficient for the neural network is 0.3 and that for the Bayesian network is 0.7. With this combination model, the Bayesian network effectively improves position estimation in cases of confusable signals.


α (NN) | β (BN) | Accuracy rate (%)
1      | 0      | 96.64
0.7    | 0.3    | 96.74
0.5    | 0.5    | 96.85
0.3    | 0.7    | 97.06
0      | 1      | 91.40

Looking at the details of the improvement, Figure 15 shows a significant gain in recognizing the sitting position by reducing false detections of the lying right position, while maintaining similar accuracy for the other positions. Sitting position detection improves from 86.10% to 89.07% by reducing the confusion with the lying right and off-bed positions from 8.06% to 6.24% and from 4.68% to 3.91%, respectively. The improvement in sitting position detection is crucial for caregivers in deciding when to provide assistance.

4.3. Comparative Evaluation with Other Approaches

It is quite difficult to evaluate the performance against other approaches because of differences in datasets, numbers of bed positions, and numbers of sensors. The best we can do is to compare the results on the estimation of common target positions. Table 7 is tabulated by accumulating the results of sleeping position estimation only. Our approach reaches 97.8% accuracy in classifying the three sleep positions, i.e., lying center, lying left, and lying right. Using only four sensors, it outperforms in overall evaluation the approaches proposed in [13] using 2,048 sensors, [15] using 360 sensors, [18] using 2,048 sensors, [19] using 56 sensors, and [20] using 60 sensors.


Ref  | # of positions | Accuracy (%) | Algorithm | Type of sensors | # of sensors
[13] | 8 | 97.1 | kNN | Pressure sensors | 2,048
[14] | 3 | 98.4 | GMM+kNN | Pressure sensors | 1,728
[15] | 5 | 97.7 | PCA+SVM | Pressure sensors | 360
[16] | 5 | 98.1 | HoG+DNN | Pressure sensors | 2,048
[17] | 4 | 99.7 | SVM | Pressure sensors | 512
[18] | 5 | 97.7 | kNN | Force sensing array | 2,048
[19] | 6 | 83.5 | Raw data+SVM | FSR sensors | 56
[20] | 9 | 94.1 | Joint feature extraction and normalization+SVM+PCA | FSR sensors/video | 60
[21] | 3 | 100 | Kurtosis+skewness | FSR sensors | 16
[22] | 5 | 98.4 | SVM (linear)+SVM (RBF)+LDA | CC-electrodes | 12
Ours | 3 | 97.8 | NN+Bayesian network | Pressure and piezoelectric sensors | 4

In terms of a position-by-position comparison, there is only one report from Hsia et al. [21], which has the same three bed positions as defined by our model. The position-by-position comparison result is shown in Table 8.


Ref  | Left (%) | Middle (%) | Right (%) | # of sensors
[21] | 100   | 100   | 100   | 16
Ours | 93.05 | 96.40 | 98.46 | 4

Although our result is not the best, it shows that our model is promising with a limited number of sensors and can be trained with a small number of subjects. In terms of practicality, our approach has advantages in cost-effectiveness and maintenance.

5. Conclusion

A bed alarm for fall prevention needs a highly accurate bed position detection system that can issue an alert as early as possible once it detects a position with a high risk of falling. In this study, a neural network is used to classify the signals from the designed sensors into five position classes. The signal data are normalized with unity-based normalization (feature scaling) to eliminate the biases of body weight and sensor type. In addition, accumulating the signal data over a one-second time slot (a set of 120 inputs) helps improve the accuracy for the sitting and off-bed positions. The 120 inputs with normalized signal data yield a better result than the three other input types, i.e., 4 inputs, 4 inputs with normalized signal data, and 120 raw inputs. When the dataset is extended to a large, unclean dataset, the accuracy of the single neural network model drops significantly. To improve it, we adopt a Bayesian network to constrain the possible position transitions. The Bayesian network trigram probability effectively improves the accuracy from 96.64% to 97.06%, with coefficients of 0.3 and 0.7 for the neural network and Bayesian network probabilities, respectively. The combination model notably improves sitting position detection from 86.10% to 89.07% by reducing the confusion with the lying right and off-bed positions from 8.06% to 6.24% and from 4.68% to 3.91%, respectively. The evaluation of our approach against others is also promising: even though it cannot outperform some previously proposed methods that need a large number of sensors, our approach needs only four. We conclude that our approach performs position detection with high accuracy while requiring the fewest sensors.

Data Availability

The data of this study are available from the corresponding author upon request.

Disclosure

The manuscript is based on the thesis of the author Waranrach Viriyavit.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The sensor panel and IP camera, utilized in data collection, are provided by AIVS Co., Ltd. under the Japan International Cooperation Agency (JICA) grants for SME development support. The video and patient data are recorded under the consent of the patients and acknowledgment of Banphaeo Hospital. We are very thankful to Mr. Shunichi Yoshitake, Chairman of AIVS, for his strong support of the equipment and the director together with the staff of Banphaeo Hospital for the overall support in data collection. The research is financially supported by the Thammasat University Research Fund under the National Research Council of Thailand (Contract No. 25/2561) for the project of “Digital platform for sustainable digital economy development,” based on the RUN Digital Cluster collaboration scheme.

References

  1. National Statistical Office, “The 2014 survey of the older persons in Thailand,” 2014.
  2. U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Health Statistics, “Long-term care providers and services users in the United States, 2015–2016,” November 2019, https://www.cdc.gov/nchs/data/series/sr_03/sr03_43-508.pdf.
  3. Health Systems Research Institute, ““Fall down” major cause of accident that affects the injury of elderly,” May 2018, https://www.hsri.or.th/people/media/infographic/detail/5319.
  4. Department of Disease Control, May 2018, http://www.riskcomthai.org/2017/detail.php?id=35499&m=news&gid=1-001-002.
  5. L.-Y. Tsai, S.-L. Tsay, R.-K. Hsieh et al., “Fall injuries and related factors of elderly patients at a medical center in Taiwan,” International Journal of Gerontology, vol. 8, no. 4, pp. 203–208, 2014.
  6. R. L. Shinmoto Torres, D. C. Ranasinghe, Q. Shi, and A. P. Sample, “Sensor enabled wearable RFID technology for mitigating the risk of falls near beds,” in 2013 IEEE International Conference on RFID (RFID), pp. 191–198, Penang, Malaysia, May 2013.
  7. A. Wickramasinghe, D. C. Ranasinghe, C. Fumeaux, K. D. Hill, and R. Visvanathan, “Sequence learning with passive RFID sensors for real-time bed-egress recognition in older people,” IEEE Journal of Biomedical and Health Informatics, vol. 21, no. 4, pp. 917–929, 2017.
  8. A. Kononova, L. Li, K. Kamp et al., “The use of wearable activity trackers among older adults: focus group study of tracker perceptions, motivators, and barriers in the maintenance stage of behavior change,” JMIR Mhealth Uhealth, vol. 7, no. 4, p. e9832, 2019.
  9. M. Uddin, W. Khaksar, and J. Torresen, “Ambient sensors for elderly care and independent living: a survey,” Sensors, vol. 18, no. 7, p. 2027, 2018.
  10. H. Yamaguchi, H. Nakajima, K. Taniguchi, S. Kobashi, K. Kondo, and Y. Hata, “Fuzzy detection system of behavior before getting out of bed by air pressure and ultrasonic sensors,” in 2007 IEEE International Conference on Granular Computing (GRC 2007), pp. 114–119, Fremont, CA, USA, November 2007.
  11. Y. Hata, H. Yamaguchi, S. Kobashi, K. Taniguchi, and H. Nakajima, “A human health monitoring system of systems in bed,” in 2008 IEEE International Conference on System of Systems Engineering, pp. 1–6, Singapore, June 2008.
  12. C. L. Wu, Y. W. Chien, and L. C. Fu, “Monitoring bed activities via vibration-sensing belt on bed,” in 2017 IEEE International Conference on Consumer Electronics-Taiwan (ICCE-TW), pp. 85–86, Taipei, Taiwan, June 2017.
  13. M. B. Pouyan, S. Ostadabbas, M. Farshbaf, R. Yousefi, M. Nourani, and M. D. M. Pompeo, “Continuous eight-posture classification for bed-bound patients,” in 2013 6th International Conference on Biomedical Engineering and Informatics, pp. 121–126, Hangzhou, China, December 2013.
  14. S. Ostadabbas, M. Baran Pouyan, M. Nourani, and N. Kehtarnavaz, “In-bed posture classification and limb identification,” in 2014 IEEE Biomedical Circuits and Systems Conference (BioCAS) Proceedings, pp. 133–136, Lausanne, Switzerland, October 2014.
  15. R. Yousefi, S. Ostadabbas, M. Faezipour et al., “A smart bed platform for monitoring & ulcer prevention,” in 2011 4th International Conference on Biomedical Engineering and Informatics (BMEI), pp. 1362–1366, Shanghai, China, October 2011.
  16. M. Heydarzadeh, M. Nourani, and S. Ostadabbas, “In-bed posture classification using deep autoencoders,” in 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 3839–3842, Orlando, FL, USA, August 2016.
  17. W. Cruz-Santos, A. Beltrán-Herrera, E. Vázquez-Santacruz, and M. Gamboa-Zúñiga, “Posture classification of lying down human bodies based on pressure sensors array,” in 2014 International Joint Conference on Neural Networks (IJCNN), pp. 533–537, Beijing, China, July 2014.
  18. R. Yousefi, S. Ostadabbas, M. Faezipour et al., “Bed posture classification for pressure ulcer prevention,” in 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 7175–7178, Boston, MA, USA, September 2011.
  19. C. C. Hsia, K. J. Liou, A. P. W. Aung, V. Foo, W. Huang, and J. Biswas, “Analysis and comparison of sleeping posture classification methods using pressure sensitive bed system,” in 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 6131–6134, Minneapolis, MN, USA, September 2009.
  20. W. Huang, A. A. P. Wai, S. F. Foo, J. Biswas, C. C. Hsia, and K. Liou, “Multimodal sleeping posture classification,” in 2010 20th International Conference on Pattern Recognition, pp. 4336–4339, Istanbul, Turkey, August 2010.
  21. C.-C. Hsia, Y.-W. Hung, Y.-H. Chiu, and C.-H. Kang, “Bayesian classification for bed posture detection based on kurtosis and skewness estimation,” in HealthCom 2008 - 10th International Conference on e-health Networking, Applications and Services, pp. 165–168, Singapore, Singapore, July 2008,. View at: Publisher Site | Google Scholar
  22. H. J. Lee, S. H. Hwang, S. M. Lee, Y. G. Lim, and K. S. Park, “Estimation of body postures on bed using unconstrained ECG measurements,” IEEE Journal of Biomedical and Health Informatics, vol. 17, no. 6, pp. 985–993, 2013. View at: Publisher Site | Google Scholar
  23. M. Cholewa and P. Głomb, “Natural human gestures classification using multisensor data,” in 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR), pp. 499–503, Kuala Lumpur, Malaysia, November 2015. View at: Publisher Site | Google Scholar
  24. S. L. Videbeck, “Nursing practice for psychiatric disorders,” in Psychiatric-Mental Health Nursing, Lippincott Williams & Wilkins, 2012, p. 446.
  25. S. Gaddam, C. Mukhopadhyay, and G. S. Gupta, “Intelligent bed sensor system: design, experimentation and results,” in 2010 IEEE Sensors Applications Symposium (SAS), pp. 220–225, Limerick, Ireland, February 2010.
  26. T. Shino, K. Watanabe, K. Kobayashi, K. Suzuki, and Y. Kurihara, “Noninvasive biosignal measurement of a subject in bed using ceramic sensors,” in Proceedings of SICE Annual Conference 2010, pp. 1559–1562, Taipei, Taiwan, August 2010.
  27. S. Nukaya, T. Shino, Y. Kurihara, K. Watanabe, and H. Tanaka, “Noninvasive bed sensing of human biosignals via piezoceramic devices sandwiched between the floor and bed,” IEEE Sensors Journal, vol. 12, no. 3, pp. 431–438, 2012.
  28. M. Adami, M. Pavel, T. L. Hayes, A. G. Adami, and C. Singer, “A method for classification of movements in bed,” in 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 7881–7884, Boston, MA, USA, September 2011.
  29. M. Alaziz, Z. Jia, R. Howard, X. Lin, and Y. Zhang, “Motion Tree: a tree-based in-bed body motion classification system using load-cells,” in 2017 IEEE/ACM International Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE), pp. 127–136, Philadelphia, PA, USA, July 2017.
  30. S. Raschka, “About feature scaling and normalization and the effect of standardization for machine learning algorithms,” http://sebastianraschka.com/Articles/2014_about_feature_scaling.html#about-min-max-scaling.
  31. “Bayes’ theorem,” January 2017, http://planetmath.org/BayesTheorem.

Copyright © 2020 Waranrach Viriyavit and Virach Sornlertlamvanich. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
