Abstract

Owing to the growing number of people with disabilities, whether as a result of accidents or of old age, there has been an increase in research in the areas of ubiquitous computing and the Internet of Things. These studies aim to monitor health in an efficient and easily accessible way, as a means of managing and improving the quality of life of this section of the public. They also involve adopting a Health Smart Home approach, based on the Internet of Things and applied in smart home environments, which provides connectivity between patients and their surroundings and includes mechanisms to help diagnose and prevent accidents and/or diseases. Monitoring gives rise to an opportunity to exploit the way computational systems can help to determine the real-time emotional state of patients. This is necessary because traditional methods of health monitoring have limitations, for example, in establishing the user’s behavioral routine and in issuing alerts and warnings to family members and/or medical staff about any abnormal event or signs of the onset of depression. This article discusses how a layer-based architecture can be used to detect emotional factors to assist in healthcare and the prevention of accidents within the context of the Health Smart Home. The results show that this processing architecture allows a load distribution with a better service, which takes into account the complexity of each algorithm and the processing power of each layer of the architecture, to provide a prompt response when there is a need for some intervention in the emotional state of the user.

1. Introduction

Over the last few years, there has been an increasing use of technological systems that can assist in healthcare, especially in countries where there are a large number of people with disabilities and reduced mobility or who are elderly [1]. This section of the public tends to prefer to be independent and live alone in their homes. Thus, a residential monitoring system is needed which allows alerts and warnings to be issued so that medical staff and/or family members are aware of the health conditions of the disabled and can intervene whenever necessary [1, 2].

The concept of Health Smart Home (HSH) emerged from a combination of telemedicine, the Internet of Things (IoT), and information systems and can be defined as an intelligent home equipped with specialized devices for distance healthcare aid, such as smartphones and embedded and wearable technologies (e.g., smartwatches) [1, 3]. There is a wide scope for applications in this domain, and for this reason, HSH environments are structured in three separate levels: (i) hardware (sensing and wireless networks), (ii) middleware (capture, security, and data integration), and (iii) services (biological signal processing and other services) [4–6].

There are several products and concepts that can enhance the quality of life, well-being, and safety of people in an HSH domain. The main objective of these services is to provide benefits for the (a) individual (by increasing safety and well-being), (b) economy (greater cost-effectiveness of limited resources), and (c) society (better living conditions) [7]. Moreover, the analysis of emotional factors in people’s everyday lives has become a significant field of research [1]. Several studies address the question of emotional health to diagnose and prevent a wide range of diseases [8–11], such as schizophrenia, depression, autism, and bipolar disorder. These illnesses can be caused by prolonged negative emotions, a lack of emotional expression, or emotional instability.

Emotional factors play an essential role in the daily life, treatment, and recovery of people who have particular medical needs. However, this raises significant challenges for researchers when analyzing emotions and making decisions whenever a critical situation is detected. This issue has been widely explored in the literature [12, 13], in which technological solutions have been put forward to assist caregivers in looking after people with special needs. However, it should be stressed that the analysis of these factors and the decision-making that follows must be carried out promptly to prevent accidents and assist the users. It is also crucial to evaluate the suitability of the environment for these kinds of applications, in particular the available resources, which can vary from extremely limited microprocessors in the local environment to virtually unlimited processing capabilities in cloud computing.

The contributions of this paper relative to the recent literature in the field can be summarized as follows:
(i) We propose ENLACE (meaning “communication link” in Portuguese) (EmotioNal Level Architecture Communication hEalthcare), a layer-based architecture recommended for efficiently processing information on emotions for healthcare environments.
(ii) We analyse the performance of algorithms for emotion analysis in the context of HSH, taking into account the computational power available from edge computing to the cloud environment.
(iii) We explore two algorithms to detect emotional factors (face analysis and heart rate analysis, described in Section 4.1) at three different levels: (a) a local device, (b) a local server, and (c) cloud computing.
(iv) We propose an algorithm that decides how the processing load is distributed across layers, based on the impact of the emotion recognition service on each processing layer.

The rest of this paper is structured as follows: Section 2 highlights related work concerning emotion monitoring and how it can be used in HSH environments to meet the desired needs. Section 3 provides background knowledge on emotions and the model adopted in this work. Section 4 describes the layer-based architecture (ENLACE) adopted for processing and how the algorithms can be exploited to identify emotions as a service. Section 5 explains the methodology adopted for the performance evaluation of the layer-based architecture and presents the results with regard to the processing time in the different layers. Finally, Section 6 summarizes the conclusions reached from this work and highlights areas that could be explored in future work.

2. Related Work

Studies on the IoT and eHealth/mHealth have generated a wide range of methods for setting environmental and/or user standards. In particular, in the HSH environment, advances in healthcare are due to applications of the intelligent environment as well as artificial intelligence [3]. Hence, these methods are used to monitor and classify factors related to health and well-being, as well as the emotional state of the individual.

With regard to monitoring chronic conditions of the elderly, Mohamed et al. adopted an approach for HSH that relies on an electrocardiographic sensor, an electromyographic sensor, a temperature sensor, and a motion sensor to monitor the vital signs of sick or handicapped patients through integrated systems that collect and transmit data in real time [13]. Although it adopts an IoT approach, the work proposed by Mohamed et al. does not address issues related to the emotions of users, which can have either a positive or negative effect on both the condition and the treatment of patients in a residential environment.

The study by Fernández-Caballero et al. outlines a system to improve the quality of life and care of the elderly living at home by regulating their emotions [14]. The system uses cameras and body sensors to monitor the facial, gestural, and behavioral expressions of the elderly, as well as the physiological data required for emotion recognition. Furthermore, Castillo et al. set out a very similar system, which uses an additional microphone to capture what the user says [15]. However, unlike the present work, neither of these approaches addresses issues related to communication and information processing; this is a shortcoming, since these factors can have a considerable influence on the effectiveness of each system.

Mano et al. establish an individualized IoT residential monitoring architecture for monitoring emotional states, based on machine learning techniques for the recognition of the patient’s face by means of processing layers [1]. According to the request rate and processing rate, the system adopts layers of higher processing power to identify patients and assess their emotional state through facial expressions, and it is able to issue alerts and warnings about the patient’s emotional state at home. However, although this work addresses issues related to communication and the processing rate, it does not treat the identification system as a service. In addition, the decision-making presented by Mano et al. is predefined, taking into account only the processing required for image classification, which makes the proposed application static and restricts its flexibility.

Other works focus more on the IoT infrastructure itself, considering not only software but also networks and hardware. For example, Augusto et al. present a novel, user-centered methodology for developing intelligent environments. The authors focus on integrating technologies and creating a software infrastructure to provide services [16]. The challenges they address are related to confidence in systems deployed in the real world, such as performance quality, behavioral correctness, and their validation. Similarly, Patel et al. describe an intelligent approach to the analysis of IoT applications and focus on automating the transitions between the edge and the cloud in dynamic situations. The authors describe an architecture divided into layers, comprising different components and actors that assume roles in adaptive IoT data analytics solutions [17]. However, neither of these works presents an evaluation that attests how the proposed solutions impact the infrastructure of intelligent environments.

Table 1 presents an overview of the research found in the literature and detailed above. The research was analyzed considering the following characteristics: (i) proposed HSH environment for health monitoring, (ii) process of emotion analysis for healthcare, (iii) communication between layers of the network, and (iv) decision-making algorithm for processing between layers of the communication network.

Clearly, several studies investigate health issues in smart homes. These papers present new techniques for detecting the user’s state through sensors and/or data systems and are related to the context of HSH in that they assist users in the classification of emotional factors. However, there is a lack of work analyzing the computational costs incurred by these services. Thus, this study seeks to fill this gap in the literature by presenting a dynamic architecture for emotion monitoring. The architecture is based on different levels of computational processing; in other words, as the rate of service requests increases the workload, processing is transferred to a level with greater processing power. In addition, we propose a decision-making algorithm for the communication and processing of information between the levels of the architecture, balancing the processing according to the workload.

3. Knowledge on Emotions

An emotion is a complex reaction that involves the whole organism of an individual since it is closely linked to their needs, goals, values, and general well-being. Several emotion models have been used throughout the history of studies on emotions. The Circumplex Model as presented by Scherer (Figure 1) argues that emotions are not discrete states but exist in a continuous two-dimensional space [18], where:
(i) “valence” corresponds to the type of emotion and represents it as it is felt by a human being (the x-axis represents a pleasure-displeasure continuum);
(ii) “excitation” corresponds to the intensity of emotion and measures the propensity of human beings to act in a way that is triggered by the emotional state (the y-axis represents active or passive, linked to the level of energy or excitement stimulated by the emotion);
(iii) “coping potential” evaluates the body’s powers of control over or above a given event (the main diagonal);
(iv) “goal attainment” analyzes the ease or difficulty of achieving one or more objectives (the secondary diagonal).
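To illustrate how this two-dimensional representation can be used computationally, the sketch below maps a point in the valence-excitation plane to one of its four quadrants; the thresholds, labels, and example emotions are illustrative assumptions and are not prescribed by the model in Figure 1.

from dataclasses import dataclass

@dataclass
class EmotionPoint:
    valence: float     # x-axis: displeasure (-1.0) to pleasure (+1.0)
    excitation: float  # y-axis: passive (-1.0) to active (+1.0)

def quadrant_label(point: EmotionPoint) -> str:
    """Map a point in the valence-excitation plane to an illustrative quadrant label."""
    if point.valence >= 0.0 and point.excitation >= 0.0:
        return "active-pleasant (e.g., joy)"
    if point.valence < 0.0 and point.excitation >= 0.0:
        return "active-unpleasant (e.g., anger)"
    if point.valence < 0.0 and point.excitation < 0.0:
        return "passive-unpleasant (e.g., sadness)"
    return "passive-pleasant (e.g., calm)"

print(quadrant_label(EmotionPoint(valence=-0.7, excitation=0.4)))  # active-unpleasant (e.g., anger)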

Scherer defined emotional behavior as a dynamic process rather than a steady state. The components involved are (1) relevance: the importance of the event to an individual and how it can affect him/her; (2) implications: the consequences of the event for the individual’s goals; (3) potential to act: how the individual reacts to the event; and (4) normative significance: whether the event respects the individual’s social norms. For each component, Scherer defined the intensity of the emotional response, which varies in accordance with the emotions expressed. It should be noted that this representation of emotions, combined with the constant increase in computational processing capacity, is a means of determining the emotions of users, and this information can provide the basis for decision-making, behavioral analysis, and healthcare.

4. Layer-Based Healthcare Architecture for Emotion Monitoring

In view of the importance of monitoring emotional factors to provide a better quality of life for people and enable them to live more independently, applying HSH is essential, as it includes features that can help to identify and prevent emotional disturbances. This means that services designed to assist the knowledge extraction process must be put into effect as soon as the data are detected, that is, at the edges of the network. However, the devices that are integrated into the HSH environment usually have limited resources, and thus, it is not feasible to use them for certain tasks. It is thus necessary to use the available resources in the cloud to exploit the potential benefits of an HSH environment.

The paradigms of edge and cloud computing should not be mutually exclusive, but complementary. For this reason, it is essential to evaluate and categorize the services that must be provided at each level of the network, since factors such as energy consumption, the volume of data transmitted between layers, and application processing latency must be taken into account in these environments [19, 20]. Hence, we propose a system called EmotioNal Level Architecture Communication hEalthcare (ENLACE). ENLACE is based on using the IoT for intelligent and individualized monitoring of the emotional state of individuals. Sensors are devices that are distributed in the environment and/or owned by the user, and their purpose is to gather data (for example, images and physiological signals) on the conditions required to maintain people’s emotional health.

Initially, the features of the network edge are defined to manage and process collected information. However, their functions can be divided between several decision-makers for scalability purposes. In addition, the edge features serve as an interface between the user and the caregiver/physician/family members. Figure 2 illustrates the proposed architecture for ENLACE.

4.1. Emotions as a Service

“Everything as a Service” (XaaS) is a model that offers services to meet the specific needs of customers or companies, where “X” represents the main characteristic of the service provided rather than the information technology used [21]. Along these lines, “Emotions as a Service” can be defined as an attempt to establish a communication interface between objects and between humans and objects to process the collected data and generate knowledge, in this case with regard to the individual’s emotional state. These services can be offered through sensor management middleware, such as GSN, which can provide services both at the edge of the network and in the cloud [22].

ENLACE is based on the Internet of Things Data as a Service Module (IoTDSM) approach [5], which provides a reference architecture for building interoperable IoT systems. ENLACE’s primary feature is the recognition of a person’s emotion as a service exposed through a RESTful API. This service does not depend on the application layer, enabling it to be executed in distinct computing environments (local device, local server, or cloud).
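As an illustration of what a call to such a service could look like, the snippet below posts a captured face image to a hypothetical FA endpoint; the host, route, and response fields are assumptions made for this sketch and are not defined by IoTDSM or ENLACE.

import requests

# Hypothetical endpoint exposed by the emotion recognition (FA) service.
SERVICE_URL = "http://localhost:8080/enlace/emotions/face"

def classify_face(image_path: str) -> dict:
    """Send a face image to the (assumed) RESTful FA service and return its JSON reply."""
    with open(image_path, "rb") as image_file:
        response = requests.post(SERVICE_URL, files={"image": image_file}, timeout=10)
    response.raise_for_status()
    return response.json()  # e.g., {"emotion": "joy", "confidence": 0.87} (illustrative)

if __name__ == "__main__":
    print(classify_face("subject_001.jpg"))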

The services offered in this study are as follows:
(i) Face analysis (FA): the task of face identification and facial expression analysis is based on the work carried out by Mano et al. [23], which entails the analysis of geometric characteristics and the approach adopted by the Classification Committee for classifying the emotional state of the user. Figure 3 shows the face-mapping procedure, in which the distances and the angles formed with the horizontal axis are obtained for the lines connecting every possible pair of points, providing a representation with a dimensionality of 1,130 attributes (a simplified sketch of this mapping is given after this list). In turn, the classification module carries out a classification task based on the face-mapping attributes. This module aims to use the face-mapping procedure and, through a combination of the response values of classification algorithms, identify and categorize the user’s emotion so that computing systems can interact more assertively with the user’s emotional state [3, 23].
(ii) Heart rate analysis (HRA): the procedure for heart rate mapping is based on a study by Richter et al., which evaluates the relationship between heart rate variation (beats per minute (bpm)) and the patient’s emotional state and its connection with health problems, especially hypertension [24]. Richter et al. show that the variation of bpm is related to positive and negative affective frames in the dimension of “valence” (see Figure 1): while an increase in bpm represents negative emotions, positive bpm variations are linked to the positive emotional state. Table 2 presents the values of the bpm variation for the emotions presented by Richter et al. and used in this study. It is worth noting that the increase or decrease is considered over an interval of 10 seconds.
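The sketch below gives a simplified version of the face-mapping step described for the FA service: for every pair of landmark points, it computes the Euclidean distance and the angle that the connecting line forms with the horizontal axis. The landmark set, and therefore the exact count of 1,130 attributes reported by Mano et al., is not reproduced here; this is only an assumption-based illustration of the geometric features involved.

import math
from itertools import combinations
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) coordinates of a facial landmark

def face_mapping(landmarks: List[Point]) -> List[float]:
    """For every pair of landmarks, return the distance between them and the
    angle (in degrees) of the connecting line with the horizontal axis."""
    features: List[float] = []
    for (x1, y1), (x2, y2) in combinations(landmarks, 2):
        features.append(math.hypot(x2 - x1, y2 - y1))                # distance
        features.append(math.degrees(math.atan2(y2 - y1, x2 - x1)))  # angle with x-axis
    return features

# Three illustrative landmarks (two eyes and a mouth corner) yield 3 pairs = 6 attributes.
print(face_mapping([(30.0, 40.0), (70.0, 40.0), (50.0, 80.0)]))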

Figure 4 illustrates the ENLACE execution flow. Firstly, the face picture and heartbeat (bpm) are captured and sent to the local device. The local device performs the decision-making process and verifies, based on the costs, whether the processing will be performed on the local device or sent to layers with higher processing power, namely, the local server or the cloud (the methodology for decision-making is discussed in Section 4.2). Subsequently, the emotion classification module, consisting of the FA and HRA services, classifies the data and sends the resulting emotion to the end client. The client’s emotion record can also be sent to a doctor or a family member.

4.2. Decision-Making of Load Distribution between Layers

In the ENLACE system, the distribution of processing load across layers is based on the impact of the emotion recognition service on its processing layers. Algorithm 1 presents the decision-making pseudocode for the processing load distribution in ENLACE.

Input: a new block of data D consisting of images or coordinates
while the monitoring is active do
    if firstExecution() then
        execute the emotion service on the local device, the local server, and the cloud;
        costDevice ← energy consumption and response time measured on the local device;
        costServer ← energy consumption and response time measured on the local server (including the round trip time);
        costCloud ← energy consumption and response time measured in the cloud (including the round trip time);
        store costDevice, costServer, and costCloud as reference costs;
        output the classified emotion obtained in these executions;
    else
        estDevice ← estimated cost of processing D on the local device;
        estServer ← estimated cost of processing D on the local server;
        estCloud ← estimated cost of processing D in the cloud;
        if estDevice ≤ costDevice then
            process D on the local device;
            output the classified emotion;
        else if estServer ≤ costServer then
            send D to the local server;
            receive the classified emotion;
        else
            send D to the cloud;
            receive the classified emotion;
        end
    end
end

In the first execution, that is, the first time the system starts operating, the services are executed in all environments (local device, local server, and cloud), and the processing costs are obtained from the energy consumption and the response time of the application. Thus, the cost of running the application in the cloud is estimated as

E_cloud = (D / B) × P_sr,  with  P_sr = P_s + P_r,

where D is the size of the transmitted data block, B is the transmission rate of the network interface, and P_sr is a constant given by the sum of the transmission cost P_s and the receiving cost P_r; both P_s and P_r are defined by the device voltage V and the electric current I read during a time interval [25, 26]. It should be noted that the execution of the service on the local server and in the cloud must also consider the round trip time of the transmissions between the devices. It is worth pointing out that the electric current has different values for transmitting and receiving data because the signal strengths differ [25, 26].
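A minimal sketch of this cost estimate is given below, under the assumption that the cost of offloading is the transmission time of the data block multiplied by the power drawn for sending and receiving; the function name, its parameters, and all numeric values are illustrative, and the precise model follows [25, 26].

def offloading_energy(data_bytes: float, rate_bps: float, voltage: float,
                      current_send: float, current_receive: float) -> float:
    """Estimate the energy cost (joules) of sending a data block to a remote layer:
    transmission time (size / rate) times the power drawn while transmitting and
    receiving (voltage x current). Assumed formulation, following the description above."""
    transmission_time = (data_bytes * 8) / rate_bps        # seconds
    power = voltage * (current_send + current_receive)     # watts
    return transmission_time * power                       # joules

# Illustrative values: a 200 kB face image over a 5 Mbit/s link on a 3.7 V device
# drawing 250 mA while sending and 180 mA while receiving.
print(offloading_energy(200_000, 5_000_000, 3.7, 0.250, 0.180))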

At each new execution, ENLACE compares the costs of the first run with the values estimated for the new data sample to decide on the processing layer for offloading. Thus, the algorithm balances the processing across the ENLACE layers for better performance, because the execution time can vary depending on the workload. It is worth mentioning that the decision-making algorithm runs on the local device layer and decides whether the data will be sent to the upper layers or processed in the current layer.
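The decision rule can be summarized in a few lines of code. The sketch below escalates to the next layer whenever the estimated cost of the new data block exceeds the reference cost recorded on the first execution; the layer order, cost values, and comparison rule are simplifying assumptions rather than the exact ENLACE implementation.

from typing import Dict

Layer = str  # "local_device", "local_server", or "cloud"

def choose_layer(reference: Dict[Layer, float], estimated: Dict[Layer, float]) -> Layer:
    """Keep processing on the cheapest layer whose estimated cost for the new
    block does not exceed the reference cost measured on the first execution."""
    for layer in ("local_device", "local_server", "cloud"):
        if estimated[layer] <= reference[layer]:
            return layer
    return "cloud"  # fall back to the layer with the greatest processing power

# Reference costs from the first execution and estimates for a new block (illustrative).
reference = {"local_device": 0.9, "local_server": 0.5, "cloud": 0.7}
estimated = {"local_device": 1.2, "local_server": 0.4, "cloud": 0.6}
print(choose_layer(reference, estimated))  # -> "local_server"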

5. Results and Discussion

This section outlines the methodology used to evaluate the face and heart rate analysis services in HSH environments, subject to variations in the workload to be processed, and discusses the results obtained. These services can be run either individually or together, at any layer of an HSH architecture.

5.1. Performance Evaluation

To evaluate our model, we use heart rate records and images of facial expressions conveying different emotions to assess the performance of the ENLACE architecture. For face analysis, the database used is FACES [27], which consists of images of the faces of 171 subjects (58 young, 56 middle-aged, and 57 elderly men and women) expressing the following emotions: joy, disgust, fear, anger, surprise, and sadness, or a neutral state. The MIT-BIH database [28] was used to analyze the heart rate; it is one of the main databases used for detecting and grouping cardiac arrhythmias and contains monitoring records of 47 subjects over a period of 30 minutes. It is worth noting that both databases used in this study provide open access for researchers.

Table 3 presents the computational capacity of the existing devices in the architecture at different levels—a local device (embedded device), local server, and cloud environment. The above-mentioned embedded devices can include equipment such as a smartwatch and cameras.

Owing to the discrepancy in the computational power found at each layer of an HSH architecture, different workloads were applied to each of them. When generating the workload, different numbers of service requests are made per instant of time, following a uniform probability distribution, to represent the continuous monitoring of this architecture. Table 4 shows the number of requests in the workload at each level of ENLACE.
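A small sketch of how such a workload can be generated is shown below, assuming that the arrival instants of the requests are drawn uniformly within each minute of the experiment; the helper function and its parameters are illustrative assumptions, with the example rate taken from the local device level.

import random
from typing import List

def generate_arrivals(requests_per_minute: int, minutes: int = 1, seed: int = 42) -> List[float]:
    """Draw request arrival instants (in seconds) uniformly within each minute."""
    rng = random.Random(seed)
    arrivals: List[float] = []
    for minute in range(minutes):
        arrivals.extend(60.0 * minute + rng.uniform(0.0, 60.0)
                        for _ in range(requests_per_minute))
    return sorted(arrivals)

# For example, 32 requests per minute at the local device level (see Table 4).
print(generate_arrivals(32)[:5])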

5.2. Experimental Results

To start with, the request rate was analyzed at the different levels of the ENLACE architecture, which provides the framework for the emotion-monitoring services (HRA and FA). It is worth noting that the FACES [27] and MIT-BIH [28] databases were used for facial expression recognition and heart rate analysis, respectively. Although user monitoring is simulated, the ENLACE processing environment was implemented as it would be in an actual environment (see Tables 3 and 4). The results show the tests conducted for each individual service, as well as for the execution of the services together, according to the environment configuration and devices described in Section 5.1.

Figures 5–7 show the graphical representation of the response time that was needed for each request to run the HRA and FA services at the local device level.

Furthermore, Figures 8–10 show the graphical representation of the response time of each request when carrying out the HRA and FA services at the local server level.

Finally, Figures 11–13 show the graphical representation of the response time of each request to carry out the HRA and FA services at the cloud server level.

5.3. Discussion

In general, it was found that the response time is directly proportional to the request rate, which means that as more requests are made in one minute, the response time becomes longer. With regard to the request rate values, for a workload of 16 requests per minute, the response time is constant, which suggests that the device can handle more requests than it receives. When the request rate increases to 24 requests per minute, the response time increases at a minimal linear rate, which suggests that the rate at which requests arrive is slightly higher than the rate at which they are served, so that some requests must wait to be processed. Likewise, the response time for a workload of 32 requests per minute also increases at a linear rate, but in a higher proportion, as more requests are made in one minute. A further point is that the response time of the last requests falls at a linear rate, because once the entire workload (200 requests) has been submitted, the host only has to handle the requests remaining in the queue. The tendency of the response time to increase at a linear rate arises because the workload requests arrive according to a uniform distribution.
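The queueing behavior described above can be reproduced with a minimal single-server simulation, shown below; the arrival interval and service time are arbitrary illustrative values chosen only so that the arrival rate exceeds the service rate, not the measured figures.

from typing import List

def response_times(arrivals: List[float], service_time: float) -> List[float]:
    """FIFO single-server queue: when requests arrive faster than they can be
    served, each new request waits behind the queue built up so far."""
    finish = 0.0
    times: List[float] = []
    for arrival in sorted(arrivals):
        start = max(arrival, finish)       # wait while the server is busy
        finish = start + service_time
        times.append(finish - arrival)     # response time = waiting + service
    return times

# 200 requests at 32 requests/minute (one every 1.875 s) with a 2.5 s service time:
# the arrival rate exceeds the service rate, so response times grow roughly linearly.
arrivals = [i * (60.0 / 32.0) for i in range(200)]
print(response_times(arrivals, 2.5)[::50])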

It is essential to determine how each service influenced the response time. Figure 6 shows the response time needed by each request to execute the HRA service. This graph shows that, regardless of the workload and the request rate per minute, the response time is low and constant; this can be explained by the characteristics of the algorithm, which does not require intensive processing. In contrast, the response time of each request for the FA service, shown in Figure 7, indicates that workloads of 16 and 24 requests per minute have practically the same response time, whereas a workload of 32 requests per minute overloads the local device and increases the response time.

The curve in Figure 8 resembles that in Figure 5, since both correspond to running the two services together, although at different request rates. Thus, unlike the local device, where the response time for a workload of 32 requests per minute grows on a linear scale, the local server maintains a constant response time for 32 and 64 requests per minute. This suggests that, owing to the greater computational power, the response time needed to meet the requests is lower than in the previous scenario. In contrast, a workload of 96 requests per minute overloads the local server, and the response time increases on a linear scale. Figures 9 and 10 show the response times of each request when carrying out the HRA and FA services separately. The same behavior was observed as at the local device level: for the HRA service, the response time is low and constant, while the FA service requires more computational resources, thus causing an overload and increasing the response time.

The additional computational power of the cloud server allows a low and constant response time for workloads of 64 and 96 requests per minute, unlike the local server, which showed linear growth for a workload of 96 requests per minute. When the workload reaches 128 requests per minute, the response time grows on a linear scale, as with the local device and the local server when subjected to a high workload. As in the previous scenario, the individual response time of each service shows that the HRA service has a constant and low response time (Figure 12), while the FA service requires more computational resources, which overload the cloud server and increase the response time (Figure 13).

It is essential to understand the behavior of the algorithms at each layer of an HSH architecture for the decision-making process. The results obtained at each level of ENLACE show that, beyond a given service request rate, it is advisable to execute requests at a higher level with greater processing power, or even to split the requests between different layers of the architecture, to achieve an acceptable response time for emotion monitoring in HSH settings.

In summary, the results show that the response time of a request is directly related to the computational power, the characteristics of the algorithm, and the number of requests per minute. As shown in the analysis, the HRA algorithm requires little computational power and, for the number of requests per minute defined at each level of an HSH, is unable to overload the target device. In contrast, the FA algorithm, which requires more computational resources, behaves differently depending on the number of requests per minute: the larger the number of requests per minute, the greater the probability of overloading the target device. Moreover, the same number of requests per minute may yield different response times at the various levels of an HSH architecture, depending on whether or not an overload occurs. Above all, it should be noted that linearly growing response times are not desirable in HSH, since delayed responses can lead to wrong decision-making.

6. Conclusion

There is a wide range of methods, techniques, and instruments that support monitoring for assessing the health of the user. This is because computational systems provide an opportunity to react to the state of health and changes in the behavior of an individual, and this can make computational applications aware of the emotional condition of the user. In this situation, it is worth emphasizing the importance of noting any signs of the users’ emotional responses, which can help in their treatment and recovery in everyday life and provide an indispensable tool to support HSH environments.

Thus, ENLACE was presented in this article as a layer-based architecture for processing sensor data (e.g., heart rate and facial features) to detect and classify the user’s emotional state in an HSH environment. The results demonstrate that a layer-based processing architecture allows a better distribution of the service load in view of the complexity of each algorithm and the processing power of each layer of the architecture. The results also show that the heart rate algorithm maintains a constant and low response time, while the algorithm for emotion identification from the face is more susceptible to linear growth depending on the request rate at each layer of the architecture. Thus, with regard to processing power, it can be established which layer of the architecture is most suitable for carrying out the processing, so that a response can be provided in a timely manner for any kind of intervention.

Finally, it should be noted that, on the basis of the obtained results, there is scope for further studies, and the research question can be explored in greater detail with the aim of improving the efficiency of the proposed architecture. Further studies in the field should include the following: (i) applying the ENLACE architecture in a real environment for monitoring users, (ii) exploring other signals that affect or reveal the user’s emotional state, such as speech and environmental/behavioral factors, (iii) refining the decision-making module for the distribution of the workload so that it corresponds to the complexity of the algorithm and the processing power of each level, and (iv) exploring the concepts of fog computing and offloading for communication between the architectural layers.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The authors would like to thank the support from the São Paulo Research Foundation (FAPESP) (grant nos. 2016/14267-7 and 2017/21054-2) and CAPES (PROEX-10095318/M). The authors also would like to thank the support from Instituto de Pesquisas Eldorado for the publication of this study. In addition, they would also like to thank Professor Michael Biehl from the University of Groningen.