Journal of Electrical and Computer Engineering
Volume 2016 (2016), Article ID 4789803, 20 pages
Research Article

User Adaptive and Context-Aware Smart Home Using Pervasive and Semantic Technologies

1Intelligent Systems Content and Interaction Laboratory, National Technical University of Athens, Iroon Polytexneiou 9, 15780 Zografou, Greece
2Department of Cultural Technology and Communication, University of the Aegean, Mytilene, Lesvos, Greece
3Department of Informatics, Ionian University, Corfu, Greece

Received 17 January 2016; Revised 6 July 2016; Accepted 17 July 2016

Academic Editor: John N. Sahalos

Copyright © 2016 Aggeliki Vlachostergiou et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Ubiquitous Computing is moving interaction away from the human-computer paradigm and towards the creation of smart environments with which users and, from the IoT perspective, things interact. User modeling and adaptation are consistently present, with the human user as a constant, but pervasive interaction introduces the need for context incorporation towards context-aware smart environments. The current article discusses both aspects in the smart home domain: user modeling and adaptation, as well as context awareness and incorporation. Users are modeled as fuzzy personas and these models are semantically related. Context information is collected via sensors and corresponds to various aspects of the pervasive interaction, such as temperature and humidity, but also to smart city sensors and services. This context information enhances the smart home environment via the incorporation of user-defined home rules. Semantic Web technologies support the knowledge representation of this ecosystem, while the overall architecture has been experimentally verified using input from the SmartSantander smart city and applying it to the SandS smart home within the FIRE and FIWARE frameworks.

1. Introduction

Although in their initial definition and development stages pervasive computing practices did not necessarily rely on the use of the Internet, current trends show the emergence of many convergence points with the Internet of Things (IoT) paradigm, where objects are identified as Internet resources and can be accessed and utilized as such. At the same time, the Human-Computer Interaction (HCI) paradigm in the domain of domotics has widened its scope considerably, placing the human inhabitant in a pervasive environment and in continuous interaction with smart objects and appliances. Smart homes that additionally adhere to the IoT approach consider that the data continuously produced by appliances, sensors, and humans can be processed and assessed collaboratively, remotely, and even socially. In the present paper, we build a new knowledge representation framework in which we first place the human user at the center of this interaction. We then propose to break down the multitude of possible user behaviors into a few prototypical user models and to resynthesize them using fuzzy reasoning. Then, we discuss the ubiquity of context information in relation to the user and the difficulty of proposing a universal formalization framework for the open world. We show that, by restricting user-related context to the smart home environment, we can reliably define simple rule structures that correlate specific sensor input data and user actions and that can be used to trigger arbitrary smart home events. This rationale is then evolved into a higher-level semantic representation of the domotic ecosystem in which complex home rules can be defined using Semantic Web technologies.

It is thus observed that a smart home using pervasive and semantic technologies, in which the human user is at the center of the interaction, has to be adaptive (its behavior can change in response to a person's actions and environment) and personalized (its behavior can be tailored to the user's needs and expressed using more advanced and complex home rules). In the case of smart homes, user acceptance has become one of the key factors determining the success of the system. If the home system aims to be universally usable, it will have to accommodate a diverse set of users [1] and adjust to fulfill their needs in case they change. With the aim of helping practitioners improve their user modeling techniques, some researchers have established rules to follow, for example, the set of user modeling guidelines for adaptive interfaces created by [2]. A context-sensitive smart home should react dynamically to accommodate the needs of users, taking into account a wide range of users and context or behavior situations. This user-centric functioning of smart home systems has to be supported by an adequate user model. The intelligence and interface of the system have to be aware of the user's abilities and limitations in order to interact with the person properly. The user model must include information about the person's cognitive level and sensorial and physical disabilities.

To be more precise, a user model [3] is a computational representation of the information existent in a user’s cognitive system, along with the processes acting on this information to produce an observable behavior. User stereotype or persona is a quite common approach in UM due to its correlation with the actors and roles used in software engineering systems and its flexibility, extensibility, reusability, and applicability [4]. The “personas” concept was originally introduced by Cooper in [5], where, according to his definition, “personas are not real people, but they represent them throughout the design process. They are hypothetical archetypes of actual users.” There are two different types of personas: primary personas, which represent the main target group, and secondary personas, which can use the primary personas’ interfaces but have specific additional requirements [6, 7]. Even though personas are fictional characters, they need to be created with rigor and precision; they tell stories about potential users in ways that allow designers to understand them and what they really want. Characteristics like name, age, profession, or any other relevant information are given to each persona in order to make them look more realistic or “alive.” The most accurate way of creating personas, also known as “cast of characters,” is to go through a phase of observation of real users within the environment in which the system will exist and eventually interview them with the intention of finding a common set of motivations, behaviors, and goals among the end-users. However, this method is expensive and time-consuming. A low-cost approach is to create them based on Norman’s assumption personas [8] where designers use their own experience to identify the different user groups. 
Thus, in the same way, in our work, the personas technique fulfills the need of mapping and grouping a huge number of users based on profile data, aims, and behavior, which can be collected during design time and run time, through user studies and actual usage, respectively.

Recently, the emergence of ubiquitous or pervasive computing technologies that offer “anytime, anywhere, anyone” computing by decoupling users from devices has introduced the challenge of context-aware user modeling. So far, most context-aware systems focus on the external context, known as physical context, which refers to context data collected by physical sensors. Thus, they involve context data of the physical environment: distance, temperature, sound, air pressure, lighting levels, and so forth. The external context is important and very useful for context-aware systems, as such systems provide recommended services. However, from a broader scope, context may be considered as information used to characterize the situation of an entity [9]. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including location, time, activities, and the preferences of each entity. A user model is context-aware if it can express aspects of the user's contextual information and subsequently help the system adapt its functionality to the context of use. Many aspects of contextual information used in modeling are discussed in [10, 11]. Nevertheless, to provide personalized services according to the user's preferences, task, and emotional state, cognitive aspects such as situational monitoring are needed; so far, few authors have addressed utilizing the cognitive elements of a user's context and the semantics of the relations between the user and the system's entities. Several researchers have proposed models to capture the internal elements of context. Our proposed model differs from many of the previous approaches, as it focuses on extracting a user's cognitive activities, rather than the user's movement based on the physical environment. Cognitive context information, given through a semantic formalization, is the key to satisfying users by providing personalized context-aware computing services.

The semantic formalization idea is to provide a functional ontological and reasoning platform that offers unified data access, processing, and services on top of the existing IoT-A ubiquitous services and to integrate heterogeneous home sensors and actuators in a uniform way. From an application perspective, a set of basic services encapsulates the sensor and actuator network infrastructures, hiding the underlying layers with the network communication details, the heterogeneous sensor hardware, and the lower-level protocols. A heterogeneous networking environment indeed calls for means to hide the complexity from the end-user as well as from applications by providing intelligent and adaptable connectivity services, thus providing an efficient application development framework. Thus, to face the coexistence of many heterogeneous sets of things and home appliances, a common trend in IoT applications is the adoption of an abstraction layer capable of harmonizing the access to the different devices with a common language and procedure [12]. Our approach is to further encapsulate this abstraction layer into “if this then that” rule sets and then into OWL ontologies that, combined with home rules defined in the Semantic Web Rule Language (SWRL), form the domotic intelligence that continuously adapts the home environment conditions to the user's actions and preferences.
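To make the “if this then that” encapsulation concrete, the following minimal sketch shows how sensor readings can trigger home-rule actions before any ontological layer is involved. All sensor names, thresholds, and action labels here are hypothetical illustrations, not part of the SandS implementation:

```python
# Minimal sketch of an "if this then that" home-rule layer.
# Sensor names, thresholds, and actions are hypothetical.

class HomeRule:
    def __init__(self, condition, action):
        self.condition = condition  # predicate over a sensor-reading dict
        self.action = action        # callable fired when the condition holds

class RuleEngine:
    def __init__(self):
        self.rules = []

    def add_rule(self, condition, action):
        self.rules.append(HomeRule(condition, action))

    def on_readings(self, readings):
        """Evaluate every rule against the latest sensor readings."""
        fired = []
        for rule in self.rules:
            if rule.condition(readings):
                fired.append(rule.action(readings))
        return fired

engine = RuleEngine()
# "If the living-room temperature drops below 19 degrees, turn on the heater."
engine.add_rule(
    lambda r: r.get("living_room_temp", 21.0) < 19.0,
    lambda r: "heater_on",
)
# "If humidity exceeds 70 percent, start the dehumidifier."
engine.add_rule(
    lambda r: r.get("humidity", 50.0) > 70.0,
    lambda r: "dehumidifier_on",
)

print(engine.on_readings({"living_room_temp": 17.5, "humidity": 65.0}))
# -> ['heater_on']
```

In the architecture described above, such flat rules would be the lowest layer; lifting them into OWL/SWRL is what allows richer, context-aware home rules to be expressed over the same triggers.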

The scope of most applications or services with respect to smart homes has so far focused on small regions like laboratories, schools, hospitals, smart rooms, and so forth. Furthermore, algorithmic and strategic models for gaining revenue by using context-aware systems are very few. Additionally, technologies related to context-aware systems are barely standardized. The architecture, the context modeling method, the algorithm, and the network implementation, as well as the users' devices, differ in each project. Moreover, middleware, applications, and services make use of different levels of context and adapt the way they behave based on the current context. Therefore, according to the level and type of context along with the goal of context-aware systems, the context modeling process, the inference algorithm, and the interaction method of personas (humans modeled as personas for computational representation purposes) change. Although the interaction between personas and the cooperation between components of the same architecture have been investigated, standard interaction, cooperation, and operation across different context-aware systems have not been studied. Thus, the novelty of our proposed approach is to provide a common context-aware architecture in which the user (“eahouker” in SandS) is able to control his or her household appliances in a collective way via the SNS (Social Network Service) and in an intelligent way via the adaptive social network intelligence. As our system is human-centered, the UM (user modeling) is related to the user's activity inside the ESN (Eahoukers Social Network), while the context-aware environment refers to the contextual information that characterizes the situation and conditions of the system's entities.

Finally, the modeling of the contextual information is completed through the capture of the semantics of the relationships between the user and the various entities of the ecosystem (other users, appliances, and recipes) to further improve the overall user experience. The semantic description framework of our proposed approach is based on a number of home rules that are defined for a specific household and eahouker. Since the SandS architecture consists of two layers, high and low, respectively, we have on the one hand recipes for common household tasks, produced and exchanged in the SandS Social Network, that are described in near-natural language. On the other hand, we have every user's context, which consists of the actual appliances that the user has in the house with their particular characteristics (type, model, brand, etc.). Finally, to ensure the executability and compatibility of a recipe and to deal with any uncertainty and vagueness in modeling the contextual information, a number of axioms, enforcing constraints on all objects (things, in the IoT paradigm) of the ecosystem, have been introduced in the adopted Web Ontology Language (OWL) representation. To conclude, the experimental results for the above framework are presented, which have been conducted within the “Social & Smart” (SandS) [13] FP7 European Project, which aims to highlight the potential of IoT technologies in a concrete user-centric domestic pervasive environment. Large-scale experiments are planned at SmartSantander [14], a city-scale experimental research facility in support of typical applications and services for a smart city, comprising a very large number of online ambient sensors inside a real-life human environment.

2. User Modeling

2.1. Related Work

As correctly stated in [15], user modeling is the process through which systems gather information and knowledge about users and their individual characteristics. Therefore, a user model is considered a source of information about the user of the system which contains several assumptions about relevant behavior or adaptation data. Approaching user modeling from the HCI perspective, there is the potential that user modeling techniques will improve the collaborative nature of human-computer systems. During the last 20 years, there has been a lot of work done in this area, with authors attempting to cover all possible scenarios through different definitions of users and different user modeling approaches.

Reviewing how the term “user models” has been approached within the HCI literature indicates that users are part of an enlarged communication group in which they change through time, according to the environmental conditions and the experience they gain. Thus, in the end, there are three types of users: “novice,” “intermediate,” and “expert” [15]. Another, more focused work is that of [16], which addresses the specific group of elderly people with none, one, or more than one disability, whose needs and capabilities change as they grow older, underlining the need for more diverse and dynamic computing systems for modeling users. A few years later, in terms of maintaining rich and adaptive output information, ontology-based approaches were used in the design of the Ec(h)o audio reality system for museums to further support experience design and functionality related to museum visits through user models. This work was later extended [17] by incorporating rich contextual information such as social, cultural, historical, and psychological factors related to the user experience.

Within the area of multimedia content, the work presented in [18] is the first to introduce a triple-layered sensation-perception-emotion user model to evaluate the experience in a video scenario. In this work, low-level characteristics such as light variation are combined with the knowing and learning cognition process and with emotions for entertainment product design. In a similar way, in [19], the authors consider four crucial parameters for the interaction between people and technology: the user, the product, the contextual environment, and the tasks that specify the interaction process.

Based on ontology approaches to characterize users' capabilities within adaptive environments, in 2007 the GUMO ontology was proposed [20], which takes into account the emotional state, the personality, the physiological state of the user, and particularly stress. Five years later, Evers and his colleagues [21] implemented an automatic and self-sufficient adaptation interface to measure the user's stress levels. Finally, starting in 2004, research in user modeling began to shift its focus from users' capabilities to users' needs. Such work incorporated the “persona” concept [22], introduced to distinguish between different user groups within the adaptive user interface domain. These “persona” concepts have proved really useful, as a wide range of potential users can be covered by assigning values to characteristics like age, education, profession, family conditions, and so forth. It is thus observed that the approaches described above, from product design to multimedia and user interface adaptation, share the same goal, even though the personal data characteristics collected to improve the system, the user's satisfaction, and product or service usability differ a lot. For a more extended review, the reader is directed to [23].

Typically, a user model represents a collection of personal data associated with a specific user of a system. Following a similar definition, a user model [3] is a computational representation of the information existent in a user’s cognitive system, along with the processes acting on this information to produce an observable behavior. Thus, the act of user modeling identifies the users of the application and their goals for interacting with the application. As a result, a user model is considered to be the foundation of any adaptive changes to the system’s behavior. The main question to answer when dealing with this kind of information is which data is included in the model; as it is expected, the type of data used depends on the purpose of each application and the domain where the latter is applied. A user model can in principle include personal information, such as users’ names and ages, their interests, their skills and knowledge, their goals and plans, their preferences, and their dislikes or data about their behavior and their interactions with the system.

As one may expect, there are also different design patterns for user models, though often a mixture of them is used [24]. In an attempt to describe a system’s users in the most relevant way, one may start from the humble “actor,” which provides a common name for a user type. In use case modeling, actors are people who interact with the system and they are often described using job titles or a common name for the type of user. On the other hand, a “role” names a relationship between a user type and a process or a software tool. A user role generally refers to a user’s responsibility when using an application or participating in a business process. To help us understand the characteristics of our users that might have bearing on our design, we may then construct a “profile,” containing information about the type of user relevant to the application being created. Still, user profiles contain general characteristics about the groups of users. User stereotype or “persona” is a quite common approach in UM due to its correlation with the actors and roles used in software engineering systems and its flexibility, extensibility, reusability, and applicability [4].

A persona is an archetypal user derived from specific profile data to create a representative user containing general characteristics about the users and user groups; it is used as a powerful complement to other usability methods, as it is more tangible, less ambiguous, easier to envision, and easier to empathize with. The use of personas is an increasingly popular way to customize, incorporate, and share the research about users [25]. The personas technique fulfills the need of mapping and grouping a huge number of users based on profile data, aims, and behavior, which can be collected during design time and run time, through user studies and actual usage, respectively.

Personas development supports the design process by identifying and prioritizing the roles and user characteristics of a system’s key audience. In the general case, personas development is initiated by introducing assumptions about user profiles, based on data from initial research steps conducted. Through interviews and observation, researchers expand and validate the profiles by identifying goals, motivations, contextual influences, and typical user stories for each profile. Having such a fictional person (persona) representing a profile grounds the design effort in so-called “real users.” For each persona, the user modeling description typically includes key attributes and user characteristics, such as name, age, and information that distinguishes each persona from others.

2.2. Basic Characteristics

The herein proposed approach for modeling user information, following a personas-based inspiration, is discussed within this subsection. More specifically, according to the notation followed within our system, the so-called “eahouker profile” is a set of properties of the system's users (“eahoukers”) that can be exploited for determining eahoukers with similar characteristics. These properties are stored in a database, that is, the Eahoukers Social Network's Database (EDB), and are continuously updated. The profile contents are rather static in the sense that the information is present in the database when the eahouker joins the SandS system and seldom changes in everyday activities. The interested reader should at this point note that a quasistatic approach would have been more accurate, since a number of user attributes, like, for instance, a user's marital status and the number of children she/he may have, can change over time. Basic information about the user is also included in the profile and consists of gender, age, number of children, social status, house appliances, and geographical position.

In a more formal manner, the profile of an eahouker u, denoted by p(u), contains the following information about the user: p(u) = (g, a, c, t, r, m), where g is the gender of u, a is the age of u, c denotes the number of children of u, t is a string describing the city of u, r is the house role of u, and m corresponds to the marital status of u. With the above user profile definition at hand, the semantic description framework of the eahoukers can be directly interfaced and queried, but more importantly it enables us to define a personas-based user similarity measure. The latter is considered to outperform a traditional rating-based user similarity measure and is described in the following.

As a last point to consider and in order to further illustrate the herein proposed approach, we provide an example of a typical eahouker persona: the Papadopoulos family, composed of four family members, namely, the parents, John and Maria, and their children, Nikos and Ioanna. Their household is located in Athens, Greece, and it contains five smart household appliances:

(1) A Samsung 55′′ TV set, model UN55F6300
(2) An AEG washing machine, model AEG L60260
(3) A Nescafe coffee machine, model KP1006
(4) An LG refrigerator, model LFX31995ST
(5) A GE bread maker, model GE106732

Potential users are of course all four family members; however, as is rather obvious, Nikos and Ioanna are not allowed to interact directly with the above devices, apart from the TV. Following the above notation, the profile of each family member is modeled as a tuple of gender, age, number of children, city, house role, and marital status.
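The profile tuple above can be sketched as a simple data structure. Note that the concrete attribute values below (the ages in particular) are hypothetical, since the article does not list them:

```python
from dataclasses import dataclass

@dataclass
class EahoukerProfile:
    """Profile attributes used by the eahouker user model (sketch)."""
    gender: str          # e.g., "male" / "female"
    age: int
    children: int        # number of children
    city: str
    house_role: str      # e.g., "parent", "child"
    marital_status: str  # e.g., "married", "single"

# Hypothetical values for the Papadopoulos example (ages assumed).
john = EahoukerProfile("male", 45, 2, "Athens", "parent", "married")
maria = EahoukerProfile("female", 42, 2, "Athens", "parent", "married")
nikos = EahoukerProfile("male", 14, 0, "Athens", "child", "single")
ioanna = EahoukerProfile("female", 11, 0, "Athens", "child", "single")

print(john.city == maria.city)  # members of one household share a city
```

A flat record like this is what the EDB stores per eahouker and what the similarity measure of Section 2.4 compares attribute by attribute.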

2.3. Fuzzification

Let us consider a set of eahoukers that interact with information objects and a set of meanings that can be found or referred to in these items. Within our approach, each eahouker's model is described as a set of semantic entities that the eahouker has interest in to varying degrees. This interpretation provides a fairly precise, expressive, and unified representational grounding, in which both user interests and content meaning are represented in the same space, in which they can be conveniently compared [26].

In addition, the use of ontologies for capturing knowledge from a domain of interest has grown significantly lately; thus, we also consider a domain ontology herein. According to one of the core ideas of the Semantic Web, that is, that of sharing, linking, and reusing data from multiple sources, the availability of semantically described data sources and thus the uptake of Semantic Web technologies is important to applications in which rich domain descriptions can play a significant role. Still, considering the inherent complexity of a decent knowledge representation formalism (e.g., Web Ontology Language (OWL) [27]), convincing domain experts and thus potential ontology authors of the usefulness and benefits of using ontologies is one of the major barriers to broader ontology adoption [28].

Efficient user model representation formalism using ontologies [29, 30] presents a number of advantages. In the context of this work, ontologies are suitable for expressing user modeling semantics in a formal, machine-processable representation. As an ontology is considered to be “a formal specification of a shared understanding of a domain,” this formal specification is usually carried out using a subclass hierarchy with relationships among classes, where one can define complex class descriptions (e.g., in Description Logics (DLs) [29] or OWL).

As far as a relevant mathematical notation is concerned, given a universe of eahoukers U, one may identify two distinct sets of concepts, namely, a crisp (i.e., nonfuzzy) set and a fuzzy set. The crisp set of concepts A on U may be described by a membership function μA : U → {0, 1}, whereas the actual crisp set may be defined as A = {s ∈ U : μA(s) = 1}. Quite similarly, a fuzzy set F on U may be described by a membership function μF : U → [0, 1]. We may describe the fuzzy set using the well-known sum notation for fuzzy sets introduced by Miyamoto [31] as

F = w1/s1 + w2/s2 + ⋯ + wn/sn, (1)

where n = |U| is the well-known cardinality of the crisp set U and wi = μF(si), or more simply F(si), is the membership degree of concept si. Consequently, (1) for a concept si can be written equivalently as

F(si) = wi. (2)
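Computationally, a fuzzy set in this sum notation is just a mapping from concepts to membership degrees in [0, 1]; a minimal sketch (the concept names are illustrative, not taken from the article):

```python
# Fuzzy set F = 0.9/washing + 0.4/cooking + 0.1/entertainment,
# represented as a dict from concept to membership degree in [0, 1].
F = {"washing": 0.9, "cooking": 0.4, "entertainment": 0.1}

def membership(fuzzy_set, concept):
    """mu_F(concept); concepts outside the support have degree 0."""
    return fuzzy_set.get(concept, 0.0)

print(membership(F, "washing"))    # 0.9
print(membership(F, "gardening"))  # 0.0, outside the support
```

Only concepts with nonzero degree need to be stored, which is exactly what the sum notation writes down.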

Apart from the above described set of concepts, we need to introduce a set depicting potential relations between the aforementioned concepts. Thus, we introduce R = {R1, R2, …, Rm} to be the crisp set of fuzzy relations among concepts, defined and discussed within Section 2.4.

2.4. Fuzzy Personas Similarity

In order to define, extract, and use a set of concepts, we rely on the semantics of their fuzzy semantic relations. As discussed in Section 2.3, a fuzzy binary relation R on a set of concepts S is defined as a function R : S × S → [0, 1]. The inverse relation of relation R, denoted R⁻¹, is defined as R⁻¹(x, y) = R(y, x), following the prefix notation for fuzzy relations. The definitions of the intersection, union, and sup-t composition of any two fuzzy relations P and Q on the same set of concepts S are given by the equations

(P ∩ Q)(x, y) = t(P(x, y), Q(x, y)),
(P ∪ Q)(x, y) = u(P(x, y), Q(x, y)),
(P ∘ Q)(x, y) = sup_{z ∈ S} t(P(x, z), Q(z, y)),

where t and u are a fuzzy t-norm and a fuzzy t-conorm, respectively. The standard t-norm and t-conorm are the min and max functions, respectively, but others may be used if considered more appropriate. The operation of the union of fuzzy relations can be generalized to m relations. If P1, …, Pm are fuzzy relations in S × S, then their union P is a relation defined in S × S such that, for all (x, y) ∈ S × S, P(x, y) = u(P1(x, y), …, Pm(x, y)). A transitive closure of a relation is the smallest transitive relation that contains the original relation and has the fewest possible members. In general, the closure of a relation is the smallest extension of the relation that has a certain specific property such as reflexivity, symmetry, or transitivity, as the latter are defined in [32]. The sup-t transitive closure Tr_t(R) of a fuzzy relation R is formally given by the equation

Tr_t(R) = ⋃_{n=1}^{∞} R^(n),

where R^(1) = R and R^(n) = R ∘ R^(n−1).
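Under the standard t-norm (min) and t-conorm (max), the sup-t transitive closure can be computed by iterating the composition until a fixed point is reached. A sketch over a tiny concept set, with relations stored as nested dicts (the concepts "a", "b", "c" and the degrees are illustrative):

```python
def compose(P, Q, concepts):
    """sup-min composition: (P o Q)(x, y) = max_z min(P(x,z), Q(z,y))."""
    return {
        x: {y: max(min(P[x][z], Q[z][y]) for z in concepts)
            for y in concepts}
        for x in concepts
    }

def transitive_closure(R, concepts):
    """Smallest sup-min transitive fuzzy relation containing R."""
    closure = {x: dict(R[x]) for x in concepts}
    while True:
        step = compose(closure, closure, concepts)
        merged = {
            x: {y: max(closure[x][y], step[x][y]) for y in concepts}
            for x in concepts
        }
        if merged == closure:          # fixed point: nothing new added
            return closure
        closure = merged

concepts = ["a", "b", "c"]
# R(a,b) = 0.8 and R(b,c) = 0.6; every other pair has degree 0.
R = {x: {y: 0.0 for y in concepts} for x in concepts}
R["a"]["b"], R["b"]["c"] = 0.8, 0.6

T = transitive_closure(R, concepts)
print(T["a"]["c"])  # 0.6 = min(0.8, 0.6), added by transitivity
```

The closure adds the chained pair (a, c) with degree min(0.8, 0.6), exactly the sup-min composition of the two original links.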

Based on the relations Ri ∈ R, we construct a combined relation T:

T = Tr_t(⋃_{i=1}^{m} Ri^{p_i}), (6)

where the value of the exponent p_i is determined by the semantics of each relation Ri used in the construction of T. The latter may take one of three values, namely, p_i = 1, if the semantics of Ri imply it should be considered as is; p_i = −1, if the semantics of Ri imply its inverse should be considered; and p_i = 0, if the semantics of Ri do not allow its participation in the construction of the combined relation T. The transitive closure in (6) is required in order for T to be taxonomic, as the union of transitive relations is not necessarily transitive, independently of the fuzzy t-conorm used. In the above context, a fuzzy semantic relation defines, for each element, the fuzzy set of its ancestors and its descendants. For instance, if our knowledge states that “LG refrigerator” is produced before “Samsung TV” and “Samsung TV” is produced before “Nescafe coffee machine,” it does not necessarily also state that “LG refrigerator” is produced before “Nescafe coffee machine.” A transitive closure would correct this inconsistency.

Last but not least, a point to consider in our approach is the actual selection of meaningful relations for the production of the combined relation. The latter has been generated with the help of fuzzy taxonomic relations, whose semantics are derived primarily from both the MPEG-7 standard and the specific user requirements. The utilized relations are summarized within Table 1. This approach is ideal for the user modeling interpretation followed herein because, when dealing with generic user information, focus is given to the semantics of high-level abstract concepts.

Table 1: Semantic relations used for the generation of the combined relation.

It is worth noticing that all relations depicted within Table 1 are traditionally defined as crisp relations. However, in this work, we consider them to be fuzzy, where fuzziness has the following meaning: a high degree for a pair of concepts implies, for instance, that the meaning of the first concept approaches the meaning of the second, while, as the degree decreases, the meaning of the first concept becomes narrower than the meaning of the second. A similar meaning is given to the fuzziness of the rest of the semantic relations of Table 1 as well. Based on the fuzzy roles and semantic interpretations of these relations, it is easy to see that the combined relation joins them in a straightforward and meaningful way, utilizing inverse functionality where it is semantically appropriate. More specifically, in our implementation the combined relation utilizes a subset of the relations of Table 1.

The combined relation is of great importance, as it allows us to define, extract, and use contextual aspects of a set of concepts. All relations used for its generation are partial taxonomic relations, thus abandoning properties like synonymity. Still, this does not entail that their union is also antisymmetric. Quite the contrary, it may vary from being a partial taxonomic relation to being an equivalence relation. This is an important observation, as true semantic relations also fit in this range (total symmetricity as well as total antisymmetricity often has to be abandoned when modeling real-life relationships). Still, the taxonomic assumption and the semantics of the used individual relations, as well as our experiments, indicate that the combined relation is “almost” antisymmetric and we may refer to it as (“almost”) taxonomic. Considering the semantics of the relation, it is easy to realize that, when the concepts in a set are highly related to a common meaning, the context will have high degrees of membership for the concepts that represent this common meaning. Understanding the great importance of the latter observation, we plan to integrate such contextual aspects of user models in our future work.

As observed in Figure 1, the concepts household appliance and eahouker are the antecedents of the concepts household and appliance manufacturer in the combined relation, whereas the concept eahouker is the only antecedent of the concept recipe.

Figure 1: Concepts and relations example.

So far, and in compliance with the notion introduced in [33], the herein introduced fuzzy ontology contains both concepts and relations and may be formalized as the pair of the crisp set of concepts described by the ontology and the crisp set of fuzzy semantic relations amongst these concepts.

In order to provide a measure for the evaluation of similarity between two eahoukers’ profiles, we first need to establish an evaluation of similarity for each profile component. In the following, we define a set of similarity functions, one for each attribute of the eahouker’s profile.

User Profile Similarity Functions

(i) Two eahoukers are considered identical with respect to gender, city, role in the house, and marital status if these attributes are the same. This property is expressed through four functions, one per attribute, that are collectively represented in the user profile similarity functions.

(ii) Two eahoukers are considered identical with respect to age if their difference of age is less than 5 years. Indeed, their behavior and habits inside the house can be considered the same even if they have a slight difference of age. For example, two people, one at the age of 30 and one at the age of 32, would probably have the same behaviors, according to their age. On the other hand, a person at the age of 30 would have quite different behaviors from a person at the age of 50 or 60. This property is expressed by the age similarity function.

(iii) Finally, two eahoukers are considered identical with respect to children if they have more or less the same number of children. For example, a parent with 3 children would have similar behaviors and demands to a parent with 4 children. This property is expressed by the corresponding function in the user profile similarity functions.

Having introduced the functions for the evaluation of profile similarity, we can define a function that uses these evaluations to provide the level of similarity of two eahoukers. The eahouker profile similarity function is then defined as the average of the per-attribute similarity evaluations over the two profiles, where the normalizing factor is the cardinality of the attribute set (which equals six in the herein presented use case example).
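A minimal Python sketch of these similarity functions and their averaged combination follows; the attribute names and function names are assumptions for illustration, and the "more or less the same number of children" criterion is interpreted here as a difference of at most one.

```python
# Sketch of the eahouker profile similarity: per-attribute similarity
# functions averaged over the six profile attributes. Attribute names
# are hypothetical; thresholds follow the text.

def sim_exact(a, b):
    """Identity check for gender, city, house role, marital status."""
    return 1.0 if a == b else 0.0

def sim_age(a, b):
    """Ages within 5 years of each other are considered identical."""
    return 1.0 if abs(a - b) < 5 else 0.0

def sim_children(a, b):
    """'More or less the same' number of children (assumed: within 1)."""
    return 1.0 if abs(a - b) <= 1 else 0.0

ATTRIBUTE_SIMS = {
    "gender": sim_exact,
    "city": sim_exact,
    "house_role": sim_exact,
    "marital_status": sim_exact,
    "age": sim_age,
    "children": sim_children,
}

def profile_similarity(p1, p2):
    """Average of the per-attribute similarities (denominator = 6 here)."""
    return sum(f(p1[k], p2[k]) for k, f in ATTRIBUTE_SIMS.items()) / len(ATTRIBUTE_SIMS)
```

With this definition, two profiles differing only by two years of age and one child evaluate to a similarity of 1.0, matching the per-attribute identity criteria above.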

3. Context

3.1. Related Work

Filling a home with sensors and controlling devices by a computer are nowadays not only possible, but also common. Off-the-shelf sensors are available that localize movement in the home, provide readings of light and temperature levels, and monitor the usage of doors, phones, and appliances. Small inexpensive sensors are attached to objects not only to register their presence but also to record histories of recent social interactions [34].

Social interaction is an aspect of our daily life: social signals have long been recognized as important for establishing relationships, but only with the introduction of sensed environments have researchers become able to monitor these signals. Hence, it is possible to look at socialization within smart homes and cities (such as entertaining guests, interacting with residents, or making phone calls) and examine the correlation between socialization parameters and productivity, behavioral patterns, or even health. These results will help researchers not just to understand social interactions but also to design products and behavioral interventions that promote more social interactions.

Proliferation of sensors in the home results in large amounts of raw data that must be analyzed to extract relevant information. Most smart home data from environmental sensors can be processed with a small computer. Once data is gathered from wearable sensors and smartphones (largely accelerometers and gyroscopes, sometimes adding camera, microphone, and physiological data), the amount of data may get too large to handle on a single computer, and cloud computing might be more appropriate. Cloud computing is also useful if data are collected for an entire community of smart homes to analyze community-wide trends and behaviors.

Concurrently collecting and handling the enormous amounts of ubiquitous data, information, and knowledge of different formats within SmartSantander [14] is a hard task. According to the level of abstraction of context-aware systems in HCI, context is divided into low-level context and high-level context. The raw data of low-level context are usually gathered from different physical sensors, and their type, format, and abstraction level differ from sensor to sensor. Devices and physical sensors of context-aware systems use various scales and units, and low-level context comprises different elements. Context-aware systems store data, information, and knowledge of different relationships, formats, and abstraction levels in the context base. Furthermore, context-aware systems collect context history, storing sensor data over time in order to offer proactive services. The context history stores huge amounts of data on location, temperature, lighting level, task, utilized devices, selected services, and so forth. To quickly provide suitable services to users, context-aware systems should manage the variety, diversity, and sheer volume of context. However, previous research has suggested only conceptual approaches to this problem. Therefore, our methodology ensures semantic interoperability by bridging the gap between the expressively rich natural language vocabulary used in the recipes and the low-level machine-readable instructions with very precise and restricted semantic content.

3.2. Context-Aware HCI

In everyday social contextual situations, humans are able to, in real time, perceive, combine, process, respond to, and evaluate a multitude of information, including the semantic meaning of the content of an interaction, nonverbal information such as facial and body gestures, subtle vocal cues, and context, that is, events happening in the environment. Multimodal cues unfold, sometimes asynchronously, and continuously express the interlocutors’ underlying affective and cognitive states, which evolve through time and are often influenced by environmental and social contextual parameters that entail ambiguities. These ambiguities with respect to contextual aspects range from the multimodal nature of emotional expressions in different situational interactional patterns [35], the ongoing task [36], the natural expressiveness of the individual, and his/her personality [37] to the intra- and interpersonal relational context [38, 39]. Additionally, in human communication, the literature indicates that people evaluate situations based on contextual information such as past visual information [40], general situational understanding, past verbal information [41], cultural background [42], gender of the participants, knowledge of the phenomenon that is taking place [36], discourse and social situations [43], and personality traits under varied situational context [44]. Without context, even humans may misinterpret observed affective cues such as facial, vocal, or gestural behavior.

Since human behavior, in terms of the decision-making process, is inherently a multidisciplinary problem involving research fields such as psychology, linguistics, computer vision, and machine learning, there is no doubt that progress in machine understanding of human interactive behavior and personality is contingent on progress in each of those fields.

Attempting to provide a formal definition for context-aware applications and Human-Computer Interaction (HCI) systems, a starting point would be to investigate how the term context has been defined. The word “context” has a multitude of meanings even within the field of Computer Science (CS). To illustrate this, we group the different definitions of the term context in the area of artificial intelligence, natural language processing, image analysis, and mobile computing, where every discipline has its very own understanding of what context is.

According to the first work which introduced the term context awareness in CS [45], the important aspects of context are as follows: who you are with, when, where you are, and what resources are nearby. Thus, context-aware applications look at the who, where, when, and what (the user is doing) entities and use this information to determine why the situation is occurring. In a similar definition, Brown et al. [36] define context as location, identities of the people around the user, the time of day, season, temperature, and so forth. Other approaches such as that of Ryan et al. [46] include context as the user’s location, environment, identity, and time while others have simply provided synonyms for context, for example, referring to context as the environment [47] or situation [48]. However, to characterize a situation, the categories provided by [45] have been extended to include activity and timing of the HCI. Reference [49] views context as the state of the application’s surroundings and [50] defines it to be the application’s setting. Reference [51] included the entire environment by defining context as the aspects of the current situation. However, even though there has been a development in the area, both definitions by example and those which use synonyms for context are extremely difficult to apply in practice. For a more extended overview on context awareness, the reader is referred to [52].

Following the broader approach to context of [52], context can be formalized as a combination of four contextual types, identity, time, location, and activity, which are the primary context types for characterizing the situation of a particular entity and also act as indices to other sources of contextual information.

With an entity’s location, we can determine what other objects or people are near the entity and what activity is occurring near the entity. From these examples, it should be evident that the primary pieces of context for one entity can be used as indices to find secondary context (e.g., geolocalization) for that same entity as well as primary context for other related entities (e.g., proximity to other homes). This context model was later enlarged [9] to include an additional context type called Relations, to define dependencies between different entities (information specific to the social network itself). Relations describe whether the entity is a part of a greater whole (multiparty interactions within Brown’s family) and how it can be used in the functionality of some other entities. Recently, the term Relations has been used to refer to the relation between the individual and the social context in terms of perceived involvement [35] and to the changes detected in a group’s involvement in a multiparty interaction [43].

Identity specifies personal user information like gender, age, children, social and marital status, and so forth. Time, in addition to its intuitive meaning, can utilize overlay models to depict events like working hours, holidays, days of week, and so on. Location refers either to geographical location or to symbolic location (e.g., at home, in the shop, or at work). Activity relates to what is occurring in the situation. It concerns both the activity of the entity itself and the activities in the surroundings of the entity.
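The primary context types described above, together with the Relations extension, can be captured in a simple data structure; the following Python sketch is illustrative only, and its field types and example values are assumptions rather than a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    """Primary context types for an entity, per the model discussed
    above; Relations was added as a fifth type in later work [9].
    Field types and values are illustrative assumptions."""
    identity: dict          # personal user info, e.g. {"gender": "F", "age": 30}
    time: str               # intuitive or overlay, e.g. "working_hours", "holiday"
    location: str           # geographical or symbolic, e.g. "at_home", "at_work"
    activity: str           # what is occurring in the situation
    relations: list = field(default_factory=list)  # dependencies on other entities
```

As the text notes, these primary types also act as indices: for example, a Context's location can be used to look up which other entities (people, homes) are nearby.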

For real-world context-aware HCI computing frameworks, context is defined as any information that can be used to characterize the situation that is relevant to the interaction between the users and the system [45]. Thus, this definition approaches better the understanding of human affect signals. An even more suitable definition is the one that summarizes the key aspects of context with respect to the human interaction behavior (who is involved (e.g., dyadic/triadic interactions among persons), what is communicated (e.g., “recipes” to perform a specific task), how the information is communicated (the person’s cues), why, that is, in which context, the information is passed on, where the proactive user is, what his current task is, and which (re)action should be taken to participate actively in content creation [53]).

All these context-aware systems that model the relevant context parameters of the environment depend on the application domain and hence face difficulties in modeling context in a domain-independent way; they also lack models against which they can be compared. Setting aside the fact that domains such as context-aware computing, pervasive environments, and Ubiquitous Computing entail similarities with respect to the necessity of managing context knowledge, the concrete applications and approaches differ. In the area of pervasive computing, the work of [54] refers to context in environments taking into account the user’s activity, the devices being used, the available resources, the relationships between people, and the available communication channels. To allow developers to consider richer information, such as activities and abstract knowledge about the current global context, and to model specific knowledge of the current subdomain, an ontology-based approach has been proposed [55] in which context information is modeled in two separate layers (high and low level, resp.). Modeling high-level information allows performing deeper computations that take into account behavioral characteristics, trend information, and so forth. On the other hand, modeling low-level information, such as location, time, and environmental conditions, serves the system’s final goal, which is the adaptation of the user interface. Besides, several approaches consider user-related characteristics to fulfill their purposes. For example, Schmidt and his colleagues [56] also identify social environments as relevant for context modeling. Another interesting point highlighted in this work is the user’s tasks. This topic has also been studied in the past [52, 54, 57], where the aspect of activities has been used to enrich contextual information about the user. Nevertheless, as occurs with user information, sometimes the collected data might lead to misunderstandings.
In [58], an ontology-based process is used to address the ambiguity and uncertainty of user data by modeling them within a smart environment. A related work that deals with the uncertainty of context data in intelligent applications [59] extends the OWL web ontology language with fuzzy set theory, to further capture, represent, and reason with such information. For a more extended review on representing and reasoning with uncertainty and vagueness in ontologies for the Semantic Web, the reader is referred to [60].

Unfortunately, approaches to understanding human behavioral data are usually context independent, even though human behavioral signals are easily misinterpreted if the information about the situation in which the shown behavioral cues have been displayed is not taken into account. Thus, to date, proposed methodologies have approached one or more of the above presented contextual aspects either separately or in groups of two or three, using information extracted from multimodal input streams [37]. Overall, further research is needed to approach this contextual information in a continuous way.

3.3. Ubiquitous Contextual Information

An issue related to the use of data collected continuously [61] is that both psychologists and engineers tend to acquire their data in laboratories and artificial settings [62], to explicitly elicit the specific phenomena they want to observe. However, this is likely to oversimplify the situation and to artificially improve the performance of the automatic approaches. Over the last 20 years, well-established datasets and benchmarks have been developed for automatic affect analysis. Nevertheless, there are some important open problems with respect to the analysis of facial behavior, such as (a) estimation of affect in continuous dimensional space (e.g., valence and arousal) in videos displaying spontaneous facial behavior and (b) detection of the activated facial muscles. That is, the majority of the publicly available corpora for the above tasks contain samples that have been captured in controlled recording conditions and/or under a specific social contextual environment. Arguably, in order to make further progress in automatic analysis of affective behavior, datasets that have been captured in the wild and in various contextual social environments have to be developed.

Recently, many face analysis research works have gradually shifted to facial images captured in the wild with the introduction of Labelled Faces in the Wild (LFW) [63], FDDB for face detection [64], and the 300-W series of databases for facial tracking [65, 66]. The progress we are currently witnessing in the above face analysis problems is largely attributed to the collection and annotation of “in-the-wild” datasets. The contributions of the already developed datasets and benchmarks for analysis of facial expression in the wild have been demonstrated during the challenges in Representation Learning (ICML 2013) [67], in the series of Emotion Recognition in the Wild challenges (EmotiW 2013, 2014, 2015 [61, 68–70], and 2016), and in the recently organized workshop on context-based affect recognition (CBAR 2016). For a more extended overview of datasets collected in the wild, the reader is referred to [71].

Aligned with the aforementioned trend of collecting contextual data in nonstandard situations (in the wild), there also has been much work in creating large-scale semantic ontologies and datasets. Typically, such vocabularies are defined according to utility for retrieval, coverage, diversity, availability, and reusability. Moreover, semantic concepts such as objects, locations, and activities in visual data can be easily automatically detected [72]. Recent approaches have also turned towards semantic concept-level analysis approaches.

Nevertheless, not all of them carry rich metainformation, such as the entities involved, the situational context, the demographic aspects, their social status, their cultural background, and their dialect; thus, it is not certain whether such tasks can be used to make reliable generalizations about natural conversation [73]. For these reasons, researchers have started to record smart homes or work situations to achieve even higher levels of socially naturalistic data. Representative examples are the collections of natural telephonic data gathered by recording large numbers of real phone conversations, as in the Switchboard corpus [74], audio corpora of nontelephonic spoken interaction, and even collections of everyday interactions obtained by having subjects wear a microphone during their daily lives for extended periods, thanks to the great advancements in the area of pervasive computing [75–77].

However, the main criticism of that type of data is that it does not address all aspects of social interactions. Consequently, the existing resources have to be revisited and repurposed every time new research questions arise. The above reasons explain the quality of the data that we have so far, where the context is relatively stable (meetings, radio programs, laboratory sessions, etc.) and the variability related to such a factor is limited. Thus, there is a need for mechanisms to collect feedback from users in the wild (such as software systems on smartphones that run continuously in the background to monitor the user’s mood and emotional states), to further establish large-scale spontaneous affect databases efficiently and at very low cost [77]. This need has begun to be fulfilled by two major advances: the diffusion of mobile devices equipped with multiple sensors [78] and the advent of Big Data [79].

Mobile devices can collect a large amount of contextual information (geographic position, proximity to other people, audio environment, etc.) for extended periods of time. Big Data analytics can make sense of that data and provide information about context and its effect on behavior. Thus, it is possible to overcome limitations such as the collection of affect-related data in a large population as well as having involved participants in the experiment for too long. With the advent of powerful smart devices with built-in microphones [80], Bluetooth patterns, cameras, usage log, and so on, it is possible for researchers to identify new ways for capturing spontaneous face expression databases. Unfortunately, these studies have been carried out mainly in a social context (person-person communication) and only through acted scenarios. Further studies are needed in a variety of contexts to establish a better understanding of this relationship and identify whether and how these models could be generalized over different types of tactile behavior, activity, context, and personality traits. However, most of the approaches concentrate on offline analysis and no results that take context into account that could clarify any ambiguities in the interpretation of social cues have been presented so far.

Due to the huge growth of collecting wearable data in the wild and the corresponding access to more contextual information, affect analysis has recently started to move into the realm of Big Data. For example, in terms of physiological data, having enough participants who own and wear sensors at all times and are willing to allow contextual data to be collected from their phones might allow a large collection of physiological signals with high-confidence affect labels. Data could then be labelled with both self-report and contextual information, such as time of day, weather, activity, and who the subject was with, so as to make an assessment of affective state. Consequently, with sufficient ground truth datasets, it will likely be possible to develop better contextually aware algorithms for individuals and similar groups, even if the sensor data are noisier. These algorithms will enable HCI in a private, personal, and continuous way and allow our sensors both to know us better and to communicate more effectively on our behalf with the world around us. Given that personalization is desirable, that is, the system adapts itself to the user by regarding their behavior, emotions, and intentions, this leads to technologies with companion-like characteristics [81–83] that can interact with a certain user more efficiently, independent of the contextual social situation and the environment.

Another important issue is the interplay among personality, the situational context, and the contextualized behavior. The problem of context has been controversial in the HCI community [37, 84–86]. The ultimate goal is to have context-aware technology that is capable of working and interacting differently depending on the context (e.g., a phone should not ring during a meeting). The key issue is how to encode and represent context, even in the case of identifying a set of features of the surrounding environment, location, identities of the people around the user, and so forth [36]. Furthermore, of equal importance is the understanding of how people achieve and maintain a mutual understanding of the context according to their dependency [9], how social relations are structured in small [87] and large groups (friends, colleagues, families, students, etc.), and finally how the changes in individuals’ behaviors [43, 88] and attitudes occur due to their membership in social and situational settings.

So far, the issue is still open for technologies dealing with social and psychological phenomena like personality [89]. Besides the difficulties in representing context, current approaches for human behavior understanding (facial expression analysis, speaker diarization, action recognition, etc.) are still sensitive to factors like illumination changes, environmental noise, or sensor placement. It is not clear whether personality should be considered as a stable construct or as a process that involves changes and evolution over time, as this decision depends on how it is measured and aggregated [90]. In this view, personality ranges from highly stable and trait-like to highly variable and adaptive to context.

Particularly, data from smart wearable devices can indicate personality traits using machine learning approaches to extract useful features, providing fruitful pathways to study relationships between users and personalities, by building social networks with the rich contextual information available in applications usage, call, and SMS logs. “Designing” smart homes in terms of enhancing the comfort is also challenging for mobile emotion detection. The friendly design of an intelligent ecosystem responsive to our needs that can make users feel more comfortable for affective feedback collection and may change user’s social behavior is very promising to boost the affect detection performance and explore the possibility of further HCI techniques.

Moreover, it is necessary to discover new emotional features, which may exist in application logs, smart device usage patterns, locations, order histories, and so forth. There is a great need to thoroughly monitor and investigate the new personality and behavioral features. In other words, establishing new HCI databases in terms of new social features could be a very significant research topic and could bring “ambient intelligence” in the home closer to reality.

Gradually, the new multidisciplinary area that lies at the crossroads between Human-Computer Interaction (HCI), social sciences, linguistics, psychology, and context awareness is distinguishing itself as a separate field. It is thus possible to better recognize, interpret, and process “recipes,” to incorporate contextual information, and finally to understand the related ethical issues about the creation of homes that can enhance shelter. For applications in fields such as real-time HCI and big social data analysis [91], deep natural language understanding is not strictly required; a sense of the semantics associated with text and some extra information such as social parameters associated with such semantics are often sufficient to quickly perform tasks such as capturing and modeling social behavior.

Semantic context concept-based approaches [9295] aim to grasp the conceptual and affective information associated with natural language semantic rules. Additionally, concept-based approaches can analyze multiword expressions that do not explicitly convey emotion but are related to concepts that do. Rather than gathering isolated rules about a whole item (e.g., iPhone 5), users are generally more interested in comparing different products according to their specific features (e.g., iPhone 5’s versus Galaxy S3’s touchscreen), or even subfeatures (e.g., fragility of iPhone 5’s versus Galaxy S3’s touchscreen). This taken-for-granted information referring to obvious things people normally know and usually leave unstated/uncommented, in particular, is necessary to properly deconstruct natural language text into rules, for example, to appraise the concept small room as negative for a hotel review and small queue as positive for a post office or the concept “go read the book” as positive for a book review but negative for a movie review.

Context-level analysis also ensures that all gathered rules are relevant for the specific user. In the era of social context (where intelligent systems have access to a great deal of personal identities and social dependencies), such rules will be tailored to each user’s preferences and intent. Irrelevant opinions will be accordingly filtered with respect to their source (e.g., a relevant circle of friends or users with similar interests) and intent.

3.4. Pervasive Context Awareness Environments
3.4.1. Context Sources

Context data in a smart pervasive environment such as a smart home can come from various sources, as follows:

(i) In-place sensors, such as temperature, humidity, luminosity, noise, or human presence sensors located in the various rooms or outside, in the vicinity of the house

(ii) Power and water consumption meters of the house

(iii) Smart city sensors providing additional information such as pollution levels, temperature, and total electrical power consumption of the city, optionally with geospatial information

3.4.2. Home Rules

Users sometimes need their appliances to perform a specific action in their house taking into account the context information. For example, they may not want to wash clothes when it is raining or the temperature in the city is quite low. For this reason, actions are defined for the smart home system; these actions are called home rules. Home rules determine whether the appliances should be switched on or off.

At a higher level, the structure of the home rules can be expressed as “if it is valid, do/do not do that.” Figure 2 illustrates an example.

Figure 2: Home rules structure.

The “if it is valid, do/do not do that” structure consists of three parts:

(i) “If it is valid,” a trigger that consists of the following:
  (a) An input type and the value of the input, defined by pervasive and context information such as that described in Section 3.4
  (b) An operator
  (c) A reference value, which is input by the user (e.g., 20 degrees Celsius)

(ii) “Do/do not,” what to do when the rule is triggered, where any smart home system action/reaction can be inserted

(iii) “That,” an optional parameter (e.g., lower the house blinds by that percentage)

Moreover, more complex rules, such as keeping the temperature within a specific interval of values, are expressed as multiple rules that are logically joined together.
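The three-part rule structure described above can be sketched in Python as follows; the class, input names, and action strings are illustrative assumptions, not the actual SandS implementation.

```python
import operator

# Comparison operators a rule's trigger may use.
OPERATORS = {"<": operator.lt, ">": operator.gt, "==": operator.eq,
             "<=": operator.le, ">=": operator.ge}

class HomeRule:
    """A home rule: a trigger (input type, operator, reference value),
    an action to perform or suppress, and an optional parameter."""

    def __init__(self, input_type, op, reference, action, parameter=None):
        self.input_type = input_type
        self.op = OPERATORS[op]
        self.reference = reference
        self.action = action
        self.parameter = parameter

    def evaluate(self, context):
        """Return (action, parameter) if the trigger holds for the
        current context readings; otherwise return None."""
        value = context.get(self.input_type)
        if value is not None and self.op(value, self.reference):
            return (self.action, self.parameter)
        return None

# Example: do not wash clothes when the city temperature is quite low.
# Input name, threshold, and action string are hypothetical.
rule = HomeRule("city_temperature", "<", 5, "suppress_washing_machine")
```

More complex conditions, such as a temperature interval, would be expressed by logically joining several such rules, as the text notes.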

4. Semantic Representation

In this section, semantic technologies are used in order to represent the knowledge of an ecosystem. In general terms, an ecosystem with respect to the Internet of Things (IoT), which is often considered the next step in Ubiquitous Computing [96], is a particular IoT implementation (a smart grid, a smart home, a smart city, or personalized wearables) focusing on standards, protocols, or abilities from the technical perspective while at the same time analyzing the relationships of the users from a social perspective. According to the formal definition given in [97], an ecosystem consists of a set of solutions that enable, support, and automate the activities and transactions by the actors in the associated social environment. Furthermore, it enables relationships among the sensors, the actuators (complex devices), and their users. The relationships are based on a common platform and operate through the exchange of information, resources, and artifacts [98]. In our work, we merge two areas of IoT ecosystem implementation: home automation systems (smart homes) and IoT-based solutions for smart cities. In particular, our ecosystem consists of cities, each comprising a number of houses. In every city and in every house, a number of sensors are located which provide data about the environmental context, for example, humidity and temperature. They are also able to provide more specific information, such as noise and pollution levels or information about human presence inside the house. All these data are received from the sensors and stored in a database.

In this ecosystem, we can define a number of rules, which we will call home rules, for example, defining under which conditions house appliances should be switched on or off. Another more concrete example would be “do not operate the air-condition when the outside temperature is high.”

The OWL 2 Web Ontology Language (OWL 2) [99], an ontology language for the Semantic Web with formally defined meaning, was adopted for the semantic representation of our ecosystem. OWL 2 ontologies provide classes, properties, individuals, and data values and they are stored as Semantic Web entities. The following sections (from Section 4.1 to Section 4.4) explain in more detail how the ecosystem is represented by our ontology. The ontology was created using the open-source Protégé 4.2 platform [100].

4.1. Ontology Hierarchy

Figure 3(c) illustrates the ontology’s hierarchy. The ontology’s classes describe different aspects of the ecosystem, as follows:

(i) The Appliances class, which contains all the different types of the ecosystem’s appliances, such as (a) the refrigerator, (b) the washing machine, (c) the air-conditioner, and (d) the television
(ii) The Location class, which contains both the house and the city
(iii) The Sensor class, which contains the individuals of all the existing sensors
(iv) The Person class, which contains all the individuals representing users
(v) The Gender, HouseRole, and SocialStatus classes, which implement the user model for the different types of gender, house roles, and social status
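For illustration only, the class hierarchy above can be mirrored as a small Python class tree. This is a sketch, not the actual OWL encoding, which was produced in Protégé:

```python
# Illustrative mirror of the ontology hierarchy (names follow Section 4.1);
# the real hierarchy is an OWL 2 ontology, not Python classes.
class Appliance: pass
class Refrigerator(Appliance): pass
class WashingMachine(Appliance): pass
class AirConditioner(Appliance): pass
class Television(Appliance): pass

class Location: pass
class House(Location): pass
class City(Location): pass

# Subclass relations correspond to OWL subClassOf axioms.
print(issubclass(WashingMachine, Appliance), issubclass(House, Location))
# -> True True
```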

Figure 3: An example of the ontology properties, the hierarchical structure, and the individuals used for our experiments.
4.2. Properties

The ontology also comprises a series of properties, both object properties and data properties. Object properties (also called predicates) relate two objects (classes), of which one is the domain and the other is the range. The object properties of this ecosystem’s ontology are mainly used to relate the sensors with a specific location and to relate the inhabitants of the house with the appliances. Some of the ontology’s object properties are described below:

(i) hasGender, which relates a Person class with a Gender class according to Section 4.1
(ii) hasSensor, which relates a Sensor class with a specific location
(iii) hasHouseRole, which relates a Person class with a house role
(iv) isLocatedIn, which relates an appliance with a house
(v) livesIn, which relates a person with a house
(vi) builtIn, which relates a house with a city

On the other hand, data properties are similar to object properties, with the sole difference that their ranges are typed literals (data values). In our ontology, they relate the actual sensor values with a sensor, the power on or off status with the appliances, and the user properties with numerical features. Some of them are described below:

(i) hasNoise, which relates a sensor with the actual captured noise value, for example, 40 dB
(ii) hasTemperature, which relates a sensor with the actual captured temperature value, for example, 25°C
(iii) isOn, which has a true value if the appliance is turned on and is false otherwise
(iv) numberOfChildren, which relates a person with the number of his/her children, which must be a nonnegative integer

The object and data properties of the ontology appear in Figure 3.
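The distinction between the two kinds of properties can be sketched with an RDF-style triple set, where object properties link two individuals and data properties link an individual to a typed literal. The store below is illustrative, not the actual OWL serialization:

```python
# Illustrative triple store: object properties link two individuals,
# data properties link an individual to a typed literal. The individual
# names (house1, sensor3, ...) are hypothetical.
triples = set()

def add_object_property(subject, prop, obj):
    triples.add((subject, prop, obj))      # e.g., ("house1", "builtIn", "Santander")

def add_data_property(subject, prop, value):
    triples.add((subject, prop, value))    # e.g., ("sensor3", "hasTemperature", 25.0)

add_object_property("person1", "livesIn", "house1")
add_object_property("house1", "builtIn", "Santander")
add_data_property("sensor3", "hasTemperature", 25.0)

# Query: all (sensor, value) pairs carrying a hasTemperature literal.
temps = {(s, v) for (s, p, v) in triples if p == "hasTemperature"}
print(temps)  # -> {('sensor3', 25.0)}
```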

4.3. Individuals

In all, the ecosystem contains a large number of appliances, sensors, and people. Each appliance, sensor, and person is represented in the ontology as an individual of the Appliances, Sensor, or Person class, respectively. Figure 3(d) illustrates a small set of individuals contained in the ontology.

4.4. Rules and Consistency Check

In the current section, we provide a novel semantic representation of the home rules of the ecosystem. These home rules are expressed using the Semantic Web Rule Language (SWRL) [101]. SWRL extends OWL DL with the full power of rules, at the price of decidability and practical implementations; decidability can be regained by restricting the form of admissible rules, typically by imposing a suitable safety condition. Rules have the form of an implication between an antecedent (body) and a consequent (head), read as follows: “whenever the conditions specified in the antecedent hold, the conditions specified in the consequent must also hold.” A critical property of our ontology is that it should always be consistent, a condition that is verified with the use of the Pellet reasoner [102]. Thus, whenever a home rule is violated, an inconsistency must be detected. Taking this into account, each home rule’s violation condition is encoded as the antecedent of an SWRL rule, so that whenever the antecedent holds, the inconsistency-producing consequent must also hold.

For this reason, a data restriction has to be created in the Appliances class. A data property called “restriction” is created; its domain is an appliance and its range is boolean, and the Appliances class is restricted so that no individual of the class may carry the restriction property after reasoning. Then, every home rule is transformed to an SWRL rule and, if the left side of the rule is satisfied, it leads to the creation of the “restriction” property for an appliance. This makes our ontology inconsistent; in other words, the appliance is restricted from starting to work. So, every time a database record changes or a new one is added, the ontology individuals are populated with the new values by querying the database. Then, using the Pellet reasoner, the system checks for the possible existence of any inconsistency. Finally, the inconsistency is handled by forcing the appliance to switch off or switch on. Some indicative home rules transformed to SWRL rules are presented below.

(1) Do not operate any washing machine when the external temperature is greater than 26°C:

City(?city) ∧ House(?house) ∧ Sensor(?sensor) ∧ WashingMachine(?wm) ∧ builtIn(?house, ?city) ∧ hasSensor(?city, ?sensor) ∧ isLocatedIn(?wm, ?house) ∧ hasTemperature(?sensor, ?temperature) ∧ isOn(?wm, true) ∧ greaterThan(?temperature, 26) => restriction(?wm, true)

(2) The washing machine must not be operating if a person is in the house and there is too much noise:

House(?house) ∧ Person(?per) ∧ Sensor(?sensor) ∧ WashingMachine(?wm) ∧ hasSensor(?house, ?sensor) ∧ isLocatedIn(?wm, ?house) ∧ personFound(?sensor, ?per) ∧ hasNoise(?sensor, ?noise) ∧ isOn(?wm, true) ∧ greaterThan(?noise, 40) => restriction(?wm, true)

(3) If the local time is between 10 p.m. and 8 a.m., the television must not be switched on:

Television(?tv) ∧ House(?house) ∧ Sensor(?sensor) ∧ isOn(?tv, true) ∧ isLocatedIn(?tv, ?house) ∧ hasSensor(?house, ?sensor) ∧ hasHour(?sensor, ?hour) ∧ greaterThan(?hour, 22) => restriction(?tv, true)

Television(?tv) ∧ House(?house) ∧ Sensor(?sensor) ∧ isOn(?tv, true) ∧ isLocatedIn(?tv, ?house) ∧ hasSensor(?house, ?sensor) ∧ hasHour(?sensor, ?hour) ∧ lessThan(?hour, 8) => restriction(?tv, true)

As is clear, the SWRL built-ins, such as “equal,” “lessThan,” “greaterThan,” “lessThanOrEqual,” and “greaterThanOrEqual,” are used for comparisons. With these built-ins, it is possible to create home rules that compare environmental values, such as the temperature, the humidity, and the noise level, or more elaborate boolean values, such as the detection of human presence in a house. Additionally, rules can be used in conjunction with each other to express more elaborate behavior, as in the third home rule.
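As a procedural illustration of home rule (1), the antecedent can be checked directly over plain records. In the actual system this check is performed by the Pellet reasoner through the inconsistency mechanism described above, so the function below is only a sketch with hypothetical field names:

```python
# Procedural sketch of home rule (1): in the real system, Pellet flags an
# inconsistency when the antecedent holds; here we simply test the antecedent.
# Field names (builtIn, isOn, ...) mirror the SWRL atoms but the record
# structure is hypothetical.
def rule1_restricts(washing_machine, house, city, sensors):
    """True when the washing machine should be restricted (switched off)."""
    if house["builtIn"] != city["name"]:          # builtIn(?house, ?city)
        return False
    for sensor in sensors:
        if (sensor["location"] == city["name"]    # hasSensor(?city, ?sensor)
                and washing_machine["isOn"]       # isOn(?wm, true)
                and washing_machine["isLocatedIn"] == house["name"]
                and sensor["hasTemperature"] > 26):  # greaterThan(?temperature, 26)
            return True
    return False


city = {"name": "Santander"}
house = {"name": "house1", "builtIn": "Santander"}
wm = {"isOn": True, "isLocatedIn": "house1"}
sensors = [{"location": "Santander", "hasTemperature": 28.5}]
print(rule1_restricts(wm, house, city, sensors))  # the rule fires: True
```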

5. Experiments

In this section, we present the rudiments of SandS, our smart home environment, situated in a smart city, which we define as a city in which information and communication technologies are merged with traditional infrastructures, coordinated, and integrated using IoT technologies. These technologies support the functions of the city and also provide ways in which citizen groups can interact, augmenting their understanding of the city and providing essential engagement in the design and planning process. We first sketch our vision by defining three goals: feeding the home rules with the signals provided by the smart city system, which represents a simple interoperability test; introducing limitations on the use of the appliances related to environmental conditions, such as the power or water consumption reckoned by the city environment sensors or the short-term weather forecast, which represents a logical test on the DI scheduler and consistency checker; and managing alarm messages sent by the municipality. We then present how our data have been collected within a social network in order to create and exchange content in the form of so-called recipes and to develop collective intelligence that adapts its operation through appropriate feedback provided by the user. Additionally, we approach SandS from the user’s perspective and illustrate how users and their relationships can be modeled through a number of fuzzy stereotypical profiles (user-centered experimental validation). Furthermore, the context modeling in our smart home paradigm is examined through appropriate representation of context cues in the overall interaction (pervasive experimental validation).

5.1. Data Collection

In this subsection, we present our approach towards the vision of a smart home that supports inhabitants’ high-level goals, emphasizing that our data were collected in the wild, that is, captured under real-world and unconstrained conditions. Thus, our smart home technologies must interoperate with IoT technologies and react to nonstandard situations. More precisely, data were collected by the SandS consortium and partners during a small-scale mockup using “in-house” and “out-of-house” sensors, such as mobility sensors, traffic and parking sensors, environmental sensors, and park and garden irrigation sensors. The context information collected through the sensors is sent periodically to the ecosystem, and these values are stored in a specific table of a database, overwriting the previously stored record.

5.1.1. User Models

Regarding the experimental dataset to validate the formation of personas, data was collected by the SandS consortium and partners during a small-scale mockup. SandS also opened up its user base towards the FIRE and related communities such as the Open Living Labs. The dissemination call for user participation pointed to a user registration form, illustrated in Figure 4.

Figure 4: SandS user registration form.

This registration form comprised several user-related fields: first name, last name, date of birth, senior/junior, gender, single/married, and city.

5.1.2. Smart City Sensors

In large-scale tests of the unified user in a smart home in a smart city, SandS will use context sensor data gathered at SmartSantander. SmartSantander [14], born as a European project, is turning into a living experimental laboratory as part of the EU’s Future Internet initiative. Major companies involved in the project include Telefonica Digital, the company’s R&D wing, along with other smaller suppliers as well as utility and service companies. In terms of application areas, five main areas have initially been targeted in the trials so far: traffic management and parking, street lighting, waste disposal management, pollution monitoring, and parks and garden management. To this aim, the city of Santander, Spain, has been equipped with a large number of sensors (Figure 5) used to collect a huge amount of information. We can divide the sensors into several categories based on the data they collect:

(i) Mobility sensors: placed on buses, taxis, and police cars, they are in charge of measuring the main parameters associated with the vehicle (GPS position, altitude, speed, course, and odometer)
(ii) Traffic and parking sensors: buried under the asphalt, they sense the corresponding traffic parameters (traffic volume, road occupancy, vehicle speed, queue length, and free parking availability)
(iii) Environmental sensors: their task is to collect data concerning temperature, noise, light, humidity, wind speed, and the detection of specific gases such as CO, PM10, O3, and NO2
(iv) Park and garden irrigation sensors: in order to control and make the irrigation of certain parks and gardens more efficient, these sensors register information about wind speed, quantity of rain, soil temperature, soil humidity, atmospheric pressure, solar radiation, air humidity and temperature, and water consumption

Figure 5: SmartSantander sensors locations.

At the moment, the data collected by these sensors are stored in the USN/IDAS SmartSantander cloud storage platform. This platform stores in its databases all the observations and measurements gathered by the sensors. It contains live and historical data. These databases are being migrated to the Fi-lab platform as an instance of the FIWARE [103] ecosystem.

In very minimal terms, our experiments manage the integration of the two systems in only one direction: by exploiting SmartSantander data in favor of SandS, with special regard to the empowerment of the home rules used by the domestic infrastructure (DI), which is the core of the proposed system and handles the home rules and the appliances, manages the users, and updates the database with any new value gathered from a sensor. Hence, the contact between the two systems happens via the home rules, which may be fed by the smart city sensor data either in their current version or in an enlarged one capable of profiting from the data. Available sensor data related to the SandS domain include the following: temperature, noise, light, humidity, and quantity of rain. Other data, for instance, those concerning traffic, could be considered in a more long-term planning and scheduling approach.

Finally, our goal would be to stress the following case studies:

(1) Feeding the home rules with the signals provided by the smart city system. This represents a simple interoperability test
(2) Introducing limitations on the use of the appliances related to environmental conditions, such as the power or water consumption reckoned by the city environment sensors and the short-term weather forecast. This represents a logical test on the DI scheduler and consistency checker
(3) Managing alarm messages sent by the municipality. This will represent a stress test for the entire system

5.1.3. Sensor Integration

In the ecosystem, there are sensors both in every house and throughout the city. These sensors periodically send information about the temperature, the luminosity, and the humidity to the ecosystem. These values are stored in a specific table of a database, overwriting the previously stored record. The in-house sensors send information about the humidity in the house, the inside temperature, the human presence, the power and water consumption of all the appliances, the location where the sensor is installed (e.g., the kitchen, the bathroom, or the bedroom), the noise, and the local timestamp. Moreover, the city sensor values are collected at a specific moment using the FIWARE Ops tools [104]. The sensor data are periodically sent to the system in JSON format over an HTTP connection; the JSON documents are then parsed and the information is stored in the database. The city sensors, like those of SmartSantander [14], send information about the noise in the city, the temperature, and the exact location where they are installed. With all these pieces of sensor information in a database, the system can at any time identify the exact conditions inside and outside the house where the sensors are installed, simply by performing a query on the database. Due to the structure of the home rules, the ecosystem can determine in a very short time whether a home rule is triggered and whether an appliance in a house should be switched on or off.
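The ingestion step can be sketched as follows, assuming a JSON payload carrying a sensor identifier (the field names, including sensor_id, are hypothetical): each new payload overwrites the previously stored record for that sensor, mirroring the overwrite behavior described above.

```python
# Sketch of the periodic sensor update: each JSON payload overwrites the
# previously stored record for that sensor. Field names are illustrative,
# not the actual SandS message schema.
import json

latest = {}  # sensor_id -> most recent reading

def ingest(payload: str) -> None:
    reading = json.loads(payload)
    latest[reading["sensor_id"]] = reading  # overwrite the previous record

ingest('{"sensor_id": "kitchen-1", "temperature": 22.5, "humidity": 48}')
ingest('{"sensor_id": "kitchen-1", "temperature": 23.1, "humidity": 47}')
print(latest["kitchen-1"]["temperature"])  # only the newest value is kept: 23.1
```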

5.2. User-Centered Experimental Validation

A user can get the best recipe for him/her by comparing his/her request for a recipe with other users’ requests using the fuzzy similarity method presented in Section 2.4. The fuzzy similarity method takes into account both the similarity of the users (e.g., their gender, age, and house role) and the similarity between the request parameters. A request parameter for a bread-baking recipe might be the crustiness, the amount of water that should be used for the dough, or the type of flour that is going to be used. Figure 6 illustrates a form where a user can insert his/her database ID and some request parameters in order to get the similarity with other requests. Upon clicking the submit button, a table with all the requests of other users, ranked by their total similarity, is returned, as illustrated in Figure 7. The first column shows the total similarity, taking into account both the user similarity and the similarity of the request parameters. The sixth column shows only the user similarity and the fourth only the request parameter similarity. The fifth column shows the satisfaction of the users that have used this recipe in the past: one means “fully satisfied,” whereas zero means “not satisfied at all.”
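Assuming, for illustration, an equal weighting between the user similarity and the request parameter similarity (the actual fuzzy combination is the one described in Section 2.4), the ranking of requests by total similarity can be sketched as:

```python
# Illustrative ranking by total similarity; the equal weighting (w = 0.5)
# and the sample values are assumptions, not the SandS fuzzy combination.
def total_similarity(user_sim: float, param_sim: float, w: float = 0.5) -> float:
    return w * user_sim + (1 - w) * param_sim

requests = [
    {"user": "u2", "user_sim": 0.9, "param_sim": 0.6, "satisfaction": 1.0},
    {"user": "u3", "user_sim": 0.4, "param_sim": 0.95, "satisfaction": 0.7},
]
# Rank other users' requests by total similarity, as in the resulting table.
ranked = sorted(requests,
                key=lambda r: total_similarity(r["user_sim"], r["param_sim"]),
                reverse=True)
print([r["user"] for r in ranked])  # -> ['u2', 'u3']
```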

Figure 6: SandS recipe request similarity form.
Figure 7: SandS recipe request similarity resulting table.
5.3. Pervasive Experimental Validation

The system periodically queries the database and, more specifically, the collection where the sensor values are stored. Then, using the home rules that have been added to the ecosystem, it checks whether the consistency of the ontology still holds for the new sensor values. If any of the home rules is triggered, an inconsistency has been detected by the system for a specific appliance. This appliance is switched off until none of the home rules related to it are inconsistent. As mentioned previously, a home rule can be triggered both by in-house sensor value changes and by value changes detected by the SmartSantander sensors. To make this clear, an example is presented. Figure 8(a) illustrates the noise levels in the house, which follow a Gaussian curve. These values are received by the in-house noise detection sensors and stored in the database. In addition, Figure 8(b) presents the human presence in the house over the same period of time: a value equal to one means that a human is in the house during this specific period, whereas a value equal to zero means that no one is in the house. Considering that the house is part of the ecosystem where the home rules presented in Section 4.4 are defined, the second home rule is triggered. At the beginning, the washing machine is switched on, executing the clothes washing program, until the noise volume rises above 40 dB at 10:00. The appliance is then switched off until 18:00, when the noise levels fall below 40 dB. If a washing program was interrupted during its execution, the program either restarts from the beginning or continues from the step at which it was stopped, depending on the user’s choices. If an inconsistency is detected but the washing machine is not executing any laundry program, nor is it scheduled to start one immediately, the washing machine is simply switched off, without affecting any scheduled process.
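The behavior of the second home rule in this example can be sketched as a simple threshold check; the timeline values below are illustrative and assume a person is present throughout:

```python
# Sketch of the noise-based rule (2): the washing machine is off while a
# person is present and the noise exceeds 40 dB. Timestamps and noise
# values are illustrative, not the recorded sensor data of Figure 8.
NOISE_LIMIT_DB = 40.0

def washing_machine_allowed(noise_db: float, person_present: bool) -> bool:
    return not (person_present and noise_db > NOISE_LIMIT_DB)

# (hour, noise in dB, person present)
timeline = [(9, 35.0, True), (10, 44.0, True), (14, 50.0, True), (18, 38.0, True)]
states = {hour: washing_machine_allowed(n, p) for hour, n, p in timeline}
print(states)  # switched off at 10:00 and 14:00, allowed again at 18:00
```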

Figure 8: In-house sensor values of the noise and the human presence.

Moreover, if the system receives from a city sensor, such as the SmartSantander sensors, temperature values greater than 26°C, the first home rule is triggered because an inconsistency has been detected; as a result, the house’s washing machine is switched off. The temperature values of such an occasion are presented in Figure 9. Between 11:00 and 15:00, a city sensor records temperature values higher than 26°C. Consequently, an inconsistency is detected, which forces the house’s washing machine to switch off. Finally, after 15:00, when the temperature is again lower than 26°C, the washing machine is switched on again.

Figure 9: SmartSantander sensor values of the temperature for a specific period in a day.

6. Conclusions and Future Work

In this paper, we illustrated how the emerging semantics of smart home environments can be captured through a novel formalism and how expert knowledge can be used to ensure semantic interoperability. On the one hand, user stereotypes or personas provide flexibility, extensibility, reusability, and applicability; on the other hand, knowledge management is incorporated as an efficient user and context model representation formalism. In addition, this formal, machine-processable representation is used to define, extract, and use a set of concepts and their fuzzy semantic relations. This user modeling approach is embedded in a rich smart home context representation, which abstracts raw sensor data to a high-level semantic representation language in which complex home rules can be defined.

Future work includes the further incorporation of user, usage, and context information through a unified semantic representation, driving an adaptation mechanism that aims to provide a personalized service and optimize the user experience. Among the aspects of the architecture that will be stressed through experimental validation are the computational cost and the scaling of SandS to a wider user group. Based on the SandS architecture, the cloud infrastructure ensures the optimal handling of the computational load, since the intermediate processes are not computationally demanding. On the other hand, issues that may arise from the scaling of the platform application are part of the experimental validation, since the load is directly correlated with the user activity. The large-scale validation at SmartSantander will provide us with useful insights about the latter.

Competing Interests

The authors declare that they have no competing interests.


Acknowledgments

This work was supported by the European Commission under Contract FP7-317947, FIRE project, “Social & Smart.”


References

  1. B. Shneiderman, Universal Usability: Pushing Human-Computer Interaction Research to Empower Every Citizen, 1999.
  2. B. Kules, “User modeling for adaptive and adaptable software systems,” in Proceedings of the ACM Conference on Universal Usability (CUU '00), pp. 16–17, November 2000.
  3. A. Kobsa, “Generic user modeling systems,” User Modeling and User-Adapted Interaction, vol. 11, no. 1-2, pp. 49–63, 2001.
  4. L. Nielsen, Personas. The Encyclopedia of Human-Computer Interaction, The Interaction Design Foundation, Aarhus, Denmark, 2nd edition, 2013.
  5. A. Cooper, The Inmates Are Running the Asylum: Why High Tech Products Drive Us Crazy and How to Restore the Sanity, 2004.
  6. D. Saffer, Designing for Interaction: Creating Smart Applications and Clever Devices, New Riders Press, 2007.
  7. J. Pruitt and T. Adlin, The Persona Lifecycle: Keeping People in Mind Throughout the Design Process, 2006.
  8. D. Norman, “Ad-hoc personas empathetic focus,” 2004.
  9. A. Zimmermann, A. Lorenz, and R. Oppermann, “An operational definition of context,” in Modeling and Using Context, B. Kokinov, D. C. Richardson, T. R. Roth-Berghofer, and L. Vieu, Eds., vol. 4635 of Lecture Notes in Computer Science, pp. 558–571, Springer, Berlin, Germany, 2007.
  10. M. Baldauf, S. Dustdar, and F. Rosenberg, “A survey on context-aware systems,” International Journal of Ad Hoc and Ubiquitous Computing, vol. 2, no. 4, pp. 263–277, 2007.
  11. C. Bettini, O. Brdiczka, K. Henricksen et al., “A survey of context modelling and reasoning techniques,” Pervasive and Mobile Computing, vol. 6, no. 2, pp. 161–180, 2010.
  12. M. Eisenhauer, P. Rosengren, and P. Antolin, “A development platform for integrating wireless devices and sensors into ambient intelligence systems,” in Proceedings of the 6th Annual IEEE Communications Society Conference on Sensor, Mesh and Ad Hoc Communications and Networks Workshops, pp. 1–3, IEEE, Rome, Italy, June 2009.
  13. SandS, “Sands-project,” 2015.
  14. SmartSantander, 2015.
  15. G. Fischer, “User modeling in human-computer interaction,” User Modeling and User-Adapted Interaction, vol. 11, no. 1-2, pp. 65–86, 2001.
  16. P. Gregor, A. F. Newell, and M. Zajicek, “Designing for dynamic diversity: interfaces for older people,” in Proceedings of the 5th International Conference on Assistive Technologies (ASSETS '02), pp. 151–156, ACM, Edinburgh, UK, July 2002.
  17. M. Hatala and R. Wakkary, “Ontology-based user modeling in an augmented audio reality system for museums,” User Modelling and User-Adapted Interaction, vol. 15, no. 3-4, pp. 339–380, 2005.
  18. F. Pereira, “A triple user characterization model for video adaptation and quality of experience evaluation,” in Proceedings of the IEEE 7th Workshop on Multimedia Signal Processing (MMSP '05), pp. 1–4, IEEE, Shanghai, China, November 2005.
  19. U. Persad, P. Langdon, and J. Clarkson, “Characterising user capabilities to support inclusive design evaluation,” Universal Access in the Information Society, vol. 6, no. 2, pp. 119–135, 2007.
  20. D. Heckmann, E. Schwarzkopf, J. Mori, D. Dengler, and A. Kroner, “The user model and context ontology GUMO revisited for future web 2.0 extensions,” in Contexts and Ontologies Representation and Reasoning, p. 42, 2007.
  21. C. Evers, R. Kniewel, K. Geihs, and L. Schmidt, “Achieving user participation for adaptive applications,” in Ubiquitous Computing and Ambient Intelligence, pp. 200–207, Springer, Berlin, Germany, 2012.
  22. R. Casas, R. Blasco Marín, A. Robinet et al., “User modelling in ambient intelligence for elderly and disabled people,” in Computers Helping People with Special Needs: 11th International Conference, ICCHP 2008, Linz, Austria, July 9–11, 2008. Proceedings, vol. 5105 of Lecture Notes in Computer Science, pp. 114–122, Springer, Berlin, Germany, 2008.
  23. E. Castillejo, A. Almeida, D. López-De-Ipiña, and L. Chen, “Modeling users, context and devices for ambient assisted living environments,” Sensors, vol. 14, no. 3, pp. 5354–5391, 2014.
  24. A. Johnson and N. Taatgen, User Modeling, Handbook of Human Factors in Web Design, Lawrence Erlbaum Associates, 2005.
  25. P. T. Aquino Jr. and L. V. L. Filgueiras, “User modeling with personas,” in Proceedings of the Latin American Conference on Human-Computer Interaction (CLIHC '05), pp. 277–282, October 2005.
  26. P. Castells, M. Fernández, D. Vallet, P. Mylonas, and Y. Avrithis, “Self-tuning personalized information retrieval in an ontology-based framework,” in On the Move to Meaningful Internet Systems 2005: OTM 2005 Workshops, pp. 977–986, Springer, Berlin, Germany, 2005.
  27. D. L. McGuinness and F. Van Harmelen, “OWL web ontology language overview,” W3C Recommendation, 2004.
  28. M. Hepp, “Possible ontologies: how reality constrains the development of relevant ontologies,” IEEE Internet Computing, vol. 11, no. 1, pp. 90–96, 2007.
  29. F. Baader, The Description Logic Handbook: Theory, Implementation, and Applications, Cambridge University Press, Cambridge, UK, 2003.
  30. T. R. Gruber, “A translation approach to portable ontology specifications,” Knowledge Acquisition, vol. 5, no. 2, pp. 199–220, 1993.
  31. S. Miyamoto, Fuzzy Sets in Information Retrieval and Cluster Analysis, vol. 4 of Theory and Decision Library, Series D: System Theory, Knowledge Engineering and Problem Solving, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1990.
  32. G. J. Klir and B. Yuan, Fuzzy Sets and Fuzzy Logic, Prentice Hall, NJ, USA, 1995.
  33. S. Calegari and D. Ciucci, “Fuzzy ontology, fuzzy description logics and fuzzy-owl,” in Applications of Fuzzy Sets Theory, pp. 118–126, Springer, Berlin, Germany, 2007.
  34. E. Y. Song and K. B. Lee, “Service-oriented sensor data interoperability for IEEE 1451 smart transducers,” in Proceedings of the IEEE Intrumentation and Measurement Technology Conference (I2MTC '09), pp. 1043–1048, IEEE, Singapore, May 2009. View at Publisher · View at Google Scholar · View at Scopus
  35. F. Bonin, R. Bock, and N. Campbell, “How do we react to context? Annotation of individual and group engagement in a video corpus,” in Proceedings of the International Conference on Privacy, Security, Risk and Trust (PASSAT '12) and International Confernece on Social Computing (SocialCom '12), pp. 899–903, IEEE, Amsterdam, The Netherlands, September 2012. View at Publisher · View at Google Scholar
  36. P. J. Brown, J. D. Bovey, and X. Chen, “Context-aware applications: from the laboratory to the marketplace,” IEEE Personal Communications, vol. 4, no. 5, pp. 58–64, 1997. View at Publisher · View at Google Scholar · View at Scopus
  37. Z. Zeng, M. Pantic, G. I. Roisman, and T. S. Huang, “A survey of affect recognition methods: audio, visual, and spontaneous expressions,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 1, pp. 39–58, 2009. View at Publisher · View at Google Scholar · View at Scopus
  38. R. Böck, S. Glüge, A. Wendemuth et al., “Intraindividual and interindividual multimodal emotion analyses in human-machine-interaction,” in Proceedings of the IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA '12), pp. 59–64, IEEE, New Orleans, La, USA, March 2012. View at Publisher · View at Google Scholar · View at Scopus
  39. Z. Hammal and J. F. Cohn, “Intra-and interpersonal functions of head motion in emotion communication,” in Proceedings of the 2014 Workshop on Roadmapping the Future of Multimodal Interaction Research including Business Opportunities and Challenges (RFMIR '14), pp. 19–22, ACM, Istanbul, Turkey, 2014.
  40. R. El Kaliouby, P. Robinson, and S. Keates, “Temporal context and the recognition of emotion from facial expression,” in Proceedings of the HCI International Conference, pp. 631–635, American Psychological Association, 2003.
  41. H. R. Knudsen and L. H. Muzekari, “The effects of verbal statements of context on facial expressions of emotion,” Journal of Nonverbal Behavior, vol. 7, no. 4, pp. 202–212, 1983. View at Publisher · View at Google Scholar · View at Scopus
  42. T. Masuda, P. C. Ellsworth, B. Mesquita, J. Leu, S. Tanida, and E. Van de Veerdonk, “Placing the face in context: cultural differences in the perception of facial emotion,” Journal of Personality and Social Psychology, vol. 94, no. 3, pp. 365–381, 2008. View at Publisher · View at Google Scholar · View at Scopus
  43. R. Böck, S. Glüge, I. Siegert, A. Wendemuth, and S. Glüge, “Annotation and classification of changes of involvement in group conversation,” in Proceedings of the Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII '13), pp. 803–808, Geneva, Switzerland, September 2013. View at Publisher · View at Google Scholar
  44. J. Joshi, H. Gunes, and R. Goecke, “Automatic prediction of perceived traits using visual cues under varied situational context,” in Proceedings of the 22nd International Conference on Pattern Recognition (ICPR '14), pp. 2855–2860, IEEE, Stockholm, Sweden, August 2014.
  45. B. Schilit, N. Adams, and R. Want, “Context-aware computing applications,” in Proceedings of the Workshop on Mobile Computing Systems and Applications, pp. 85–90, December 1994.
  46. N. S. Ryan, J. Pascoe, and D. R. Morse, “Enhanced reality fieldwork: the context-aware archaeological assistant,” in Computer Applications in Archaeology, British Archaeological Reports, V. Gaffney, M. van Leusen, and S. Exxon, Eds., pp. 182–196, Tempus Reparatum, 1998.
  47. P. J. Brown, “The stick-e document: a framework for creating context-aware applications,” Electronic Publishing: Origination, Dissemination and Design (EP-odd), vol. 8, no. 2-3, pp. 259–272, 1995.
  48. D. Franklin and J. Flaschbart, “All gadget and no representation makes Jack a dull environment,” in Proceedings of the AAAI Spring Symposium on Intelligent Environments, pp. 155–160, Menlo Park, Calif, USA, 1998.
  49. A. Ward, A. Jones, and A. Hopper, “A new location technique for the active office,” IEEE Personal Communications, vol. 4, no. 5, pp. 42–47, 1997.
  50. T. Rodden, K. Cheverst, K. Davies, and A. Dix, “Exploiting context in HCI design for mobile systems,” in Proceedings of the Workshop on Human Computer Interaction with Mobile Devices, pp. 21–22, Citeseer, Glasgow, UK, May 1998.
  51. R. Hull, P. Neaves, and J. Bedford-Roberts, “Towards situated computing,” in Proceedings of the 1st International Symposium on Wearable Computers, pp. 146–153, October 1997.
  52. G. D. Abowd, A. K. Dey, P. J. Brown, N. Davies, M. Smith, and P. Steggles, “Towards a better understanding of context and context-awareness,” in Handheld and Ubiquitous Computing, H.-W. Gellersen, Ed., vol. 1707 of Lecture Notes in Computer Science, pp. 304–307, Springer, Berlin, Germany, 1999.
  53. Z. Duric, W. D. Gray, R. Heishman et al., “Integrating perceptual and cognitive modeling for adaptive and intelligent human-computer interaction,” Proceedings of the IEEE, vol. 90, no. 7, pp. 1272–1289, 2002.
  54. K. Henricksen, J. Indulska, and A. Rakotonirainy, “Modeling context information in pervasive computing systems,” in Pervasive Computing, pp. 167–180, Springer, Berlin, Germany, 2002.
  55. T. Gu, X. H. Wang, H. K. Pung, and D. Q. Zhang, “An ontology-based context model in intelligent environments,” in Proceedings of the Communication Networks and Distributed Systems Modeling and Simulation Conference, pp. 270–275, 2004.
  56. A. Schmidt, M. Beigl, and H.-W. Gellersen, “There is more to context than location,” Computers & Graphics, vol. 23, no. 6, pp. 893–901, 1999.
  57. T. Gu, H. K. Pung, and D. Q. Zhang, “Toward an OSGi-based infrastructure for context-aware applications,” IEEE Pervasive Computing, vol. 3, no. 4, pp. 66–74, 2004.
  58. A. Almeida and D. López-de-Ipiña, “Assessing ambiguity of context data in intelligent environments: towards a more reliable context managing system,” Sensors, vol. 12, no. 4, pp. 4934–4951, 2012.
  59. G. Stoilos, G. Stamou, V. Tzouvaras, J. Pan, and I. Horrocks, “Fuzzy OWL: uncertainty and the Semantic Web,” in Proceedings of the International Workshop on OWL: Experiences and Directions, Galway, Ireland, November 2005.
  60. T. Lukasiewicz and U. Straccia, “Managing uncertainty and vagueness in description logics for the semantic web,” Web Semantics: Science, Services and Agents on the World Wide Web, vol. 6, no. 4, pp. 291–308, 2008.
  61. A. Dhall, R. Goecke, S. Lucey, and T. Gedeon, “Collecting large, richly annotated facial-expression databases from movies,” IEEE MultiMedia, vol. 19, no. 3, pp. 34–41, 2012.
  62. J. R. Curhan and A. Pentland, “Thin slices of negotiation: predicting outcomes from conversational dynamics within the first 5 minutes,” Journal of Applied Psychology, vol. 92, no. 3, pp. 802–811, 2007.
  63. G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller, “Labeled faces in the wild: a database for studying face recognition in unconstrained environments,” Tech. Rep. 07-49, University of Massachusetts, Amherst, Mass, USA, 2007.
  64. V. Jain and E. G. Learned-Miller, “FDDB: a benchmark for face detection in unconstrained settings,” UMass Amherst Technical Report, 2010.
  65. C. Sagonas, G. Tzimiropoulos, S. Zafeiriou, and M. Pantic, “300 Faces in-the-wild challenge: the first facial landmark localization challenge,” in Proceedings of the 14th IEEE International Conference on Computer Vision Workshops (ICCVW '13), pp. 397–403, Sydney, Australia, December 2013.
  66. J. Shen, S. Zafeiriou, G. G. Chrysos, J. Kossaifi, G. Tzimiropoulos, and M. Pantic, “The first facial landmark tracking in-the-wild challenge: benchmark and results,” in Proceedings of the IEEE International Conference on Computer Vision Workshop (ICCVW '15), pp. 1003–1011, Santiago, Chile, December 2015.
  67. I. J. Goodfellow, D. Erhan, P. Luc Carrier et al., “Challenges in representation learning: a report on three machine learning contests,” Neural Networks, vol. 64, pp. 59–63, 2015.
  68. A. Dhall, R. Goecke, J. Joshi, M. Wagner, and T. Gedeon, “Emotion recognition in the wild challenge 2013,” in Proceedings of the 15th ACM International Conference on Multimodal Interaction (ICMI '13), pp. 509–516, ACM, Sydney, Australia, December 2013.
  69. A. Dhall, R. Goecke, J. Joshi, K. Sikka, and T. Gedeon, “Emotion recognition in the wild challenge 2014: baseline, data and protocol,” in Proceedings of the 16th ACM International Conference on Multimodal Interaction (ICMI '14), pp. 461–466, ACM, Istanbul, Turkey, November 2014.
  70. A. Dhall, O. Ramana Murthy, R. Goecke, J. Joshi, and T. Gedeon, “Video and image based emotion recognition challenges in the wild: EmotiW 2015,” in Proceedings of the 2015 ACM International Conference on Multimodal Interaction (ICMI '15), pp. 423–426, ACM, Seattle, Wash, USA, 2015.
  71. S. Zafeiriou, A. Papaioannou, I. Kotsia, M. A. Nicolaou, and G. Zhao, “Facial affect ‘in-the-wild’: a survey and a new database,” in Proceedings of the International Conference on Computer Vision and Pattern Recognition (CVPR '16) Workshops, Affect “In-the-Wild” Workshop, June 2016.
  72. P. Over, G. M. Awad, J. Fiscus et al., TRECVID 2010—An Overview of the Goals, Tasks, Data, Evaluation Mechanisms, and Metrics, 2011.
  73. J. L. Lemke, “Analyzing verbal data: principles, methods, and problems,” in Second International Handbook of Science Education, vol. 24, pp. 1471–1484, Springer, Dordrecht, Netherlands, 2012.
  74. J. J. Godfrey, E. C. Holliman, and J. McDaniel, “SWITCHBOARD: telephone speech corpus for research and development,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '92), vol. 1, pp. 517–520, San Francisco, Calif, USA, March 1992.
  75. D. O. Olguín, P. A. Gloor, and A. S. Pentland, “Capturing individual and group behavior with wearable sensors,” in Proceedings of the AAAI Spring Symposium on Human Behavior Modeling, vol. 9, Stanford, Calif, USA, March 2009.
  76. J. Staiano, B. Lepri, N. Aharony, F. Pianesi, N. Sebe, and A. Pentland, “Friends don't lie: inferring personality traits from social network structure,” in Proceedings of the 14th International Conference on Ubiquitous Computing (UbiComp '12), pp. 321–330, Pittsburgh, Pa, USA, September 2012.
  77. A. Vinciarelli and A. S. Pentland, “New social signals in a new interaction world: the next frontier for social signal processing,” IEEE Systems, Man, and Cybernetics Magazine, vol. 1, no. 2, pp. 10–17, 2015.
  78. M. Raento, A. Oulasvirta, and N. Eagle, “Smartphones: an emerging tool for social scientists,” Sociological Methods & Research, vol. 37, no. 3, pp. 426–454, 2009.
  79. M. Minelli, M. Chambers, and A. Dhiraj, Big Data, Big Analytics: Emerging Business Intelligence and Analytic Trends for Today's Businesses, John Wiley & Sons, New York, NY, USA, 2012.
  80. K.-H. Chang, D. Fisher, J. Canny, and B. Hartmann, “How's my mood and stress?: an efficient speech analysis library for unobtrusive monitoring on mobile phones,” in Proceedings of the 6th International Conference on Body Area Networks (BodyNets '11), pp. 71–77, Beijing, China, November 2011.
  81. A. Wendemuth and S. Biundo, “A companion technology for cognitive technical systems,” in Cognitive Behavioural Systems, A. Esposito, A. M. Esposito, A. Vinciarelli, R. Hoffmann, and V. C. Müller, Eds., vol. 7403 of Lecture Notes in Computer Science, pp. 89–103, Springer, Berlin, Germany, 2012.
  82. Y. Wilks, “Artificial companions as a new kind of interface to the future internet,” Research Report 13, Oxford Internet Institute, Oxford, UK, 2006.
  83. Y. Wilks, “Artificial companions,” in Proceedings of the International Workshop on Machine Learning for Multimodal Interaction, pp. 36–45, Springer, 2004.
  84. R. A. Calvo and S. D'Mello, “Affect detection: an interdisciplinary review of models, methods, and their applications,” IEEE Transactions on Affective Computing, vol. 1, no. 1, pp. 18–37, 2010.
  85. H. Gunes and B. Schuller, “Categorical and dimensional affect analysis in continuous input: current trends and future directions,” Image and Vision Computing, vol. 31, no. 2, pp. 120–136, 2013.
  86. A. Vlachostergiou, G. Caridakis, and S. Kollias, “Context in affective multiparty and multimodal interaction: why, which, how and where?” in Proceedings of the ACM Workshop on Understanding and Modeling Multiparty, Multimodal Interactions, pp. 3–8, Istanbul, Turkey, November 2014.
  87. D. Gatica-Perez, “Automatic nonverbal analysis of social interaction in small groups: a review,” Image and Vision Computing, vol. 27, no. 12, pp. 1775–1787, 2009.
  88. M. Yang, R. Böck, D. Zhang et al., “Do you like a cup of coffee? The CASIA coffee house corpus,” in Proceedings of the 11th International Workshop on Multimodal Corpora (MMC '16), pp. 13–16, Portoroz, Slovenia, May 2016.
  89. A. Vinciarelli, M. Pantic, and H. Bourlard, “Social signal processing: survey of an emerging domain,” Image and Vision Computing, vol. 27, no. 12, pp. 1743–1759, 2009.
  90. H. C. Traue, F. Ohl, A. Brechmann et al., “A framework for emotions and dispositions in man-companion interaction,” in Coverbal Synchrony in Human-Machine Interaction, M. Rojc and N. Campbell, Eds., pp. 99–140, Science Publishers, New Hampshire, NH, USA, 2013.
  91. R. Akerkar, Big Data Computing, CRC Press, 2013.
  92. A. C.-R. Tsai, C.-E. Wu, R. T.-H. Tsai, and J. Y.-J. Hsu, “Building a concept-level sentiment dictionary based on commonsense knowledge,” IEEE Intelligent Systems, vol. 28, no. 2, pp. 22–30, 2013.
  93. S. Poria, A. Gelbukh, A. Hussain, N. Howard, D. Das, and S. Bandyopadhyay, “Enhanced SenticNet with affective labels for concept-based opinion mining,” IEEE Intelligent Systems, vol. 28, no. 2, pp. 31–38, 2013.
  94. C. Hung and H.-K. Lin, “Using objective words in SentiWordNet to improve word-of-mouth sentiment classification,” IEEE Intelligent Systems, vol. 28, no. 2, pp. 47–54, 2013.
  95. C. Bosco, V. Patti, and A. Bolioli, “Developing corpora for sentiment analysis: the case of irony and Senti-TUT,” IEEE Intelligent Systems, vol. 28, no. 2, pp. 55–63, 2013.
  96. A. McEwen and H. Cassimally, Designing the Internet of Things, John Wiley & Sons, New York, NY, USA, 2013.
  97. J. Bosch and P. M. Bosch-Sijtsema, “Software product lines, global development and ecosystems: collaboration in software engineering,” in Collaborative Software Engineering, I. Mistrík, J. Grundy, A. Hoek, and J. Whitehead, Eds., pp. 77–92, Springer, Berlin, Germany, 2010.
  98. S. Jansen, A. Finkelstein, and S. Brinkkemper, “A sense of community: a research agenda for software ecosystems,” in Proceedings of the 31st International Conference on Software Engineering (Companion Volume), pp. 187–190, IEEE, Vancouver, Canada, May 2009.
  99. W3C OWL Working Group, “OWL 2 web ontology language: document overview,” W3C Recommendation, 2009.
  100. Stanford University, “Protégé,” April 2015.
  101. I. Horrocks, P. F. Patel-Schneider, H. Boley, S. Tabet, B. Grosof, and M. Dean, “SWRL: a semantic web rule language combining OWL and RuleML,” W3C Member Submission, vol. 21, article 79, 2004.
  102. B. Parsia and E. Sirin, “Pellet: a practical OWL-DL reasoner,” in Proceedings of the 3rd International Semantic Web Conference (Poster Track), vol. 18, Citeseer, 2004.
  103. FIWARE Lab, April 2015.
  104. FIWARE Ops, April 2015.