Emotion-aware human-computer interaction has been at the forefront of research and development interest for some time now, and many applications integrating concepts from the related theory are gradually making their way to users. This interdisciplinary field encompasses concepts from a wide variety of research themes, ranging from psychology and cognition theory to signal processing and understanding, as well as evaluation, software engineering, and design. It is the blending and cross-fertilization of these concepts that ultimately makes emotion-aware computing and robotic systems perform better and offer a richer experience when humans interact with them.

This special issue was initiated by our joint work in the framework of the Humaine Network of Excellence, which was transformed into the Humaine Association [1] after the end of the funding period. More than thirty partners from Europe and the U.S. participated in Humaine, providing the seeds for a number of research and development projects on affective computing that sustain high-quality research in the field. The issue consists of five papers dealing with emotion recognition, emotion-oriented architectures, and assistive computing.

The paper titled “The SEMAINE API: Towards a standards-based framework for building emotion-oriented systems” by Marc Schroeder presents the SEMAINE API, an open-source framework for building emotion-oriented systems. By encouraging and simplifying the use of standard representation formats, the presented work aims to contribute to interoperability and the reuse of system components in the research community. An interactive Sensitive Artificial Listener built within the framework of the SEMAINE EU project is presented as an example of a full-scale system built on top of this API. Three small example systems are described in detail to illustrate how integration between existing and new video and speech analysis components is realized with minimal effort. Schroeder concludes that if several research teams were to bring their work into a common technological framework, such as the one presented in this paper, the consolidation process would likely accelerate, because challenges to integration would become apparent more quickly.

Data preprocessing for speech-based emotion recognition is the subject of the paper titled “Segmenting into adequate units for automatic recognition of emotion-related episodes: a speech-based approach” by Anton Batliner et al. The authors work on a database of children’s emotional speech to illustrate their approach to segmenting emotion-related (emotional or affective) episodes into adequate units for analysis, automatic processing, and classification. Using word-based annotations and their subsequent mapping onto different types of higher units, Batliner et al. report classification performance for an exhaustive modeling of these data onto three classes representing valence (positive, neutral, negative) and a fourth rest (garbage) class.

Driver assistance is the subject of the next paper, titled “Emotion on the road: necessity, acceptance, and feasibility of affective computing in the car” by Florian Eyben et al. The authors argue that a car’s ability to understand natural speech and provide human-like driver assistance can be expected to be a decisive factor for market success, on par with automatic driving systems. Starting with an extensive overview of the literature on emotions and driving, as well as on the automatic recognition and control of emotions, Eyben et al. describe various use-case scenarios as possible applications of emotion-oriented technology in the car. Drivers’ acceptance of such technology is evaluated with a Wizard-of-Oz study, while the feasibility of monitoring driver attentiveness is demonstrated in a real-time experiment.

The fourth paper of this special issue deals with interactions taking place in the Second Life virtual world. In “EmoHeart: Conveying emotions in Second Life based on affect sensing from text”, authors Alena Neviarouskaya et al. look at affect sensing from text, which enables the automatic expression of emotions in the virtual environment, as a way to avoid manual control by the user and to enrich remote communication effortlessly. A lexical rule-based approach to the recognition of emotions from text is described, the results of which trigger animations of avatar facial expressions and visualize emotion through heart-shaped textures. The authors report promising results in fine-grained emotion recognition on real examples of online conversation, both in their own corpus and in an existing one.

The final paper of this special issue is titled “Emotional communication in finger braille”. Here, authors Yasuhiro Matsuda et al. analyze the features of three emotion classes (joy, sadness, and anger) expressed by Finger Braille interpreters and examine the effectiveness of emotional expression and emotional communication between people unskilled in Finger Braille, targeting the development of a Finger Braille system that teaches emotional expression and a system that recognizes emotion. Their results indicate that code duration and finger load are correlated with some of the emotion classes and that, based on the analysis of the effectiveness of emotional expression and communication between unskilled users, expressing and communicating emotions through Finger Braille is both feasible and comprehensible.

We believe that this special issue details concepts and implementations from a wide variety of affect- and emotion-related applications, effectively illustrating research challenges and the efforts taken to tackle them. Finally, we would like to thank all authors and reviewers, as well as the journal Advances in Human-Computer Interaction for hosting this special issue.

Kostas Karpouzis
Elisabeth André
Anton Batliner