Advances in Human-Computer Interaction
Volume 2017, Article ID 8962762, 14 pages
https://doi.org/10.1155/2017/8962762
Research Article

A Text-Based Chat System Embodied with an Expressive Agent

Lamia Alam and Mohammed Moshiul Hoque

Department of Computer Science & Engineering, Chittagong University of Engineering & Technology, Chittagong 4349, Bangladesh

Correspondence should be addressed to Mohammed Moshiul Hoque; moshiulh@yahoo.com

Received 31 May 2017; Accepted 14 November 2017; Published 26 December 2017

Academic Editor: Carole Adam

Copyright © 2017 Lamia Alam and Mohammed Moshiul Hoque. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Life-like characters play a vital role in social computing by making human-computer interaction easier and more spontaneous. Nowadays, the use of such characters for interaction in online virtual environments has gained immense popularity. In this paper, we propose a framework for a text-based chat system embodied with a life-like virtual agent, with the aim of making communication between users more natural. To achieve this, we developed an agent that performs nonverbal communication, generating facial expressions and motions by analyzing the users' text messages. More specifically, the agent can generate facial expressions for six basic emotions (happy, sad, fear, angry, surprise, and disgust) along with two additional expressions, irony and determined. To make the interaction between users more realistic and lively, we also added motions such as eye blinks and head movements. We evaluated the proposed system from several aspects with satisfactory results, which suggests that such a system can make an interaction episode more natural, effective, and interesting. Experimental evaluation reveals that the proposed agent displays emotive expressions correctly 93% of the time when analyzing users' text input.
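The abstract does not specify how a text message is mapped to one of the eight expression labels. As a point of orientation only, the following is a minimal, hypothetical Python sketch of one such mapping, assuming a naive keyword lookup; the dictionary, keyword lists, and function name are illustrative assumptions, not the authors' method.

import re

# Hypothetical keyword table for the eight expression labels used in
# the paper (six basic emotions plus irony and determined). The word
# lists are illustrative assumptions only.
EMOTION_KEYWORDS = {
    "happy":      {"happy", "glad", "great", ":)"},
    "sad":        {"sad", "unhappy", "cry", ":("},
    "fear":       {"afraid", "scared", "worried"},
    "angry":      {"angry", "mad", "furious"},
    "surprise":   {"wow", "unbelievable", "amazing"},
    "disgust":    {"gross", "disgusting", "yuck"},
    "irony":      {"yeah right", "how original"},
    "determined": {"will", "must", "definitely"},
}

def detect_emotion(message: str) -> str:
    """Return the first expression label whose keyword occurs in the
    message: multiword keywords are matched as substrings, single
    tokens as whole words. Falls back to a neutral expression."""
    text = message.lower()
    # Tokenize on letters and emoticon characters so "wow," matches "wow".
    tokens = set(re.findall(r"[a-z:()']+", text))
    for emotion, keywords in EMOTION_KEYWORDS.items():
        for kw in keywords:
            if (" " in kw and kw in text) or kw in tokens:
                return emotion
    return "neutral"

if __name__ == "__main__":
    print(detect_emotion("Wow, I did not expect that!"))  # surprise
    print(detect_emotion("That is so gross, yuck."))      # disgust

A keyword lookup like this is deliberately naive; a deployed system would more plausibly rely on an affect lexicon or a trained classifier to reach the accuracy reported in the abstract.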