Advances in Human-Computer Interaction
Volume 2017, Article ID 8962762, 14 pages
https://doi.org/10.1155/2017/8962762
Research Article

A Text-Based Chat System Embodied with an Expressive Agent

Lamia Alam and Mohammed Moshiul Hoque

Department of Computer Science & Engineering, Chittagong University of Engineering & Technology, Chittagong 4349, Bangladesh

Correspondence should be addressed to Mohammed Moshiul Hoque; moshiulh@yahoo.com

Received 31 May 2017; Accepted 14 November 2017; Published 26 December 2017

Academic Editor: Carole Adam

Copyright © 2017 Lamia Alam and Mohammed Moshiul Hoque. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

References

  1. K. Höök, A. Bullock, A. Paiva, M. Vala, R. Chaves, and R. Prada, “FantasyA and SenToy,” in Proceedings of the Conference on Human Factors in Computing Systems, CHI EA 2003, pp. 804–805, ACM Press, Fort Lauderdale, Fla, USA, April 2003.
  2. S. M. Boker, J. F. Cohn, B.-J. Theobald, I. Matthews, T. R. Brick, and J. R. Spies, “Effects of damping head movement and facial expression in dyadic conversation using real-time facial expression tracking and synthesized avatars,” Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 364, no. 1535, pp. 3485–3495, 2009.
  3. M. El-Nasr, T. Ioerger, J. Yen, D. House, and F. Parke, “Emotionally expressive agents,” in Proceedings of Computer Animation 1999, pp. 48–57, Geneva, Switzerland, 1999.
  4. P. Ekman, E. R. Sorenson, and W. V. Friesen, “Pan-cultural elements in facial displays of emotion,” Science, vol. 164, no. 3875, pp. 86–88, 1969.
  5. S. Helokunnas, Neural responses to observed eye blinks in normal and slow motion: an MEG study [M.S. Thesis], Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, Finland, 2012.
  6. D. Kurlander, T. Skelly, and D. Salesin, “Comic chat,” in Proceedings of the 1996 Computer Graphics Conference, SIGGRAPH, pp. 225–236, August 1996.
  7. K. Nagao and A. Takeuchi, “Speech dialogue with facial displays: multimodal human-computer conversation,” in Proceedings of the Annual Meeting of the Association for Computational Linguistics, pp. 102–109, USA, June 1994.
  8. H. H. Vilhjalmsson and J. Cassell, “BodyChat: Autonomous communicative behaviors in avatars,” in Proceedings of the 1998 2nd International Conference on Autonomous Agents, pp. 269–276, Minneapolis, USA, May 1998.
  9. A. B. Loyall and J. Bates, “Real-time control of animated broad agents,” in Proceedings of the Conference of the Cognitive Science Society, USA, 1993.
  10. I. Mount, “Cranky consumer: testing online service reps,” The Wall Street Journal, 2005, https://www.wsj.com/articles/SB110721706388041791.
  11. J. Cassell, T. Bickmore, M. Billinghurst et al., “Embodiment in conversational interfaces: Rea,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI 1999, pp. 520–527, USA, May 1999.
  12. L. Riccardo and C. Peiro, “A Facial Animation Framework with Emotive/Expressive Capabilities,” in Proceedings of the IADIS International Conference on Interfaces and Human Computer Interaction, pp. 49–53, Italy, 2011.
  13. K. Kramer, R. Yaghoubzadeh, S. Kopp, and K. Pitsch, “A conversational virtual human as autonomous assistant for elderly and cognitively impaired users? Social acceptability and design consideration,” in Lecture Notes in Informatics (LNI), vol. 220 of Series of the Gesellschaft für Informatik (GI), pp. 1105–1119, 2013.
  14. D. Ameixa, L. Coheur, P. Fialho, and P. Quaresma, “Luke, I am your father: Dealing with out-of-domain requests by using movies subtitles,” in Proceedings of the 14th International Conference on Intelligent Virtual Agents (IVA), vol. 8637, pp. 13–21, Boston, MA, USA, August 27–29, 2014.
  15. A. B. Youssef, M. Chollet, H. Jones, N. Sabouret, C. Pelachaud, and M. Ochs, “Towards a socially adaptive virtual agent,” in Proceedings of the 15th International Conference on Intelligent Virtual Agents (IVA), vol. 9238, p. 3, Delft, Netherlands, August 26–28, 2015.
  16. C. Straßmann, A. R. Von Der Pütten, R. Yaghoubzadeh, R. Kaminski, and N. Krämer, “The effect of an intelligent virtual agent’s nonverbal behavior with regard to dominance and cooperativity,” in Proceedings of the 16th International Conference on Intelligent Virtual Agents (IVA), vol. 10011, p. 28, Los Angeles, CA, USA, September 20–23, 2016.
  17. S. Kopp, L. Gesellensetter, N. C. Krämer, and I. Wachsmuth, “A conversational agent as museum guide: design and evaluation of a real-world application,” in Lecture Notes in Computer Science, T. Rist, T. Panayiotopoulos, J. Gratch, R. Aylett, D. Ballin, and P. Olivier, Eds., pp. 329–343, Springer-Verlag, London, UK, 2005.
  18. S. Vosinakis and T. Panayiotopoulos, “SimHuman: A Platform for Real-Time Virtual Agents with Planning Capabilities,” in Proceedings of the International Workshop on Intelligent Virtual Agents (IVA), vol. 2190 of Lecture Notes in Computer Science, pp. 210–223, Madrid, Spain, September 10–11, 2001.
  19. D. Formolo and T. Bosse, “A Conversational Agent that Reacts to Vocal Signals,” in Proceedings of the 8th International Conference on Intelligent Technologies for Interactive Entertainment (INTETAIN 2016), vol. 178, Springer International Publishing, Utrecht, Netherlands, June 28–30, 2016.
  20. J. Gratch, A. Okhmatovskaia, F. Lamothe et al., “Virtual Rapport,” in Proceedings of the 6th International Conference on Intelligent Virtual Agents (IVA), vol. 4133, pp. 14–27, Marina del Rey, CA, USA, August 21–23, 2006.
  21. J.-W. Kim, S.-H. Ji, S.-Y. Kim, and H.-G. Cho, “A new communication network model for chat agents in virtual space,” KSII Transactions on Internet and Information Systems, vol. 5, no. 2, pp. 287–312, 2011.
  22. T. Weise, S. Bouaziz, H. Li, and M. Pauly, “Realtime performance-based facial animation,” ACM Transactions on Graphics, vol. 30, no. 4, p. 1, 2011.
  23. MocitoTalk Avatar Live Chat, Charamel, https://www.charamel.com/en/solutions/avatar_live_chat_mocitotalk.html.
  24. L. Alam and M. M. Hoque, “The design of expressive intelligent agent for human-computer interaction,” in Proceedings of the 2nd International Conference on Electrical Engineering and Information and Communication Technology, iCEEiCT 2015, Bangladesh, May 2015.
  25. E. Cambria, D. Das, S. Bandyopadhyay, and A. Feraco, “Affective computing and sentiment analysis,” in A Practical Guide to Sentiment Analysis, vol. 5 of Socio-Affective Computing, pp. 1–10, Springer, Cham, Switzerland, 2017.
  26. P. Chandra and E. Cambria, “Enriching social communication through semantics and sentics,” in Proceedings of the Workshop on Sentiment Analysis where AI meets Psychology (SAAIP), pp. 68–72, Chiang Mai, Thailand, November 13, 2011.
  27. V. Anusha and B. Sandhya, “A learning based emotion classifier with semantic text processing,” Advances in Intelligent Systems and Computing, vol. 320, pp. 371–382, 2015.
  28. S. M. Mohammad, “Sentiment analysis: detecting valence, emotions, and other affectual states from text,” Emotion Measurement, pp. 201–237, 2016.
  29. W. Dai, D. Han, Y. Dai, and D. Xu, “Emotion recognition and affective computing on vocal social media,” Information & Management, vol. 52, pp. 777–788, 2015.
  30. S. Poria, E. Cambria, N. Howard, G.-B. Huang, and A. Hussain, “Fusing audio, visual and textual clues for sentiment analysis from multimodal content,” Neurocomputing, vol. 174, pp. 50–59, 2016.
  31. E. M. G. Younis, “Sentiment analysis and text mining for social media microblogs using open source tools: an empirical study,” International Journal of Computer Applications, vol. 112, no. 5, 2015.
  32. P. Ekman and W. Friesen, Unmasking the face: A Guide to Recognizing Emotions from Facial Clues, Prentice-Hall, 1975.
  33. MakeHuman. http://www.makehuman.org/.
  34. Blender. http://www.blender.org/.