International Journal of Computer Games Technology
Volume 2008 (2008), Article ID 412056, 7 pages
http://dx.doi.org/10.1155/2008/412056
Research Article

A Constraint-Based Approach to Visual Speech for a Mexican-Spanish Talking Head

Department of Computer Science, Faculty of Engineering, University of Sheffield, Regent Court, 211 Portobello Street, Sheffield S1 4DP, UK

Received 30 September 2007; Accepted 21 December 2007

Academic Editor: Kok Wai Wong

Copyright © 2008 Oscar Martinez Lazalde et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

References

  1. F. I. Parke and K. Waters, Computer Facial Animation, A K Peters, Wellesley, Mass, USA, 1996.
  2. A. Löfqvist, “Speech as audible gestures,” in Speech Production and Speech Modeling, W. J. Hardcastle and A. Marchal, Eds., pp. 289–322, Kluwer Academic Press, Dordrecht, The Netherlands, 1990.
  3. M. Cohen and D. Massaro, “Modeling coarticulation in synthetic visual speech,” in Proceedings of the Computer Animation, pp. 139–156, Geneva, Switzerland, June 1993.
  4. J. Edge, Techniques for the synthesis of visual speech, M.S. thesis, University of Sheffield, Sheffield, UK, 2005.
  5. J. Edge and S. Maddock, “Constraint-based synthesis of visual speech,” in Proceedings of the 31st International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '04), p. 55, Los Angeles, Calif, USA, August 2004.
  6. A. Witkin and M. Kass, “Spacetime constraints,” in Proceedings of the 15th International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '88), pp. 159–168, Atlanta, Ga, USA, August 1988.
  7. O. M. Lazalde, S. Maddock, and M. Meredith, “A Mexican-Spanish talking head,” in Proceedings of the 3rd International Conference on Games Research and Development (CyberGames '07), pp. 17–24, Manchester Metropolitan University, UK, September 2007.
  8. P. E. Gill, W. Murray, and M. Wright, Practical Optimization, Academic Press, Boston, Mass, USA, 1981.
  9. D. W. Massaro, Perceiving Talking Faces: From Speech Perception to a Behavioral Principle, The MIT Press, Cambridge, Mass, USA, 1998.
  10. B. Dodd and R. Campbell, Eds., Hearing by Eye: The Psychology of Lipreading, Lawrence Erlbaum, London, UK, 1987.
  11. A. M. Tekalp and J. Ostermann, “Face and 2-D mesh animation in MPEG-4,” Signal Processing: Image Communication, vol. 15, no. 4, pp. 387–421, 2000.
  12. A. Black, P. Taylor, and R. Caley, “The Festival Speech Synthesis System,” 2007, http://www.cstr.ed.ac.uk/projects/festival/.
  13. S. Kshirsagar, T. Molet, and N. Magnenat-Thalmann, “Principal components of expressive speech animation,” in Proceedings of the International Conference on Computer Graphics (CGI '01), pp. 38–44, Hong Kong, July 2001.
  14. S. Kshirsagar, S. Garchery, G. Sannier, and N. Magnenat-Thalmann, “Synthetic faces: analysis and applications,” International Journal of Imaging Systems and Technology, vol. 13, no. 1, pp. 65–73, 2003.
  15. S. Kshirsagar and N. Magnenat-Thalmann, “Visyllable based speech animation,” Computer Graphics Forum, vol. 22, no. 3, pp. 631–639, 2003.
  16. A. P. Benguerel and H. A. Cowan, “Coarticulation of upper lip protrusion in French,” Phonetica, vol. 30, no. 1, pp. 41–55, 1974.