International Journal of Computer Games Technology
Volume 2008, Article ID 412056, 7 pages
Research Article

A Constraint-Based Approach to Visual Speech for a Mexican-Spanish Talking Head

Department of Computer Science, Faculty of Engineering, University of Sheffield, Regent Court, 211 Portobello Street, Sheffield S1 4DP, UK

Received 30 September 2007; Accepted 21 December 2007

Academic Editor: Kok Wai Wong

Copyright © 2008 Oscar Martinez Lazalde et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


A common approach to producing visual speech is to interpolate the parameters describing a sequence of mouth shapes, known as visemes, where a viseme corresponds to a phoneme in an utterance. The interpolation process must account for context-dependent shaping, or coarticulation, in order to produce realistic-looking speech. We describe an approach to such pose-based interpolation that handles coarticulation using a constraint-based technique. This is demonstrated using a Mexican-Spanish talking head, which can vary its speaking rate and produce coarticulation effects.
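To make the idea of pose-based interpolation concrete, the following is a minimal sketch, not the authors' implementation: it linearly interpolates between viseme parameter vectors to generate in-between mouth poses. The viseme names, the three-parameter mouth representation, and all numeric values are illustrative assumptions; the paper's constraint-based coarticulation handling is not reproduced here.

```python
def lerp(a, b, t):
    """Linearly interpolate two equal-length parameter vectors at fraction t."""
    return [x + t * (y - x) for x, y in zip(a, b)]

def interpolate_sequence(visemes, frames_per_segment):
    """Generate intermediate mouth poses between consecutive visemes.

    Returns frames_per_segment poses per segment, plus the final key pose.
    """
    frames = []
    for a, b in zip(visemes, visemes[1:]):
        for i in range(frames_per_segment):
            frames.append(lerp(a, b, i / frames_per_segment))
    frames.append(visemes[-1])
    return frames

# Hypothetical 3-parameter mouth poses: (jaw open, lip rounding, lip width)
viseme_a = [0.8, 0.1, 0.5]   # open vowel /a/ (illustrative values)
viseme_o = [0.5, 0.9, 0.2]   # rounded vowel /o/ (illustrative values)
poses = interpolate_sequence([viseme_a, viseme_o], 4)
```

A coarticulation-aware scheme, such as the constraint-based one described in the paper, would adjust these intermediate poses so that each viseme's shape is influenced by its neighbours rather than reached exactly at every key frame.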