Advances in Multimedia
Volume 2013 (2013), Article ID 175745, 21 pages
Research Article

Real-Time Audio-Visual Analysis for Multiperson Videoconferencing

1Idiap Research Institute, 1920 Martigny, Switzerland
2Université de Lyon, CNRS, INSA-Lyon, LIRIS, UMR5205, 69621 Lyon, France
3Fraunhofer IIS, 91058 Erlangen, Germany

Received 28 February 2013; Accepted 21 June 2013

Academic Editor: Alexander Loui

Copyright © 2013 Petr Motlicek et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


We describe the design of a system consisting of several state-of-the-art real-time audio and video processing components that enable multimodal stream manipulation (e.g., automatic online editing for multiparty videoconferencing applications) in open, unconstrained environments. The underlying algorithms are designed to allow multiple people to enter, interact, and leave the observable scene without constraints. They comprise continuous localisation of audio objects and its application to spatial audio object coding; detection and tracking of faces; estimation of head poses and visual focus of attention; detection and localisation of verbal and paralinguistic events; and the association and fusion of these different events. Combined, these components represent multimodal streams with audio objects and semantic video objects and provide semantic information to stream manipulation systems (such as a virtual director). Various experiments have been performed to evaluate the performance of the system. The results demonstrate the effectiveness of the proposed design and the individual algorithms, as well as the benefit of fusing different modalities in this scenario.