The main objective of the project is to improve the quality of audio-visual communications (videophone) at low bit rates. In particular, the part of the project in which MIRALab is involved concerns hybrid coding algorithms. In hybrid coding, the image of the speaker's face is analyzed, facial features are tracked and parametrized, and the parameters are transferred over the network to a facial model that synthesizes the face with the appropriate expressions. The rest of the image (background) is coded using more classical region-based techniques.
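The encode/transmit/synthesize loop described above can be sketched as follows. This is a minimal illustration, not the project's actual coder: the parameter set, function names, and frame representation are all hypothetical stand-ins for the real feature tracker and facial model.

```python
from dataclasses import dataclass

# Hypothetical facial animation parameters extracted per frame.
# A real hybrid coder would track many more features.
@dataclass
class FaceParams:
    mouth_open: float   # normalized mouth aperture
    brow_raise: float   # normalized eyebrow elevation
    head_yaw: float     # head rotation in degrees

def analyze_frame(frame):
    # Placeholder analysis step: in the real system, facial features
    # are located and tracked in the camera image of the speaker.
    return FaceParams(mouth_open=frame["mouth"],
                      brow_raise=frame["brow"],
                      head_yaw=frame["yaw"])

def encode(params):
    # The bit-rate saving of model-based coding: a few numbers per
    # frame are transmitted instead of pixel data for the face region.
    return [params.mouth_open, params.brow_raise, params.head_yaw]

def decode_and_synthesize(payload):
    # The receiver drives its copy of the facial model with the
    # decoded parameters to synthesize the expression.
    mouth, brow, yaw = payload
    return f"model(mouth={mouth:.2f}, brow={brow:.2f}, yaw={yaw:.2f})"

frame = {"mouth": 0.8, "brow": 0.1, "yaw": -5.0}
payload = encode(analyze_frame(frame))
synthesized = decode_and_synthesize(payload)
print(synthesized)  # → model(mouth=0.80, brow=0.10, yaw=-5.00)
```

The point of the sketch is the asymmetry: the channel carries only the short `payload`, while the rendering work happens at the receiver's facial model.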
VIDAS is expected to play a major role within the SNHC (Synthetic/Natural Hybrid Coding) group of MPEG-4. One of our activities within the project is therefore participation in MPEG-SNHC meetings and the submission of proposals to SNHC together with other partners.
MIRALab's main task within the project is to provide a parametrized facial model that can be animated using facial feature parameters or phonemes coming from speech recognition. Methods must also be provided to make the generic model look like the particular user in front of the camera. This automatic adaptation must be achieved based on feature extraction data (provided by other partners in the project) and a texture image.
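One simple way to think about adapting a generic model to extracted feature data is as a fitting problem: find a transformation that maps the generic model's feature points onto the measured ones, then apply it to all model vertices. The sketch below fits a per-axis scale and translation by least squares; this is an illustrative assumption, not the project's actual adaptation method, and all names here are hypothetical.

```python
def fit_scale_translation(src, dst):
    # Least-squares fit of s, t such that s * src[i] + t ≈ dst[i],
    # for one coordinate axis of the feature points.
    n = len(src)
    mean_s = sum(src) / n
    mean_d = sum(dst) / n
    var = sum((x - mean_s) ** 2 for x in src)
    cov = sum((x - mean_s) * (y - mean_d) for x, y in zip(src, dst))
    s = cov / var
    t = mean_d - s * mean_s
    return s, t

# Generic-model feature coordinates (one axis) and the corresponding
# coordinates measured on the user's face (illustrative numbers).
generic_features = [0.0, 1.0, 2.0]
measured_features = [1.0, 3.0, 5.0]

s, t = fit_scale_translation(generic_features, measured_features)
print(s, t)  # → 2.0 1.0

# The fitted transform is then applied to every model vertex on
# that axis, deforming the generic head toward the user's proportions.
adapted = [s * v + t for v in [0.5, 1.5]]
print(adapted)  # → [2.0, 4.0]
```

In practice such an adaptation is done per region of the face and combined with texture mapping from the camera image, but the scale-and-translate fit shows the basic idea of conforming a generic model to measured features.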
Ecole Polytechnique Fédérale de Lausanne, Virtual Reality Laboratory (VRLab)
MIRALab, University of Geneva
Università di Genova – Dipartimento di Informatica Sistemistica e Telematica
UPC, Universitat Politècnica de Catalunya