VIDAS – “VIDeo ASsisted with audio coding and representation”

Period: September 1995 - March 1999
Type: European Research
Status: Completed

Overview

The main objective of the project is to improve the quality of audio-visual communications (videophone) at low bit rates. In particular, the part of the project in which MIRALab is involved is concerned with hybrid coding algorithms. In hybrid coding, the image of the speaker’s face is analyzed, facial features are tracked and parametrized, and the parameters are transferred over the network to a facial model that is used to synthesize the face with the appropriate expressions. The rest of the image (background) is coded using more classical region-based techniques.
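
To make the bandwidth argument behind hybrid coding concrete, the sketch below is a minimal, purely illustrative Python fragment; the parameter set, field names, and packing format are assumptions for illustration, not the VIDAS or MPEG-4 definitions. It shows how a per-frame record of facial parameters can occupy a few bytes, in contrast to transmitting the pixel data of the face region.

```python
# Illustrative sketch only: a hypothetical per-frame face-parameter record.
# The parameter names and 16-bit packing are assumptions, not the actual
# VIDAS or MPEG-4 FAP specification.
import struct
from dataclasses import dataclass

@dataclass
class FaceParams:
    # A tiny, made-up parameter set: jaw opening, mouth width, eyebrow raise,
    # and head rotation (yaw, pitch, roll), each a normalized float in [-1, 1].
    jaw_open: float
    mouth_width: float
    brow_raise: float
    yaw: float
    pitch: float
    roll: float

    def pack(self) -> bytes:
        # Six 16-bit fixed-point values: 12 bytes per frame instead of an image.
        return struct.pack(
            "<6h",
            *(int(max(-1.0, min(1.0, v)) * 32767)
              for v in (self.jaw_open, self.mouth_width, self.brow_raise,
                        self.yaw, self.pitch, self.roll)),
        )

    @classmethod
    def unpack(cls, data: bytes) -> "FaceParams":
        return cls(*(v / 32767 for v in struct.unpack("<6h", data)))

# Sender side: analyze the camera frame, transmit only the parameters.
params = FaceParams(0.4, 0.1, 0.0, 0.05, -0.02, 0.0)   # stand-in for tracked values
payload = params.pack()                                 # 12 bytes per frame

# Receiver side: decode and drive the synthetic face model.
decoded = FaceParams.unpack(payload)
print(decoded)  # the facial model would be deformed according to these values
```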

VIDAS is expected to play a major role within the SNHC (Synthetic/Natural Hybrid Coding) group of MPEG-4. One of our activities within the project is therefore participation in MPEG SNHC meetings and the submission of proposals to SNHC together with other partners.

MIRALab’s contribution

MIRALab’s main task within the project is to provide a parametrized facial model that can be animated using facial feature parameters or phonemes coming from speech recognition. Methods must also be provided to make the generic model look like the particular user in front of the camera. This automatic adaptation is achieved based on feature extraction data (provided by other partners in the project) and a texture image.
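
As a minimal sketch of the adaptation idea (not MIRALab’s actual method; the deformation scheme, function name, and parameters are assumptions), the fragment below moves the landmark vertices of a generic head mesh onto the user’s detected feature points and lets neighbouring vertices follow with a distance-based weight. Mapping the user’s texture image onto the adapted mesh would then complete the personalization.

```python
# Illustrative sketch only: adapt a generic head mesh to a user by pulling a few
# landmark vertices to detected feature points and deforming nearby vertices
# with a Gaussian falloff. This is a generic technique, not the VIDAS algorithm.
import numpy as np

def adapt_generic_model(vertices: np.ndarray,
                        landmark_idx: np.ndarray,
                        detected_points: np.ndarray,
                        sigma: float = 0.1) -> np.ndarray:
    """vertices: (N, 3) generic mesh; landmark_idx: (K,) indices of landmark
    vertices; detected_points: (K, 3) measured positions of those features."""
    displacements = detected_points - vertices[landmark_idx]      # (K, 3)
    adapted = vertices.copy()
    for d, center in zip(displacements, vertices[landmark_idx]):
        # Vertices close to a landmark follow its displacement strongly.
        dist = np.linalg.norm(vertices - center, axis=1)
        weights = np.exp(-(dist / sigma) ** 2)
        adapted += weights[:, None] * d
    return adapted

# Toy usage: a 4-vertex "mesh" with two landmarks pulled to new positions.
mesh = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
new_mesh = adapt_generic_model(mesh,
                               landmark_idx=np.array([0, 3]),
                               detected_points=np.array([[0.0, -0.1, 0.0],
                                                         [1.1, 1.0, 0.0]]))
print(new_mesh)
```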

Partners

Ecole Polytechnique Fédérale de Lausanne, Virtual Reality Laboratory (VRLab)
Switzerland
vrlab.epfl.ch

INRIA
France
www.inria.fr

Linköping University
Sweden
www.bk.isy.liu.se

Matra Communication
France
www.matracom.com

MIRALab, University of Geneva
Switzerland
www.miralab.ch

Modis SPA
Italy

Philips-LEP
France
www.philips.fr

Università di Genova – Dipartimento di Informatica Sistemistica e Telematica
Italy
www.dist.unige.it

UPC, Universitat Politècnica de Catalunya
Spain
www-tsc.upc.es