
A User-Friendly Texture-Fitting Methodology for Virtual Humans

Sannier, G. and Magnenat-Thalmann, N.


Abstract: In this paper, we describe our techniques for the automatic cloning of a human face that can be animated in real time using both video and audio inputs. Our system can be used for video conferencing and telecooperative work at multiple sites sharing a virtual environment inhabited by virtual clones. This work is part of the European project VIDAS (AC057). A generic face model is used, which is modified to fit the real face. Two aspects are considered: (1) modeling, i.e., the construction of a 3D texture-mapped face model fitted to the real face, and (2) the animation of the newly constructed face. Automatic texture fitting is employed to produce the virtual face, with texture mapping derived from the real face image. A model-independent facial animation module provides real-time animation. The system allows the integration of audio and video inputs and produces synchronized visual and acoustic output.


@inproceedings{34,
  booktitle = {Computer Graphics International'97},
  author = {Sannier, G. and Magnenat-Thalmann, N.},
  title = {A User-Friendly Texture-Fitting Methodology for Virtual Humans},
  publisher = {IEEE Computer Society Press},
  pages = {167},
  year = {1997}
}