STAR aims to mix images of real environments, virtual objects, and virtual humans to produce mixed-reality animations of an existing environment, and to investigate their potential use in the industrial world. At the same time, it will strive to provide factories with new digital tools that facilitate the planning and carrying out of routine maintenance inspections and revamp procedures. Such tools will also help put together easy-to-understand procedure manuals.
STAR will focus on enhancing the quality of workplace communication between individuals or small groups by creating mixed-reality animations of the procedures under discussion. By using mixed-reality rather than virtual-reality techniques, the quality of the animations can approach that of documentaries, which carry much higher production costs (camera crew, editor, etc.). Moreover, mixing real parts of a scene with virtual ones is more cost-effective than creating and rendering wholly virtual scenes.
STAR, the Service Training through Augmented Reality project, will therefore focus on developing Augmented Reality techniques with a view to commercial products for training, on-line documentation, and planning. With its two industrial partners, STAR will ensure that the technology it develops is applicable to the industrial world.
System integration (WP7 Leader). The real-time Virtual Reality platform of STAR will be an object-oriented, modular software platform providing the programming tools needed for the consistent development of applications that author and perform audio-visual interactive simulations of a 3D virtual environment (VE) with virtual humans (VHs) and smart objects. A VE is assumed to be a collection of spatially localized elements possessing appropriate and distinguishable properties. A VE simulation is assumed to be a time-based representation of subsequent transitions reflecting the mutual and dynamic relationships between the VE elements and the VHs.
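The platform concepts above, a VE as a collection of spatially localized elements and a simulation as a sequence of time-based transitions, can be sketched in outline. All class and method names below (`VirtualEnvironment`, `VirtualHuman`, `SmartObject`, `step`, etc.) are illustrative assumptions, not the actual STAR API; positions are reduced to one dimension for brevity:

```python
class Element:
    """A spatially localized VE element with distinguishable properties."""
    def __init__(self, name, position, **properties):
        self.name = name
        self.position = position          # 1-D position for this sketch
        self.properties = dict(properties)

    def update(self, dt, env):
        """One simulation transition; passive elements do nothing."""
        pass


class SmartObject(Element):
    """An object that advertises the interactions it supports."""
    def __init__(self, name, position, actions=(), **properties):
        super().__init__(name, position, **properties)
        self.actions = list(actions)      # e.g. ["open", "inspect"]


class VirtualHuman(Element):
    """A VH that moves toward a goal position at a bounded speed."""
    def __init__(self, name, position, goal, speed=1.0):
        super().__init__(name, position)
        self.goal = goal
        self.speed = speed

    def update(self, dt, env):
        dx = self.goal - self.position
        # Clamp the step so the VH never overshoots the goal.
        step = max(-self.speed * dt, min(self.speed * dt, dx))
        self.position += step


class VirtualEnvironment:
    """Time-based simulation: subsequent transitions of all elements."""
    def __init__(self):
        self.elements = []
        self.time = 0.0

    def add(self, element):
        self.elements.append(element)
        return element

    def step(self, dt):
        for element in self.elements:
            element.update(dt, self)
        self.time += dt
```

A client application would populate the VE, then advance it step by step, with each `step` call producing one transition of the simulation:

```python
ve = VirtualEnvironment()
ve.add(SmartObject("valve", 1.5, actions=["open", "inspect"]))
worker = ve.add(VirtualHuman("worker", 0.0, goal=2.0))
for _ in range(10):
    ve.step(0.5)
```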
Interaction between virtual humans and deformable objects (WP6). We will design and implement a general geometrical correction method for enforcing collision and other geometrical constraints between polygonal mesh surfaces, i.e., between deformable objects in a VR environment. It will be based on a global resolution scheme that makes efficient use of the conjugate gradient algorithm to find the displacements of the mesh vertices that satisfy all the constraints simultaneously, in accordance with momentum conservation laws. This method has been implemented in a cloth simulation system along with a collision response model that enforces a minimum “thickness” distance between cloth surfaces and can be efficiently combined with implicit integration. We plan to develop this software as a 3D-graphics-independent library, which would allow us to integrate it seamlessly as a module, in an extra layer, on top of the core STAR augmented reality platform.
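The global resolution scheme can be illustrated on a toy 1-D example, assuming the standard constrained-dynamics formulation: solve S λ = c with S = J M⁻¹ Jᵀ by conjugate gradient, then apply the mass-weighted correction Δx = M⁻¹ Jᵀ λ, which distributes displacement among the vertices so that total momentum is conserved. This is a minimal sketch of the general idea, not the project's actual solver:

```python
def conjugate_gradient(A, b, tol=1e-12, max_iter=100):
    """Solve A x = b for a symmetric positive-definite matrix A
    (dense lists here; a real solver would use matrix-free products)."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                       # residual b - A x, with x = 0
    p = list(r)
    rs = sum(v * v for v in r)
    for _ in range(max_iter):
        if rs < tol:
            break
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(v * v for v in r)
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# Toy setup: three vertices on a line, unit masses, and two gap
# constraints sharing vertex 1, so corrections must be found together.
m = [1.0, 1.0, 1.0]                   # vertex masses
J = [[-1.0, 1.0, 0.0],                # constraint 0: gap between v0, v1
     [0.0, -1.0, 1.0]]                # constraint 1: gap between v1, v2
c = [0.1, 0.0]                        # required gap corrections

# S = J M^-1 J^T (symmetric positive-definite)
S = [[sum(J[a][k] * J[b][k] / m[k] for k in range(3)) for b in range(2)]
     for a in range(2)]
lam = conjugate_gradient(S, c)        # constraint impulses
dx = [sum(J[a][k] * lam[a] for a in range(2)) / m[k] for k in range(3)]
```

The resulting `dx` satisfies both constraints at once (J Δx = c) while the mass-weighted sum of displacements remains zero; conjugate gradient suits this formulation because S is symmetric positive-definite and, for large meshes, sparse.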
Delft University of Technology
École Polytechnique Fédérale de Lausanne, Virtual Reality Laboratory (VRLab)
Katholieke Universiteit Leuven
MIRALab, University of Geneva
Siemens Corporate Research