The aim of this Swiss National Science Foundation project, coordinated by Prof. Nadia Magnenat-Thalmann, is to investigate cross-modal interaction (i.e. visual, tactile, kinesthetic) with deformable objects simulated in 3D space. Different simulation models have been designed and implemented for each type of object and each mode of interaction, aimed at solving the specific problems that arise in each context. These models are tailor-made to simulate, for example, the look of twirling strands of hair, the forces involved when a rubber ball bounces, or the feel of the complex folds of drapery. Many of these models approximate real-world physics to gain speed during interaction, while others place a higher premium on accuracy during simulation. However, each of them is limited to its specific target domain. A realistic, believable virtual-reality experience is characterized by the heterogeneity of its elements and interaction mechanisms, and maintaining a separate method for every kind of object and every mode of interaction increases complexity and reduces performance when simulating complex scenarios composed of many kinds of objects. Despite this, a comprehensive, unified physically-based model for real-time simulation of solid deformable objects supporting cross-modal interaction has never been attempted. The question we therefore try to answer is: what is common or invariant between the simulation models of deformable objects when we engage in cross-modal interactions with them, whether in the input parameters, the geometric representations, or the simulation models themselves? We hope that the search for an answer to this question will yield important insights into the design of a model for real-time simulation of all categories of deformable objects (cross-category) that is suitable for cross-modal interaction.
The research challenges of this project are embodied in the following research questions:
• Which parameters of deformable objects play a crucial role when we engage in cross-modal interaction with them? For example, when we touch and see paper, what (measurable) properties of the paper need to be captured or measured if we try to reproduce this realistic sensation with virtual paper?
• Are there any common parameters that are always relevant?
• Can these common parameters be used to deduce invariants between the simulation models of objects, in 1D, 2D and 3D?
• Can such invariants be used to derive a cross-category model of simulation for deformable objects?
• How suitable is such a cross-category model to real-time simulation and rendering constraints?
• How much does the mode of interaction affect these invariants? How do the invariants differ between unimodal interaction (haptic only or visual only) and cross-modal interaction (haptic and visual combined)?
• Does the modality of the final rendering (visual or haptic) dictate that the models of simulation be different? Can these different models be instantiated from a cross-category model? Can these different models be used together to recreate a coherent sensation of reality?
MIRALab, University of Geneva