Abstract:
|
This paper presents a complete framework for automatically generating synthetic image sequences by designing and simulating complex human behaviors in virtual environments. Given an initial state of a virtual agent, a simulation process generates subsequent synthetic states by means of precomputed human motion and behavior models, taking into account the relationships of the agent with respect to its environment at each frame step. The resulting state sequence is then visualized in a virtual scene using a 3D graphics engine. Conceptual knowledge about human behavior patterns is represented using the Situation Graph Tree formalism and a rule-based inference system called F-Limette. The results obtained are very useful for testing human interaction with real environments, such as a pedestrian crossing scenario, and for virtual storytelling, where animated sequences are generated automatically.