Multi-Modal Planning for Humanlike Motion Synthesis using Motion Capture
Planning the motions of a virtual character with both high quality and precise control is a difficult challenge. Striking a balance between these two competing properties makes the problem particularly complex. While data-driven approaches produce high-quality results due to the inherent realism of human motion capture data, planning algorithms are able to solve general continuous problems with a high degree of control. This dissertation addresses this overall problem with new techniques that combine the two approaches.
Three main contributions are proposed. First, a simple and efficient motion capture segmentation mechanism is proposed based on geometric features that introduces semantic information for organizing a motion capture database into a motion graph. The obtained feature-based motion graph has fewer nodes and increased connectivity, which leads to searches with improved speed and coverage when compared to the standard approach. In addition, feature-based motion graphs enable a novel inverse branch kinematic deformation technique to be executed efficiently, allowing solution branches to be deformed towards precise goals without degrading the quality of the results.
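To make the segmentation idea concrete, the following is an illustrative sketch, not the dissertation's actual implementation: a clip is split at frames where a geometric feature holds, here a foot contact inferred from low foot height and low foot speed. The thresholds and feature choice are assumptions for illustration; the resulting boundaries are the kind of semantically meaningful points that can serve as transition nodes when building a motion graph.

```python
def detect_feature_frames(foot_heights, foot_speeds,
                          height_thresh=0.05, speed_thresh=0.1):
    """Return frame indices where the contact feature is active
    (foot near the ground and nearly stationary)."""
    return [i for i, (h, s) in enumerate(zip(foot_heights, foot_speeds))
            if h < height_thresh and s < speed_thresh]

def segment_at_features(num_frames, feature_frames):
    """Split the frame range [0, num_frames) into segments whose
    boundaries are the detected feature frames."""
    bounds = sorted(set([0] + feature_frames + [num_frames]))
    return [(a, b) for a, b in zip(bounds, bounds[1:]) if b > a]
```

Because every segment starts and ends at a frame satisfying the same feature, any two segments can be considered for concatenation at their boundaries, which is what increases connectivity relative to segmenting at arbitrary frames.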
Second, to address computation speed, precomputed motion maps are introduced for the interactive search and synthesis of locomotion sequences from unstructured feature-based motion graphs. Unstructured graphs can be successfully handled by relying on multiple maps and a search mechanism with backtracking information, which eliminates the need for manually creating fully connected move graphs. Precomputed motion maps can simultaneously search and execute motions in environments with many obstacles at interactive rates.
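The core mechanism behind a precomputed map with backtracking can be sketched as follows. This is a simplified, assumed model, not the dissertation's data structure: reachable grid cells are expanded from a root using a fixed set of clip displacements, and each cell stores a parent pointer so the motion sequence reaching it can be recovered by backtracking.

```python
from collections import deque

def build_motion_map(start, moves, free):
    """Breadth-first expansion over free grid cells.
    Returns {cell: (parent_cell, move)} with parent pointers
    that encode how each cell was reached."""
    parents = {start: (None, None)}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        for move in moves:
            nxt = (cell[0] + move[0], cell[1] + move[1])
            if nxt in free and nxt not in parents:
                parents[nxt] = (cell, move)
                queue.append(nxt)
    return parents

def backtrack(parents, goal):
    """Recover the sequence of moves from the root to a reached cell."""
    seq, cell = [], goal
    while parents[cell][0] is not None:
        cell, move = parents[cell]
        seq.append(move)
    return list(reversed(seq))
```

In this toy version the map is built once per query; the point of precomputation is that the expansion (or large parts of it) is done ahead of time, so that at run time only the backtracking step is needed to extract a motion sequence.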
Finally, a multi-modal data-driven framework is proposed for task-oriented human-like motion planning, which combines data-driven methods with parameterized motion skills in order to achieve human motions that are realistic and that have a high degree of controllability. The multi-modal planner relies on feature-based motion graphs for achieving a high-quality locomotion skill, and it generically integrates task-specific motion primitive skills, either data-based or algorithmic, for precise upper-body manipulation and action planning. The approach includes a multi-modal search method where primitive motion skills compete for contributing to the final solution.
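The notion of skills competing within one search can be sketched with a best-first search over a shared frontier. The skill names, costs, and one-dimensional state below are illustrative assumptions, not the dissertation's actual model: each skill proposes successor states with a cost, and a single priority queue selects the cheapest expansion regardless of which skill produced it.

```python
import heapq

def multimodal_search(start, goal, skills):
    """Best-first search where every skill proposes successors.
    skills: list of (name, step_fn), step_fn(state) -> (new_state, cost).
    Returns (total_cost, list of skill names) or None if unreachable."""
    frontier = [(0.0, start, [])]
    visited = set()
    while frontier:
        cost, state, plan = heapq.heappop(frontier)
        if state == goal:
            return cost, plan
        if state in visited:
            continue
        visited.add(state)
        for name, step in skills:
            nxt, c = step(state)
            if nxt not in visited:
                heapq.heappush(frontier, (cost + c, nxt, plan + [name]))
    return None
```

Because all skills feed the same frontier, the cheapest overall combination wins: a skill only contributes to the final solution where its proposals outcompete the alternatives.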
As a result, the overall proposed framework provides a high degree of control and, at the same time, retains the realism and human-likeness of motion capture data. Several examples are presented for synthesizing complex motions such as walking through doors and relocating books on shelves.