Interactive motion planning with motion capture data
- Author(s): Lo, Wan-Yen, et al.
Realistic character motion is an important component of media production, such as movies and video games. More lifelike characters enhance storytelling and the immersive experience. To date, the most common approach to offering a high degree of realism is based on large databases of motion capture data. The motion capture process, however, is expensive and time-consuming, and only a limited number and range of motions can be captured at a time. As a consequence, realistic motion synthesis has become a core research topic in computer animation.

Many of the most successful techniques are based on fragmenting and recombining motion capture data. The connectivity among the motion fragments is encoded in a graph structure, and novel motions can be generated by graph traversal. In addition, most systems allow a user to provide a number of constraints specifying the desired motion. By formulating the constraints as a cost function, motion synthesis is cast as a graph search problem, and the optimally synthesized motion corresponds to the path through the graph that minimizes the total cost. The search complexity for an optimal or near-optimal solution, however, is exponential in the connectivity of the graph and the length of the desired motion sequence. Synthesizing optimal or near-optimal motions is thus challenging for interactive applications.

In this dissertation, we explore the two most significant research directions toward near-optimal motion synthesis, graph search and reinforcement learning, and present algorithms for interactive and real-time character animation. The dissertation begins by reviewing previous work on searching motion graphs. In particular, A* search is optimally efficient and considered the state-of-the-art technique for optimal motion synthesis. However, applying A* search to motion graphs is challenging when interactive performance is required.
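To make the graph-search formulation concrete, the following is a minimal A* sketch over a toy transition graph. The graph, costs, and zero heuristic below are hypothetical stand-ins, not the dissertation's actual motion graph or cost function; in practice the edge cost would penalize constraint violations and the heuristic would lower-bound the remaining cost.

```python
import heapq

def a_star(graph, cost, heuristic, start, goal):
    """A* over a graph of motion fragments (illustrative sketch).

    graph: dict node -> list of successor nodes (fragment transitions)
    cost(u, v): transition cost (e.g., a constraint-violation penalty)
    heuristic(n): admissible lower bound on the remaining cost to goal
    Returns the minimum-cost path from start to goal, or None.
    """
    frontier = [(heuristic(start), 0.0, start, [start])]
    best_g = {start: 0.0}  # cheapest known cost-to-reach per node
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            g2 = g + cost(node, nxt)
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(
                    frontier, (g2 + heuristic(nxt), g2, nxt, path + [nxt]))
    return None

# Hypothetical toy graph: two routes from fragment "A" to fragment "D".
motion_graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
edge_costs = {("A", "B"): 1, ("A", "C"): 4, ("B", "D"): 5, ("C", "D"): 1}
path = a_star(motion_graph,
              lambda u, v: edge_costs[(u, v)],
              lambda n: 0.0,  # trivial (zero) heuristic for the sketch
              "A", "D")
# path is ["A", "C", "D"]: total cost 5 beats "A" -> "B" -> "D" at cost 6.
```

The exponential blow-up mentioned above comes from the frontier: with branching factor b and desired sequence length d, the search may touch on the order of b^d nodes, which is what makes interactive-rate synthesis hard.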
To make A* search more applicable to interactive applications, we present a bidirectional search algorithm that improves search efficiency while preserving search quality. It reduces the maximal search depth by almost a factor of two, leading to significant performance improvements. We further demonstrate its application to interactive motion synthesis through an intuitive sketching interface.

The second part of the dissertation consists of reinforcement learning frameworks for real-time character animation. The character controller makes near-optimal decisions in response to user input in real time. The controller is constructed in a pre-process by exploring all possible situations. We introduce a tree-based regression algorithm that is more efficient and robust than previous strategies for learning controllers. In addition, we extend the learning framework to include parameterized motions and interpolation for precise motion control. Finally, we show how to leverage character controllers by letting the character "see" the environment directly through depth perception. We derive a hierarchical state model and a regression algorithm to avoid the curse of dimensionality resulting from raw vision input. The controller generalizes to let a character navigate or survive in environments containing arbitrarily shaped obstacles, which is hard to achieve with previous reinforcement learning frameworks.
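The pre-process-then-act pattern described above can be illustrated with tabular value iteration, a simple stand-in for the dissertation's tree-based regression controllers: all states are explored offline to build a policy table, so at runtime the controller answers each query with a constant-time lookup. The toy corridor MDP below is entirely hypothetical and exists only to show the structure.

```python
def precompute_controller(states, actions, transition, reward,
                          gamma=0.9, iters=100):
    """Tabular value iteration (illustrative sketch, not the
    dissertation's actual learning algorithm).

    transition(s, a) -> next state; reward(s, a) -> immediate reward.
    Returns a policy dict mapping each state to its best action.
    """
    V = {s: 0.0 for s in states}
    for _ in range(iters):  # offline pre-process: sweep all states
        V = {s: max(reward(s, a) + gamma * V[transition(s, a)]
                    for a in actions)
             for s in states}
    # Greedy policy w.r.t. the converged values: the runtime lookup table.
    return {s: max(actions,
                   key=lambda a: reward(s, a) + gamma * V[transition(s, a)])
            for s in states}

# Hypothetical corridor: states 0..4, move left (-1) or right (+1),
# reward 1 for reaching state 4.
states = range(5)
actions = [-1, 1]
step = lambda s, a: min(4, max(0, s + a))
gain = lambda s, a: 1.0 if step(s, a) == 4 else 0.0
policy = precompute_controller(states, actions, step, gain)
# policy[s] is 1 (move right) for every state left of the goal.
```

At runtime, `policy[s]` replaces any search, which is what allows near-optimal decisions at real-time rates; the dissertation's contribution lies in making the pre-process tractable for high-dimensional character state via tree-based regression rather than an exhaustive table.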