eScholarship
Open Access Publications from the University of California

Immersive Virtual Human Training Systems based on Direct Demonstration

  • Author(s): Camporesi, Carlo
  • Advisor(s): Kallmann, Marcelo
Creative Commons Attribution 4.0 International Public License
Abstract

Virtual humans have great potential to become as effective as human trainers in monitored, feedback-based virtual environments for training and learning. Thanks to recent advances in motion capture devices and stereoscopic consumer displays, animated virtual characters can now realistically interact with users in a variety of applications. Interactive virtual humans are particularly suitable for training systems where human-oriented motion skills or human-conveyed information are key to the learning material.

This dissertation addresses the challenge of designing such training systems through motion modeling by direct demonstration, relying on immersive motion capture interfaces. In this way, experts in a training subject can directly and intuitively demonstrate the needed motions until the desired results are achieved.

An immersive full-scale motion modeling interface is proposed that enables users to model generic parameterized actions by direct demonstration. The interface is based on aligned clusters of example motions, which can be interactively built until they cover the target environment. After the needed motions are demonstrated, the virtual trainer can synthesize motions that are similar to the provided examples while being parameterized to generic targets and constraints. Autonomous virtual trainers can then reproduce the motions in generic training environments for apprentice users learning the training subject. The presented systems were implemented in a new development middleware that scales across hardware configurations, from low-cost solutions to multi-tile displays, and that is designed to support distributed collaborative immersive virtual environments with streamed full-body avatar interactions.
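The cluster-based synthesis described above can be illustrated with a simple blend of time-aligned example motions, weighted by how close each example's demonstrated target is to the requested one. This is only a minimal sketch under assumed representations (motions as lists of joint-angle frames, inverse-distance weighting); the function names and the weighting scheme are illustrative assumptions, not the dissertation's actual algorithm:

```python
import math

def blend_weights(examples, target, eps=1e-6):
    """Inverse-distance weights: examples demonstrated closer to the
    requested target contribute more to the synthesized motion."""
    dists = [math.dist(p, target) for p, _ in examples]
    if min(dists) < eps:                      # target coincides with an example
        return [1.0 if d < eps else 0.0 for d in dists]
    inv = [1.0 / d for d in dists]
    total = sum(inv)
    return [w / total for w in inv]

def synthesize(examples, target):
    """Blend time-aligned example motions (lists of joint-angle frames)
    into one motion parameterized by the requested target."""
    ws = blend_weights(examples, target)
    n_frames = len(examples[0][1])
    n_joints = len(examples[0][1][0])
    return [[sum(w * motion[f][j] for w, (_, motion) in zip(ws, examples))
             for j in range(n_joints)]
            for f in range(n_frames)]

# Two hypothetical one-joint example motions, demonstrated at
# targets (0, 0) and (1, 0); a midway target blends them equally.
examples = [((0.0, 0.0), [[0.0], [10.0]]),
            ((1.0, 0.0), [[20.0], [30.0]])]
motion = synthesize(examples, (0.5, 0.0))
```

In a real system the examples would first be time-aligned (e.g. via dynamic time warping) so that corresponding frames represent the same phase of the action; the sketch assumes alignment has already been done.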


Given the several possible configurations of the proposed systems, this dissertation also analyzes the effectiveness of virtual trainers with respect to display size, use of avatars, and use of user-perspective stereo vision. Several experiments collected motion data during task performance under different configurations. These experiments expose and quantify the benefits of stereo vision and avatars in motion reproduction tasks, showing that the use of avatars improves the quality of the produced motions. The use of avatars also increased attention to the avatar space, allowing users to better observe and address motion constraints and qualities in the virtual environment. Direct interaction in user perspective, however, led to tasks being executed in less time and to targets being reached more accurately. These and other trade-offs were quantified under conditions not previously investigated.

Finally, the proposed concepts were applied to the practical development of tools for delivering monitored upper-body physical therapy. New methods for exercise modeling, parameterization, and adaptation are presented that allow therapists to intuitively create, edit, and re-use customized exercise programs that are responsive and adaptive to the needs of their patients. The proposed solutions were evaluated by therapists, and the evaluations demonstrate the suitability of the approach.
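The idea of exercise parameterization and adaptation can be sketched as follows: an exercise is described by a few therapist-set parameters, and a simple rule scales it down when the patient's achieved range of motion falls short of the target. The data model, parameter names, and adaptation rule here are hypothetical illustrations, not the dissertation's actual methods:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Exercise:
    name: str
    amplitude_deg: float   # target range of motion, in degrees
    speed_dps: float       # movement speed, degrees per second
    repetitions: int

def adapt(exercise, achieved_amplitude_deg, tolerance=0.9):
    """If the patient reaches less than `tolerance` of the target range,
    scale the exercise down to the reachable amplitude; otherwise keep it."""
    if achieved_amplitude_deg < tolerance * exercise.amplitude_deg:
        return replace(exercise, amplitude_deg=achieved_amplitude_deg)
    return exercise

# Hypothetical example: a shoulder exercise adapted after the patient
# reached only 120 of the 150 degrees the therapist demonstrated.
raise_arm = Exercise("shoulder abduction", amplitude_deg=150.0,
                     speed_dps=30.0, repetitions=10)
adapted = adapt(raise_arm, achieved_amplitude_deg=120.0)
```

A real system would derive the achieved amplitude from captured motion data and let the therapist review each adaptation; the sketch only shows where demonstrated parameters and patient feedback meet.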
