eScholarship
Open Access Publications from the University of California

UC San Diego Electronic Theses and Dissertations

RealityFlythrough: a system for ubiquitous video

Abstract

We are rapidly moving toward a world of ubiquitous video in which personal networked video cameras are everywhere. Already, camera-equipped cell phones are commonplace. Imagine being able to tap into all of these live video feeds to remotely explore the world in real time. We introduce RealityFlythrough, a telepresence system that makes this vision possible. By situating live 2D video feeds in a 3D model of the world, RealityFlythrough allows any space to be explored remotely. No special cameras, tripods, rigs, scaffolding, or lighting are required to create the model, and no lengthy preprocessing of images is necessary. Rather than trying to achieve photorealism at every point in space, we focus on giving the user a sense of how the video streams relate to one another spatially. By providing cues in the form of dynamic transitions, and by stitching together live and archived views of the scene, we can approximate photorealistic telepresence while harnessing cameras "in the wild." This dissertation describes how to construct a system like RealityFlythrough and explores the issues involved in deploying such a system in a real-world setting, where limits on wireless bandwidth constrain the quality and number of mobile video feeds.

The primary contribution of this dissertation is a demonstration that, with the appropriate division of labor between the human brain and the computer, some computationally intractable problems can be solved. In particular, it demonstrates that the computationally intensive task of stitching multiple video feeds into a cohesive whole can be avoided by having the computer perform a relatively simple rough stitch that is accurate enough for the human brain to complete the process. This dissertation also makes the following contributions to the field of telepresence: 1) a functioning system is presented and studied in a series of user experiments; 2) a robust system architecture is described that has successfully adapted to a series of unanticipated changes; 3) a novel visualization technique is presented that reduces the computational requirements that have so far posed a barrier to live, real-time image synthesis in real-world environments; and 4) this same visualization technique is shown to reduce the negative effects of low-frame-rate video.
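The core idea of situating a 2D feed in a 3D world can be illustrated with a short sketch. The code below is a hypothetical illustration under assumed conventions (a y-up world, a pose given as position plus yaw/pitch, and a chosen placement distance), not the dissertation's implementation: it computes where a camera's video frame would sit in world space as a textured quad, so that transitions between feeds become camera moves through the shared model.

    # Minimal sketch (assumptions labeled above): place a camera's live
    # frame as a textured quad in a shared 3D world using the camera's
    # pose and field of view. A remote viewer flying through this world
    # sees each feed roughly where its camera is pointing.
    import numpy as np

    def situate_frame(position, yaw_deg, pitch_deg, hfov_deg, aspect, dist=5.0):
        """Return the four world-space corners of a quad onto which the
        video frame can be texture-mapped, placed `dist` meters along
        the camera's viewing direction."""
        yaw, pitch = np.radians([yaw_deg, pitch_deg])
        # Camera basis vectors in world space (y-up convention).
        forward = np.array([np.cos(pitch) * np.sin(yaw),
                            np.sin(pitch),
                            np.cos(pitch) * np.cos(yaw)])
        right = np.array([np.cos(yaw), 0.0, -np.sin(yaw)])
        up = np.cross(forward, right)
        # Quad half-extents follow from the horizontal field of view.
        half_w = dist * np.tan(np.radians(hfov_deg) / 2.0)
        half_h = half_w / aspect
        center = np.asarray(position, dtype=float) + dist * forward
        return [center - half_w * right - half_h * up,   # bottom-left
                center + half_w * right - half_h * up,   # bottom-right
                center + half_w * right + half_h * up,   # top-right
                center - half_w * right + half_h * up]   # top-left

    # Example: a handheld camera 1.5 m above the origin, level, facing
    # along +z, with a 60-degree lens and 16:9 frames.
    for corner in situate_frame([0, 1.5, 0], yaw_deg=0, pitch_deg=0,
                                hfov_deg=60, aspect=16 / 9):
        print(np.round(corner, 2))

Only a rough pose is needed for this placement, which is consistent with the abstract's point: the computer does a simple rough stitch, and the viewer's brain reconciles the small misalignments between adjacent quads during a transition.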
