UC Berkeley Electronic Theses and Dissertations

Neural Scene Representations for View Synthesis

Abstract

View synthesis is the problem of using a given set of input images to render a scene from new points of view. Recent approaches have combined deep learning and volume rendering to achieve photorealistic image quality. However, these methods rely on a dense 3D grid representation that only allows for a small amount of local camera movement and scales poorly to higher resolutions.
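To make the volume rendering step mentioned above concrete, the sketch below shows the standard discrete alpha-compositing rule that turns per-sample densities and colors along a camera ray into a single pixel color. The function name, sample count, and random inputs are illustrative assumptions, not the dissertation's exact implementation.

    import numpy as np

    def composite_ray(densities, colors, deltas):
        """Discrete volume rendering (alpha compositing) along one ray.

        densities: (N,) non-negative volume density at each sample
        colors:    (N, 3) RGB color at each sample
        deltas:    (N,) distance between adjacent samples
        Returns the accumulated RGB color for the ray.
        """
        # Opacity contributed by each sample segment.
        alphas = 1.0 - np.exp(-densities * deltas)
        # Transmittance: fraction of light reaching each sample unoccluded.
        trans = np.cumprod(1.0 - alphas + 1e-10)
        trans = np.concatenate([[1.0], trans[:-1]])
        weights = alphas * trans
        return (weights[:, None] * colors).sum(axis=0)

    # Example: 64 random samples along a single ray.
    rng = np.random.default_rng(0)
    rgb = composite_ray(rng.uniform(0, 5, 64),
                        rng.uniform(0, 1, (64, 3)),
                        np.full(64, 1.0 / 64))
    print(rgb)

Because each weight depends differentiably on the densities and colors, a rendering loss on the output pixel can be backpropagated to whatever model produced those per-sample values.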

In this dissertation, we present a new approach to view synthesis based on neural radiance fields, an efficient way to represent a scene as a continuous function parameterized by the weights of a neural network. In contrast to using a feed-forward neural network to predict scene properties from a small number of inputs, a neural radiance field can be directly optimized to globally reconstruct a scene from tens or hundreds of input images and thus achieve high-quality novel view synthesis over a large camera baseline.
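As a rough illustration of what "a scene as a continuous function parameterized by the weights of a neural network" can look like, here is a minimal PyTorch sketch that maps a 3D position and a viewing direction to a density and an RGB color. The layer widths, depth, and head structure are assumptions chosen for brevity and do not reproduce the exact architecture described in the dissertation.

    import torch
    import torch.nn as nn

    class RadianceField(nn.Module):
        """Toy radiance field: (position, view direction) -> (density, rgb)."""

        def __init__(self, hidden=256):
            super().__init__()
            self.trunk = nn.Sequential(
                nn.Linear(3, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
            )
            self.density_head = nn.Linear(hidden, 1)
            self.color_head = nn.Sequential(
                nn.Linear(hidden + 3, hidden // 2), nn.ReLU(),
                nn.Linear(hidden // 2, 3), nn.Sigmoid(),
            )

        def forward(self, xyz, view_dir):
            h = self.trunk(xyz)
            density = torch.relu(self.density_head(h))  # non-negative density
            rgb = self.color_head(torch.cat([h, view_dir], dim=-1))
            return density, rgb

    # The network weights are the scene representation: "reconstructing the
    # scene" means fitting them so that rendered rays match the input images.
    model = RadianceField()
    sigma, rgb = model(torch.rand(1024, 3), torch.rand(1024, 3))

Because the representation is a function rather than a dense grid, its memory cost is set by the network size, not by the spatial resolution at which it is queried.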

The key to enabling high-fidelity reconstruction of a low-dimensional signal using a neural network is a high-frequency mapping of the input coordinates into a higher-dimensional space. We explain the connection between this mapping and the neural tangent kernel, and show how manipulating the frequency spectrum of the mapping provides control over the network's interpolation behavior between supervision points.
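The coordinate mapping described here can be sketched as a Fourier-style feature embedding: low-dimensional coordinates are passed through sines and cosines at geometrically spaced frequencies before entering the network. The number of frequency bands below is an illustrative assumption; widening or narrowing this spectrum is the knob that controls how sharply the network interpolates between supervised points.

    import torch

    def fourier_features(coords, num_bands=10):
        """Map (..., D) coordinates to (..., 2 * num_bands * D) features
        using sinusoids at geometrically spaced frequencies.
        num_bands is an illustrative choice, not a prescribed value."""
        freqs = 2.0 ** torch.arange(num_bands, dtype=coords.dtype)   # 1, 2, 4, ...
        angles = coords[..., None, :] * freqs[:, None] * torch.pi    # (..., bands, D)
        feats = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
        return feats.flatten(start_dim=-2)

    xyz = torch.rand(1024, 3) * 2 - 1      # coordinates scaled to [-1, 1]
    embedded = fourier_features(xyz)       # shape (1024, 60)

Feeding these embedded coordinates, rather than the raw ones, into a network such as the sketch above is what allows it to fit fine detail instead of an overly smooth signal.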
