eScholarship
Open Access Publications from the University of California

UC San Diego Electronic Theses and Dissertations

Text-to-3D Scene Generation with Inpainting and Depth Diffusion Priors

Abstract

We introduce RealmDreamer, a technique for generating forward-facing 3D scenes from text descriptions. Our method optimizes a 3D Gaussian Splatting representation to match complex text prompts using pretrained diffusion models. Our key insight is to leverage 2D inpainting diffusion models, conditioned on an initial scene estimate, to provide low-variance, high-fidelity estimates of unknown regions during 3D distillation. In tandem, we imbue the scene with accurate geometry via geometric distillation from a depth diffusion model, conditioned on samples from the inpainting model. We find that the initialization of the optimization is crucial and provide a principled methodology for it. Notably, our technique does not require video or multi-view data and can synthesize a variety of high-quality 3D scenes in different styles with complex layouts. Further, the generality of our method allows 3D synthesis from a single image. As measured by a comprehensive user study, our method outperforms all existing approaches, with preference rates of 88-95%.
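The abstract describes an optimization loop in which an inpainting prior supplies appearance targets for unknown regions while a depth prior supplies geometry targets conditioned on those appearance samples. The toy sketch below illustrates that two-prior distillation structure only; the functions `inpainting_prior` and `depth_prior` are hypothetical stand-ins (not the paper's pretrained diffusion models), and the update rule is a simplified gradient-free relaxation, not the actual 3D Gaussian Splatting optimization.

```python
import numpy as np

def inpainting_prior(rendered, known_mask, prior_appearance):
    """Toy stand-in for an inpainting diffusion model: keeps known pixels
    from the current render and fills unknown pixels with a low-variance
    estimate supplied by the (hypothetical) prior."""
    return np.where(known_mask, rendered, prior_appearance)

def depth_prior(appearance_sample):
    """Toy stand-in for a depth diffusion model conditioned on an
    appearance sample: here just a per-pixel luminance proxy."""
    return appearance_sample.mean(axis=-1, keepdims=True)

def distill(appearance, depth, known_mask, prior_appearance,
            steps=200, lr=0.1):
    """Relax appearance toward inpainting-prior samples and depth toward
    depth-prior estimates conditioned on those samples."""
    for _ in range(steps):
        sample = inpainting_prior(appearance, known_mask, prior_appearance)
        appearance = appearance + lr * (sample - appearance)   # appearance distillation
        depth = depth + lr * (depth_prior(sample) - depth)     # geometric distillation
    return appearance, depth
```

Known pixels are left untouched by the update (the prior returns them unchanged), while unknown pixels converge toward the prior's estimate, and depth tracks the appearance samples. This mirrors, in miniature, the dependency the abstract states: the depth model is conditioned on samples from the inpainting model.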
