SparseGS: Real-Time 360° Sparse View Synthesis using Gaussian Splatting

Abstract

The problem of novel view synthesis has grown significantly in popularity recently with the introduction of Neural Radiance Fields (NeRFs) and other implicit scene representation methods. A recent advance, 3D Gaussian Splatting (3DGS), leverages an explicit representation to achieve real-time rendering with high-quality results. However, 3DGS still requires an abundance of training views to generate a coherent scene representation. In few-shot settings, similar to NeRF, 3DGS tends to overfit to the training views, causing background collapse and excessive floaters, especially as the number of training views is reduced. This work proposes a method to enable training coherent 3DGS-based radiance fields of 360° scenes from sparse training views. Depth priors are integrated with generative and explicit constraints to reduce background collapse, remove floaters, and enhance consistency from unseen viewpoints. Experiments show that this method outperforms base 3DGS by 6.4% in LPIPS and by 12.2% in PSNR, and NeRF-based methods by at least 17.6% in LPIPS on the MipNeRF-360 dataset, with substantially less training and inference cost. Project website at: https://tinyurl.com/sparsegs.
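The abstract describes integrating depth priors into 3DGS training to suppress floaters and background collapse. As a rough illustration only, and not the thesis's actual formulation, the sketch below shows one common way such a prior can be attached alongside the photometric loss: a scale- and shift-invariant depth term comparing the rendered (splatted) depth map against a monocular depth estimate. The function name, the Pearson-correlation form of the loss, and the toy inputs are all assumptions made for this example.

```python
# Illustrative sketch (assumed, not the paper's exact loss): regularize the
# depth rendered by the Gaussian rasterizer against a monocular depth prior.
import torch

def pearson_depth_loss(rendered_depth: torch.Tensor,
                       prior_depth: torch.Tensor,
                       eps: float = 1e-6) -> torch.Tensor:
    """Scale- and shift-invariant depth penalty: 1 - Pearson correlation.

    Both inputs are (H, W) depth maps for the same training view. Because
    correlation ignores affine differences, the monocular prior only needs
    to be correct up to an unknown scale and shift.
    """
    r = rendered_depth.flatten()
    p = prior_depth.flatten()
    r = r - r.mean()
    p = p - p.mean()
    corr = (r * p).sum() / (r.norm() * p.norm() + eps)
    return 1.0 - corr

# Toy usage: in a real 3DGS pipeline, `rendered` would come from the
# differentiable rasterizer and `prior` from a monocular depth network;
# random tensors are used here only so the snippet runs on its own.
rendered = torch.rand(256, 256, requires_grad=True)
prior = torch.rand(256, 256)
loss = pearson_depth_loss(rendered, prior)
loss.backward()  # gradients flow back toward the Gaussian parameters via `rendered`
```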
