Neural Reconstruction for Real-time Rendering
- Thomas, Manu Mathew
- Advisor(s): Forbes, Angus G
Abstract
Recent advances in ray tracing hardware have shifted video game graphics toward more realistic effects such as soft shadows, reflections, and global illumination. These effects are achieved by tracing light rays through the scene and accumulating visibility and illumination components. However, the real-time constraints inherent in games limit the number of rays/samples per scene, causing visual artifacts, including aliasing and noise. A number of existing techniques exploit frame-to-frame coherence to reconstruct an image from a few samples spread over multiple frames, but they rely on handcrafted heuristics to accumulate those samples, resulting in ghosting artifacts, loss of detail, and temporal instability.
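For context, the kind of handcrafted heuristic these techniques rely on is typically an exponential moving average over reprojected frames. Below is a minimal sketch of that idea; the names (`temporal_accumulate`, `alpha`, `motion_valid`) are illustrative assumptions, not part of this work:

```python
import numpy as np

def temporal_accumulate(current, history, motion_valid, alpha=0.1):
    # Blend the current noisy frame (H x W x 3) with the reprojected
    # history frame. alpha is the handcrafted blend weight: small
    # values favor history (stable but prone to ghosting), large
    # values favor the current frame (responsive but noisy).
    blended = alpha * current + (1.0 - alpha) * history
    # Where reprojection failed (disocclusion, fast motion), fall back
    # to the current frame, discarding the accumulated samples and
    # reintroducing noise -- the temporal instability noted above.
    return np.where(motion_valid[..., None], blended, current)
```

Tuning `alpha` and the rejection test per scene is exactly the fragile, heuristic step that learned reconstruction aims to replace.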
While machine learning-based approaches have shown promise in image reconstruction for offline rendering, they remain too expensive for games and other interactive media. Quantizing a neural network to reduced-precision arithmetic can drastically reduce both its computation and storage requirements; however, naively applying quantized networks to HDR reconstruction can cause significant quality degradation.
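To make the savings concrete, here is a minimal sketch of symmetric per-tensor 4-bit quantization, a generic scheme rather than necessarily the one used in this work:

```python
import numpy as np

def quantize_int4(weights):
    # Map the largest-magnitude weight to 7 so all values fit the
    # signed 4-bit range [-8, 7] under a single per-tensor scale.
    scale = np.maximum(np.abs(weights).max(), 1e-8) / 7.0
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_int4(q, scale):
    # Recover a float approximation; the rounding error introduced
    # here is the source of quality loss when such networks process
    # high-dynamic-range signals.
    return q.astype(np.float32) * scale
```

At 4 bits, weight storage drops 8x relative to float32, and the matrix multiplies can run on fast integer units; the challenge is keeping that rounding error from degrading HDR output.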
Our work introduces QW-Net, a neural network for HDR image reconstruction in which 95% of the computations are performed with 4-bit integer operations. We demonstrate this network on supersampling and denoising tasks suited to real-time rendering. Finally, we combine supersampling and denoising into a single network, amortizing the cost of separate passes, and additionally perform super-resolution to further reduce the overall rendering cost. Our network outperforms state-of-the-art real-time reconstruction techniques in terms of both quality and performance.
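As an illustration of how a single network can amortize these passes while also upscaling, here is a toy sketch; the architecture, channel counts, and class name `CombinedReconstruct` are hypothetical and do not describe QW-Net itself:

```python
import torch
import torch.nn as nn

class CombinedReconstruct(nn.Module):
    """Toy single-pass reconstruction network: one inference consumes
    a noisy, aliased frame rendered at low resolution (plus auxiliary
    feature channels) and emits a denoised, antialiased frame at 2x
    resolution, replacing separate denoise, supersample, and upscale
    passes with a single amortized pass."""

    def __init__(self, in_ch=8, feat=32, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        # PixelShuffle rearranges channel blocks into a finer spatial
        # grid, performing the super-resolution step.
        self.up = nn.Sequential(
            nn.Conv2d(feat, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x):
        return self.up(self.body(x))
```

Rendering at low resolution and reconstructing at full resolution is what drives the overall cost reduction: the expensive ray tracing runs on fewer pixels, and one network inference recovers the rest.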