NeRF-XL: Scaling NeRFs with Multiple GPUs
Distributes Neural Radiance Fields across multiple GPUs to enable training and rendering with large model capacity
The NeRF-XL method introduces a novel approach for distributing Neural Radiance Fields (NeRFs) across multiple GPUs, enabling training and rendering of NeRFs with arbitrarily large capacity. Unlike existing multi-GPU methods that decompose scenes into independently trained NeRFs, NeRF-XL addresses fundamental issues that prevent reconstruction quality from improving as additional computational resources are added. By jointly training multiple NeRFs across all GPUs, each covering a non-overlapping spatial region, NeRF-XL minimizes communication between GPUs during the forward pass, significantly reducing overhead.
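As an illustration of the partitioning idea, the sketch below (my own, not the authors' code) splits a scene's bounding box into non-overlapping slabs, one per GPU, and routes ray samples to the partition that owns them. The function names, the slab-along-x scheme, and variables such as `num_gpus` are illustrative assumptions; the paper's actual spatial decomposition may differ.

```python
import numpy as np

def make_partitions(scene_min, scene_max, num_gpus):
    """Split the scene's bounding box into `num_gpus` non-overlapping slabs along x."""
    edges = np.linspace(scene_min[0], scene_max[0], num_gpus + 1)
    return [(edges[i], edges[i + 1]) for i in range(num_gpus)]

def assign_samples(sample_xyz, partitions):
    """Return, per partition, the indices of the 3D samples that fall inside it."""
    x = sample_xyz[:, 0]
    return [np.nonzero((x >= lo) & (x < hi))[0] for lo, hi in partitions]

# Example: 4 hypothetical GPUs, random samples inside a unit cube.
parts = make_partitions(np.zeros(3), np.ones(3), num_gpus=4)
samples = np.random.rand(128, 3)
per_gpu = assign_samples(samples, parts)
# Each sample is owned by at most one partition, so no capacity is duplicated.
assert sum(len(ix) for ix in per_gpu) <= len(samples)
```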
The method uses a distributed training and rendering formulation that is mathematically equivalent to the single-GPU case, so NeRFs with an arbitrary number of parameters can be trained and rendered simply by adding hardware. Because the NeRFs are non-overlapping, there is no redundant model capacity and no need for blending during novel-view synthesis at inference time. Additionally, NeRF-XL uses shared per-camera embeddings to ensure consistent camera optimization across the entire scene, further improving rendering quality.
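The mathematical equivalence follows from the associativity of front-to-back alpha compositing: each GPU can composite only the samples inside its own region, and the per-segment colors are then merged using the accumulated transmittance of the preceding segments. The NumPy sketch below, which assumes the standard volume-rendering formulation rather than NeRF-XL's actual code, checks this identity numerically for one ray.

```python
import numpy as np

def render_ray(alphas, colors):
    """Standard front-to-back compositing: C = sum_i T_i * alpha_i * c_i."""
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))  # T_i = prod_{j<i}(1 - a_j)
    color = (trans * alphas)[:, None] * colors
    return color.sum(axis=0), np.prod(1.0 - alphas)  # accumulated color, total transmittance

rng = np.random.default_rng(0)
alphas = rng.uniform(0.0, 0.5, size=64)   # per-sample opacities along one ray
colors = rng.uniform(size=(64, 3))        # per-sample RGB

# Single-GPU reference: composite all samples in one pass.
c_ref, _ = render_ray(alphas, colors)

# Distributed formulation: each "GPU" composites only its contiguous segment of the ray,
# and the segments are merged with the accumulated transmittance of the earlier ones.
segments = np.array_split(np.arange(64), 4)   # 4 non-overlapping regions along the ray
c_merged = np.zeros(3)
trans_so_far = 1.0
for seg in segments:
    c_seg, t_seg = render_ray(alphas[seg], colors[seg])
    c_merged += trans_so_far * c_seg
    trans_so_far *= t_seg

assert np.allclose(c_ref, c_merged)  # identical up to floating-point error
```

Only the small per-segment color and transmittance values need to be exchanged between GPUs at merge time, which is why the forward-pass communication stays low.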
Unlike prior works, which fail to realize performance improvements from additional GPUs, NeRF-XL demonstrates scaling laws for NeRFs in the multi-GPU setting: reconstruction quality improves with larger parameter counts, and speed improves with more GPUs. By leveraging multiple GPUs gracefully and without heuristics, NeRF-XL enables training and rendering of NeRFs at large scale, as demonstrated on a variety of datasets, including the MatrixCity dataset covering a 25 km² city area.