GL-NeRF: Gauss-Laguerre Quadrature Enables Training-Free NeRF Acceleration

Authors: Silong Yong, Yaqi Xie, Simon Stepputtis, Katia Sycara

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To verify the use of the Gauss-Laguerre quadrature, we conduct an empirical study on the landscape of the color function. We also analyze the relationship between our approach and other techniques that aim to reduce the sample points for NeRF. We demonstrate the plug-and-play property of our method by directly incorporating it into vanilla NeRF and TensoRF models that are already trained on the NeRF-Synthetic and LLFF datasets. Furthermore, we showcase the drop in time and memory usage as a direct outcome of reducing the computational cost.
Researcher Affiliation | Academia | Silong Yong, Yaqi Xie, Simon Stepputtis, Katia Sycara — Carnegie Mellon University, {silongy, yaqix, sstepput, sycara}@andrew.cmu.edu
Pseudocode | Yes | Algorithm 1: Gauss-Laguerre Quadrature for Volume Rendering
Open Source Code | Yes | Code can be found on the project page: https://silongyong.github.io/GL-NeRF_project_page/
Open Datasets | Yes | We evaluate our method on the standard datasets: NeRF-Synthetic and the Real Forward-Facing dataset (LLFF) [26] as in [27], with different models, i.e. vanilla NeRF [27], TensoRF [6], and Instant-NGP [28].
Dataset Splits | No | The paper mentions training and testing on datasets such as NeRF-Synthetic and LLFF, and specifies sample counts for the coarse and fine networks. However, it does not explicitly define validation splits (e.g., percentages or exact per-split counts) needed for reproduction.
Hardware Specification | Yes | Each scene is trained for 200k iterations using a single NVIDIA RTX 6000 GPU. The result is collected from an AMD Ryzen 9 5900HS CPU.
Software Dependencies | No | The paper mentions "NeRF-PyTorch [45], a reproducible PyTorch implementation of the original NeRF [27]" but does not specify version numbers for PyTorch or other critical software dependencies (e.g., Python, CUDA) required for replication.
Experiment Setup | Yes | We use a learning rate of 5×10^-4 that exponentially decays to 5×10^-5 over the course of optimization. Each scene is trained for 200k iterations using a single NVIDIA RTX 6000 GPU. We use 128 coarse samples and 32 fine samples to test our method.
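The pseudocode row above refers to the paper's core idea: after a change of variables, the volume-rendering integral takes the form ∫₀^∞ e^{-x} f(x) dx, which Gauss-Laguerre quadrature approximates with only a handful of nodes. A minimal NumPy sketch of that quadrature step (the function name is illustrative, not taken from the authors' code):

```python
import numpy as np

def gauss_laguerre_integral(f, n_points):
    """Approximate the integral of e^{-x} * f(x) over [0, inf)
    with an n_points-point Gauss-Laguerre rule: sum_i w_i * f(x_i)."""
    # Nodes x_i and weights w_i of the Gauss-Laguerre rule.
    x, w = np.polynomial.laguerre.laggauss(n_points)
    return float(np.sum(w * f(x)))

# An n-point rule is exact for polynomials up to degree 2n - 1;
# e.g. the integral of e^{-x} * x over [0, inf) equals 1, and the
# rule reproduces it exactly already at n = 2.
```

In GL-NeRF, x plays the role of accumulated optical depth along the ray, so the color network only needs to be queried at the few sample locations corresponding to the quadrature nodes x_i, rather than at hundreds of uniformly spaced points — which is what makes the method training-free.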