NeRF Revisited: Fixing Quadrature Instability in Volume Rendering
Authors: Mikaela Angelina Uy, Kiyohiro Nakayama, Guandao Yang, Rahul Thomas, Leonidas J. Guibas, Ke Li
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method on the standard datasets: Blender and Real Forward Facing (LLFF) datasets as used in [18]. We use the released training and test splits for each. See supplementary for more details. For quantitative comparison, we follow the standard evaluation metrics and report PSNR, SSIM [32] and LPIPS [41] on unseen test views. |
| Researcher Affiliation | Collaboration | Mikaela Angelina Uy¹, George Kiyohiro Nakayama¹, Guandao Yang¹٫², Rahul Krishna Thomas¹, Leonidas Guibas¹, Ke Li³٫⁴ — ¹Stanford University, ²Cornell University, ³Simon Fraser University, ⁴Google |
| Pseudocode | Yes | Figure A4: Code snippet for volume rendering. The implementation for our piecewise linear opacity approximation is a drop-in replacement from the original piecewise constant. Figure A5: Code snippet for our Precise Importance Sampling. The implementation of our precision importance sampling is also a direct replacement from the original function from the constant implementation called sample_pdf (See next figure for reference). Figure A6: This is the original importance sampling for the constant approximation for reference. |
| Open Source Code | No | The paper mentions a project page (pl-nerf.github.io) which contains a video demo, but does not explicitly state that the source code for their methodology is available there. It also mentions building upon 'NeRF-Pytorch [38]', which is a third-party implementation, not their own source code for PL-NeRF. |
| Open Datasets | Yes | We evaluate our method on the standard datasets: Blender and Real Forward Facing (LLFF) datasets as used in [18]. |
| Dataset Splits | No | The paper states, "We use the released training and test splits for each." (Section 7.1) but does not explicitly provide details for a validation split, such as its size, percentage, or how it's used. |
| Hardware Specification | Yes | We train each scene for 500k iterations which takes 21 hours on a single Nvidia V100 GPU. Our precision importance sampling enables us to use fewer samples for the fine network, hence keeping the total number of rendering samples the same, we use 128 coarse samples and 64 fine samples to train and test our method. We use Nvidia V100 and A5000 GPUs for our experiments. |
| Software Dependencies | No | The paper mentions software like 'PyTorch' and 'Nerfstudio [27]' as foundational implementations, but it does not specify their version numbers, nor does it list other software dependencies with specific version numbers. |
| Experiment Setup | Yes | We use a batch size of 1024 rays and a learning rate of 5 × 10⁻⁴ that decays exponentially to 5 × 10⁻⁵ throughout the course of optimization. We train each scene for 500k iterations which takes 21 hours on a single Nvidia V100 GPU. Our precision importance sampling enables us to use fewer samples for the fine network, hence keeping the total number of rendering samples the same, we use 128 coarse samples and 64 fine samples to train and test our method. |
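The Pseudocode row notes that the paper's piecewise-linear opacity approximation is a drop-in replacement for the piecewise-constant quadrature used in the original NeRF volume renderer. A minimal sketch of that substitution is below, written in numpy rather than the paper's PyTorch code; the function names and the trapezoidal-rule form of the per-interval opacity are our assumptions for illustration, not the authors' released implementation.

```python
import numpy as np

def render_weights_constant(sigma, delta):
    """Original NeRF quadrature: density is piecewise constant,
    so per-interval opacity is alpha_i = 1 - exp(-sigma_i * delta_i)."""
    alpha = 1.0 - np.exp(-sigma * delta)
    # Transmittance T_i = prod_{j<i} (1 - alpha_j), with T_0 = 1.
    T = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    return T * alpha  # compositing weights, one per sample interval

def render_weights_linear(sigma, delta):
    """Hypothetical piecewise-linear variant: density varies linearly
    between consecutive samples, so the integral over each interval is
    the trapezoidal rule, alpha_i = 1 - exp(-(sigma_i + sigma_{i+1})/2 * delta_i).
    Note this consumes sample *endpoints*, yielding one fewer weight."""
    sigma_mid = 0.5 * (sigma[:-1] + sigma[1:])
    alpha = 1.0 - np.exp(-sigma_mid * delta[:-1])
    T = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    return T * alpha
```

Both functions return normalized-to-at-most-one compositing weights, which is why the linear version can slot into the same rendering and importance-sampling pipeline as the constant one.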