Transient Neural Radiance Fields for Lidar View Synthesis and 3D Reconstruction
Authors: Anagh Malik, Parsa Mirdehghan, Sotiris Nousias, Kyros Kutulakos, David Lindell
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method on a first-of-its-kind dataset of simulated and captured transient multiview scans from a prototype single-photon lidar. |
| Researcher Affiliation | Academia | Anagh Malik1,2 anagh@cs.toronto.edu Parsa Mirdehghan1,2 parsa@cs.toronto.edu Sotiris Nousias1 sotiris@cs.toronto.edu Kiriakos N. Kutulakos1,2 kyros@cs.toronto.edu David B. Lindell1,2 lindell@cs.toronto.edu 1University of Toronto 2Vector Institute |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | A description of the full set of simulated and captured scenes is included in the supplemental, and the dataset and simulation code are publicly available on the project webpage. |
| Open Datasets | Yes | A description of the full set of simulated and captured scenes is included in the supplemental, and the dataset and simulation code are publicly available on the project webpage. |
| Dataset Splits | No | "The training views are set consistent with the capture setup of our hardware prototype (described below) such that the camera viewpoint is rotated around the scene at a fixed distance and elevation angle, resulting in 8 synthetic lidar scans used for training. We evaluate on rendered measurements from six viewpoints sampled from the NeRF Blender test set [49]." and "We set aside 10 views sampled in 36 degree increments for testing and we use 8 of the remaining views for training." No explicit validation split information is provided. |
| Hardware Specification | Yes | We train the network on a single NVIDIA A40 GPU. |
| Software Dependencies | No | "Our implementation is based on the NerfAcc [31] version of Instant-NGP [30], which we extend to incorporate our time-resolved volume rendering equation." Specific version numbers for software dependencies are not provided. |
| Experiment Setup | Yes | We optimize the network using the Adam optimizer [53], a learning rate of 1×10⁻³ and a multi-step learning rate decay of γ = 0.33 applied at 100K, 150K, and 180K iterations. We set the batch size to 512 pixels and optimize until the results appear to converge, or for 250K iterations for the simulated results and 150K iterations for the captured results. For the weighting of the space carving loss, we use λsc = 10⁻³ for the simulated dataset and increase this to λsc = 10⁻² for captured data, which benefits from additional regularization. |
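The multi-step learning-rate decay reported in the experiment setup row can be sketched in plain Python. This is a minimal illustration, not the authors' code: the base learning rate (1×10⁻³), decay factor (γ = 0.33), and milestones (100K, 150K, 180K iterations) come from the paper, while the helper function name and structure are our own.

```python
def multistep_lr(step, base_lr=1e-3, gamma=0.33,
                 milestones=(100_000, 150_000, 180_000)):
    """Multi-step decay: multiply base_lr by gamma once for each
    milestone the current iteration has passed."""
    num_decays = sum(1 for m in milestones if step >= m)
    return base_lr * gamma ** num_decays

# Learning rate at a few points along the 250K-iteration schedule:
print(multistep_lr(0))        # base rate, 1e-3
print(multistep_lr(120_000))  # after the first decay, 3.3e-4
print(multistep_lr(200_000))  # after all three decays
```

In PyTorch this schedule would typically be expressed with `torch.optim.lr_scheduler.MultiStepLR` attached to an Adam optimizer, using the same `milestones` and `gamma` values.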