GeCoNeRF: Few-shot Neural Radiance Fields via Geometric Consistency

Authors: Min-Seop Kwak, Jiuhn Song, Seungryong Kim

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our model on NeRF-Synthetic (Mildenhall et al., 2020) and LLFF (Mildenhall et al., 2019). NeRF-Synthetic is a realistically rendered 360° synthetic dataset comprising 8 scenes. We randomly sample 3 viewpoints out of 100 training images in each scene, with 200 testing images for evaluation. We also conduct experiments on the LLFF benchmark dataset, which consists of real-life forward-facing scenes. Following RegNeRF (Niemeyer et al., 2022), we apply standard settings by selecting every 8th image as the test set and selecting 3 reference views from the remaining images. We quantify novel view synthesis quality using PSNR, the Structural Similarity Index Measure (SSIM) (Wang et al., 2004), the LPIPS perceptual metric (Zhang et al., 2018), and the average error metric introduced in (Barron et al., 2021), reporting the mean value of each metric over all scenes in each dataset.
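As a concrete reading of this protocol, the sketch below implements the every-8th-image LLFF test split with 3 reference views drawn from the remainder, plus the "average" error metric of Barron et al. (2021), commonly computed as the geometric mean of the MSE implied by PSNR, sqrt(1 - SSIM), and LPIPS. This is a minimal sketch of the protocol as quoted, not the authors' code; the function names, random view selection, and seeding are assumptions.

```python
import numpy as np

def llff_split(num_images: int, num_views: int = 3, seed: int = 0):
    """Every 8th image is held out for testing; the few-shot reference
    views are sampled from the remaining images (selection strategy
    assumed random here)."""
    test_ids = list(range(0, num_images, 8))
    remaining = [i for i in range(num_images) if i not in test_ids]
    rng = np.random.default_rng(seed)
    picks = rng.choice(remaining, size=num_views, replace=False)
    return sorted(int(i) for i in picks), test_ids

def average_error(psnr: float, ssim: float, lpips: float) -> float:
    """'Average' metric of Barron et al. (2021): geometric mean of the
    MSE implied by PSNR, sqrt(1 - SSIM), and LPIPS."""
    mse = 10.0 ** (-psnr / 10.0)
    return float(np.exp(np.mean(np.log([mse, np.sqrt(1.0 - ssim), lpips]))))

train_ids, test_ids = llff_split(num_images=40)
print(train_ids, test_ids[:3], average_error(psnr=25.0, ssim=0.85, lpips=0.15))
```

Averaging in log space keeps the combined metric from being dominated by whichever of the three errors happens to have the largest scale.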
Researcher Affiliation | Academia | Department of Computer Science and Engineering, Korea University, Seoul, Korea.
Pseudocode | Yes | Algorithm 1: GeCoNeRF Framework
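Algorithm 1 itself appears in the paper; as a rough illustration of the geometric-consistency idea named in the title, the sketch below shows depth-guided inverse warping of an observed image into another viewpoint, the operation such a consistency loss is typically built on. The function name, tensor conventions, and pinhole-camera assumptions are ours, not the authors'.

```python
import torch
import torch.nn.functional as F

def inverse_warp(src_img, tgt_depth, K, T_tgt_to_src):
    """Warp a source image into a target view using the target view's
    rendered depth. src_img: (1,3,H,W); tgt_depth: (1,1,H,W);
    K: (3,3) intrinsics; T_tgt_to_src: (4,4) relative pose."""
    _, _, H, W = src_img.shape
    device = src_img.device
    # Pixel grid of the target view in homogeneous coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(3, -1)
    # Back-project target pixels to 3D using the rendered depth.
    cam = torch.linalg.inv(K) @ pix * tgt_depth.reshape(1, -1)
    cam_h = torch.cat([cam, torch.ones(1, H * W, device=device)], dim=0)
    # Transform into the source camera frame and project with K.
    src_cam = (T_tgt_to_src @ cam_h)[:3]
    src_pix = K @ src_cam
    src_pix = src_pix[:2] / src_pix[2:].clamp(min=1e-6)
    # Normalize to [-1, 1] and bilinearly sample the source image.
    gx = 2.0 * src_pix[0] / (W - 1) - 1.0
    gy = 2.0 * src_pix[1] / (H - 1) - 1.0
    grid = torch.stack([gx, gy], dim=-1).reshape(1, H, W, 2)
    return F.grid_sample(src_img, grid, align_corners=True)
```

In a consistency-regularized few-shot setup, agreement between such warped observations and renderings at unobserved viewpoints can serve as the regularization signal during training.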
Open Source Code | No | The paper mentions building upon an existing codebase, 'pytorch-NeRF', with a provided GitHub link for that codebase, but does not explicitly state that the authors' own code for GeCoNeRF is open-sourced or provide a link to it.
Open Datasets | Yes | We evaluate our model on NeRF-Synthetic (Mildenhall et al., 2020) and LLFF (Mildenhall et al., 2019).
Dataset Splits | No | The paper describes how training and testing images are sampled, but does not explicitly state a validation split, its proportions, or a methodology for creating one.
Hardware Specification | Yes | We train each model for 70k iterations, taking 6 hours in total, on two Nvidia 3090Ti GPUs.
Software Dependencies | No | The paper mentions using the PyTorch framework and the Adam optimizer but does not specify their version numbers or other software dependencies with versions.
Experiment Setup | Yes | The learning rate is first linearly warmed up from 0 to 5×10⁻⁴ over the first 5k iterations, and then follows a cosine decay schedule down to a minimum learning rate of 5×10⁻⁶. We clip gradients by value at 0.1 and then by norm at 0.1. We train each model for 70k iterations, taking 6 hours in total, on two Nvidia 3090Ti GPUs.
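A minimal PyTorch sketch of this schedule follows, assuming the warmup and cosine decay are applied per iteration and that gradients are clipped by value and then by norm as quoted; the placeholder model and dummy loss stand in for the actual NeRF training step.

```python
import math
import torch

MAX_LR, MIN_LR = 5e-4, 5e-6
WARMUP_ITERS, TOTAL_ITERS = 5_000, 70_000

def lr_at(step: int) -> float:
    """Linear warmup from 0 to MAX_LR, then cosine decay to MIN_LR."""
    if step < WARMUP_ITERS:
        return MAX_LR * step / WARMUP_ITERS
    t = (step - WARMUP_ITERS) / (TOTAL_ITERS - WARMUP_ITERS)
    return MIN_LR + 0.5 * (MAX_LR - MIN_LR) * (1.0 + math.cos(math.pi * t))

model = torch.nn.Linear(3, 3)   # placeholder for the NeRF MLP
optim = torch.optim.Adam(model.parameters(), lr=MAX_LR)

for step in range(TOTAL_ITERS):
    for group in optim.param_groups:
        group["lr"] = lr_at(step)
    loss = model(torch.randn(8, 3)).square().mean()   # dummy loss
    optim.zero_grad()
    loss.backward()
    # Clip by value at 0.1, then by global norm at 0.1, as stated.
    torch.nn.utils.clip_grad_value_(model.parameters(), 0.1)
    torch.nn.utils.clip_grad_norm_(model.parameters(), 0.1)
    optim.step()
```

Clipping by value first bounds individual gradient entries; the subsequent norm clip then bounds the overall update magnitude.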