Few-Shot Neural Radiance Fields under Unconstrained Illumination

Authors: SeokYeong Lee, JunYong Choi, Seungryong Kim, Ig-Jae Kim, Junghyun Cho

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We offer thorough experimental results for task evaluation, employing the newly created NeRF Extreme benchmark, the first in-the-wild benchmark for novel view synthesis under multiple viewing directions and varying illuminations." and "In this section, we provide extensive comparisons with the baselines using our newly proposed datasets."
Researcher Affiliation | Collaboration | 1. Korea Institute of Science and Technology, Seoul; 2. Korea University, Seoul; 3. AI-Robotics, KIST School, University of Science and Technology; 4. Yonsei-KIST Convergence Research Institute, Yonsei University
Pseudocode | No | The paper describes the method in text and figures, but does not include any explicitly labeled "Pseudocode" or "Algorithm" blocks.
Open Source Code | No | The paper does not provide an explicit statement or link for the public release of the source code for the described methodology.
Open Datasets | No | "To build a benchmark that fully reflects unconstrained environments, we collected multi-view images with varying light sources, such as multiple light bulbs and the sun, using a mobile phone camera. We took 40 images per scene: 30 images in the train set and 10 images in the test set." (For NeRF Extreme, no access link is provided.) "Phototourism F3 (Frontal-Facing Few-shot), a subset of the Phototourism (Snavely, Seitz, and Szeliski 2006) dataset, is specifically curated for evaluating few-shot view synthesis under varying illumination." (While Phototourism is public, the specific F3 subset curated by the authors is not stated to be publicly available via a link or download.)
Dataset Splits | No | "We took 40 images per scene: 30 images in the train set and 10 images in the test set." (This specifies the train/test split for NeRF Extreme, but no explicit validation split is mentioned, and no splits are given for Phototourism F3; reproducing the experiment requires all splits.)
Hardware Specification | Yes | "We train every scene for 70K iterations using 4 NVIDIA A100 GPUs."
Software Dependencies | No | "Our framework is based on the implementation of RegNeRF (Niemeyer et al. 2022). For FIDNet, the official code and model of IIDWW (Li and Snavely 2018), trained with the BigTime dataset, are used without fine-tuning." (No specific versions of programming languages, libraries, or solvers are provided.)
Experiment Setup | Yes | "An image size of 300 × 400 is used for training, with S_patch = 32 × 32. We train every scene for 70K iterations using 4 NVIDIA A100 GPUs." A minimal configuration sketch reflecting these reported values follows the table.
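As a minimal illustration of the reported setup, and not the authors' code (none is released), the Python sketch below encodes the stated NeRF Extreme split (40 images per scene, 30 train / 10 test) and the quoted training settings (300 × 400 images, 32 × 32 patches, 70K iterations, 4 A100 GPUs). All class and function names here are hypothetical assumptions introduced purely for illustration.

```python
# Hypothetical sketch of the reported data split and training configuration.
# These names do not come from the paper's (unreleased) code; they are assumptions.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class SceneSplit:
    """Per-scene split as reported for NeRF Extreme: 40 images, 30 train / 10 test."""
    scene: str
    train: List[str] = field(default_factory=list)  # 30 image paths
    test: List[str] = field(default_factory=list)   # 10 image paths


def make_split(scene: str, image_paths: List[str]) -> SceneSplit:
    # The paper reports 30/10 per scene but does not say how images were assigned;
    # a deterministic slice is used here purely for illustration.
    assert len(image_paths) == 40, "NeRF Extreme scenes are reported to have 40 images"
    return SceneSplit(scene=scene, train=image_paths[:30], test=image_paths[30:])


@dataclass
class TrainConfig:
    """Training settings quoted in the paper; everything else is unspecified."""
    image_hw: Tuple[int, int] = (300, 400)  # training image size
    patch_hw: Tuple[int, int] = (32, 32)    # S_patch
    iterations: int = 70_000                # 70K iterations per scene
    num_gpus: int = 4                       # NVIDIA A100


if __name__ == "__main__":
    paths = [f"scene01/img_{i:02d}.jpg" for i in range(40)]  # placeholder paths
    split = make_split("scene01", paths)
    cfg = TrainConfig()
    print(len(split.train), len(split.test), cfg.iterations)
```

Because no validation split is reported, the sketch deliberately omits one; a reproduction would need to decide how (or whether) to hold out validation images.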