IllumiNeRF: 3D Relighting Without Inverse Rendering

Authors: Xiaoming Zhao, Pratul Srinivasan, Dor Verbin, Keunhong Park, Ricardo Martin-Brualla, Philipp Henzler

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate that this strategy is surprisingly competitive and achieves state-of-the-art results on multiple relighting benchmarks. Please see our project page at illuminerf.github.io. ... We evaluate our method on two datasets: TensoIR [23], a synthetic benchmark, and Stanford-ORB [27], a real-world benchmark.
Researcher Affiliation | Collaboration | Xiaoming Zhao (1,3), Pratul P. Srinivasan (2), Dor Verbin (2), Keunhong Park (1), Ricardo Martin-Brualla (1), Philipp Henzler (1); affiliations: 1 Google Research, 2 Google DeepMind, 3 University of Illinois Urbana-Champaign
Pseudocode | No | The paper describes its method in prose and through diagrams, but does not include any formal pseudocode blocks or algorithms.
Open Source Code | No | We have not made the code or model weights available online; however, the Objaverse dataset is publicly available, as are the datasets required for the Stanford-ORB and TensoIR benchmarks.
Open Datasets | Yes | Relighting Dataset: We render objects from Objaverse [13] under varying poses and illuminations. ... We use Objaverse [13] as the synthetic dataset.
Dataset Splits | No | The paper describes training and evaluation splits for the TensoIR and Stanford-ORB datasets but does not explicitly mention a separate "validation" split with specific percentages or counts.
Hardware Specification | Yes | Ours: 29.709 / 0.947 / 0.072; 0.75 h + 1 h + 0.75 h; 16× A100 40GB + a TPUv5 ... Ours (single GPU): 29.245 / 0.946 / 0.073; 2 h + 1 h + 2 h; a single A100 40GB + a TPUv5
Software Dependencies | No | The paper mentions software like JAX, Stable Diffusion, ControlNet, CLIP, Blender Cycles, Kubric, and Adam, but does not provide specific version numbers for any of these dependencies.
Experiment Setup | Yes | We decay our learning rate logarithmically from 5×10⁻³ to 5×10⁻⁴ over 25k training iterations with cosine-scheduled warmup in the first 500 steps. ... We fine-tune the base model for 150k steps using a batch size of 512 examples and a learning rate of 10⁻⁴, which is linearly warmed up from 0 over the first 1k steps.
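
The quoted setup describes two optimization schedules: a per-scene NeRF optimization (25k iterations, cosine-scheduled warmup for 500 steps, logarithmic decay from 5×10⁻³ to 5×10⁻⁴) and a fine-tuning run for the base model (150k steps, batch size 512, learning rate 10⁻⁴ with a 1k-step linear warmup). The sketch below is a rough reconstruction of those two schedules using optax; the paper names JAX and Adam but not optax, and the exact warmup and decay shapes are assumptions, not the authors' implementation.

```python
# A minimal sketch, assuming optax on top of JAX, of the two learning-rate
# schedules quoted in the Experiment Setup row. Shapes are reconstructed
# from the prose and may differ from the authors' code.
import jax.numpy as jnp
import optax

WARMUP_STEPS = 500      # cosine-scheduled warmup length (from the quote)
TOTAL_STEPS = 25_000    # per-scene optimization length (from the quote)

def nerf_lr(step):
    """Cosine warmup for 500 steps, then log-space decay from 5e-3 to 5e-4."""
    warm = 0.5 * (1.0 - jnp.cos(jnp.pi * jnp.minimum(step, WARMUP_STEPS) / WARMUP_STEPS))
    t = jnp.clip((step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS), 0.0, 1.0)
    decay = jnp.exp((1.0 - t) * jnp.log(5e-3) + t * jnp.log(5e-4))
    return warm * decay

nerf_optimizer = optax.adam(learning_rate=nerf_lr)

# Fine-tuning of the base model: 150k steps at batch size 512, learning rate
# linearly warmed up from 0 to 1e-4 over the first 1k steps and (assumed)
# held constant afterwards.
finetune_lr = optax.linear_schedule(init_value=0.0, end_value=1e-4,
                                    transition_steps=1_000)
finetune_optimizer = optax.adam(learning_rate=finetune_lr)
```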