RobIR: Robust Inverse Rendering for High-Illumination Scenes

Authors: Ziyi Yang, Yanzhen Chen, Xinyu Gao, Yazhen Yuan, Yu Wu, Xiaowei Zhou, Xiaogang Jin

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we present the experimental evaluation of our methods. To assess the effectiveness of our approach, we collect synthetic and real-world datasets from NeRF and NeuS without any post-processing. In addition, we use Blender to render our own datasets to further demonstrate the superiority of our methods in high-illumination scenes. ... Tab. 1 shows the accuracy of the albedo, roughness, relighting, and environment map averaged over synthetic scenes. ... We can observe that our method achieves the best results in all inverse rendering tasks.
Researcher Affiliation | Collaboration | Ziyi Yang1, Yanzhen Chen1, Xinyu Gao1, Yazhen Yuan2, Yu Wu2, Xiaowei Zhou1, Xiaogang Jin1 — 1State Key Lab of CAD&CG, Zhejiang University; 2Tencent
Pseudocode | No | The paper describes its methods in prose and with diagrams (e.g., Figure 1), but does not include explicit pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/ingra14m/RobIR.
Open Datasets | Yes | To assess the effectiveness of our approach, we collect synthetic and real-world datasets from NeRF and NeuS without any post-processing. In addition, we use Blender to render our own datasets to further demonstrate the superiority of our methods in high-illumination scenes. It should be noted that unlike previous methods [17, 55] that used a hotdog scene with reduced illumination, we use the original hotdog from NeRF [32] without reduced illumination.
Dataset Splits | No | The paper mentions 'batch size of 1024, with 200k iterations for the NeuS training' and discusses training and test results on synthetic and real-world datasets, but it does not specify explicit training/validation/test splits (e.g., percentages or counts) or reference predefined splits with citations for reproducibility.
Hardware Specification | Yes | All tests were conducted on a single Tesla V100 GPU with 32GB memory.
Software Dependencies | No | The paper states 'The model was implemented in PyTorch and optimized with the Adam optimizer', but it does not provide specific version numbers for PyTorch or any other software dependencies.
Experiment Setup | Yes | Our model hyperparameters consisted of a batch size of 1024, with 200k iterations for the NeuS training. The model was implemented in PyTorch and optimized with the Adam optimizer at a learning rate of 5e-4.
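The setup above reports only "Adam at a learning rate of 5e-4" without further optimizer hyperparameters. As a minimal, dependency-free sketch of what that implies, the scalar Adam update below uses the paper's stated learning rate and assumes PyTorch's default betas and epsilon (an assumption, since the paper does not state them):

```python
import math

def adam_step(theta, grad, m, v, t,
              lr=5e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a single scalar parameter.

    lr=5e-4 is the paper's reported learning rate; beta1, beta2, and eps
    are assumed PyTorch defaults, not values stated in the paper.
    """
    # Biased first and second moment estimates of the gradient.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    # Bias correction for the zero-initialized moments (t starts at 1).
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Parameter update.
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Example: first step from theta=1.0 with gradient 0.5 moves the
# parameter by roughly lr (Adam's bias-corrected step size).
theta, m, v = adam_step(1.0, 0.5, m=0.0, v=0.0, t=1)
```

In a real training loop this update would run once per 1024-ray batch for the reported 200k iterations; the released repository should be consulted for the actual optimizer configuration.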