DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer

Authors: Wenzheng Chen, Joey Litalien, Jun Gao, Zian Wang, Clément Fuji Tsang, Sameh Khamis, Or Litany, Sanja Fidler

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We experimentally demonstrate that our approach achieves superior material and lighting disentanglement on synthetic and real data compared to existing rasterization-based approaches and showcase several artistic applications including material editing and relighting.
Researcher Affiliation | Collaboration | NVIDIA, University of Toronto, Vector Institute, McGill University
Pseudocode | No | The paper describes its methods using text and mathematical equations but does not include structured pseudocode or algorithm blocks.
Open Source Code | No | The current implementation requires complex dependencies. We plan to release the code after refactoring.
Open Datasets | Yes | We chose 485 different car models from TurboSquid (https://turbosquid.com) to prepare data for metallic and glossy surfaces. We also collected 438 freely available high-dynamic range (HDR) environment maps from HDRI Haven (https://hdrihaven.com) to use as reference lighting... We obtain consent via agreement with TurboSquid, following their license at https://blog.turbosquid.com/turbosquid-3d-model-license. We follow the CC0 license at https://hdrihaven.com/p/license.php.
Dataset Splits | No | The paper mentions training on synthetic data and testing on real imagery but does not provide specific numerical splits (e.g., percentages or sample counts) for training, validation, and test sets within the main text.
Hardware Specification | No | The paper states that hardware specifications are provided in the Supplementary Material, but no specific details (e.g., GPU/CPU models) are given in the main text.
Software Dependencies | No | The paper does not provide specific software dependencies or library versions (e.g., PyTorch version, CUDA version) in the main text.
Experiment Setup | Yes | We set α_im = 20, α_msk = 5, α_per = 0.5, α_lap = 5, which we empirically found worked best. In particular, we predict the relative offset for all |M| = 642 vertices in a mesh and a 256 × 256 texture map, following the choices in [10]. For SG shading, we predict all parameters. While shape and texture are the same as MC shading, we adopt K = 32 for SG and predict two global parameters β and s for the specular BRDF.
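
The quoted hyperparameters can be collected into a small configuration sketch. This is only an illustrative summary, not the authors' released code: the identifiers (LOSS_WEIGHTS, MESH, SG_SHADING, total_loss) are hypothetical, and the weighted-sum form of total_loss assumes the usual role of such loss coefficients rather than anything stated beyond the quoted excerpt.

```python
# Hedged sketch of the reported experiment setup; all names are hypothetical,
# only the numeric values come from the paper's quoted text.

LOSS_WEIGHTS = {
    "image":      20.0,  # alpha_im
    "mask":        5.0,  # alpha_msk
    "perceptual":  0.5,  # alpha_per
    "laplacian":   5.0,  # alpha_lap
}

MESH = {
    "num_vertices": 642,               # relative offsets predicted per vertex
    "texture_resolution": (256, 256),  # predicted texture map size
}

SG_SHADING = {
    "num_lobes": 32,                     # K spherical Gaussians
    "specular_globals": ("beta", "s"),   # two global specular BRDF parameters
}

def total_loss(losses, weights=LOSS_WEIGHTS):
    """Combine per-term losses as a weighted sum (assumed standard usage)."""
    return sum(weights[name] * value for name, value in losses.items())

# Example: total_loss({"image": 0.12, "mask": 0.03, "perceptual": 0.4, "laplacian": 0.01})
```

The sketch simply makes the quoted values explicit; how the loss terms themselves are computed is described in the paper and not reproduced here.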