Neural-PIL: Neural Pre-Integrated Lighting for Reflectance Decomposition
Authors: Mark Boss, Varun Jampani, Raphael Braun, Ce Liu, Jonathan T. Barron, Hendrik P. A. Lensch
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform an empirical analysis on synthetic datasets, along with qualitative and quantitative visual results on real-world datasets. We demonstrate that our decomposition network using our neural-PIL can estimate more accurate shape and material properties compared to prior art. |
| Researcher Affiliation | Collaboration | Mark Boss (University of Tübingen), Varun Jampani (Google Research), Raphael Braun (University of Tübingen), Ce Liu (Microsoft Azure AI), Jonathan T. Barron (Google Research), Hendrik P. A. Lensch (University of Tübingen) |
| Pseudocode | No | The paper includes architectural diagrams (Figure 3, Figure 4) but no structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Project page: https://markboss.me/publication/2021-neural-pil/ |
| Open Datasets | Yes | To enable the comparisons with NeRD [11], we use the publicly released dataset used in [11] which provides 3 synthetic (Chair, Globe, Car) and 4 real-world scenes. In addition, we also present view synthesis results on datasets (Ship, Chair, Lego) used in NeRF [44]. |
| Dataset Splits | No | The paper mentions using datasets for training and evaluation but does not specify explicit train/validation/test splits by percentages or counts. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU/CPU models, memory specifications). |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies or libraries used in the experiments. |
| Experiment Setup | No | The paper mentions training details can be found in the supplement but does not provide specific hyperparameter values (e.g., learning rate, batch size, epochs) or system-level training settings in the main text. |
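Since the table notes that the paper provides no pseudocode, the following is a minimal, illustrative sketch of the idea named in the title: a network that returns pre-integrated (pre-filtered) illumination in a single query rather than integrating over many light samples at render time. All names, layer sizes, and inputs (`direction`, `roughness`, `illum_code`) are assumptions made here for illustration and are not taken from the authors' released implementation.

```python
import torch
import torch.nn as nn


class PreIntegratedLightingSketch(nn.Module):
    """Hypothetical sketch of a pre-integrated lighting network.

    The high-level idea is to replace a per-sample illumination integral
    with one network evaluation. Architecture and inputs here are
    illustrative assumptions, not the authors' design.
    """

    def __init__(self, latent_dim: int = 128, hidden: int = 256):
        super().__init__()
        # Assumed inputs: 3-D reflection direction, scalar roughness, and a
        # latent code describing the environment illumination.
        self.mlp = nn.Sequential(
            nn.Linear(3 + 1 + latent_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 3),  # pre-filtered RGB radiance
        )

    def forward(self, direction, roughness, illum_code):
        x = torch.cat([direction, roughness, illum_code], dim=-1)
        return torch.relu(self.mlp(x))  # clamp to non-negative radiance


# Usage: batched lighting query for a handful of surface points.
net = PreIntegratedLightingSketch()
dirs = torch.randn(8, 3)
dirs = dirs / dirs.norm(dim=-1, keepdim=True)  # unit reflection directions
rough = torch.rand(8, 1)                       # per-point roughness
z = torch.randn(8, 128)                        # latent illumination code
rgb = net(dirs, rough, z)                      # (8, 3) pre-filtered radiance
```

The batched query at the end stands in for the per-surface-point lighting lookup a decomposition network would perform; the actual architecture, conditioning, and training details should be taken from the code linked on the project page above.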