Neural Relightable Participating Media Rendering
Authors: Quan Zheng, Gurprit Singh, Hans-Peter Seidel
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments on multiple scenes show that our approach achieves superior visual quality and numerical performance compared to state-of-the-art methods, and it can generalize to deal with solid objects with opaque surfaces as well. |
| Researcher Affiliation | Academia | (1) Max Planck Institute for Informatics, 66123 Saarbrücken, Germany (2) Institute of Software, Chinese Academy of Sciences, 100190 Beijing, China |
| Pseudocode | No | The paper describes the method using prose and diagrams but does not contain a formal pseudocode block or algorithm listing. |
| Open Source Code | No | The paper does not explicitly state that source code for the described methodology is available, nor does it provide a link to a code repository. |
| Open Datasets | No | We produce datasets from seven synthetic participating media scenes... No concrete access information (link, DOI, repository, or specific citation for public availability) is provided for these synthetic datasets. |
| Dataset Splits | Yes | Each dataset contains 180 images, from which we use 170 images for training and the remaining for validation. In addition, we prepare a test set with 30 images for each scene to test the trained models. |
| Hardware Specification | Yes | Our training with 200K iterations on an Nvidia Quadro RTX 8000 GPU takes one day. |
| Software Dependencies | No | The paper mentions using Adam optimizer and Mitsuba renderer but does not provide specific version numbers for these or other key software components or libraries. |
| Experiment Setup | Yes | We set the maximum positional encoding frequency to 2^8 for coordinates p, 2^1 for directions d, and 2^2 for the 3D location of the point light. We train all neural networks together to minimize the loss (Eq. 9). We use the Adam [52] optimizer with its default hyperparameters and schedule the learning rate to decay from 1×10^-4 to 1×10^-5 over 200K iterations. For each iteration, we trace a batch of 1200 primary rays. Note we stop the gradients from the visibility loss to the property network and the feature network so that they do not compromise the learning to match the visibility network. |
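For reference, the two quantitative details in the setup row can be sketched in plain Python. This is an illustrative reconstruction, not the authors' code: the function names are ours, and the exponential shape of the decay is an assumption (the paper states only the endpoints 1×10^-4 and 1×10^-5 over 200K iterations).

```python
import math

def positional_encoding(x, max_freq_exp):
    """NeRF-style positional encoding of a scalar coordinate x.

    Uses frequencies 2^0 .. 2^max_freq_exp, so max_freq_exp = 8
    matches a maximum frequency of 2^8 as quoted for coordinates p.
    Returns [sin(2^0 x), cos(2^0 x), ..., sin(2^L x), cos(2^L x)].
    """
    feats = []
    for k in range(max_freq_exp + 1):
        freq = 2.0 ** k
        feats.append(math.sin(freq * x))
        feats.append(math.cos(freq * x))
    return feats

def learning_rate(step, total_steps=200_000, lr_start=1e-4, lr_end=1e-5):
    """Learning rate decayed from lr_start to lr_end over total_steps.

    The exponential interpolation below is an assumption; the paper
    specifies only the start/end values and the iteration count.
    """
    t = min(step, total_steps) / total_steps
    return lr_start * (lr_end / lr_start) ** t
```

With max_freq_exp = 8 the encoding of one scalar yields 18 features (9 frequencies, sine and cosine each); the schedule returns 1e-4 at step 0 and 1e-5 at step 200,000.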