Rendering-Aware HDR Environment Map Prediction from a Single Image
Authors: Jun-Peng Xu, Chenyu Zuo, Fang-Lue Zhang, Miao Wang
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Quantitative and qualitative experiments demonstrate that our approach achieves lower relighting errors for virtual object insertion and is preferred by users compared to state-of-the-art methods. |
| Researcher Affiliation | Academia | Jun-Peng Xu1, Chen-Yu Zuo2, Fang-Lue Zhang3, Miao Wang1,4* 1State Key Laboratory of Virtual Reality Technology and Systems, SCSE, Beihang University, China 2Xidian University, China 3Victoria University of Wellington, New Zealand 4Peng Cheng Laboratory, China |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about open-sourcing the code or a link to a code repository. |
| Open Datasets | Yes | The Laval Indoor HDR Dataset (Gardner et al. 2017) is used for evaluation. |
| Dataset Splits | Yes | We crop eight images from each panorama as input, and use the same warping operation in (Gardner et al. 2017) to get ground truth environment map, thus producing 19,557 training pairs. We randomly select 300 images as the test set, and the rest are used for training. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for ancillary software or libraries used in the experiments. |
| Experiment Setup | No | The paper describes the model architecture and losses but does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed training configurations. |
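
The "Dataset Splits" row describes a concrete data-preparation procedure: eight limited-FOV crops are taken from each Laval Indoor HDR panorama, each crop is paired with a ground-truth environment map obtained via the warping operation of Gardner et al. (2017), and 300 items are randomly held out for testing. The paper does not release code, so the following is only a minimal sketch of that split logic. All function and parameter names (`split_panoramas`, `build_pairs`, `crop_views`, `warp_to_envmap`) are hypothetical, the crop/warp implementations are not reproduced here, and the sketch assumes the 300 held-out items are panoramas, which the quoted text does not fully specify.

```python
import random

# Hypothetical sketch of the train/test split described above.
# `crop_views` and `warp_to_envmap` stand in for the cropping and
# Gardner et al. (2017) warping steps, which are not implemented here.

def split_panoramas(panorama_ids, num_test=300, seed=0):
    """Randomly hold out `num_test` panoramas for testing; the rest train."""
    ids = list(panorama_ids)
    random.Random(seed).shuffle(ids)
    return ids[num_test:], ids[:num_test]  # (train_ids, test_ids)

def build_pairs(pano_ids, crop_views, warp_to_envmap, views_per_pano=8):
    """Crop eight input views per panorama and pair each with its
    warped ground-truth HDR environment map."""
    pairs = []
    for pid in pano_ids:
        crops = crop_views(pid, num_views=views_per_pano)  # 8 input crops
        for crop in crops:
            envmap = warp_to_envmap(pid, crop)  # per-crop ground truth
            pairs.append((crop, envmap))
    return pairs
```

With eight crops per panorama, this procedure yields on the order of eight pairs per training panorama, consistent in scale with the 19,557 training pairs the paper reports.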