End-to-End Differentiable Learning to HDR Image Synthesis for Multi-exposure Images

Authors: Jung Hee Kim, Siyeong Lee, Suk-Ju Kang

AAAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results show that the proposed network outperforms the state-of-the-art quantitative and qualitative results in terms of both the exposure transfer tasks and the whole HDRI process.
Researcher Affiliation | Collaboration | Jung Hee Kim (1), Siyeong Lee (2), Suk-Ju Kang (1); (1) Department of Electronic Engineering, Sogang University, Seoul, Korea; (2) NAVER LABS, Bundang, Korea
Pseudocode | No | The paper describes methods in text and uses figures (e.g., Fig. 1, Fig. 2, Fig. 3) to illustrate structures, but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code, nor does it provide a link to a code repository for the described methodology.
Open Datasets | Yes | We trained our model on the VDS dataset (Lee, An, and Kang 2018a), where the training set has 48 multi-exposure stacks, and the testing set has 48 stacks. In addition, we evaluated our model on the stacks of the HDR-Eye dataset (Lee, An, and Kang 2018a; Liu et al. 2020; Nemoto et al. 2015), which is widely used for the performance evaluation. To perform evaluations on more real image dataset, we conducted experiments with the RAISE dataset (Dang-Nguyen et al. 2015).
Dataset Splits | Yes | We trained our model on the VDS dataset (Lee, An, and Kang 2018a), where the training set has 48 multi-exposure stacks, and the testing set has 48 stacks. (A data-loading and split sketch appears below the table.)
Hardware Specification | Yes | Our model was trained on two GTX Titan X GPUs for four days to reach 80k iterations.
Software Dependencies | No | The paper mentions software components such as the Adam optimizer, Conv GRU, U-Net, the Swish activation, the Canny edge detector, the VGG-19 network, and the MATLAB HDR Toolbox, but does not specify version numbers for any of these dependencies. (A minimal Swish definition appears below the table.)
Experiment Setup | Yes | For training the recurrent-up and recurrent-down networks, we chose the gradient-centralized Adam optimizer (Yong et al. 2020) with a learning rate of 1e-4. The momentum parameters β1 and β2 were set to 0.5 and 0.999, respectively. We trained our model with a batch size of 1. Our model was trained on two GTX Titan X GPUs for four days to reach 80k iterations. We set the hyperparameters λ1 = λ3 = λ4 = λ5 = 1 and λ2 = λ6 = 0.1 in our experiments to stably train the networks.
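
As a concrete reading of the Experiment Setup row, the PyTorch sketch below wires the reported hyperparameters together. It is not the authors' code: the two Conv2d modules are hypothetical stand-ins for the recurrent-up and recurrent-down networks, and plain Adam replaces the gradient-centralized Adam of Yong et al. (2020), which stock PyTorch does not provide.

```python
import torch

# Hypothetical stand-ins for the paper's recurrent-up and recurrent-down
# networks (Conv GRU / U-Net with Swish activations are not reproduced here).
recurrent_up = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)
recurrent_down = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)
params = list(recurrent_up.parameters()) + list(recurrent_down.parameters())

# Reported settings: learning rate 1e-4, (beta1, beta2) = (0.5, 0.999).
# Plain Adam is used as a stand-in for the gradient-centralized variant.
optimizer = torch.optim.Adam(params, lr=1e-4, betas=(0.5, 0.999))

# Reported loss weights: lambda1 = lambda3 = lambda4 = lambda5 = 1.0,
# lambda2 = lambda6 = 0.1. The composition of the six loss terms is assumed.
loss_weights = (1.0, 0.1, 1.0, 1.0, 1.0, 0.1)

def total_loss(loss_terms):
    # Weighted sum of the six individual loss terms.
    return sum(w * l for w, l in zip(loss_weights, loss_terms))
```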
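
For the Dataset Splits row, the following is a minimal sketch of the reported 48/48 VDS split. The local path, one-directory-per-stack layout, and PNG file format are assumptions for illustration, not details taken from the paper.

```python
from pathlib import Path

import torch
from torchvision.io import read_image

VDS_ROOT = Path("VDS")  # hypothetical local path; the actual layout may differ

# Assume one sub-directory per scene, each holding that stack's
# differently exposed LDR images.
stacks = sorted(p for p in VDS_ROOT.iterdir() if p.is_dir())

# Reported split: 48 multi-exposure stacks for training, 48 for testing.
train_stacks, test_stacks = stacks[:48], stacks[48:96]

def load_stack(stack_dir: Path) -> torch.Tensor:
    """Load all exposures of one stack as an (N, 3, H, W) float tensor in [0, 1]."""
    files = sorted(stack_dir.glob("*.png"))  # file format is an assumption
    return torch.stack([read_image(str(f)).float() / 255.0 for f in files])
```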
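
For the Software Dependencies row, the Swish activation mentioned there is simple enough to state directly; the definition below is the standard one and is not taken from the paper's code.

```python
import torch

def swish(x: torch.Tensor) -> torch.Tensor:
    """Swish activation: x * sigmoid(x)."""
    return x * torch.sigmoid(x)

# Recent PyTorch releases expose the same function as torch.nn.SiLU
# (functional form: torch.nn.functional.silu).
```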