Self-Supervised High Dynamic Range Imaging with Multi-Exposure Images in Dynamic Scenes

Authors: Zhilu Zhang, Haoyu Wang, Shuai Liu, Xiaotao Wang, Lei Lei, Wangmeng Zuo

ICLR 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on real-world images demonstrate our SelfHDR achieves superior results against the state-of-the-art self-supervised methods, and comparable performance to supervised ones.
Researcher Affiliation | Academia | Harbin Institute of Technology, Harbin, China
Pseudocode | No | The paper does not include any pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | Codes are available at https://github.com/cszhilu1998/SelfHDR.
Open Datasets | Yes | Experiments are mainly conducted on the Kalantari et al. dataset (Kalantari et al., 2017), which is extensively utilized in previous works.
Dataset Splits | No | The dataset consists of 74 samples for training and 15 for testing. No explicit mention of a validation split or validation set size is provided.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions using the Adam optimizer and optical flow calculation by Liu et al. (Liu et al., 2009), but does not specify version numbers for any software libraries or dependencies (e.g., PyTorch, TensorFlow, Python version).
Experiment Setup | Yes | The training patches of size 128×128 are randomly cropped from the original images. The batch size is set to 16. Adam (Kingma & Ba, 2015) with β1 = 0.9 and β2 = 0.999 is taken to optimize models for 150 epochs. The learning rate is initially set to 1×10⁻⁴ for CNN-based networks and 2×10⁻⁴ for Transformer-based ones, and reduces by half every 50 epochs.
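The quoted setup maps onto a standard optimizer-and-scheduler configuration. Below is a minimal, runnable PyTorch sketch of that configuration; the two-layer placeholder network, the random tensors standing in for cropped patches, and the L1 loss are assumptions made only for illustration, while the patch size, batch size, Adam betas, learning-rate values, epoch count, and halving schedule follow the description above.

```python
# Minimal sketch of the reported training configuration (assumptions noted below).
import torch
import torch.nn as nn

# Placeholder network standing in for a CNN-based HDR reconstruction model (assumption).
model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(64, 3, kernel_size=3, padding=1),
)

# Adam with beta1 = 0.9, beta2 = 0.999; initial lr 1e-4 for CNN-based networks
# (the paper reports 2e-4 for Transformer-based ones).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))

# Halve the learning rate every 50 epochs over 150 epochs of training in total.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)

for epoch in range(150):
    # Stand-in batch: 16 randomly cropped 128x128 patches (random tensors here;
    # the real pipeline crops them from the multi-exposure training images).
    inputs = torch.rand(16, 3, 128, 128)
    targets = torch.rand(16, 3, 128, 128)  # placeholder supervision targets

    optimizer.zero_grad()
    loss = nn.functional.l1_loss(model(inputs), targets)  # placeholder loss, not the paper's
    loss.backward()
    optimizer.step()
    scheduler.step()  # one placeholder batch per epoch in this sketch
```

The sketch only illustrates the hyperparameters quoted in the table; the actual SelfHDR training objective and data pipeline are defined in the paper and the released code.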