Improving Dynamic HDR Imaging with Fusion Transformer

Authors: Rufeng Chen, Bolun Zheng, Hua Zhang, Quan Chen, Chenggang Yan, Gregory Slabaugh, Shanxin Yuan

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Through experiments and ablation studies, we demonstrate that our model outperforms the state-of-the-art by large margins on several popular public datasets."
Researcher Affiliation | Academia | "1 Hangzhou Dianzi University, Xiasha No.2 Street, Hangzhou, 310018, Zhejiang, China; 2 Queen Mary University of London, London, UK. {chenrufeng, blzheng, zhangh, chenquan, cgyan}@hdu.edu.cn, {g.slabaugh, shanxin.yuan}@qmul.ac.uk"
Pseudocode | No | The paper does not include pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | Source code for HFT is available at https://github.com/Chenrf1121/HFT
Open Datasets | Yes | Kalantari's dataset is used as the base training dataset, and all models are trained on it (https://cseweb.ucsd.edu/~viscomp/projects/SIG17HDR/). In addition, supplementary experiments use Prabhakar's dataset (Prabhakar et al. 2019) to demonstrate the generalization of the proposed model (https://val.cds.iisc.in/HDR/ICCP19/, MIT License).
Dataset Splits | Yes | LDR images and the corresponding ground-truth images were split into 128×128 patches for training, while validation and test images were kept at full resolution. During training, performance on the validation set was measured with PSNR-µ (see the PSNR-µ sketch below the table).
Hardware Specification | Yes | "We implemented our HFT using PyTorch on a single NVIDIA RTX 3090 GPU."
Software Dependencies | No | The paper mentions PyTorch but does not specify its version number or any other software dependencies with version numbers.
Experiment Setup | Yes | The Conv layers use 64-channel, 3×3 convolution kernels. Training uses the Adam optimizer with an initial learning rate of 1e-4, and LDR images with their corresponding ground-truth images are split into 128×128 patches. If model performance does not improve after five epochs, the learning rate is halved; training ends once the learning rate falls below 1e-6 (see the training-schedule sketch below the table).
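
The Dataset Splits row tracks validation quality with PSNR-µ. As a concrete illustration, here is a minimal sketch of that metric, assuming the standard µ-law tonemapping with µ = 5000 commonly used for Kalantari-style HDR evaluation; the exact implementation in the HFT repository may differ.

```python
import torch

MU = 5000.0  # mu-law compression parameter; assumed value, common in HDR deghosting papers


def mu_tonemap(hdr: torch.Tensor) -> torch.Tensor:
    """Compress linear HDR values in [0, 1] with the mu-law curve."""
    return torch.log(1.0 + MU * hdr) / torch.log(torch.tensor(1.0 + MU))


def psnr_mu(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """PSNR computed between mu-law tonemapped prediction and ground truth."""
    mse = torch.mean((mu_tonemap(pred) - mu_tonemap(target)) ** 2)
    return 10.0 * torch.log10(1.0 / mse)
```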
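
The Experiment Setup row describes Adam with an initial learning rate of 1e-4, halving after a five-epoch plateau, and stopping once the learning rate drops below 1e-6. The sketch below shows one way to express that schedule, assuming PyTorch's ReduceLROnPlateau scheduler and hypothetical placeholders for the network and validation metric; the paper does not state which scheduler class was actually used.

```python
import torch

# `model` and `val_score` are hypothetical stand-ins; the HFT repository
# defines its own network, data pipeline, and PSNR-mu validation metric.
model = torch.nn.Conv2d(6, 3, kernel_size=3, padding=1)  # stand-in for the HFT network

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Halve the learning rate when the validation score has not improved for 5 epochs.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="max", factor=0.5, patience=5)

epoch = 0
while optimizer.param_groups[0]["lr"] >= 1e-6:  # training ends once LR < 1e-6
    # ... one training epoch over 128x128 patches would go here ...
    val_score = 0.0  # placeholder for PSNR-mu on full-resolution validation images
    scheduler.step(val_score)
    epoch += 1
```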