Hybrid-Supervised Dual-Search: Leveraging Automatic Learning for Loss-Free Multi-Exposure Image Fusion

Authors: Guanyao Wu, Hongming Fu, Jinyuan Liu, Long Ma, Xin Fan, Risheng Liu

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility
Variable | Result | LLM Response
Research Type | Experimental | "We realize the state-of-the-art performance in comparison to various competitive schemes, yielding a 10.61% and 4.38% improvement in Visual Information Fidelity (VIF) for general and no-reference scenarios, respectively, while providing results with high contrast, rich details and colors. The code is available at https://github.com/RollingPlain/HSDS_MEF." "We conducted experiments on the SICE dataset (Cai, Gu, and Zhang 2018), and randomly selected 452 pairs of images as the training set and 113 as the testing set, and selected another 31 images without reference (Ma et al. 2017) as part of the testing set."
Researcher Affiliation | Academia | Guanyao Wu, Hongming Fu, Jinyuan Liu, Long Ma, Xin Fan, Risheng Liu* (School of Software Technology, Dalian University of Technology); rollingplainko@gmail.com, {hm.fu, atlantis918}@hotmail.com, malone94319@gmail.com, {xin.fan, rsliu}@dlut.edu.cn
Pseudocode | Yes | "Algorithm 1: Dual Search for Structure and Loss Function"
Open Source Code | Yes | "The code is available at https://github.com/RollingPlain/HSDS_MEF."
Open Datasets | Yes | "We conducted experiments on the SICE dataset (Cai, Gu, and Zhang 2018), and randomly selected 452 pairs of images as the training set and 113 as the testing set, and selected another 31 images without reference (Ma et al. 2017) as part of the testing set. At the same time, we use COCO val2017 as a natural-light image set."
Dataset Splits | Yes | "We conducted experiments on the SICE dataset (Cai, Gu, and Zhang 2018), and randomly selected 452 pairs of images as the training set and 113 as the testing set... For searching, half of the training set is randomly selected as the verification set..."
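The split described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: the total of 565 pairs is inferred from 452 + 113, the integer IDs and the `ma2017_*` names are stand-ins, and the random seed is chosen here only for reproducibility.

```python
import random

# Stand-in IDs for the 565 SICE multi-exposure pairs (452 train + 113 test).
pairs = list(range(565))
random.seed(0)  # seed is our choice; the paper does not specify one
random.shuffle(pairs)
train, test = pairs[:452], pairs[452:]

# 31 extra no-reference images (Ma et al. 2017) join the testing set.
no_reference = [f"ma2017_{i}" for i in range(31)]

# For the search phase, half of the training set serves as the verification set.
verification = random.sample(train, len(train) // 2)
```

With these sizes the verification set contains 226 pairs, drawn from (and disjoint use of) the training split only.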
Hardware Specification | Yes | "The overall framework is implemented on Pytorch with an NVIDIA Tesla V100 GPU."
Software Dependencies | No | The paper mentions 'Pytorch' but does not specify a version number for it or for any other key software dependency.
Experiment Setup | Yes | "All images are randomly cropped to the size of 256×256 during the search and training process, and all parameters are updated using the Adam optimizer. For searching, half of the training set is randomly selected as the verification set, the batch size and epoch count are set to 2 and 10, and the learning rates of the network-structure weights, the loss-function weights and the network parameters are set to 2e-1, 3e-2 and 2e-4, respectively. For training over 60 epochs, the batch size and the learning rate are set to 10 and 1e-4."
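The hyperparameters quoted above can be collected into two plain configuration dictionaries, one per phase. This is a sketch for readability only; the key names are our own and are not taken from the authors' code.

```python
# Search phase: bilevel search over network structure and loss function.
SEARCH_CFG = {
    "crop_size": (256, 256),   # random crops during search
    "batch_size": 2,
    "epochs": 10,
    "optimizer": "Adam",
    "lr_arch_weights": 2e-1,   # network-structure weights
    "lr_loss_weights": 3e-2,   # loss-function weights
    "lr_net_params": 2e-4,     # network parameters
}

# Training phase: retrain the searched network from scratch.
TRAIN_CFG = {
    "crop_size": (256, 256),
    "batch_size": 10,
    "epochs": 60,
    "optimizer": "Adam",
    "lr": 1e-4,
}
```

Note the three distinct search-phase learning rates: the structure and loss weights are updated far more aggressively (2e-1 and 3e-2) than the network parameters themselves (2e-4).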