Unsupervised Multi-Exposure Image Fusion Breaking Exposure Limits via Contrastive Learning
Authors: Han Xu, Haochen Liang, Jiayi Ma
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Qualitative, quantitative, and ablation experiments validate the superiority and generalization of MEF-CL. Our code is publicly available at https://github.com/hanna-xu/MEF-CL. |
| Researcher Affiliation | Academia | Electronic Information School, Wuhan University, Wuhan 430072, China |
| Pseudocode | No | The paper describes network architectures and processes but does not include a clearly labeled pseudocode block or algorithm. |
| Open Source Code | Yes | Our code is publicly available at https://github.com/hanna-xu/MEF-CL. |
| Open Datasets | Yes | We conduct experiments on the SICE dataset (Cai, Gu, and Zhang 2018) and perform the verification on different scenes, including indoor and outdoor scenes. Dataset: https://github.com/csjcai/SICE |
| Dataset Splits | Yes | We randomly selected 479 image sequences as the training set; the remaining 80 image sequences serve as the test set. |
| Hardware Specification | Yes | The experiments are performed on an NVIDIA GeForce GTX Titan V GPU. Traditional methods are tested on a laptop with a 3.2 GHz AMD Ryzen 7 5800H CPU. |
| Software Dependencies | No | The paper mentions TensorFlow but does not specify its version number or other software dependencies with versions. |
| Experiment Setup | Yes | The hyper-parameters are set as: λ1 = 10, λ2 = 20, τ = 0.01. The batch size is set to 20, the training epoch is 2, and the learning rate is 0.0001. We use the RMSProp optimizer for optimization. |
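The temperature τ = 0.01 in the setup above suggests an InfoNCE-style contrastive objective. As a minimal illustrative sketch (not the paper's actual MEF-CL loss, whose feature extractor and positive/negative construction are defined in the paper and code), the reported hyper-parameters and a generic temperature-scaled contrastive term could be organized as follows; `info_nce` and `HPARAMS` are hypothetical names for this example:

```python
import numpy as np

# Hyper-parameters as reported in the paper's experiment setup.
HPARAMS = {
    "lambda1": 10.0,       # weight of first loss term
    "lambda2": 20.0,       # weight of second loss term
    "tau": 0.01,           # contrastive temperature
    "batch_size": 20,
    "epochs": 2,
    "learning_rate": 1e-4, # used with the RMSProp optimizer
}

def info_nce(anchor, positive, negatives, tau=HPARAMS["tau"]):
    """Generic InfoNCE-style contrastive loss (illustrative only).

    Pulls `anchor` toward `positive` and pushes it away from each
    vector in `negatives`, with similarities scaled by temperature tau.
    """
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Positive similarity first, then all negative similarities.
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()  # subtract max for numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])  # cross-entropy on the positive slot
```

A small temperature such as 0.01 sharpens the softmax, so even modest similarity gaps between the positive and the negatives drive the loss close to zero.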