EMEF: Ensemble Multi-Exposure Image Fusion
Authors: Renshuai Liu, Chengyang Li, Haitao Cao, Yinglin Zheng, Ming Zeng, Xuan Cheng
AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In the experiment, we construct EMEF from four state-of-the-art MEF methods and then make comparisons with the individuals and several other competitive methods on the latest released MEF benchmark dataset. The promising experimental results demonstrate that our ensemble framework can get the best of all worlds. |
| Researcher Affiliation | Academia | Renshuai Liu, Chengyang Li, Haitao Cao, Yinglin Zheng, Ming Zeng, Xuan Cheng* School of Informatics, Xiamen University, Xiamen 361005, China {medalwill, chengyanglee, chtao, zhengyinglin}@stu.xmu.edu.cn, {zengming, chengxuan}@xmu.edu.cn |
| Pseudocode | Yes | Algorithm 1: Search for the optimal style code c ∈ ℝⁿ. (A hedged sketch of such a search follows the table.) |
| Open Source Code | Yes | The code is available at https://github.com/medalwill/EMEF. |
| Open Datasets | Yes | We train EMEF with the SICE (Cai, Gu, and Zhang 2018) dataset and evaluate it in MEFB (Zhang 2021). |
| Dataset Splits | No | The paper specifies training data and evaluation datasets but does not explicitly provide details for a validation split, nor exact percentages or counts for distinct training, validation, and test partitions for its model training. |
| Hardware Specification | Yes | All experiments are conducted with two GeForce RTX 3090 GPUs. |
| Software Dependencies | No | The paper mentions implementing EMEF with "Pytorch" but does not specify a version number for PyTorch or any other software dependency. |
| Experiment Setup | Yes | In our experiments, λ is set to 0.002. The network architecture of the generator follows the image-to-image translation network (Isola et al. 2017). The image size of both input and output is 512 × 512. In the imitator network pre-training, the batch size is set to 1 and the network is trained with an Adam optimizer for 100 epochs. In the first 50 epochs, the learning rate is set to 2e-4, and then decays linearly for the rest. (A training-setup sketch follows the table.) |
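
The Pseudocode row points to Algorithm 1, which searches for an optimal style code c ∈ ℝⁿ. Below is a minimal sketch of such a search, assuming a simple random search over candidate codes ranked by a fusion-quality score; the names `imitator`, `score_fn`, `code_dim`, and `n_iters` are illustrative placeholders, not the paper's actual procedure.

```python
import torch

def search_style_code(imitator, over_img, under_img, score_fn,
                      code_dim=4, n_iters=100, device="cuda"):
    """Randomly sample candidate style codes and keep the one whose
    fused output maximizes a no-reference fusion quality score."""
    best_code, best_score = None, float("-inf")
    with torch.no_grad():
        for _ in range(n_iters):
            c = torch.rand(1, code_dim, device=device)    # candidate style code c in R^n
            fused = imitator(over_img, under_img, c)       # fuse the exposure pair under code c
            s = float(score_fn(fused, over_img, under_img))  # e.g. an MEF-SSIM-style metric
            if s > best_score:
                best_code, best_score = c, s
    return best_code, best_score
```

The quoted algorithm may use a different search strategy (e.g. an evolutionary search); the sketch only illustrates the idea of optimizing the style code at test time against a quality score.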
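The Experiment Setup row can likewise be read as a training configuration. The sketch below wires the quoted hyperparameters (batch size 1, Adam, 100 epochs, learning rate 2e-4 held for 50 epochs and then decayed linearly, λ = 0.002) into PyTorch; the placeholder `generator` and the Adam beta values are assumptions, not values stated in the paper.

```python
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import LambdaLR

# Placeholder generator; the paper says the generator follows the
# pix2pix image-to-image architecture (Isola et al. 2017), not rebuilt here.
generator = nn.Conv2d(3, 3, kernel_size=3, padding=1)

lam = 0.002                  # lambda from the Experiment Setup row
n_epochs, batch_size = 100, 1
base_lr = 2e-4               # learning rate for the first 50 epochs

# Adam optimizer; the beta values are an assumption (common pix2pix defaults).
optimizer = torch.optim.Adam(generator.parameters(), lr=base_lr, betas=(0.5, 0.999))

# Constant learning rate for 50 epochs, then linear decay to zero over the last 50.
scheduler = LambdaLR(
    optimizer,
    lr_lambda=lambda e: 1.0 if e < 50 else max(0.0, 1.0 - (e - 50) / 50.0),
)
```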