EEMEFN: Low-Light Image Enhancement via Edge-Enhanced Multi-Exposure Fusion Network

Authors: Minfeng Zhu, Pingbo Pan, Wei Chen, Yi Yang (pp. 13106-13113)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on the See-in-the-Dark dataset indicate that our EEMEFN approach achieves state-of-the-art performance."
Researcher Affiliation | Collaboration | Minfeng Zhu (1,2), Pingbo Pan (2,3), Wei Chen (1), Yi Yang (2); 1: State Key Lab of CAD&CG, Zhejiang University; 2: The ReLER Lab, University of Technology Sydney; 3: Baidu Research
Pseudocode | No | The paper describes the model architecture and processes but does not include any explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository.
Open Datasets | Yes | "We evaluate the EEMEFN model quantitatively and qualitatively on the See-in-the-Dark dataset (Chen et al. 2018). The See-in-the-Dark dataset consists of two image sets: Sony set and Fuji set."
Dataset Splits | Yes | "We evaluate the EEMEFN model quantitatively and qualitatively on the See-in-the-Dark dataset (Chen et al. 2018). Following Chen et al. (2018), we preprocess all raw images by subtracting the black level." (A preprocessing sketch follows the table.)
Hardware Specification | No | The paper does not specify the exact hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | "We implemented our method based on the Tensorflow framework and the Paddlepaddle framework." The paper names these frameworks but does not provide version numbers for them.
Experiment Setup | Yes | "We train EEMEFN for 5000 epochs using ADAM (Kinga and Adam 2015) optimizer with an initial learning rate of 10^-4, which is decreased to 5 x 10^-5 after 2500 epochs and 10^-5 after 3500 epochs." (A training-schedule sketch follows the table.)
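
The black-level subtraction quoted under Dataset Splits can be illustrated with a minimal sketch. It assumes the rawpy package for reading the raw files and the black/white levels commonly used for the See-in-the-Dark Sony set (512 and 16383); the paper only states that the black level is subtracted, so the exact values and the [0, 1] scaling are assumptions, not the authors' released code.

```python
import numpy as np
import rawpy  # assumed raw-file reader; not named in the paper


def preprocess_raw(path, black_level=512, white_level=16383):
    """Subtract the camera black level and scale the raw data to [0, 1].

    The 512 / 16383 levels are typical for the Sony sensor used in the
    See-in-the-Dark dataset; they are assumptions, not values quoted
    from the EEMEFN paper.
    """
    raw = rawpy.imread(path)
    data = raw.raw_image_visible.astype(np.float32)
    return np.maximum(data - black_level, 0) / (white_level - black_level)
```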
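
The Experiment Setup row translates directly into a piecewise learning-rate schedule. The sketch below mirrors the quoted numbers (5000 epochs, Adam, 10^-4 dropping to 5 x 10^-5 after epoch 2500 and 10^-5 after epoch 3500) using the TensorFlow Keras API. The model and data pipeline are placeholders, and the paper does not say that Keras callbacks were used, so treat this as one plausible realization rather than the authors' implementation.

```python
import tensorflow as tf


def lr_for_epoch(epoch, current_lr=None):
    """Schedule quoted in the paper: 1e-4, then 5e-5 after epoch 2500,
    then 1e-5 after epoch 3500."""
    if epoch < 2500:
        return 1e-4
    if epoch < 3500:
        return 5e-5
    return 1e-5


optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)
schedule = tf.keras.callbacks.LearningRateScheduler(lr_for_epoch)

# Hypothetical usage; `model` and `train_ds` stand in for the EEMEFN
# network and the See-in-the-Dark training pipeline, which are not
# released with the paper.
# model.compile(optimizer=optimizer, loss=...)
# model.fit(train_ds, epochs=5000, callbacks=[schedule])
```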