Low-Light Image Enhancement Network Based on Multi-Scale Feature Complementation

Authors: Yong Yang, Wenzhi Xu, Shuying Huang, Weiguo Wan

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on benchmark datasets show that the proposed method outperforms some state-of-the-art methods subjectively and objectively.
Researcher Affiliation | Academia | 1) School of Computer Science and Technology, Tiangong University, Tianjin, China; 2) School of Information Technology, Jiangxi University of Finance and Economics, Nanchang, China; 3) School of Software, Tiangong University, Tianjin, China; 4) School of Software and Internet of Things Engineering, Jiangxi University of Finance and Economics, Nanchang, China
Pseudocode | No | The paper contains no structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-formatted procedures).
Open Source Code | No | The paper neither states unambiguously that the authors are releasing the code for this work nor provides a direct link to a source-code repository.
Open Datasets | Yes | In experiments, the LOL (Wei et al. 2018) dataset is used to train the proposed network... To test the generalization ability of the network, the trained network is also tested on another dataset, VE-LOL-L (Liu et al. 2021).
Dataset Splits | No | The paper states that the LOL dataset consists of 500 image pairs, with 485 used for training and 15 for testing, but it gives no details of a validation split (percentages, sample counts, or an explicit mention of a validation set). A hedged split sketch follows the table.
Hardware Specification | No | The paper does not report the hardware used in its experiments, such as GPU/CPU models, processor types, or memory amounts.
Software Dependencies | No | The paper does not name the ancillary software (library or solver names with version numbers) needed to replicate the experiment.
Experiment Setup | Yes | A two-stage strategy is used to train the network. The first stage runs 250 iterations using only the content, perceptual, structure, and saturation losses; after it yields improved enhanced results, the second stage adds the adversarial loss to further optimize the network. Through extensive experiments, the five weights in Eq. (8) were set to 1, 0.25, 0.25, 0.1, and 1, respectively, and the four weights in Eq. (9) to 2, 1, 1, and 1. The initial learning rate is 0.0002 and is halved once the number of iterations reaches 150. A hedged training sketch follows the table.
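The Dataset Splits row above quotes a 485/15 train/test division of LOL's 500 image pairs. The sketch below is a minimal illustration of that division, not the authors' code; the directory layout (`LOL/low`, `LOL/high`) is an assumption, and the public LOL release in fact ships pre-split our485 and eval15 folders.

```python
# Minimal sketch of the split reported above: 485 of LOL's 500
# low/normal-light pairs for training, 15 for testing, and no
# validation set described in the paper.
from pathlib import Path

low_dir, high_dir = Path("LOL/low"), Path("LOL/high")  # hypothetical layout
pairs = list(zip(sorted(low_dir.glob("*.png")), sorted(high_dir.glob("*.png"))))
train_pairs, test_pairs = pairs[:485], pairs[485:]  # 485 train / 15 test, as in the paper
print(f"train: {len(train_pairs)}, test: {len(test_pairs)}")
```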
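The Experiment Setup row records concrete hyperparameters: a two-stage schedule with 250 first-stage iterations and the adversarial loss added afterwards, loss weights of 1, 0.25, 0.25, 0.1, and 1 for Eq. (8), an initial learning rate of 0.0002, and halving at iteration 150. The sketch below arranges them into a training loop. PyTorch, the L1 content loss, the placeholder network and loss terms, and the second-stage length are all assumptions, not details from the paper.

```python
# Minimal sketch of the two-stage schedule, assuming PyTorch.
# Only the quoted hyperparameters come from the paper: 250 first-stage
# iterations, loss weights (1, 0.25, 0.25, 0.1, 1), initial LR 0.0002,
# and the LR halved at iteration 150. Everything else is a placeholder.
import torch
import torch.nn as nn

class EnhancementNet(nn.Module):
    """Hypothetical stand-in for the multi-scale feature-complementation network."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

# Placeholder loss terms; the paper's perceptual loss would typically use
# VGG features, and its adversarial loss a GAN discriminator.
content_loss = nn.L1Loss()                      # assumption: L1 content loss
perceptual_loss = nn.MSELoss()                  # placeholder
structure_loss = nn.L1Loss()                    # placeholder
saturation_loss = lambda p, t: (p.mean(1) - t.mean(1)).abs().mean()  # placeholder
adversarial_loss = lambda p: p.mean()           # placeholder

# Loss weights quoted from the paper (the five weights in Eq. (8)).
W = (1.0, 0.25, 0.25, 0.1, 1.0)

model = EnhancementNet()
opt = torch.optim.Adam(model.parameters(), lr=2e-4)  # initial LR from the paper
sched = torch.optim.lr_scheduler.MultiStepLR(opt, [150], gamma=0.5)  # halve LR at iter 150

STAGE1_ITERS = 250  # from the paper
STAGE2_ITERS = 100  # assumption: the paper does not state the second-stage length
for it in range(STAGE1_ITERS + STAGE2_ITERS):
    low, normal = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)  # dummy pair
    pred = model(low)
    loss = (W[0] * content_loss(pred, normal)
            + W[1] * perceptual_loss(pred, normal)
            + W[2] * structure_loss(pred, normal)
            + W[3] * saturation_loss(pred, normal))
    if it >= STAGE1_ITERS:  # stage two: add the adversarial term
        loss = loss + W[4] * adversarial_loss(pred)
    opt.zero_grad()
    loss.backward()
    opt.step()
    sched.step()
```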