ALL-E: Aesthetics-guided Low-light Image Enhancement

Authors: Ling Li, Dong Liang, Yuanhang Gao, Sheng-Jun Huang, Songcan Chen

IJCAI 2023

Each reproducibility variable below is listed with its result and the supporting LLM response.
Research Type: Experimental. "Extensive experiments show that integrating aesthetic assessment improves both subjective experience and objective evaluation. Our results on various benchmarks demonstrate the superiority of ALL-E over state-of-the-art methods. Source code: https://dongl-group.github.io/project_pages/ALLE.html"
Researcher Affiliation: Academia. "Ling Li, Dong Liang, Yuanhang Gao, Sheng-Jun Huang, Songcan Chen; College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics; MIIT Key Laboratory of Pattern Analysis and Machine Intelligence; {liling, liangdong, gaoyuanhang, huangsj, s.chen}@nuaa.edu.cn"
Pseudocode: No. The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code: Yes. "Source code: https://dongl-group.github.io/project_pages/ALLE.html"
Open Datasets: Yes. "We use 485 low-light images of the LOL dataset [Wei et al., 2018] to train the proposed framework. ... trained on the AVA dataset [Murray et al., 2012]."
Dataset Splits: No. The paper mentions using the LOL dataset for training and testing, but it does not specify explicit training/validation/test splits with percentages or sample counts in the main text.
Hardware Specification: Yes. "Our framework is implemented in PyTorch on an NVIDIA 1080Ti GPU."
Software Dependencies: No. "Our framework is implemented in PyTorch on an NVIDIA 1080Ti GPU. The model is optimized using the Adam optimizer with a learning rate of 1e-4." The paper mentions PyTorch but does not specify a version number.
Experiment Setup: Yes. "The maximum number of training epochs was set to 1000, with a batch size of 2. We train our framework end-to-end while fixing the weights of the aesthetic oracle network. Our framework is implemented in PyTorch on an NVIDIA 1080Ti GPU. The model is optimized using the Adam optimizer with a learning rate of 1e-4. The total number of steps in the training phase is set to n = 6."
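To make the quoted setup concrete, below is a minimal PyTorch sketch of such a training configuration. Only the hyperparameters come from the paper (Adam, learning rate 1e-4, batch size 2, up to 1000 epochs, frozen aesthetic oracle, n = 6); the network classes, the dummy data loader, and the aesthetic-loss term are hypothetical placeholders rather than the authors' implementation, and reading n as the number of iterative enhancement steps is an assumption.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the paper's networks; the real architectures
# and losses are in the authors' released code, not reproduced here.
class EnhancementNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.body(x)

class AestheticOracle(nn.Module):
    """Placeholder for the aesthetic oracle pretrained on AVA."""
    def __init__(self):
        super().__init__()
        self.score = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(3, 1)
        )

    def forward(self, x):
        return self.score(x)

enhancer = EnhancementNet()
oracle = AestheticOracle()

# The oracle's weights are kept fixed during end-to-end training.
for p in oracle.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(enhancer.parameters(), lr=1e-4)

num_epochs = 1000   # maximum training epochs reported in the paper
batch_size = 2      # batch size reported in the paper
num_steps = 6       # n = 6 steps (assumed to be iterative enhancement steps)

# Dummy loader standing in for the 485 LOL training images.
loader = [torch.rand(batch_size, 3, 256, 256) for _ in range(4)]

for epoch in range(num_epochs):
    for low in loader:
        out = low
        for _ in range(num_steps):   # apply the enhancer n times
            out = enhancer(out)
        # Illustrative loss only: push the enhanced image toward a high
        # aesthetic score from the frozen oracle.
        loss = -oracle(out).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    break  # remove to run the full 1000-epoch schedule; shortened here
```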