Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Low-Light Image Enhancement via Generative Perceptual Priors

Authors: Han Zhou, Wei Dong, Xiaohong Liu, Yulun Zhang, Guangtao Zhai, Jun Chen

AAAI 2025 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments demonstrate that our model outperforms current SOTA methods on paired LL datasets and exhibits superior generalization on real-world data.
Researcher Affiliation Academia 1McMaster University 2Shanghai Jiao Tong University EMAIL, EMAIL
Pseudocode No The paper describes the methodology using textual explanations and diagrams (Figures 2 and 3) but does not include any explicit pseudocode or algorithm blocks.
Open Source Code Yes Code: https://github.com/LowLevelAI/GPP-LLIE
Open Datasets Yes We conduct experiments on various low-light datasets including LOL (Wei et al. 2018), LOL-v2-real, and LOL-v2-synthetic (Yang et al. 2021). ... Moreover, we also test the generalization of our method on several real-world datasets without ground-truth images, including MEF (Ma, Zeng, and Wang 2015), LIME (Guo 2016), DICM (Lee, Lee, and Kim 2013), and NPE (Wang et al. 2013).
Dataset Splits Yes Specifically, we train our model using 485, 689, and 900 LL-NL pairs on the LOL, LOL-v2-real, and LOL-v2-synthetic datasets, and the remaining 15, 100, and 100 images are used for evaluation.
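The quoted train/evaluation splits can be collected into a small lookup table for sanity-checking a reproduction attempt. This is a minimal sketch: the dataset names follow the paper, but the `SPLITS` variable and the `(train, test)` tuple convention are mine, not from the authors' code.

```python
# Train/evaluation image counts per dataset, as quoted from the paper.
# Tuple order is (train_pairs, eval_images) by convention in this sketch.
SPLITS = {
    "LOL": (485, 15),
    "LOL-v2-real": (689, 100),
    "LOL-v2-synthetic": (900, 100),
}

for name, (train, test) in SPLITS.items():
    print(f"{name}: {train} train / {test} eval ({train + test} total)")
```

A quick check like this catches the common mistake of evaluating on images that leaked into training when re-assembling the splits by hand.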
Hardware Specification No The paper does not provide specific hardware details such as GPU/CPU models, memory amounts, or detailed computer specifications used for running its experiments.
Software Dependencies No The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers) needed to replicate the experiment.
Experiment Setup Yes Our model is trained using the AdamW optimizer with total training iterations of 1.5M for all datasets, and the learning rate is set to 10⁻⁴. Each training input is cropped to 320×320, and the batch size is set to 16. We use horizontal flips and rotations for data augmentation. For the diffusion process, the total number of timesteps is set to 1,000 during training, and we use 25 steps to accelerate the sampling process at inference.
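The hyperparameters quoted above can be transcribed into a single configuration object for a reproduction attempt. This is a hedged sketch, not the authors' released configuration: the key names (`learning_rate`, `crop_size`, etc.) are my own, and only the values come from the paper.

```python
# Training hyperparameters as stated in the paper's experiment setup.
# Key names are illustrative; values are transcribed from the quoted text.
train_config = {
    "optimizer": "AdamW",
    "learning_rate": 1e-4,
    "total_iterations": 1_500_000,          # 1.5M iterations for all datasets
    "crop_size": (320, 320),                # each training input cropped to 320x320
    "batch_size": 16,
    "augmentations": ["horizontal_flip", "rotation"],
    "diffusion_train_timesteps": 1000,      # total timesteps during training
    "diffusion_sampling_steps": 25,         # accelerated sampling at inference
}

print(train_config)
```

Keeping the full setup in one dict makes it easy to diff a reimplementation against the paper's stated values before training.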