Pyramid Diffusion Models for Low-light Image Enhancement

Authors: Dewei Zhou, Zongxin Yang, Yi Yang

IJCAI 2023

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments on popular benchmarks show that PyDiff achieves superior performance and efficiency. Moreover, PyDiff can generalize well to unseen noise and illumination distributions. Code and supplementary materials are available at https://github.com/limuloo/PyDIff.git.
Researcher Affiliation Academia Dewei Zhou, Zongxin Yang, Yi Yang, ReLER, CCAI, Zhejiang University, {zdw1999, yangzongxin, yangyics}@zju.edu.cn
Pseudocode Yes "Algorithm 1 Training" and "Algorithm 2 Sampling" are presented on page 5.
Open Source Code Yes Code and supplementary materials are available at https://github.com/limuloo/PyDIff.git.
Open Datasets Yes We conduct experiments on LOL [Wei et al., 2018] and LOLV2 [Yang et al., 2021] datasets.
Dataset Splits No The paper mentions a test set for LOLV2 REAL PART but does not specify a validation dataset split for hyperparameter tuning or early stopping.
Hardware Specification Yes We complete training on two NVIDIA GeForce RTX 3090s.
Software Dependencies No The paper mentions using the Adam optimizer but does not provide specific version numbers for software dependencies such as deep learning frameworks or libraries.
Experiment Setup Yes We set the patch size to 192 × 288 and the batch size to 16. We use the Adam optimizer with an initial learning rate of 1 × 10⁻⁴ for 320k iterations and halve the learning rate at 50k, 75k, 100k, 150k, and 200k iterations. The optimizer does not use weight decay.
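The "Algorithm 1 Training" and "Algorithm 2 Sampling" procedures noted in the pseudocode row follow the usual diffusion-model recipe. As a minimal, hedged sketch, the standard DDPM forward (noising) process that such training loops sample from looks like the following; the step count and beta range here are generic assumptions, not the paper's settings:

```python
import math

T = 1000                         # number of diffusion steps (assumed)
BETA_START, BETA_END = 1e-4, 2e-2  # linear noise schedule (assumed)

def linear_beta_schedule(t_steps=T):
    """Linearly spaced noise variances beta_1 .. beta_T."""
    step = (BETA_END - BETA_START) / (t_steps - 1)
    return [BETA_START + i * step for i in range(t_steps)]

def alpha_bars(betas):
    """Cumulative products alpha_bar_t = prod_{s<=t} (1 - beta_s)."""
    out, prod = [], 1.0
    for b in betas:
        prod *= (1.0 - b)
        out.append(prod)
    return out

def q_sample(x0, t, eps, a_bar):
    """Draw x_t ~ q(x_t | x_0) = sqrt(a_bar_t)*x0 + sqrt(1 - a_bar_t)*eps."""
    return math.sqrt(a_bar[t]) * x0 + math.sqrt(1.0 - a_bar[t]) * eps
```

During training, a network is optimized to predict `eps` from `q_sample` outputs; sampling inverts this process step by step. PyDiff's pyramid variant changes resolutions across steps, which this generic sketch does not model.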
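The experiment-setup row describes a step learning-rate schedule: start at 1 × 10⁻⁴ and halve at five milestones. A small sketch of that schedule, with helper names of my own (not from the paper's code):

```python
# Milestones and base LR taken from the experiment-setup description.
MILESTONES = (50_000, 75_000, 100_000, 150_000, 200_000)
BASE_LR = 1e-4

def lr_at(iteration, base_lr=BASE_LR, milestones=MILESTONES):
    """Return the learning rate in effect at a given training iteration,
    halving once for each milestone already passed."""
    halvings = sum(1 for m in milestones if iteration >= m)
    return base_lr * 0.5 ** halvings
```

This is equivalent to PyTorch's `torch.optim.lr_scheduler.MultiStepLR` with `gamma=0.5` and the milestones above, paired with `torch.optim.Adam(params, lr=1e-4, weight_decay=0)`.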