Degrade Is Upgrade: Learning Degradation for Low-Light Image Enhancement

Authors: Kui Jiang, Zhongyuan Wang, Zheng Wang, Chen Chen, Peng Yi, Tao Lu, Chia-Wen Lin (pp. 1078-1086)

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on both the enhancement task and joint detection task have verified the effectiveness and efficiency of our proposed method, surpassing the SOTA by 0.70 dB on average and 3.18% in mAP, respectively. (A PSNR-in-dB sketch follows the table.)
Researcher Affiliation | Academia | Kui Jiang (1), Zhongyuan Wang (1), Zheng Wang (1), Chen Chen (2), Peng Yi (1), Tao Lu (3), Chia-Wen Lin (4); 1 School of Computer Science, Wuhan University, Wuhan, China; 2 University of Central Florida; 3 Wuhan Institute of Technology; 4 National Tsing Hua University
Pseudocode | No | The paper describes the model architecture and procedures but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The code will be available soon.
Open Datasets | Yes | Following the setting in RetinexNet (Wei et al. 2018), we use the LOL dataset for training, which contains 500 low/normal-light image pairs (480 for training and 20 for evaluation). ... we also introduce two novel low/normal-light datasets (COCO24700 for training and COCO1000 for evaluation) based on COCO (Caesar, Uijlings, and Ferrari 2018) dataset
Dataset Splits | Yes | Following the setting in RetinexNet (Wei et al. 2018), we use the LOL dataset for training, which contains 500 low/normal-light image pairs (480 for training and 20 for evaluation). (A paired-data loading sketch follows the table.)
Hardware Specification | Yes | We use the Adam optimizer with a batch size of 16 for training DRGN on a single NVIDIA Titan Xp GPU.
Software Dependencies | No | The paper mentions using the Adam optimizer but does not specify any software names with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | In our baseline, the pyramid layer is empirically set to 3, corresponding to the number of RCABs and RCAB depths of [2, 3, 4] and [3, 3, 3], respectively, in the residual group. The training images are cropped to non-overlapping 96×96 patches to obtain sample pairs. Standard augmentation strategies, e.g., scaling and horizontal flipping, are applied. We use the Adam optimizer with a batch size of 16 for training DRGN on a single NVIDIA Titan Xp GPU. The learning rate is initialized to 5×10^-4 and then attenuated by 0.9 every 6,000 steps. After 60 epochs on the training datasets, we obtain the optimal solution with the above settings. Specifically, we train DeG for the first 20 epochs and then optimize ReG for 40 epochs. (A training-schedule sketch follows the table.)
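
The 0.70 dB gain quoted in the Research Type row is a PSNR figure. As a point of reference only, and not code from the paper, here is a minimal sketch of PSNR in decibels, assuming images scaled to [0, 1]:

```python
# Minimal sketch (not from the paper): PSNR in decibels, the metric behind
# the reported "0.70 dB" average gain. Assumes float images in [0, 1].
import numpy as np

def psnr_db(enhanced: np.ndarray, reference: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between an enhanced image and its reference."""
    mse = np.mean((enhanced.astype(np.float64) - reference.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```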
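
For the LOL split quoted in the Open Datasets and Dataset Splits rows (480 training / 20 evaluation pairs), a minimal paired-data loader sketch follows. PyTorch and the directory layout (mirrored low-light and normal-light folders with matching filenames) are assumptions for illustration, not details given by the paper:

```python
# Minimal sketch, not the authors' code: a paired low/normal-light dataset
# loader. PyTorch and the folder layout are assumptions.
import os
import random
from PIL import Image
from torch.utils.data import Dataset
import torchvision.transforms.functional as TF

class PairedLowLightDataset(Dataset):
    """Yields (low_light, normal_light) tensor pairs from two mirrored folders."""

    def __init__(self, low_dir, normal_dir, patch_size=96):
        self.low_dir, self.normal_dir = low_dir, normal_dir
        self.names = sorted(os.listdir(low_dir))  # assumes matching filenames in both folders
        self.patch_size = patch_size

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        low = TF.to_tensor(Image.open(os.path.join(self.low_dir, name)).convert("RGB"))
        normal = TF.to_tensor(Image.open(os.path.join(self.normal_dir, name)).convert("RGB"))
        if self.patch_size:
            # The paper tiles images into non-overlapping 96x96 patches; a
            # synchronized random crop is used here for brevity.
            _, h, w = low.shape
            top = random.randint(0, h - self.patch_size)
            left = random.randint(0, w - self.patch_size)
            low = TF.crop(low, top, left, self.patch_size, self.patch_size)
            normal = TF.crop(normal, top, left, self.patch_size, self.patch_size)
        if random.random() < 0.5:
            # synchronized horizontal flip, one of the quoted augmentations
            low, normal = TF.hflip(low), TF.hflip(normal)
        return low, normal
```

Per the quoted split, one instance would point at the 480-pair training folders and another at the 20-pair evaluation folders.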
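
The Experiment Setup row gives enough detail for a schedule sketch. The following assumes PyTorch (the paper names no framework, per the Software Dependencies row) and treats the DeG and ReG modules and the loss as placeholders; it is not the authors' training code:

```python
# Minimal sketch of the quoted optimization settings: Adam, batch size 16,
# learning rate 5e-4 decayed by 0.9 every 6,000 steps, DeG trained for the
# first 20 epochs and ReG for the following 40. PyTorch, the per-stage
# optimizer reset, and the forward/loss details are assumptions.
import torch
from torch.utils.data import DataLoader

def train_two_stage(deg: torch.nn.Module, reg: torch.nn.Module, train_set, loss_fn):
    loader = DataLoader(train_set, batch_size=16, shuffle=True, num_workers=4)
    for model, epochs in [(deg, 20), (reg, 40)]:  # DeG first, then ReG
        optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
        # multiply the learning rate by 0.9 every 6,000 optimization steps
        scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=6000, gamma=0.9)
        model.train()
        for _ in range(epochs):
            for low, normal in loader:
                optimizer.zero_grad()
                # Placeholder forward pass and loss; the paper's loss design
                # and branch wiring are not reproduced here.
                output = model(low)
                loss = loss_fn(output, normal)
                loss.backward()
                optimizer.step()
                scheduler.step()  # step-based decay, per the quoted schedule
```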