Low-Light Image Enhancement with Normalizing Flow
Authors: Yufei Wang, Renjie Wan, Wenhan Yang, Haoliang Li, Lap-Pui Chau, Alex Kot (pp. 2604-2612)
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental results on the existing benchmark datasets show our method achieves better quantitative and qualitative results, obtaining better-exposed illumination, less noise, fewer artifacts, and richer colors. We conduct extensive experiments on the popular benchmark datasets to show the effectiveness of our proposed framework. The ablation study and related analysis show the rationality of each module in our method. |
| Researcher Affiliation | Academia | 1 Rapid-Rich Object Search Lab, Nanyang Technological University, Singapore 2 Department of Computer Science, Hong Kong Baptist University, China 3 Department of Electrical Engineering, City University of Hong Kong, China |
| Pseudocode | No | The paper describes its components and methods in text and diagrams but does not include any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | The code is released at https://github.com/wyf0912/LLFlow. |
| Open Datasets | Yes | We first evaluate our method on the LOL dataset (Wei et al. 2018), which includes 485 images for training and 15 images for testing. We further perform evaluation on the VE-LOL dataset (Liu et al. 2021a). It is a large-scale dataset including 2500 paired images with more diversified scenes and contents, and is thus valuable for cross-dataset evaluation. |
| Dataset Splits | No | The paper mentions training and testing sets, but does not explicitly state details about a separate validation dataset split for hyperparameter tuning. |
| Hardware Specification | No | The paper does not specify any hardware details like GPU or CPU models used for the experiments. |
| Software Dependencies | No | The paper mentions using "Adam as the optimizer" but does not provide specific version numbers for any software or libraries used (e.g., Python, PyTorch, CUDA versions). |
| Experiment Setup | Yes | The patch size is set to 160×160 and the batch size is set to 16. We use Adam as the optimizer with a learning rate of 5×10⁻⁴ and without weight decay. For the LOL dataset, we train the model for 3×10⁴ iterations and the learning rate is decreased by a factor of 0.5 at 1.5×10⁴, 2.25×10⁴, 2.7×10⁴, and 2.85×10⁴ iterations. For the VE-LOL dataset, we train the model for 4×10⁴ iterations and the learning rate is decreased by a factor of 0.5 at 2×10⁴, 3×10⁴, 3.6×10⁴, and 3.8×10⁴ iterations. |
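The step-decay schedule quoted above (base learning rate 5×10⁻⁴, halved at the listed milestones) can be sketched as a small helper. This is a minimal illustration only; the function name `lr_at` and its exact decay mechanics are assumptions, and the released code at https://github.com/wyf0912/LLFlow may implement the schedule differently (e.g. via a PyTorch `MultiStepLR` scheduler).

```python
# Milestones for the LOL-dataset run, as quoted from the paper:
# halve the learning rate at 1.5e4, 2.25e4, 2.7e4, and 2.85e4 iterations.
LOL_MILESTONES = [15_000, 22_500, 27_000, 28_500]

def lr_at(iteration, base_lr=5e-4, milestones=LOL_MILESTONES, gamma=0.5):
    """Return the learning rate in effect at a given training iteration
    under a step-decay schedule (hypothetical helper, not the authors' code)."""
    decays = sum(1 for m in milestones if iteration >= m)
    return base_lr * (gamma ** decays)

print(lr_at(0))        # 0.0005  (base rate before any milestone)
print(lr_at(20_000))   # 0.00025 (halved once, after the 15k milestone)
print(lr_at(29_000))   # halved four times: 3.125e-05
```

For the VE-LOL run, the same helper applies with `milestones=[20_000, 30_000, 36_000, 38_000]` over 4×10⁴ iterations.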