Make Lossy Compression Meaningful for Low-Light Images

Authors: Shilv Cai, Liqun Chen, Sheng Zhong, Luxin Yan, Jiahuan Zhou, Xu Zou

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that our proposed joint solution achieves a significant improvement over different combinations of existing state-of-the-art sequential "Compress before Enhance" or "Enhance before Compress" solutions for low-light images...
Researcher Affiliation | Academia | 1 Huazhong University of Science and Technology, Wuhan, Hubei 430074, China; 2 National Key Laboratory of Multispectral Information Intelligent Processing Technology, Wuhan, Hubei 430074, China; 3 Wangxuan Institute of Computer Technology, Peking University, Beijing 100871, China. {caishilv, chenliqun, zhongsheng, yanluxin, zoux}@hust.edu.cn, jiahuanzhou@pku.edu.cn
Pseudocode | No | The paper describes the model architecture and training strategy in detail but does not provide structured pseudocode or algorithm blocks.
Open Source Code | Yes | The project is publicly available at: https://github.com/CaiShilv/Joint-IC-LL.
Open Datasets | Yes | The Flicker 2W dataset (Liu et al. 2020) is used in the pre-training and fine-tuning stages for all learning-based methods involved in the comparison. The low-light datasets used are SID (Chen et al. 2018), SDSD (Wang et al. 2021a), and SMID (Chen et al. 2019).
Dataset Splits | No | The paper states "We set up splitting for training and testing based on the previous work (Xu et al. 2022)" but does not provide specific validation splits (e.g., percentages, sample counts, or explicit instructions for creating a validation set).
Hardware Specification | Yes | The networks are optimized using the Adam (Kingma and Ba 2015) optimizer with a mini-batch size of 8 for approximately 900,000 iterations and trained on RTX 3090 GPUs.
Software Dependencies | No | The implementation relies on PyTorch (Paszke et al. 2019) and the open-source CompressAI PyTorch library (Bégaint et al. 2020), but no specific version numbers for PyTorch or CompressAI are provided (see the environment check sketched below the table).
Experiment Setup | Yes | The networks are optimized using the Adam (Kingma and Ba 2015) optimizer with a mini-batch size of 8 for approximately 900,000 iterations and trained on RTX 3090 GPUs. The initial learning rate is set to 10^-4 and decayed by a factor of 0.5 at iterations 500,000, 600,000, 700,000, and 850,000. The number of pre-training iterations is 150,000. The model is trained under 8 qualities, where λ_d is selected from the set {0.0001, 0.0002, 0.0004, 0.0008, 0.0016, 0.0028, 0.0064, 0.012}.
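
The Software Dependencies entry above notes that no library versions are pinned. As a hypothetical aid for anyone reproducing the setup (it is not part of the paper or its repository), the short Python check below simply records whichever PyTorch and CompressAI builds are installed locally so they can be reported alongside results:

```python
# Hypothetical environment check; the paper names PyTorch and CompressAI but gives
# no versions, so a reproducer has to pin and report their own.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("torch", "compressai"):
    try:
        print(f"{pkg}=={version(pkg)}")  # prints whatever build is installed locally
    except PackageNotFoundError:
        print(f"{pkg} is not installed (try `pip install {pkg}`)")
```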
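
For the Experiment Setup entry, the reported hyperparameters can be gathered into a minimal PyTorch training-schedule sketch. Only the batch size, iteration counts, learning-rate milestones, decay factor, and λ_d values come from the paper; the model, data, and loss below are placeholders, not the authors' joint compression/enhancement network (that code lives in the GitHub repository linked above):

```python
# Minimal sketch of the reported schedule, assuming a placeholder model, data, and loss.
import torch

LAMBDA_D = [0.0001, 0.0002, 0.0004, 0.0008, 0.0016, 0.0028, 0.0064, 0.012]  # 8 quality levels
TOTAL_ITERS = 900_000      # "approximately 900000 iterations"
PRETRAIN_ITERS = 150_000   # pre-training stage length reported in the paper
BATCH_SIZE = 8

model = torch.nn.Linear(16, 16)  # placeholder for the joint compression/enhancement network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[500_000, 600_000, 700_000, 850_000], gamma=0.5
)

for it in range(TOTAL_ITERS):
    x = torch.randn(BATCH_SIZE, 16)  # placeholder batch (real training uses Flicker 2W / low-light data)
    loss = model(x).pow(2).mean()    # placeholder for the lambda_d-weighted rate-distortion objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()                 # stepped per iteration so the milestones match the counts above
```

Presumably one model is trained per λ_d value to cover the 8 reported qualities, but the paper excerpt above does not spell this out.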