Semantically Contrastive Learning for Low-Light Image Enhancement

Authors: Dong Liang, Ling Li, Mingqiang Wei, Shuo Yang, Liyan Zhang, Wenhan Yang, Yun Du, Huiyu Zhou

AAAI 2022, pp. 1555-1563

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Training on readily available open data, extensive experiments demonstrate that our method surpasses the state-of-the-art LLE models over six independent cross-scenes datasets. Moreover, SCL-LLE's potential to benefit the downstream semantic segmentation under extremely dark conditions is discussed. ... Experiments: Cross-dataset Peer Comparison: For testing images, we use six publicly available independent cross-scenes low-light image datasets from other reported works, i.e., DICM (Lee, Lee, and Kim 2012), MEF (Ma, Zeng, and Wang 2015), LIME (Guo, Li, and Ling 2016), NPE (Wang et al. 2013), VV*, and Part2 of SICE (Cai, Gu, and Zhang 2018). We compare the proposed method with six representative heterogeneous state-of-the-art methods... Ablation Study: We perform ablation studies to demonstrate the effectiveness of each loss component.
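The row above points at the core design the ablation dissects: a contrastive objective over unpaired positives/negatives (see the Open Datasets row) combined with semantic guidance. Below is a minimal sketch of such a feature-space contrastive term; the VGG-16 layer cut-off and the ratio form are illustrative assumptions, not the authors' exact formulation.

    import torch
    import torch.nn as nn
    from torchvision.models import vgg16

    class FeatureContrastiveLoss(nn.Module):
        # Sketch: pull the enhanced image toward an unpaired normal-light
        # positive and away from an unpaired low-light negative, measured
        # in a frozen VGG-16 feature space (layer choice is an assumption).
        def __init__(self):
            super().__init__()
            self.backbone = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
            for p in self.backbone.parameters():
                p.requires_grad_(False)
            self.l1 = nn.L1Loss()

        def forward(self, enhanced, positive, negative):
            f_e = self.backbone(enhanced)
            f_p = self.backbone(positive)  # unpaired normal-light sample
            f_n = self.backbone(negative)  # unpaired low-light sample
            # Small when close to the positive and far from the negative.
            return self.l1(f_e, f_p) / (self.l1(f_e, f_n) + 1e-7)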
Researcher Affiliation | Collaboration | 1) Nanjing University of Aeronautics and Astronautics, MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Collaborative Innovation Center of Novel Software Technology and Industrialization; 2) vivo Mobile Communication; 3) Nanyang Technological University; 4) University of Leicester
Pseudocode | No | The paper does not include a clearly labeled pseudocode or algorithm block.
Open Source Code | Yes | Source Code: https://github.com/LingLIx/SCL-LLE
Open Datasets | Yes | From the perspective of ease of use, we use readily accessible training data: the Cityscapes (Cordts et al. 2016) dataset, to provide input images with semantic ground truths, and Part1 of the SICE dataset (Cai, Gu, and Zhang 2018), to provide unpaired negative/positive samples.
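That data recipe (Cityscapes inputs with segmentation labels, unpaired positives/negatives from SICE Part1) maps naturally onto a three-stream dataset. A sketch under those assumptions; the directory layout, file extensions, and class name are hypothetical:

    import random
    from pathlib import Path
    from PIL import Image
    from torch.utils.data import Dataset

    class UnpairedLLEDataset(Dataset):
        # Each item: a Cityscapes input plus one unpaired normal-light (positive)
        # and one unpaired low-light (negative) image drawn from SICE Part1.
        def __init__(self, cityscapes_dir, pos_dir, neg_dir, transform):
            self.inputs = sorted(Path(cityscapes_dir).rglob("*.png"))
            self.pos = sorted(Path(pos_dir).rglob("*.jpg"))
            self.neg = sorted(Path(neg_dir).rglob("*.jpg"))
            self.transform = transform

        def __len__(self):
            return len(self.inputs)

        def __getitem__(self, i):
            x = Image.open(self.inputs[i]).convert("RGB")
            p = Image.open(random.choice(self.pos)).convert("RGB")  # unpaired draw
            n = Image.open(random.choice(self.neg)).convert("RGB")  # unpaired draw
            return tuple(self.transform(im) for im in (x, p, n))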
Dataset Splits | Yes | There are 2975 images for training, 500 for validation, and 1525 for testing.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU or GPU models, memory) used for running the experiments; it only mentions the type of sensor used for data collection in the Cityscapes dataset.
Software Dependencies | No | The paper mentions software components like 'DeepLabv3+', 'VGG-16', and the 'Adam optimizer' but does not specify their version numbers or the versions of any underlying programming languages or libraries (e.g., Python, PyTorch).
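Since no versions are reported, a reproduction has to pin its own; the least one can do is record the environment actually used, e.g.:

    import sys
    import torch, torchvision

    # Log interpreter and framework versions alongside experiment results.
    print("python", sys.version.split()[0])
    print("torch", torch.__version__, "| torchvision", torchvision.__version__,
          "| cuda", torch.version.cuda)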
Experiment Setup | Yes | We resize the training images to the size of 384 × 384. As for the numerical parameters, we set the maximum epoch as 50 and the batch size as 2. The model is optimized using the Adam optimizer with a fixed learning rate of 1e-4. ... we set them to 0.04 and 0.3 respectively in our experiments. ... We set λ to 200 in our experiments for the best outcome.
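Read as code, the reported hyperparameters give the following training skeleton. This is a sketch reusing the dataset and loss sketches above: the stand-in enhancement net and directory paths are hypothetical, only the numeric settings come from the paper, and which two losses the 0.04/0.3 weights belong to is not stated in the excerpt.

    import torch
    from torch.utils.data import DataLoader
    from torchvision import transforms

    resize = transforms.Compose([
        transforms.Resize((384, 384)),  # training images resized to 384 x 384
        transforms.ToTensor(),
    ])

    EPOCHS = 50         # maximum epoch
    BATCH_SIZE = 2      # batch size
    LR = 1e-4           # fixed Adam learning rate
    W1, W2 = 0.04, 0.3  # the two loss weights from the paper (roles unspecified here)
    LAM = 200           # lambda reported "for the best outcome"

    model = torch.nn.Conv2d(3, 3, 3, padding=1)  # stand-in for the enhancement net
    dataset = UnpairedLLEDataset("cityscapes/", "sice/pos/", "sice/neg/", resize)
    loader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True)
    loss_fn = FeatureContrastiveLoss()           # from the sketch above

    optimizer = torch.optim.Adam(model.parameters(), lr=LR)
    for epoch in range(EPOCHS):
        for x, pos, neg in loader:
            loss = loss_fn(model(x), pos, neg)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()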