Enhance Image as You Like with Unpaired Learning

Authors: Xiaopeng Sun, Muxingzi Li, Tianyu He, Lubin Fan

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our model achieves competitive visual and quantitative results on par with fully supervised methods on both noisy and clean datasets, while being 6 to 10 times lighter than state-of-the-art generative adversarial network (GAN) approaches." Also: "Table 1 shows a quantitative comparison between our method and the other baselines on the PSNR, SSIM and the NIQE [Mittal et al., 2012] metrics." And, from Section 4.2 (Ablation Study): "We demonstrate the effectiveness of our choice of losses and the CCM block via ablation studies." (A hedged PSNR/SSIM sketch appears below the table.)
Researcher Affiliation | Collaboration | Xiaopeng Sun (1), Muxingzi Li (2), Tianyu He (2) and Lubin Fan (2); (1) Xidian University, (2) Alibaba Group; xpsun@stu.xidian.edu.cn, {muxingzi.lmxz, timhe.hty}@alibaba-inc.com, lubinfan@gmail.com
Pseudocode | No | No pseudocode or algorithm blocks are present in the paper.
Open Source Code | Yes | "Our code can be found at https://github.com/sxpro/Image_Enhance_c_GAN."
Open Datasets | Yes | "We assemble images from three different datasets [Wei et al., 2018; Bychkovsky et al., 2011; Loh and Chan, 2019] and ignore the paired information in each individual dataset if there is any, which leads to a larger and more diverse dataset that consists of 983 low-light and 5576 normal-light images. We follow the same practice of previous work [Yang et al., 2020] to use part of the LOL dataset [Wei et al., 2018] for training, and leave the other part for testing." Also: "Table 1: PSNR(↑) \ SSIM(↑) \ NIQE(↓) metrics on the paired test set of datasets LOL [Wei et al., 2018] starting from image #690, and FiveK [Bychkovsky et al., 2011]."
Dataset Splits | No | "We follow the same practice of previous work [Yang et al., 2020] to use part of the LOL dataset [Wei et al., 2018] for training, and leave the other part for testing." No explicit validation split, and no percentages or per-split image counts, are given. (A hedged sketch of the described train/test split appears below the table.)
Hardware Specification | No | "We implement our network with PyTorch on a Tesla GPU." The exact Tesla GPU model is not specified.
Software Dependencies | No | "We implement our network with PyTorch on a Tesla GPU." No version numbers for PyTorch or other software dependencies are provided.
Experiment Setup | Yes | "We adopt the Adam optimizer with default parameters and with the learning rate set to 5×10⁻⁵. We set the loss weight λ in Eq. (6) to 0.9, and α in Eq. (7) to 0.05 in all the tests." (A hedged sketch of this configuration appears below.)
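
The evaluation quoted in the Research Type row reports PSNR, SSIM and NIQE. The sketch below shows how the two full-reference metrics could be computed with scikit-image (0.19 or later, for `channel_axis`); `evaluate_pair` is a hypothetical helper, and NIQE is omitted because it is a no-reference metric that scikit-image does not implement.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(enhanced: np.ndarray, reference: np.ndarray):
    """PSNR and SSIM for one enhanced / ground-truth pair of uint8 RGB images.

    A sketch only: this is not the authors' evaluation code.
    """
    psnr = peak_signal_noise_ratio(reference, enhanced, data_range=255)
    ssim = structural_similarity(reference, enhanced,
                                 channel_axis=-1, data_range=255)
    return psnr, ssim

# Toy usage with random images; a real run would load the paired test set.
rng = np.random.default_rng(0)
a = rng.integers(0, 256, (128, 128, 3), dtype=np.uint8)
b = rng.integers(0, 256, (128, 128, 3), dtype=np.uint8)
print(evaluate_pair(a, b))
```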
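
To make the Dataset Splits row concrete, here is a minimal Python sketch of the train/test split the quoted Table 1 caption implies, with the paired test set starting from image #690. The directory path, flat file layout, and `image_id` helper are assumptions for illustration, not taken from the paper or its repository.

```python
import os
from glob import glob

# Hypothetical flat layout of numbered LOL images, e.g. "690.png"; the real
# dataset ships low/high pairs in subfolders, so adjust paths as needed.
LOL_LOW_DIR = "data/LOL/low"  # assumed path, not from the paper

def image_id(path: str) -> int:
    # Parse the numeric image id from a filename such as "690.png".
    return int(os.path.splitext(os.path.basename(path))[0])

paths = sorted(glob(os.path.join(LOL_LOW_DIR, "*.png")), key=image_id)

# Per the report, the paired test set starts from image #690; everything
# before that remains available for training.
train_paths = [p for p in paths if image_id(p) < 690]
test_paths = [p for p in paths if image_id(p) >= 690]

print(f"train: {len(train_paths)} images, test: {len(test_paths)} images")
```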
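
For the Experiment Setup row, the following is a minimal PyTorch sketch of the reported optimizer configuration. Only the Adam defaults with learning rate 5×10⁻⁵ and the weights λ = 0.9 (Eq. 6) and α = 0.05 (Eq. 7) come from the paper; the toy generator and the loss terms are hypothetical placeholders, since the report does not reproduce those equations.

```python
import torch

# A sketch of the reported configuration, not the authors' code. Only the
# Adam settings (defaults, lr = 5e-5) and the weights lam = 0.9 (lambda in
# Eq. 6) and alpha = 0.05 (Eq. 7) come from the paper.
generator = torch.nn.Sequential(          # hypothetical stand-in generator
    torch.nn.Conv2d(3, 64, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(64, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(generator.parameters(), lr=5e-5)  # default betas/eps

lam, alpha = 0.9, 0.05

def training_step(low_light: torch.Tensor) -> float:
    enhanced = generator(low_light)
    # Placeholder objectives: the paper's Eq. (6) and Eq. (7) are not
    # reproduced in the report, so these terms are illustrative only.
    loss_eq6 = enhanced.abs().mean() + lam * enhanced.pow(2).mean()
    loss_eq7 = alpha * (enhanced - low_light).abs().mean()
    loss = loss_eq6 + loss_eq7
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

print(training_step(torch.rand(1, 3, 64, 64)))
```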