Learning Image Demoiréing from Unpaired Real Data

Authors: Yunshan Zhong, Yuyao Zhou, Yuxin Zhang, Fei Chao, Rongrong Ji

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments on the commonly-used FHDMi and UHDM datasets. Results manifest that our UnDeM performs better than existing methods when using existing demoiréing models such as MBCNN and ESDNet-L.
Researcher Affiliation | Academia | Yunshan Zhong (1,2), Yuyao Zhou (2,3), Yuxin Zhang (2,3), Fei Chao (2,3), Rongrong Ji (1,2,3,4)*. (1) Institute of Artificial Intelligence, Xiamen University. (2) Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University. (3) Department of Artificial Intelligence, School of Informatics, Xiamen University. (4) Peng Cheng Laboratory. {zhongyunshan,yuyaozhou,yuxinzhang}@stu.xmu.edu.cn {fchao, rrji}@xmu.edu.cn
Pseudocode | No | Details of the training algorithms are listed in the supplementary materials.
Open Source Code | Yes | Code: https://github.com/zysxmu/UnDeM.
Open Datasets | Yes | Public demoiréing datasets used in this paper include the FHDMi (He et al. 2020) dataset and the UHDM (Yu et al. 2022) dataset.
Dataset Splits | No | The FHDMi dataset consists of 9,981 image pairs for training and 2,019 image pairs for testing at 1920×1080 resolution. The UHDM dataset contains 5,000 image pairs at 4K resolution in total, of which 4,500 are used for training and 500 for testing.
Hardware Specification | Yes | All experiments are run on NVIDIA A100 GPUs.
Software Dependencies | No | The paper states 'We implement our UnDeM using the Pytorch framework (Paszke et al. 2019)' but does not provide specific version numbers for PyTorch or any other software dependencies.
Experiment Setup | Yes | The moiré synthesis network is trained using the Adam optimizer (Kingma and Ba 2014), where the first momentum and second momentum are set to 0.9 and 0.999, respectively. We use 100 epochs for training with a batch size of 4 and an initial learning rate of 2×10⁻⁴, which is linearly decayed to 0 in the last 50 epochs. [...] All networks are initialized using a Gaussian distribution with a mean of 0 and a standard deviation of 0.02. The γ1, γ2, γ3, and γ4 for adaptive denoise are empirically set to 50, 40, 30, and 20, respectively.
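The values quoted in the Experiment Setup row map directly onto standard PyTorch primitives. Below is a minimal, hypothetical sketch of that configuration: the network here is a placeholder standing in for the actual moiré synthesis architecture (defined in the authors' repository), while the optimizer settings, Gaussian weight initialization, and linear learning-rate decay follow the numbers stated in the paper.

```python
# Hedged sketch of the described training setup; `model` is a placeholder,
# not the UnDeM moiré synthesis network (see https://github.com/zysxmu/UnDeM).
import torch
import torch.nn as nn

def init_weights(m):
    # Gaussian initialization with mean 0 and standard deviation 0.02,
    # as stated in the paper.
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        nn.init.normal_(m.weight, mean=0.0, std=0.02)
        if m.bias is not None:
            nn.init.zeros_(m.bias)

model = nn.Sequential(  # placeholder in place of the synthesis network
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)
model.apply(init_weights)

# Adam with first/second momentum 0.9/0.999 and initial learning rate 2e-4.
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.9, 0.999))

# Hold the learning rate constant for the first 50 epochs, then decay it
# linearly to 0 over the last 50 of the 100 total epochs.
def lr_lambda(epoch):
    return 1.0 if epoch < 50 else (100 - epoch) / 50.0

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_lambda)

for epoch in range(100):
    # ... iterate over the unpaired training data with batch size 4 ...
    scheduler.step()
```

Note that the γ1–γ4 constants (50, 40, 30, 20) belong to the paper's adaptive denoising step, which is specific to the UnDeM method and therefore not reproduced in this generic sketch.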