Efficient Deep Image Denoising via Class Specific Convolution

Authors: Lu Xu, Jiawei Zhang, Xuanye Cheng, Feng Zhang, Xing Wei, Jimmy Ren

AAAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Quantitative and qualitative evaluations on public datasets demonstrate that the proposed method can reduce the computational costs without sacrificing the performance compared to state-of-the-art algorithms.
Researcher Affiliation | Collaboration | 1 The Chinese University of Hong Kong; 2 SenseTime Research; 3 School of Software Engineering, Xi'an Jiaotong University; 4 Qing Yuan Research Institute, Shanghai Jiao Tong University
Pseudocode | No | The paper describes the methods in text and uses equations, but does not provide structured pseudocode or algorithm blocks.
Open Source Code | Yes | https://github.com/XenonLamb/CSConvNet
Open Datasets | Yes | Our training set consists of 400 images from BSD500 (Martin et al. 2001), 800 images from DIV2K (Agustsson and Timofte 2017), 4744 images from Waterloo (Ma et al. 2016), and 5000 images from 5K (Bychkovsky et al. 2011).
Dataset Splits | No | The paper lists the training and evaluation datasets but does not describe a distinct validation set or explicit train/validation/test splits (e.g., percentages or sample counts) beyond naming the datasets used.
Hardware Specification | No | The paper does not specify the exact hardware used for experiments, such as GPU or CPU models, or memory specifications.
Software Dependencies | No | The paper states that experiments are implemented with the PyTorch library but does not provide a specific version number for PyTorch or any other software dependency.
Experiment Setup | Yes | When training, we choose batch size as 4 and patch size as 96. Data augmentation including random flip and 0°, 90°, 180°, 270° rotation is adopted when generating the training patches. The ADAM optimizer (Kingma and Ba 2014) is used in training with β1 = 0.9, β2 = 0.999, ε = 1 × 10⁻⁸. The initial learning rate is set to 10⁻⁴ and decays by a factor of 0.5 after every 20 epochs. Both PCN and CSDN are trained for 100 epochs.
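
For context, the reported hyperparameters map onto a standard PyTorch training loop. The sketch below is a minimal illustration of that setup only (batch size 4, 96×96 patches, flip/rotation augmentation, Adam with the listed betas/eps, learning rate 1e-4 halved every 20 epochs, 100 epochs); the toy stand-in network, synthetic patch dataset, and L1 loss are assumptions for illustration and do not reproduce the authors' PCN or CSDN architectures.

import random
import torch
from torch import nn
from torch.utils.data import DataLoader, Dataset

def augment(patch: torch.Tensor) -> torch.Tensor:
    """Random horizontal flip and 0/90/180/270-degree rotation of a (C, H, W) patch."""
    if random.random() < 0.5:
        patch = torch.flip(patch, dims=[-1])
    return torch.rot90(patch, k=random.randint(0, 3), dims=[-2, -1])

class SyntheticPatchDataset(Dataset):
    """Stand-in dataset yielding random clean/noisy 96x96 patches (not the paper's data)."""
    def __init__(self, length=64, patch_size=96, sigma=25 / 255.0):
        self.length, self.patch_size, self.sigma = length, patch_size, sigma

    def __len__(self):
        return self.length

    def __getitem__(self, idx):
        clean = augment(torch.rand(3, self.patch_size, self.patch_size))
        noisy = clean + self.sigma * torch.randn_like(clean)
        return noisy, clean

# Toy stand-in network; the paper's PCN/CSDN models are not reproduced here.
model = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(64, 3, 3, padding=1))
loader = DataLoader(SyntheticPatchDataset(), batch_size=4, shuffle=True)

# Adam with the reported betas/eps and initial learning rate 1e-4.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4,
                             betas=(0.9, 0.999), eps=1e-8)
# Learning rate decays by a factor of 0.5 every 20 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.5)
criterion = nn.L1Loss()  # loss choice is an assumption, not stated in this excerpt

for epoch in range(100):  # both PCN and CSDN are trained for 100 epochs
    for noisy, clean in loader:
        optimizer.zero_grad()
        loss = criterion(model(noisy), clean)
        loss.backward()
        optimizer.step()
    scheduler.step()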