Multi-Scale Adaptive Network for Single Image Denoising
Authors: Yuanbiao Gou, Peng Hu, Jiancheng Lv, Joey Tianyi Zhou, Xi Peng
NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on three real and six synthetic noisy image datasets show the superiority of MSANet compared with 12 methods. |
| Researcher Affiliation | Academia | Yuanbiao Gou¹, Peng Hu¹, Jiancheng Lv¹, Joey Tianyi Zhou², Xi Peng¹. ¹College of Computer Science, Sichuan University, China; ²Institute of High Performance Computing, A*STAR, Singapore. |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code can be accessed at https://github.com/XLearning-SCU/2022-NeurIPS-MSANet. |
| Open Datasets | Yes | For the evaluations on real noise, we employ the SIDD [1], RENOIR [3] and Poly [42] datasets for training, and use the SIDD Validation, Nam [28] and DnD [30] datasets for testing. For synthetic noise, we train MSANet on the DIV2K [2] dataset, which contains 800 images of 2K resolution, by adding Additive White Gaussian Noise (AWGN) with noise levels of 30, 50, and 70 (a sketch of this noise synthesis follows the table). We use color McMaster [52] (CMcMaster), color Kodak24 (CKodak24) and color BSD68 [24] (CBSD68) for testing color image denoising, and grayscale McMaster (GMcMaster), grayscale Kodak24 (GKodak24) and grayscale BSD68 (GBSD68) for testing grayscale image denoising. |
| Dataset Splits | No | The paper mentions using "SIDD Validation" as a test dataset. It does not explicitly specify a separate validation split with percentages or counts for its own model training. |
| Hardware Specification | Yes | We implement MSANet in PyTorch [29] and carry out all experiments on Ubuntu 20.04 with GeForce RTX 3090 GPUs. |
| Software Dependencies | No | The paper mentions "PyTorch [29]" and "Ubuntu 20.04" but does not provide specific version numbers for software libraries or other dependencies required for reproducibility. |
| Experiment Setup | Yes | In our implementations, we use four scales of features with channels of 32, 64, 128 and 256. Moreover, we train MSANet for 100 epochs via L1 loss for real noise and for 300 epochs via L2 loss for synthetic noise. Both real and synthetic noise training use a batch size of 16 and a patch size of 128. To optimize MSANet, the Adam [16] optimizer is used, and the learning rate is initially set to 1e-4 and decayed to zero via the cosine annealing strategy [23]. During training, we randomly crop, flip and rotate the patches for data augmentation (a minimal configuration sketch follows the table). |
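
The synthetic-noise setup quoted in the Open Datasets row (AWGN at levels 30, 50 and 70 added to DIV2K crops) can be illustrated with a short sketch. The `add_awgn` helper and the dummy `clean_patch` below are hypothetical stand-ins for illustration only, not the authors' code.

```python
import numpy as np

def add_awgn(image, sigma):
    """Add Additive White Gaussian Noise with standard deviation `sigma`
    (on the 0-255 intensity scale) to a clean image."""
    noise = np.random.normal(0.0, sigma, image.shape).astype(np.float32)
    noisy = image.astype(np.float32) + noise
    return np.clip(noisy, 0.0, 255.0).astype(np.uint8)

# Placeholder for a 128x128 DIV2K training crop (the paper's patch size).
clean_patch = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)

# Noise levels used for synthetic training and testing in the paper.
noisy_patches = {sigma: add_awgn(clean_patch, sigma) for sigma in (30, 50, 70)}
```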
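Similarly, the optimizer and schedule described in the Experiment Setup row map onto standard PyTorch components. The snippet below is a minimal sketch that uses a placeholder model; it is not the authors' training script, which is available in the linked repository.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for MSANet; the real architecture is in the authors' repo.
model = nn.Conv2d(3, 3, kernel_size=3, padding=1)

epochs = 100                 # 100 epochs for real noise; 300 for synthetic noise
criterion = nn.L1Loss()      # L1 loss for real noise; nn.MSELoss() (L2) for synthetic
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Cosine annealing decays the learning rate from 1e-4 to zero over training.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs, eta_min=0.0)

for epoch in range(epochs):
    # Each epoch would iterate over batches of 16 randomly cropped, flipped and
    # rotated 128x128 patches, compute `criterion(model(noisy), clean)`, and
    # step the optimizer; the inner loop is omitted here for brevity.
    scheduler.step()
```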