Neural Sparse Representation for Image Restoration
Authors: Yuchen Fan, Jiahui Yu, Yiqun Mei, Yulun Zhang, Yun Fu, Ding Liu, Thomas S. Huang
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that sparse representation is crucial in deep neural networks for multiple image restoration tasks, including image super-resolution, image denoising, and image compression artifacts removal. A hypothetical sketch of such a sparsity mechanism appears after the table. |
| Researcher Affiliation | Collaboration | Yuchen Fan, Jiahui Yu, Yiqun Mei, Yulun Zhang, Yun Fu, Ding Liu, Thomas S. Huang; University of Illinois at Urbana-Champaign, Northeastern University, ByteDance |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/ychfan/nsr. |
| Open Datasets | Yes | For image super-resolution, models are trained with the DIV2K [31] dataset... For image denoising, the training set consists of 200 images from the training split of the Berkeley Segmentation Dataset (BSD) [33]... For compression artifacts removal, the training set consists of the 91 images in [1] and the 200 training images in [33]. |
| Dataset Splits | Yes | The DIV2K dataset also comes with 100 validation images, which are used for the ablation study. |
| Hardware Specification | Yes | The running time is averaged over 100 runs on a quad-core Intel Core i5-2500 at 3.30 GHz. |
| Software Dependencies | No | The paper mentions 'PyTorch' but does not specify a version number for it or any other software dependency. |
| Experiment Setup | Yes | Online data augmentation includes random flipping and rotation during training. Training samples 100 random image patches per image per epoch, for 30 total epochs. Models are optimized with the L1 distance and the ADAM optimizer. The initial learning rate is 0.001 and is multiplied by 0.2 at epochs 20 and 25. A PyTorch sketch of this schedule follows the table. |
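
As referenced in the Research Type row, below is a minimal, hypothetical PyTorch sketch of a group-wise sparse feature gate of the kind the paper studies. The grouping into channel groups, the pooled softmax scoring, the temperature, and the `SparseGroupGate` name are all illustrative assumptions rather than the authors' formulation; the released code at https://github.com/ychfan/nsr contains the actual method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseGroupGate(nn.Module):
    """Hypothetical group-wise sparsity gate (illustrative, not the paper's
    exact method): splits channels into groups and re-weights each group
    with a low-temperature softmax, so one group tends to dominate."""

    def __init__(self, channels: int, groups: int = 4, tau: float = 0.1):
        super().__init__()
        assert channels % groups == 0
        self.groups = groups
        self.tau = tau
        # Per-group score from globally pooled features (an assumed design).
        self.score = nn.Linear(channels, groups)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        pooled = x.mean(dim=(2, 3))                             # (n, c)
        gate = F.softmax(self.score(pooled) / self.tau, dim=1)  # (n, groups)
        # Broadcast each group's gate over its slice of channels.
        gate = gate.repeat_interleave(c // self.groups, dim=1)  # (n, c)
        return x * gate.view(n, c, 1, 1)

# Usage on a hidden feature map inside a restoration network.
feat = torch.randn(2, 64, 32, 32)
sparse_feat = SparseGroupGate(channels=64, groups=4)(feat)
```

With a small temperature, the softmax approaches a hard argmax over groups, giving near-sparse activations while keeping the gate differentiable during training.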
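
The Experiment Setup row maps almost directly onto PyTorch primitives. Below is a minimal sketch under that reading; the stand-in model and dummy tensors are placeholders, and only the L1 loss, the ADAM optimizer, the initial learning rate of 0.001, the 0.2 decay at epochs 20 and 25, and the 30-epoch budget come from the table above.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder network and data; the paper trains its sparse-representation
# model on randomly sampled patches with online flip/rotation augmentation.
model = nn.Conv2d(3, 3, kernel_size=3, padding=1)
patches_in = torch.randn(8, 3, 48, 48)   # dummy degraded patches
patches_gt = torch.randn(8, 3, 48, 48)   # dummy ground-truth patches
loader = DataLoader(TensorDataset(patches_in, patches_gt), batch_size=4)

criterion = nn.L1Loss()                                    # L1 distance
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # ADAM, lr = 0.001
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[20, 25], gamma=0.2)             # x0.2 at epochs 20, 25

for epoch in range(30):                                    # 30 total epochs
    for degraded, target in loader:
        optimizer.zero_grad()
        loss = criterion(model(degraded), target)
        loss.backward()
        optimizer.step()
    scheduler.step()
```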