Residual Non-local Attention Networks for Image Restoration

Authors: Yulun Zhang, Kunpeng Li, Kai Li, Bineng Zhong, Yun Fu

ICLR 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments demonstrate that our method obtains comparable or better results compared with recently leading methods quantitatively and visually.
Researcher Affiliation | Academia | 1) Department of ECE, Northeastern University, Boston, MA 02115, USA; 2) School of Computer Science and Technology, Huaqiao University, Xiamen 362100, China; 3) College of CIS, Northeastern University, Boston, MA 02115, USA
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not include any statement about releasing code or a link to a code repository.
Open Datasets | Yes | We use 800 training images in DIV2K (Timofte et al., 2017; Agustsson & Timofte, 2017) to train all of our models. ... For training data, we use BSD400 (Martin et al., 2001) for color/gray-scale image denoising and demosaicing. We use 91 images in Yang et al. (2008) and 200 images in Martin et al. (2001) (denoted as SR291) for image compression artifacts reduction and super-resolution.
Dataset Splits | No | The paper specifies training and testing sets, but does not explicitly mention a separate validation set split or how it was used.
Hardware Specification | Yes | We use PyTorch (Paszke et al., 2017) to implement our models with a Titan Xp GPU.
Software Dependencies | No | The paper mentions using 'PyTorch (Paszke et al., 2017)' but does not specify a version number for PyTorch or any other software dependencies with their versions.
Experiment Setup | Yes | In each training batch, 16 low-quality (LQ) patches with the size of 48×48 are extracted as inputs. Our model is trained by ADAM optimizer (Kingma & Ba, 2014) with β1 = 0.9, β2 = 0.999, and ϵ = 10⁻⁸. The initial learning rate is set to 10⁻⁴ and then decreases to half every 2×10⁵ iterations of back-propagation.
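
The training configuration quoted in the Experiment Setup row maps onto standard PyTorch components. Below is a minimal, hypothetical sketch of that setup: the model, loss function, training data, and total iteration count are placeholders (the paper releases no code), while the batch shape (16 LQ patches of 48×48), the ADAM hyperparameters, and the learning-rate schedule follow the quoted values.

import torch
import torch.nn as nn
from torch.optim import Adam
from torch.optim.lr_scheduler import StepLR

device = "cuda" if torch.cuda.is_available() else "cpu"  # the paper reports a single Titan Xp GPU

# Placeholder network: the RNAN architecture is not released, so a tiny
# convolutional model stands in for it here.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 3, 3, padding=1),
).to(device)

# ADAM with beta1 = 0.9, beta2 = 0.999, eps = 1e-8 and initial learning rate 1e-4,
# as stated in the Experiment Setup row.
optimizer = Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999), eps=1e-8)

# Learning rate halves every 2e5 iterations of back-propagation.
scheduler = StepLR(optimizer, step_size=200_000, gamma=0.5)

loss_fn = nn.L1Loss()  # assumption: the loss function is not specified in this excerpt

for iteration in range(1, 1001):  # shortened loop; the total iteration count is not given here
    # Each training batch: 16 low-quality (LQ) patches of size 48x48.
    # Random tensors stand in for real DIV2K / BSD400 patches.
    lq = torch.rand(16, 3, 48, 48, device=device)
    target = torch.rand(16, 3, 48, 48, device=device)

    optimizer.zero_grad()
    loss = loss_fn(model(lq), target)
    loss.backward()
    optimizer.step()
    scheduler.step()

Calling scheduler.step() once per batch makes StepLR count back-propagation iterations, so with step_size=200_000 the learning rate is halved every 2×10⁵ iterations, matching the quoted schedule.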