Non-Local Recurrent Network for Image Restoration

Authors: Ding Liu, Bihan Wen, Yuchen Fan, Chen Change Loy, Thomas S. Huang

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on both image denoising and super-resolution tasks are conducted. Thanks to the recurrent non-local operations and correlation propagation, the proposed NLRN achieves superior results to state-of-the-art methods with many fewer parameters." (A minimal sketch of such a non-local operation follows the table.)
Researcher Affiliation | Academia | "(1) University of Illinois at Urbana-Champaign, (2) Nanyang Technological University; {dingliu2, bwen3, yuchenf4, t-huang1}@illinois.edu, ccloy@ntu.edu.sg"
Pseudocode | No | The paper includes architectural diagrams (Figures 1, 2, and 3) but no formal pseudocode or algorithm blocks.
Open Source Code | Yes | The code is available at https://github.com/Ding-Liu/NLRN.
Open Datasets | Yes | "For image denoising... we choose as the training set the combination of 200 images from the train set and 200 images from the test set in the Berkeley Segmentation Dataset (BSD) [29]... For image SR... we follow [20, 35, 36] and use a training set of 291 images where 91 images are proposed in [46] and other 200 are from the BSD train set."
Dataset Splits | Yes | "For image denoising... (1) ...train set and 200 images from the test set in the Berkeley Segmentation Dataset (BSD) [29]... (2) ...train set and 100 images from the val set in BSD, and test on Set14 and the BSD test set of 200 images... For image SR... We adopt four benchmark sets: Set5 [1], Set14 [48], BSD100 [29] and Urban100 [19] for testing..."
Hardware Specification | Yes | "Training a model takes about 3 days with a Titan Xp GPU."
Software Dependencies | No | The paper mentions using the Adam optimizer and concepts like batch normalization and ReLU activation, but it does not specify software versions for libraries, frameworks, or programming languages.
Experiment Setup | Yes | "We use Adam optimizer to minimize the loss function. We set the initial learning rate as 1e-3 and reduce it by half five times during training. We use Xavier initialization for the weights. We clip the gradient at the norm of 0.5 to prevent the gradient explosion which is shown to empirically accelerate training convergence, and we adopt 16 as the minibatch size during training." (A training-setup sketch follows the table.)
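
To make the "recurrent non-local operations" quoted in the Research Type row more concrete, below is a minimal Python/NumPy sketch of a single softmax-normalized non-local operation restricted to a local neighborhood, in the spirit of NLRN. It is not the authors' implementation: the embedding matrices, feature sizes, and the 15x15 neighborhood are illustrative placeholders, and the recurrent weight sharing and correlation propagation of the full model are omitted.

# Hypothetical sketch: a non-local operation restricted to a local
# neighborhood; embedding matrices and sizes are placeholders, not NLRN code.
import numpy as np

def nonlocal_op(x, w_theta, w_phi, w_g, neighborhood=15):
    """x: feature map of shape (H, W, C); w_*: (C, C) embedding matrices.
    Each output position aggregates features from its neighborhood,
    weighted by softmax-normalized pairwise affinities."""
    H, W, C = x.shape
    q = neighborhood // 2
    theta, phi, g = x @ w_theta, x @ w_phi, x @ w_g
    out = np.zeros_like(g)
    for i in range(H):
        for j in range(W):
            # restrict the correlation computation to a local window
            i0, i1 = max(0, i - q), min(H, i + q + 1)
            j0, j1 = max(0, j - q), min(W, j + q + 1)
            nb_phi = phi[i0:i1, j0:j1].reshape(-1, C)
            nb_g = g[i0:i1, j0:j1].reshape(-1, C)
            scores = nb_phi @ theta[i, j]            # pairwise affinities
            weights = np.exp(scores - scores.max())
            weights /= weights.sum()                 # softmax normalization
            out[i, j] = weights @ nb_g               # weighted aggregation
    return out

# Toy usage with random placeholder features and embeddings
rng = np.random.default_rng(0)
feats = rng.standard_normal((32, 32, 16))
w_t, w_p, w_g = (0.1 * rng.standard_normal((16, 16)) for _ in range(3))
print(nonlocal_op(feats, w_t, w_p, w_g).shape)  # (32, 32, 16)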
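
The Experiment Setup row can likewise be read as a concrete training configuration. The following PyTorch sketch wires up the stated hyperparameters (Adam, initial learning rate 1e-3 halved five times, Xavier initialization, gradient clipping at norm 0.5, minibatch size 16); the stand-in convolutional model, the MSE loss, the learning-rate milestones, and the patch size are assumptions, since the quoted excerpt does not pin them down.

# Hypothetical training-setup sketch; the tiny conv model, loss,
# schedule milestones, and patch size are placeholders, not NLRN.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(1, 64, 3, padding=1),
                      nn.ReLU(),
                      nn.Conv2d(64, 1, 3, padding=1))

# Xavier initialization for the weights
for m in model.modules():
    if isinstance(m, nn.Conv2d):
        nn.init.xavier_uniform_(m.weight)
        nn.init.zeros_(m.bias)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # initial lr 1e-3
# reduce the learning rate by half five times (milestones are assumed);
# call scheduler.step() once per epoch during training
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[10, 20, 30, 40, 50], gamma=0.5)
criterion = nn.MSELoss()
batch_size = 16  # minibatch size stated in the paper

def train_step(noisy, clean):
    optimizer.zero_grad()
    loss = criterion(model(noisy), clean)
    loss.backward()
    # clip gradients at a norm of 0.5 to prevent gradient explosion
    nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.5)
    optimizer.step()
    return loss.item()

# toy call with random tensors standing in for training patches
noisy = torch.randn(batch_size, 1, 40, 40)
clean = torch.randn(batch_size, 1, 40, 40)
print(train_step(noisy, clean))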