Aligned Structured Sparsity Learning for Efficient Image Super-Resolution

Authors: Yulun Zhang, Huan Wang, Can Qin, Yun Fu

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct extensive comparisons with lightweight SR networks. Our ASSLN achieves superior performance gains over recent methods quantitatively and visually." [...] (Section 4, Experimental Results)
Researcher Affiliation | Academia | ¹Department of ECE, Northeastern University; ²Khoury College of Computer Science, Northeastern University
Pseudocode | No | The paper states, "We provide the detailed algorithm in the supplementary material," implying the algorithm is not given in the main text.
Open Source Code | Yes | "Our code and trained models are available at https://github.com/MingSun-Tse/ASSL."
Open Datasets | Yes | "Following most recent works [56, 38, 63, 19], we use DIV2K [56] and Flickr2K [38] as training data."
Dataset Splits | No | The paper names its training data ("we use DIV2K [56] and Flickr2K [38] as training data") and its testing data ("For testing, we use five standard benchmark datasets: Set5 [2], Set14 [62], B100 [43], Urban100 [26], and Manga109 [44]"), but it does not describe a separate validation split or how the data is partitioned for validation.
Hardware Specification | Yes | "We use PyTorch [49] to implement our models with a Tesla V100 GPU."
Software Dependencies | No | The paper mentions PyTorch [49] but does not specify its version, nor does it list any other software dependencies with version numbers.
Experiment Setup | Yes | "Each training batch consists of 16 LR color patches, whose size is 48×48. Our ASSLN model is trained by ADAM optimizer [32] with β₁ = 0.9, β₂ = 0.999, and ε = 10⁻⁸. We set the initial learning rate as 10⁻⁴ and then decrease it to half every 2×10⁵ iterations."
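
For concreteness, below is a minimal PyTorch sketch of that quoted training configuration. Only the batch shape (16 patches of 48×48), the ADAM hyper-parameters, and the learning-rate schedule come from the paper; the model, loss, data, and iteration budget are placeholders for illustration, not the authors' implementation (the real code is at https://github.com/MingSun-Tse/ASSL).

```python
import torch
import torch.nn as nn
from torch.optim import Adam
from torch.optim.lr_scheduler import StepLR

# Placeholder x2 SR network; the actual ASSLN architecture is in the
# authors' repository and is not reproduced here.
model = nn.Sequential(
    nn.Conv2d(3, 3 * 4, kernel_size=3, padding=1),
    nn.PixelShuffle(2),  # rearranges channels into a 2x upscaled image
)
criterion = nn.L1Loss()  # loss choice is an assumption, common in SR training

# Hyper-parameters as quoted above: ADAM with beta1=0.9, beta2=0.999,
# eps=1e-8, initial lr 1e-4, halved every 2x10^5 iterations.
optimizer = Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = StepLR(optimizer, step_size=200_000, gamma=0.5)

for _ in range(1000):  # total iteration budget is not quoted; 1000 for demo
    lr_patches = torch.randn(16, 3, 48, 48)  # 16 LR 48x48 color patches (random stand-in data)
    hr_patches = torch.randn(16, 3, 96, 96)  # matching HR targets (x2 scale assumed)
    loss = criterion(model(lr_patches), hr_patches)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()  # one scheduler step per training iteration
```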