Learning Efficient Image Super-Resolution Networks via Structure-Regularized Pruning

Authors: Yulun Zhang, Huan Wang, Can Qin, Yun Fu

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We provide extensive comparisons with both lightweight and larger image SR networks. Our SRPN-Lite and SRPN perform favorably against other recent works. ... Data and Evaluation. We use DIV2K dataset (Timofte et al., 2017) and Flickr2K (Lim et al., 2017) as training data, following most recent works (Timofte et al., 2017; Lim et al., 2017; Zhang et al., 2018a; Haris et al., 2018). For testing, we use five standard benchmark datasets: Set5 (Bevilacqua et al., 2012), Set14 (Zeyde et al., 2010), B100 (Martin et al., 2001), Urban100 (Huang et al., 2015), and Manga109 (Matsui et al., 2017). The SR results are evaluated with PSNR and SSIM (Wang et al., 2004) on the Y channel in YCbCr space. ... 4.2 ABLATION STUDY
Researcher Affiliation | Academia | Yulun Zhang, Huan Wang, Can Qin, Yun Fu (Northeastern University, USA)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code is available at https://github.com/mingsun-tse/SRP.
Open Datasets | Yes | We use DIV2K dataset (Timofte et al., 2017) and Flickr2K (Lim et al., 2017) as training data, following most recent works (Timofte et al., 2017; Lim et al., 2017; Zhang et al., 2018a; Haris et al., 2018).
Dataset Splits | No | The paper mentions using DIV2K and Flickr2K as 'training data' and other datasets for 'testing', but it does not specify any explicit validation split or methodology for a validation set.
Hardware Specification | Yes | We use PyTorch (Paszke et al., 2017) to implement our models with a Tesla V100 GPU.
Software Dependencies | No | The paper mentions 'PyTorch' but does not provide a specific version number for this or any other software dependency.
Experiment Setup | Yes | Adam optimizer (Kingma & Ba, 2014) is adopted for training with β1=0.9, β2=0.999, and ϵ=10^-8. Initial learning rate is set to 10^-4 and then decayed by factor 0.5 every 2×10^5 iterations.
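
The Research Type row above quotes the paper's evaluation protocol: PSNR and SSIM are computed on the Y channel in YCbCr space. The snippet below is a minimal sketch of what Y-channel PSNR evaluation typically looks like for SR benchmarks, converting RGB to Y with ITU-R BT.601 coefficients and cropping a border before measuring. The function names (rgb_to_y, psnr_y), the border width, and the exact color-conversion and cropping conventions are assumptions for illustration, not details confirmed by the paper, and SSIM is omitted here.

    import numpy as np

    def rgb_to_y(img):
        # Convert an RGB image (H, W, 3) with values in [0, 255] to the Y channel
        # of YCbCr using ITU-R BT.601 coefficients (a common choice in SR toolkits).
        img = img.astype(np.float64) / 255.0
        return 16.0 + 65.481 * img[..., 0] + 128.553 * img[..., 1] + 24.966 * img[..., 2]

    def psnr_y(sr, hr, border=4):
        # PSNR between a super-resolved image and its ground truth on the Y channel.
        # `border` pixels are cropped on each side (often set to the SR scale factor).
        y_sr, y_hr = rgb_to_y(sr), rgb_to_y(hr)
        if border > 0:
            y_sr = y_sr[border:-border, border:-border]
            y_hr = y_hr[border:-border, border:-border]
        mse = np.mean((y_sr - y_hr) ** 2)
        return 10.0 * np.log10(255.0 ** 2 / mse)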
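
The Experiment Setup row quotes the optimizer and schedule: Adam with β1=0.9, β2=0.999, ϵ=10^-8, an initial learning rate of 10^-4, and decay by 0.5 every 2×10^5 iterations. Below is a minimal PyTorch sketch of that configuration, assuming the scheduler is stepped once per training iteration. The placeholder model, dummy data, and L1 loss are assumptions for illustration only; they do not reproduce the paper's SRPN architecture or its structure-regularized pruning procedure.

    import torch

    # Stand-ins: the real SRPN model and DIV2K/Flickr2K data pipeline are not reproduced here.
    model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)          # hypothetical placeholder network
    loader = [(torch.rand(1, 3, 48, 48), torch.rand(1, 3, 48, 48))]  # hypothetical dummy LR/HR batches

    # Settings quoted in the paper: Adam with beta1=0.9, beta2=0.999, eps=1e-8, initial lr=1e-4.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999), eps=1e-8)

    # Learning rate decayed by factor 0.5 every 2x10^5 iterations, stepped per iteration.
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=200_000, gamma=0.5)

    for lr_img, hr_img in loader:
        sr_img = model(lr_img)
        loss = torch.nn.functional.l1_loss(sr_img, hr_img)  # loss choice is an assumption, not quoted
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.step()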