Learning Parametric Sparse Models for Image Super-Resolution

Authors: Yongbo Li, Weisheng Dong, Xuemei Xie, Guangming Shi, Xin Li, Donglai Xu

NeurIPS 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that the proposed SR method outperforms existing state-of-the-art methods in terms of both subjective and objective image qualities.
Researcher Affiliation | Academia | State Key Lab. of ISN, School of Electronic Engineering, Xidian University, China; Key Lab. of IPIU (Chinese Ministry of Education), Xidian University, China; Lane Dep. of CSEE, West Virginia University, USA; Sch. of Sci. and Eng., Teesside University, UK
Pseudocode | Yes | Algorithm 1 (Sparse Codes Learning Algorithm) and Algorithm 2 (Image SR with Learned Sparse Representation); a generic sparse-coding sketch is given below the table.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | Yes | Three image sets, i.e., Set5 [9], Set14 [10], and BSD100 [11], consisting of 5, 14, and 100 images respectively, are used as the test images.
Dataset Splits | No | The paper mentions using a 'training set of images' and 'test images' (Set5, Set14, BSD100), but it does not specify exact split percentages, absolute sample counts for each split, or detail a validation set.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper mentions 'imresize in matlab' but does not provide specific version numbers for software dependencies.
Experiment Setup | Yes | Patches of size 7 × 7 are extracted from the feature images and HR images. The training patches are clustered into 1000 clusters. The other major parameters of the proposed SR method are set as L = 12, T = 8, and J = 10. (A patch-extraction and clustering sketch is given below the table.)
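
The pseudocode row above only names the paper's two procedures. For orientation only, here is a minimal sketch of a generic l1 sparse-coding step (ISTA with soft-thresholding) for a fixed dictionary; it is not the paper's Algorithm 1, and the function names and the parameters lam and n_iter are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    """Element-wise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_code_ista(D, x, lam=0.1, n_iter=100):
    """Generic ISTA sketch (not the paper's Algorithm 1): approximately solve
    min_alpha 0.5 * ||x - D @ alpha||^2 + lam * ||alpha||_1
    for a fixed dictionary D whose columns are atoms."""
    step = 1.0 / (np.linalg.norm(D, 2) ** 2)   # 1 / Lipschitz constant of the gradient
    alpha = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ alpha - x)           # gradient of the data-fidelity term
        alpha = soft_threshold(alpha - step * grad, step * lam)
    return alpha

# Hypothetical usage on a random dictionary and a 49-dimensional (7 x 7) patch.
rng = np.random.default_rng(0)
D = rng.standard_normal((49, 256))
x = rng.standard_normal(49)
alpha = sparse_code_ista(D, x)
```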
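
Similarly, for the experiment-setup row, the following is a small sketch of extracting 7 × 7 patches and clustering them into 1000 groups with k-means, assuming NumPy and scikit-learn stand in for the paper's MATLAB pipeline; the helper extract_patches and the placeholder training images are hypothetical, and the paper's feature-extraction step is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_patches(image, patch_size=7, stride=1):
    """Collect all patch_size x patch_size patches of a 2-D array as flat rows."""
    H, W = image.shape
    return np.asarray([
        image[i:i + patch_size, j:j + patch_size].ravel()
        for i in range(0, H - patch_size + 1, stride)
        for j in range(0, W - patch_size + 1, stride)
    ])

# Hypothetical usage: placeholder grayscale images stand in for the training set.
rng = np.random.default_rng(0)
training_images = [rng.random((64, 64)) for _ in range(4)]
patches = np.vstack([extract_patches(img) for img in training_images])

# Cluster the training patches into 1000 groups, mirroring the reported setup.
kmeans = KMeans(n_clusters=1000, n_init=1, random_state=0).fit(patches)
labels = kmeans.labels_  # cluster index for each training patch
```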