Spectrum-to-Kernel Translation for Accurate Blind Image Super-Resolution

Authors: Guangpin Tao, Xiaozhong Ji, Wenzhuo Wang, Shuo Chen, Chuming Lin, Yun Cao, Tong Lu, Donghao Luo, Ying Tai

NeurIPS 2021

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Extensive experiments on both synthetic and real-world images demonstrate that our proposed method sufficiently reduces blur kernel estimation error, thus enabling off-the-shelf non-blind SR methods to work effectively under the blind setting, and achieves superior performance over state-of-the-art blind SR methods, on average by 1.39 dB and 0.48 dB on the common blind SR setting (with Gaussian kernels) for scales 2 and 4, respectively. |
| Researcher Affiliation | Collaboration | (1) National Key Lab for Novel Software Technology, Nanjing University; (2) Tencent Youtu Lab; (3) RIKEN Center for Advanced Intelligence Project |
| Pseudocode | No | The paper describes the S2K network structure and its components but does not include structured pseudocode or an algorithm block. |
| Open Source Code | No | The paper does not explicitly state that source code for the described methodology is released, nor does it provide a link to it. |
| Open Datasets | Yes | We use DIV2K [28] training set to train S2K and the non-blind SR model. |
| Dataset Splits | Yes | We use the DIV2K [28] training set to train S2K and the non-blind SR model. Then we use a randomly blurred DIV2K validation set and 100 images from Flickr2K [1] as test datasets in the synthetic experiment, and conduct ×2, ×3, ×4 SR comparisons and a kernel estimation accuracy comparison. (A minimal degradation sketch appears after the table.) |
| Hardware Specification | No | No specific hardware details (e.g., CPU or GPU models, or other machine specifications) used for running the experiments are mentioned in the paper. |
| Software Dependencies | No | The paper mentions using Adam for optimization but does not provide version numbers for software dependencies such as programming languages, libraries, or frameworks. |
| Experiment Setup | Yes | We set the weights of the loss terms in the generator as λ1 = 100, λ2 = 1, λ3 = 1, respectively. For optimization, we use Adam with β1 = 0.5, β2 = 0.999, and a learning rate of 0.001. (A hedged configuration sketch follows the table.) |
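
The Dataset Splits row refers to a randomly blurred DIV2K validation set under the common blind SR setting with Gaussian kernels. The sketch below shows one plausible way such test pairs could be synthesized in PyTorch: blur the HR image with a random isotropic Gaussian kernel, then decimate by the scale factor. The kernel size (21), the sigma range (0.2 to 4.0), the decimation-style downsampling, and the helper names `gaussian_kernel` and `degrade` are all illustrative assumptions, not details quoted from the paper.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(size: int, sigma: float) -> torch.Tensor:
    """Isotropic Gaussian blur kernel, normalized to sum to 1."""
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    k1d = g / g.sum()
    return torch.outer(k1d, k1d)

def degrade(hr: torch.Tensor, kernel: torch.Tensor, scale: int) -> torch.Tensor:
    """Blur an HR batch (B, C, H, W) with `kernel`, then downsample by `scale`."""
    c = hr.shape[1]
    weight = kernel.view(1, 1, *kernel.shape).repeat(c, 1, 1, 1)  # same kernel per channel
    pad = kernel.shape[-1] // 2
    blurred = F.conv2d(F.pad(hr, [pad] * 4, mode="reflect"), weight, groups=c)
    return blurred[..., ::scale, ::scale]  # direct s-fold decimation

# Random isotropic Gaussian per image; the sigma range here is an assumed example.
sigma = float(torch.empty(1).uniform_(0.2, 4.0))
lr_image = degrade(torch.rand(1, 3, 96, 96), gaussian_kernel(21, sigma), scale=2)
```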
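
The Experiment Setup row pins down the optimizer hyperparameters and generator loss weights. The sketch below wires those reported values into a PyTorch training configuration. Only the weights (λ1 = 100, λ2 = 1, λ3 = 1) and the Adam settings come from the paper; the placeholder architecture and the names of the three loss terms are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Stand-in for the S2K generator; the real architecture is described in the paper.
generator = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 1, 3, padding=1),
)

# Adam exactly as reported: β1 = 0.5, β2 = 0.999, learning rate 0.001.
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-3, betas=(0.5, 0.999))

# Reported loss weights; which loss each λ scales is not quoted in the table,
# so the term names below (reconstruction / adversarial / auxiliary) are assumptions.
lambda1, lambda2, lambda3 = 100.0, 1.0, 1.0

def generator_loss(rec_term, adv_term, aux_term):
    """Weighted sum of the three generator loss terms."""
    return lambda1 * rec_term + lambda2 * adv_term + lambda3 * aux_term
```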