Knowledge Distillation based Degradation Estimation for Blind Super-Resolution

Authors: Bin Xia, Yulun Zhang, Yitong Wang, Yapeng Tian, Wenming Yang, Radu Timofte, Luc Van Gool

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments under classic and real-world degradation settings. The results show that KDSR achieves SOTA performance and can generalize to various degradation processes.
Researcher Affiliation | Collaboration | Bin Xia (1), Yulun Zhang (2), Yitong Wang (3), Yapeng Tian (4), Wenming Yang (1), Radu Timofte (5), and Luc Van Gool (2); 1 Tsinghua University, 2 ETH Zürich, 3 ByteDance Inc., 4 University of Texas at Dallas, 5 University of Würzburg
Pseudocode | No | The information is insufficient as the paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code is available on GitHub.
Open Datasets | Yes | 800 images in DIV2K (Agustsson & Timofte, 2017) and 2,650 images in Flickr2K (Timofte et al., 2017) as the DF2K training set.
Dataset Splits | Yes | Since AIM19 and NTIRE2020 datasets provide a paired validation set, we use the LPIPS (Zhang et al., 2018b), PSNR, and SSIM for the evaluation.
Hardware Specification | No | The information is insufficient as the paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The information is insufficient as the paper does not provide specific ancillary software details (e.g., library names with version numbers) needed to replicate the experiment.
Experiment Setup | Yes | The batch size is set to 64, and the LR patch size is 64×64. We use the Adam optimizer with β1 = 0.9, β2 = 0.99. Both teacher and student networks are trained for 600 epochs, with the initial learning rate set to 10⁻⁴ and halved every 150 epochs. The loss coefficients λrec and λkd are set to 1 and 0.15, respectively. In both stages of training, the batch size is set to 48, with an input patch size of 64.
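The hyperparameters reported in the Experiment Setup row map onto a standard PyTorch training loop. The following is a minimal, hedged sketch of that wiring (Adam betas, step decay, loss weights, patch/batch shapes) under the assumption of PyTorch. The names student, lr_patch, hr_target, and teacher_feat are illustrative placeholders, not the KDSR architecture or its exact loss terms; the real networks and distillation features are defined in the authors' repository.

```python
# Sketch of the reported training configuration, assuming PyTorch.
# The placeholder network and targets below are NOT the KDSR model; they only
# make the hyperparameter wiring concrete and runnable.
import torch
import torch.nn as nn

student = nn.Sequential(                      # stand-in for the KDSR student network
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)

# Adam with beta1 = 0.9, beta2 = 0.99; initial learning rate 1e-4,
# halved every 150 epochs over 600 epochs of training.
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4, betas=(0.9, 0.99))
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=150, gamma=0.5)

rec_loss = nn.L1Loss()   # reconstruction term (exact form per the paper)
kd_loss = nn.L1Loss()    # distillation term between teacher/student features
lambda_rec, lambda_kd = 1.0, 0.15  # loss coefficients reported in the paper

# One dummy batch: 64 LR patches of 64x64, matching the reported setting.
lr_patch = torch.rand(64, 3, 64, 64)
hr_target = torch.rand(64, 3, 64, 64)      # placeholder HR target (same size for brevity)
teacher_feat = torch.rand(64, 3, 64, 64)   # placeholder for the teacher's distilled features

for epoch in range(600):
    sr = student(lr_patch)
    loss = lambda_rec * rec_loss(sr, hr_target) + lambda_kd * kd_loss(sr, teacher_feat)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()  # halve the learning rate every 150 epochs
```

In practice the inner loop would iterate over a dataloader of LR/HR crops rather than a single dummy batch; only the optimizer, schedule, and loss-weight values above come from the paper.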
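The Dataset Splits row above reports evaluation with LPIPS, PSNR, and SSIM on the paired AIM19 and NTIRE2020 validation sets. The sketch below shows one plausible way to compute those metrics, assuming the lpips and scikit-image (>= 0.19) packages; the paper does not state its evaluation tooling, and evaluate_pair is a hypothetical helper name.

```python
# Hedged evaluation sketch: LPIPS, PSNR, and SSIM on one SR/HR image pair.
# Package choices (lpips, scikit-image) are assumptions, not taken from the paper.
import torch
import lpips
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

lpips_fn = lpips.LPIPS(net='alex')  # AlexNet-based LPIPS, as in Zhang et al. (2018b)

def evaluate_pair(sr: np.ndarray, hr: np.ndarray) -> dict:
    """sr, hr: HxWx3 uint8 arrays (restored output and ground truth)."""
    psnr = peak_signal_noise_ratio(hr, sr, data_range=255)
    ssim = structural_similarity(hr, sr, channel_axis=-1, data_range=255)
    # LPIPS expects NCHW float tensors scaled to [-1, 1].
    to_tensor = lambda x: torch.from_numpy(x).permute(2, 0, 1)[None].float() / 127.5 - 1.0
    with torch.no_grad():
        lp = lpips_fn(to_tensor(sr), to_tensor(hr)).item()
    return {"PSNR": psnr, "SSIM": ssim, "LPIPS": lp}
```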