Towards Compact Single Image Super-Resolution via Contrastive Self-distillation
Authors: Yanbo Wang, Shaohui Lin, Yanyun Qu, Haiyan Wu, Zhizhong Zhang, Yuan Xie, Angela Yao
IJCAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that the proposed CSD scheme effectively compresses and accelerates several standard SR models such as EDSR, RCAN and CARN. |
| Researcher Affiliation | Academia | 1East China Normal University 2Xiamen University 3National University of Singapore |
| Pseudocode | Yes | Algorithm 1 Pseudocode of CSD in a PyTorch-like style |
| Open Source Code | Yes | Code is available at https://github.com/Booooooooooo/CSD. |
| Open Datasets | Yes | We train all SR models with 800 training images on DIV2K and evaluate on the 100 validation images. We additionally test on four SR benchmarks: Set5 [Bevilacqua et al., 2012], Set14 [Zeyde et al., 2010], BSD100 [Martin et al., 2001] and Urban100 [Huang et al., 2015]. |
| Dataset Splits | Yes | We train all SR models with 800 training images on DIV2K and evaluate on the 100 validation images. |
| Hardware Specification | Yes | Our CSD scheme is implemented by PyTorch 1.2.0 and MindSpore 1.2.0 [Huawei, 2020] with one NVIDIA TITAN RTX GPU. |
| Software Dependencies | Yes | Our CSD scheme is implemented by PyTorch 1.2.0 and MindSpore 1.2.0 [Huawei, 2020] |
| Experiment Setup | Yes | The models are trained with ADAM optimizer by setting β1 = 0.9, β2 = 0.999, and ϵ = 10⁻⁸. The batch size and total epochs are set to 16 and 300, respectively. The initial learning rate is 10⁻⁴ and decayed by 10 at every 2×10⁵ iterations. |
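
The experiment setup row above translates directly into an optimizer and scheduler configuration. Below is a minimal sketch of that configuration in PyTorch; the model, loss, and training loop are placeholders (assumptions for illustration), while the hyperparameters (ADAM with β1 = 0.9, β2 = 0.999, ϵ = 10⁻⁸, learning rate 10⁻⁴ decayed by 10 every 2×10⁵ iterations, batch size 16, 300 epochs) are the values reported in the paper.

```python
import torch
import torch.nn as nn

# Placeholder network standing in for the actual SR model (e.g., EDSR/RCAN/CARN).
model = nn.Conv2d(3, 3, kernel_size=3, padding=1)
criterion = nn.L1Loss()  # placeholder reconstruction loss

# ADAM optimizer with the reported hyperparameters.
optimizer = torch.optim.Adam(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), eps=1e-8
)

# Learning rate decayed by a factor of 10 every 2*10^5 iterations
# (scheduler stepped once per training iteration).
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=200_000, gamma=0.1)

batch_size = 16     # as reported
total_epochs = 300  # as reported

# Sketch of the training loop (train_loader over the 800 DIV2K training images is assumed):
# for epoch in range(total_epochs):
#     for lr_patch, hr_patch in train_loader:
#         sr = model(lr_patch)
#         loss = criterion(sr, hr_patch)
#         optimizer.zero_grad()
#         loss.backward()
#         optimizer.step()
#         scheduler.step()
```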