A Novel Learnable Interpolation Approach for Scale-Arbitrary Image Super-Resolution
Authors: Jiahao Chao, Zhou Zhou, Hongfan Gao, Jiali Gong, Zhenbing Zeng, Zhengfeng Yang
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that the proposed learnable interpolation requires much fewer parameters and outperforms state-of-the-art super-resolution methods. Our contributions can be summarized as follows: We conduct extensive experiments on five benchmark datasets. The results show that the network equipped with our plug-in module improves the PSNR value by 0.13 dB on average and up to 0.27 dB over the existing models with only a slight increase in the number of parameters and computational cost. (A minimal PSNR sketch appears after this table.) |
| Researcher Affiliation | Academia | (1) Shanghai Key Lab of Trustworthy Computing, East China Normal University, Shanghai, China; (2) Shanghai University of Finance and Economics Zhejiang College, Jinhua, China. {jhchao502, zhouzhou, hfgao, gongjl}@stu.ecnu.edu.cn, zbzeng@shu.edu.cn, zfyang@sei.ecnu.edu.cn |
| Pseudocode | No | The paper describes its method in prose and with block diagrams (e.g., Figure 1), but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions using the 'BasicSR [Wang et al., 2018a] framework' but provides no explicit statement or link to open-source code for its proposed models (LIEDSR, LIRDN, LIRCAN). |
| Open Datasets | Yes | Following Meta-SR [Hu et al., 2019], we use the DIV2K [Timofte et al., 2017] dataset as our training dataset. For testing, we evaluate our model on five standard benchmark datasets, i.e., Set5 [Bevilacqua et al., 2012], Set14 [Zeyde et al., 2010], B100 [Martin et al., 2001], Urban100 [Huang et al., 2015] and Manga109 [Matsui et al., 2017]. |
| Dataset Splits | No | The paper specifies the DIV2K dataset for training and several benchmark datasets for testing, but it does not describe a separate validation split (e.g., percentages or specific samples held out) or how such a split was used. |
| Hardware Specification | Yes | The experiment is implemented on an NVIDIA RTX 3090 GPU with PyTorch [Paszke et al., 2019]. |
| Software Dependencies | No | The paper mentions 'CUDA' and 'PyTorch' as software used, but does not specify their version numbers (e.g., 'PyTorch 1.9' or 'CUDA 11.1'), which are required for a reproducible description of software dependencies. (A version-check snippet follows the table.) |
| Experiment Setup | Yes | We adopt two training strategies for the pretrain phase and the finetune phase [Park et al., 2020]. Each model is pretrained for 300K iterations and finetuned for 300K iterations. We set L1 loss between SR results and HR images as the loss function. For optimization, we use Adam [Kingma and Ba, 2015] with β1 = 0.9 and β2 = 0.999. In order to stabilize the training process, we use the exponential moving average (EMA) strategy. The initial learning rate is set to 1 × 10⁻⁴ and halved at 200K iterations for both the pretrain and finetune phases. (A training-configuration sketch follows this table.) |
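
To make the quoted PSNR gains concrete, here is a minimal sketch of how PSNR is conventionally computed, assuming images are float tensors in [0, 1]. Note that published SR results typically evaluate on the Y channel with border cropping, both omitted here.

```python
# Minimal PSNR computation, for context on the "0.13 dB" figures quoted
# above; assumes images are float tensors in [0, 1].
import torch

def psnr(sr: torch.Tensor, hr: torch.Tensor, max_val: float = 1.0) -> float:
    mse = torch.mean((sr - hr) ** 2)
    return float(10 * torch.log10(max_val ** 2 / mse))

sr = torch.rand(3, 96, 96)   # placeholder super-resolved image
hr = torch.rand(3, 96, 96)   # placeholder ground-truth image
print(f"PSNR: {psnr(sr, hr):.2f} dB")
```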
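On the Software Dependencies point, a snippet like the following (assuming only a standard PyTorch install) records the version information the paper omits:

```python
# Record the toolchain versions the reproducibility check found missing.
import torch

print(f"PyTorch: {torch.__version__}")              # e.g. "1.9.0"
print(f"CUDA:    {torch.version.cuda}")             # e.g. "11.1"; None on CPU builds
print(f"cuDNN:   {torch.backends.cudnn.version()}")
```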
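The Experiment Setup row maps directly onto standard PyTorch training configuration. The sketch below is not the authors' code: it uses a placeholder one-layer model and random patches (the paper's LIEDSR/LIRDN/LIRCAN networks are not public), and the EMA decay of 0.999 is an assumption the paper does not report.

```python
# Minimal sketch of the described training setup: Adam (beta1=0.9,
# beta2=0.999), L1 loss, initial LR 1e-4 halved at 200K iterations,
# and an EMA copy of the weights. Model and data are placeholders.
import copy
import torch
import torch.nn as nn

model = nn.Conv2d(3, 3, 3, padding=1)        # placeholder SR network
criterion = nn.L1Loss()                      # L1 loss between SR and HR
optimizer = torch.optim.Adam(model.parameters(),
                             lr=1e-4, betas=(0.9, 0.999))
# Halve the learning rate at 200K iterations (applied per phase).
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[200_000], gamma=0.5)

# Exponential moving average of the weights; decay 0.999 is an assumption.
ema_model = copy.deepcopy(model)

def ema_update(ema, live, decay=0.999):
    with torch.no_grad():
        for p_ema, p in zip(ema.parameters(), live.parameters()):
            p_ema.mul_(decay).add_(p, alpha=1 - decay)

for it in range(300_000):                    # one 300K-iteration phase
    lr_batch = torch.rand(16, 3, 48, 48)     # placeholder LR patches
    hr_batch = torch.rand(16, 3, 96, 96)     # placeholder HR patches (x2)
    sr = nn.functional.interpolate(model(lr_batch), scale_factor=2)
    loss = criterion(sr, hr_batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
    ema_update(ema_model, model)
```

The finetune phase repeats the same 300K-iteration loop with the learning-rate schedule reset; whether the EMA or live weights are used at evaluation is not stated in the quoted setup, though the EMA copy is the usual choice.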