Learning Re-sampling Methods with Parameter Attribution for Image Super-resolution

Authors: Xiaotong Luo, Yuan Xie, Yanyun Qu

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on publicly available datasets demonstrate that our proposal can effectively boost the performance of baseline methods from the data re-sampling view."
Researcher Affiliation | Academia | Xiaotong Luo (1), Yuan Xie (2,3), Yanyun Qu (1); (1) School of Informatics, Xiamen University, Fujian, China; (2) School of Computer Science and Technology, East China Normal University, Shanghai, China; (3) Chongqing Institute of East China Normal University, Chongqing, China
Pseudocode | Yes | "Algorithm 1 The bi-sampling parameter attribution for compact image SR." (a sketch of the alternating training schedule follows the table)
Open Source Code | No | The paper does not contain an explicit statement about the release of open-source code for the methodology described, nor does it provide a link to a code repository.
Open Datasets | Yes | "We use DIV2K [1] to train the SR models, which is a high-quality dataset widely used for image SR. The whole dataset includes 800 training images and 100 validation images totally with diverse contents and texture details. The LR images are obtained in the same way as [52, 15]. To demonstrate the effectiveness of our method, the SR models are also evaluated on five public SR benchmark datasets: Set5 [3], Set14 [48], B100 [2], Urban100 [14] and Manga109 [29]."
Dataset Splits | Yes | "The whole dataset includes 800 training images and 100 validation images totally with diverse contents and texture details."
Hardware Specification | Yes | "All the experiments are conducted with PyTorch framework on NVIDIA 2080Ti GPUs."
Software Dependencies | No | The paper mentions the PyTorch framework but does not specify a version number or other software dependencies with specific versions.
Experiment Setup | Yes | "During training, we fix the patch size of the HR image as 128×128 for ×2 and ×4 SR, and 129×129 for ×3 SR. We use the Adam optimizer with β1 = 0.9, β2 = 0.999 to train the SR models. The mini-batch size is set to 16. The learning rate is initialized as 2e-4 and reduced by half every 200 epochs, for 400 epochs in total. The unbalanced factor for the inverse sampling data is set to 10 and β is set to 0.1. The interval of alternate training is 50 epochs and the number of classes of inverse sampling for the DIV2K training dataset is 10." (configuration and sampling sketches based on these values follow the table)
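
The Experiment Setup row maps directly onto an optimizer and scheduler configuration. Below is a minimal PyTorch sketch of those quoted hyperparameters; the model is a stand-in placeholder, and only the numeric values come from the paper excerpt.

    # Minimal PyTorch sketch of the quoted training configuration. The model
    # is a stand-in; only the hyperparameters are taken from the paper excerpt.
    import torch

    model = torch.nn.Conv2d(3, 3, 3, padding=1)  # placeholder for the SR network

    # Adam with beta1 = 0.9, beta2 = 0.999, initial learning rate 2e-4
    optimizer = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.9, 0.999))

    # Learning rate halved every 200 epochs over 400 epochs in total
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=200, gamma=0.5)

    BATCH_SIZE = 16                        # mini-batch size
    HR_PATCH = {2: 128, 3: 129, 4: 128}    # HR patch size per SR scale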
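The inverse-sampling details (10 classes, unbalanced factor 10) suggest a weighted sampler over the 800 DIV2K training images. The following is a hedged sketch of one plausible reading; the binning criterion and the weight formula are assumptions, since the excerpts above do not spell them out.

    # Hedged sketch of "inverse sampling" with 10 classes and an unbalanced
    # factor of 10: samples are binned into 10 classes (assumed here to
    # reflect reconstruction difficulty) and harder classes are drawn more
    # often. The binning criterion and the weight formula are assumptions.
    import torch
    from torch.utils.data import WeightedRandomSampler

    NUM_CLASSES = 10        # classes of inverse sampling for DIV2K
    UNBALANCED_FACTOR = 10  # ratio between most- and least-sampled classes

    # Hypothetical per-sample class labels in {0, ..., 9} for 800 train images
    class_of_sample = torch.randint(0, NUM_CLASSES, (800,))

    # Per-class weights ramp linearly from 1 up to the unbalanced factor
    class_weights = torch.linspace(1.0, float(UNBALANCED_FACTOR), NUM_CLASSES)
    sample_weights = class_weights[class_of_sample]

    inverse_sampler = WeightedRandomSampler(sample_weights.tolist(),
                                            num_samples=len(sample_weights))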
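Finally, the Pseudocode row references Algorithm 1, the bi-sampling parameter attribution scheme, and the setup states that alternate training runs at a 50-epoch interval. A minimal sketch of that schedule, assuming strict alternation between the two samplers and using hypothetical helper names:

    # Sketch of the alternate-training schedule: the sampler switches every
    # 50 epochs over the 400-epoch run. uniform_loader, inverse_loader and
    # train_one_epoch are hypothetical helpers, not names from the paper.
    for epoch in range(400):
        use_inverse = (epoch // 50) % 2 == 1   # flip sampler every 50 epochs
        loader = inverse_loader if use_inverse else uniform_loader
        train_one_epoch(model, loader, optimizer)
        scheduler.step()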