Efficient Test-Time Adaptation for Super-Resolution with Second-Order Degradation and Reconstruction

Authors: Zeshuai Deng, Zhuokun Chen, Shuaicheng Niu, Thomas Li, Bohan Zhuang, Mingkui Tan

NeurIPS 2023

Reproducibility checklist: each item gives the variable, the assessed result, and the supporting LLM response (quotes are taken from the paper).
Research Type: Experimental
  "Extensive experiments are conducted on newly synthesized corrupted DIV2K datasets with 8 different degradations and several real-world datasets, demonstrating that our SRTTA framework achieves an impressive improvement over existing methods with satisfying speed."
Researcher Affiliation: Academia
  Zeshuai Deng (1), Zhuokun Chen (1,2), Shuaicheng Niu (1), Thomas H. Li (5), Bohan Zhuang (3), Mingkui Tan (1,2,4)
  (1) South China University of Technology; (2) Pazhou Lab; (3) ZIP Lab, Monash University; (4) Key Laboratory of Big Data and Intelligent Robot, Ministry of Education; (5) Peking University Shenzhen Graduate School
Pseudocode: Yes
  "Algorithm 1: The pipeline of the proposed Super-Resolution Test-Time Adaptation."
Open Source Code: Yes
  "The source code is available at https://github.com/DengZeshuai/SRTTA."
Open Datasets: Yes
  "Extensive experiments are conducted on newly synthesized corrupted DIV2K datasets with 8 different degradations and several real-world datasets [...]" The evaluation is built on the DIV2K [1] dataset.
Dataset Splits: Yes
  "Testing data. Following ImageNet-C [18], we degraded 100 validation images from the DIV2K [1] dataset into eight domains."
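Building a corrupted test set in the ImageNet-C style means applying synthetic degradations to clean validation images, one degradation type per domain. A minimal NumPy sketch of this construction, where the two corruption functions (Gaussian noise and darkening) are illustrative assumptions standing in for the paper's eight actual degradation types:

```python
import numpy as np

def degrade(img, domain, rng):
    """Apply one synthetic degradation to an image with values in [0, 1].

    Only two illustrative domains are shown here; the paper defines
    eight, which are not reproduced in this excerpt.
    """
    if domain == "gaussian_noise":
        return np.clip(img + rng.normal(0.0, 0.05, img.shape), 0.0, 1.0)
    if domain == "darken":
        return np.clip(img * 0.5, 0.0, 1.0)
    raise ValueError(f"unknown domain: {domain}")

rng = np.random.default_rng(0)
clean = rng.random((64, 64, 3))  # stand-in for a clean validation image
corrupted = {d: degrade(clean, d, rng) for d in ["gaussian_noise", "darken"]}
print(sorted(corrupted))  # ['darken', 'gaussian_noise']
```

Each of the 100 DIV2K validation images would be passed through every degradation to produce one test set per domain.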
Hardware Specification: Yes
  "To compare the inference times of different SR methods, we measure all methods on a TITAN Xp with 12 GB of graphics memory for a fair comparison."
Software Dependencies: No
  The paper mentions software components such as the Adam optimizer, EDSR, DiffJPEG (a PyTorch implementation), ResNet-50, and OpenCV, but it does not give version numbers for any of them, which reproducibility requires.
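Since exact versions are the missing piece here, one lightweight remedy when re-running such code is to record the software stack at execution time. A sketch using only the Python standard library; the package names passed in are illustrative, not a claim about the paper's actual dependency list:

```python
import importlib.metadata as metadata
import sys

def snapshot_environment(packages):
    """Return one line per component: the interpreter version first,
    then each requested package with its installed version (or a
    marker when the package is absent)."""
    lines = [f"python {sys.version.split()[0]}"]
    for name in packages:
        try:
            lines.append(f"{name} {metadata.version(name)}")
        except metadata.PackageNotFoundError:
            lines.append(f"{name} (not installed)")
    return lines

print("\n".join(snapshot_environment(["numpy", "torch", "opencv-python"])))
```

Writing this snapshot into the results directory alongside each experiment makes the dependency report reproducible by construction.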
Experiment Setup: Yes
  "For the balance weight in Eqn. (6), we set α to 1. For the ratio of parameters to be frozen, we set ρ to 0.50. For test-time adaptation, we use the Adam optimizer with a learning rate of 5e-5 for the pre-trained SR models. We set the batch size N to 32, randomly crop the test image into N patches of size 96x96 for x2 SR or 64x64 for x4 SR, and degrade them into second-order degraded patches. We perform S = 10 iterations of adaptation for each test image."
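The per-image batch construction described above (N random crops per test image) can be sketched as follows. This is a NumPy illustration of the cropping step only, not the authors' implementation; the adaptation itself (S = 10 Adam updates on second-order degraded patches) is noted in comments:

```python
import numpy as np

def crop_random_patches(img, n_patches=32, patch_size=96, seed=0):
    """Randomly crop n_patches square patches from an HxWxC image,
    matching the reported setup: N = 32 patches of 96x96 (x2 SR)
    or 64x64 (x4 SR) per test image."""
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    ys = rng.integers(0, h - patch_size + 1, size=n_patches)
    xs = rng.integers(0, w - patch_size + 1, size=n_patches)
    return np.stack([img[y:y + patch_size, x:x + patch_size]
                     for y, x in zip(ys, xs)])

# Stand-in for one degraded LR test image.
img = np.random.default_rng(1).random((480, 640, 3), dtype=np.float32)
batch = crop_random_patches(img)
print(batch.shape)  # (32, 96, 96, 3)
# Each batch would then be degraded a second time, and the SR model
# updated with Adam (lr = 5e-5) for S = 10 steps per test image.
```

Cropping a fresh batch per image keeps adaptation cheap: the model only ever sees 32 small patches rather than the full-resolution input at each update step.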