IODA: Instance-Guided One-shot Domain Adaptation for Super-Resolution

Authors: Zaizuo Tang, Yu-Bin Yang

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Section headings: "3 Experiments"; "3.1 Experiment details"; "3.2 Ablation experiment"; "3.3 Comparative experiment"
Researcher Affiliation | Academia | "Zai-Zuo Tang, Yu-Bin Yang, State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China. tangzz@smail.nju.edu.cn, yangyubin@nju.edu.cn"
Pseudocode | No | No pseudocode or algorithm blocks found.
Open Source Code | Yes | "Our code is available at https://github.com/ZaizuoTang/IODA."
Open Datasets | Yes | "The experiments validate the effectiveness of the proposed IODA method using the RealSR [2], DF2K [1], GTA [30], Cityscapes [31], and ACDC [32] datasets."
Dataset Splits | No | "For experimental data selection, we select a single LR image from the target domain dataset for network one-shot domain adaptation training. The entire target domain dataset is used as the test set for evaluating network performance."
Hardware Specification | Yes | "For the experimental devices, a 2080 Ti GPU is utilized for domain adaptation training of the SFAMN and SRFormer networks. Due to the larger memory demands of the HAT network, a V100 GPU is employed as the training device."
Software Dependencies | No | The paper mentions specific tools and models (e.g., Alpha-CLIP [23], SAM model [29]) but does not provide version numbers for these or other software dependencies.
Experiment Setup | Yes | "For experimental data selection, we select a single LR image from the target domain dataset for network one-shot domain adaptation training. The entire target domain dataset is used as the test set for evaluating network performance. It is worth noting that to mitigate the uncertainty in network performance caused by random sample selection, we repeat the IODA-related experiments five times. Each time, a different single sample is selected from the target domain dataset for adaptation training, and the results are presented as mean and variance."
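The evaluation protocol quoted in the Experiment Setup row (adapt on a single randomly chosen LR image, test on the entire target-domain set, repeat five times, report mean and variance) can be sketched as below. This is a minimal illustration, not the authors' code: `adapt_fn` and `evaluate_fn` are hypothetical stand-ins for IODA's one-shot adaptation training and the metric computed over the target-domain test set.

```python
import random
import statistics

def one_shot_trial(target_images, adapt_fn, evaluate_fn, seed):
    """One trial of the quoted protocol: adapt on a single randomly
    selected LR image, then evaluate on the full target-domain set."""
    rng = random.Random(seed)
    sample = rng.choice(target_images)        # single LR image for adaptation
    model = adapt_fn(sample)                  # hypothetical one-shot adaptation step
    return evaluate_fn(model, target_images)  # score over the whole target set

def repeated_trials(target_images, adapt_fn, evaluate_fn, n_trials=5):
    """Repeat with a different single sample each time (different seed),
    and report the mean and variance of the resulting scores."""
    scores = [one_shot_trial(target_images, adapt_fn, evaluate_fn, seed)
              for seed in range(n_trials)]
    return statistics.mean(scores), statistics.pvariance(scores)
```

With five seeds this mirrors the paper's "five times ... mean and variance" reporting; whether the authors use population or sample variance is not stated, so `pvariance` here is an assumption.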