Frequency Consistent Adaptation for Real World Super Resolution

Authors: Xiaozhong Ji, Guangpin Tao, Yun Cao, Ying Tai, Tong Lu, Chengjie Wang, Jilin Li, Feiyue Huang. Pages 1664-1672.

AAAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments show that the proposed FCA improves the performance of the SR model under real-world setting, achieving state-of-the-art results with high fidelity and plausible perception, thus providing a novel effective framework for real-world SR application." (Experiments)
Researcher Affiliation | Collaboration | 1 National Key Lab for Novel Software Technology, Nanjing University; 2 Tencent Youtu Lab
Pseudocode | No | The paper describes the method and framework using text and diagrams, but does not include structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide a direct link to source code or an explicit statement about its public availability.
Open Datasets | Yes | "For synthetic experiments, we select the widely used DIV2K (Timofte et al. 2017) dataset, including 800 training samples and 100 validation samples as benchmark. For real-world experiment, we use the DPED (Ignatov et al. 2017) dataset containing 5,614 training and 100 testing images."
Dataset Splits | Yes | "For synthetic experiments, we select the widely used DIV2K (Timofte et al. 2017) dataset, including 800 training samples and 100 validation samples as benchmark. For real-world experiment, we use the DPED (Ignatov et al. 2017) dataset containing 5,614 training and 100 testing images." (A minimal split-checking sketch follows the table.)
Hardware Specification | No | The paper does not specify the hardware used to run the experiments (e.g., GPU model, CPU model, or memory); it offers only general statements about training.
Software Dependencies | No | The paper does not provide specific version numbers for the software dependencies or libraries used in the implementation.
Experiment Setup | Yes | "The input size of adaptation generator is 512×512, and the scale factor is 4, which is the same as the SR factor. Gaussian kernels are of size 13×13 with maximum variance 9. The down-/up-sampling scale factor during curriculum learning is decreasing from 3.5 to 1.2. In Ltotal, we set λ1 = 1, λ2 = 0.001." (A hedged configuration sketch follows the table.)
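
For the Dataset Splits row, the paper only reports the split sizes (DIV2K: 800 train / 100 val; DPED: 5,614 train / 100 test). The Python sketch below is a hypothetical way to organize and verify those splits; the directory layout, the EXPECTED_SPLITS dict, and the check_split helper are assumptions for illustration, not the authors' released code.

    from pathlib import Path

    # Split sizes as reported in the paper; everything else here is assumed.
    EXPECTED_SPLITS = {
        "DIV2K": {"train": 800, "val": 100},    # synthetic benchmark (Timofte et al. 2017)
        "DPED": {"train": 5614, "test": 100},   # real-world images (Ignatov et al. 2017)
    }

    def check_split(root, dataset, split):
        """Count images in <root>/<dataset>/<split> and compare with the reported size."""
        files = sorted(Path(root, dataset, split).glob("*.png"))
        expected = EXPECTED_SPLITS[dataset][split]
        print(f"{dataset}/{split}: found {len(files)} images, paper reports {expected}")
        return files

Usage would be, for example, check_split("data", "DIV2K", "train") after downloading the datasets into an assumed data/ directory.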
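For the Experiment Setup row, the sketch below collects the reported hyper-parameters into one place and illustrates two of the stated ingredients: a 13×13 Gaussian blur kernel with maximum variance 9, and a resampling factor that decreases from 3.5 to 1.2 during curriculum learning. The config keys, the kernel builder, and the linear decay schedule are illustrative assumptions; the paper does not specify the exact schedule or loss-term names beyond λ1 and λ2.

    import numpy as np

    # Reported values from the paper; key names are assumptions.
    FCA_CONFIG = {
        "adaptation_input_size": 512,   # input size of the adaptation generator
        "sr_scale": 4,                  # adaptation scale factor, same as the SR factor
        "kernel_size": 13,              # Gaussian kernels are 13x13
        "max_kernel_variance": 9.0,
        "curriculum_scale_start": 3.5,  # down-/up-sampling factor decreases ...
        "curriculum_scale_end": 1.2,    # ... from 3.5 to 1.2 during curriculum learning
        "lambda1": 1.0,                 # weight of the first term in L_total
        "lambda2": 0.001,               # weight of the second term in L_total
    }

    def gaussian_kernel(size=13, variance=9.0):
        """Isotropic Gaussian kernel at the reported maximum size and variance."""
        ax = np.arange(size) - (size - 1) / 2.0
        xx, yy = np.meshgrid(ax, ax)
        kernel = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * variance))
        return kernel / kernel.sum()

    def curriculum_scale(step, total_steps, start=3.5, end=1.2):
        """Assumed linear decay of the resampling factor from `start` to `end`."""
        t = min(max(step / max(total_steps, 1), 0.0), 1.0)
        return start + (end - start) * t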