Learning Omni-Frequency Region-adaptive Representations for Real Image Super-Resolution
Authors: Xin Li, Xin Jin, Tao Yu, Simeng Sun, Yingxue Pang, Zhizheng Zhang, Zhibo Chen
AAAI 2021, pp. 1975-1983
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "The extensive experiments endorse the effective and scenario-agnostic nature of our OR-Net for Real SR. Extensive experiments on multiple Real SR benchmarks have validated the effectiveness and superiority of our OR-Net." |
| Researcher Affiliation | Academia | CAS Key Laboratory of Technology in Geo-spatial Information Processing and Application System, University of Science and Technology of China, Hefei 230027, China. {lixin666, jinxustc, yutao666, smsun20, pangyx, zhizheng}@mail.ustc.edu.cn, chenzhibo@ustc.edu.cn |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | Here we evaluate our OR-Net on DRealSR (Wei et al. 2020). The DRealSR dataset is collected by Wei et al. (2020). We also evaluate our OR-Net on the RealSR dataset (Cai et al. 2019) and traditional SISR datasets in the Supplementary. |
| Dataset Splits | No | The paper specifies the number of training and testing images/patches but does not explicitly mention a validation dataset or its size/split information. "The training dataset contains 35,065, 26,118, and 30,502 image patches for scales of 2, 3 and 4, respectively. The testing dataset contains 83, 84, and 93 images for scales of 2, 3 and 4, respectively." |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., specific GPU models, CPU types, or memory). |
| Software Dependencies | No | The paper mentions that "The implementation of OR-Net is based on PyTorch framework." However, it does not specify any version numbers for PyTorch or other software dependencies. |
| Experiment Setup | Yes | In the training process, we utilize the Adam optimizer with an initial learning rate of 0.0001, and the learning rate decays by a factor of 0.5 each epoch. Batch size is 8, and we leverage random flip, random rotation and random cropping for data augmentation. We randomly crop the training images to 192×192. For the FD module, we set the channels of the three frequency branches as 128, 128 and 64 from low-frequency to high-frequency components. For the RFA module, we set the number of basis kernels K as 5. The L1 loss has been verified effective and widely used in many super-resolution works (Lim et al. 2017; Zhang et al. 2018a); in this paper, we also utilize the L1 loss to optimize our OR-Net. (Hedged code sketches of this setup follow the table.) |
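
The Experiment Setup row lists concrete training hyperparameters. Since no source code is released, the following is only a minimal PyTorch sketch of that configuration: the stand-in model, the toy dataset, and the epoch count are assumptions, and a real pipeline would load DRealSR LR/HR pairs and apply the stated flip/rotation/crop augmentation.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for OR-Net; the real network (FD + RFA modules) is not released.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)

criterion = nn.L1Loss()                                    # L1 loss, per the paper
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # initial lr 0.0001
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.5)

# Toy tensors standing in for 192x192 training patches; a real pipeline would
# pair LR/HR crops and apply random flip / rotation / cropping here.
patches = TensorDataset(torch.rand(16, 3, 192, 192), torch.rand(16, 3, 192, 192))
loader = DataLoader(patches, batch_size=8, shuffle=True)   # batch size 8

for epoch in range(2):                   # epoch count is not stated in the paper
    for lr_patch, hr_patch in loader:
        loss = criterion(model(lr_patch), hr_patch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()                     # learning rate *= 0.5 each epoch
```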
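
The same row states that the RFA module uses K = 5 basis kernels but gives no further design detail in this excerpt. The sketch below is one plausible reading, assuming RFA blends K candidate convolutions with per-pixel softmax weights; the class name, the 1×1 router, and the initialization are all hypothetical, not the paper's definition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionAdaptiveConv(nn.Module):
    """Hypothetical region-adaptive layer: every spatial position receives a
    softmax-weighted blend of K basis-kernel responses. Only K = 5 is taken
    from the paper; everything else here is an assumption."""

    def __init__(self, channels: int = 64, k: int = 5, kernel_size: int = 3):
        super().__init__()
        # K basis kernels, shape (K, C_out, C_in, kH, kW).
        self.bases = nn.Parameter(
            torch.randn(k, channels, channels, kernel_size, kernel_size) * 0.02)
        self.pad = kernel_size // 2
        self.router = nn.Conv2d(channels, k, 1)  # per-pixel mixing logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Responses of the K basis kernels, stacked to (B, K, C, H, W).
        responses = torch.stack(
            [F.conv2d(x, kern, padding=self.pad) for kern in self.bases], dim=1)
        # Region-adaptive weights over the K kernels: (B, K, 1, H, W).
        weights = F.softmax(self.router(x), dim=1).unsqueeze(2)
        return (responses * weights).sum(dim=1)

layer = RegionAdaptiveConv(channels=64, k=5)
print(layer(torch.rand(1, 64, 48, 48)).shape)  # torch.Size([1, 64, 48, 48])
```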