SEAL: A Framework for Systematic Evaluation of Real-World Super-Resolution

Authors: Wenlong Zhang, Xiaohui Li, Xiangyu Chen, Xiaoyun Zhang, Yu Qiao, Xiao-Ming Wu, Chao Dong

ICLR 2024

Reproducibility assessment (variable, result, and supporting excerpt from the LLM response):
Research Type: Experimental
    "Under SEAL, we benchmark existing real-SR methods, obtain new observations and insights into their performance, and develop a new strong baseline." "In this work, we establish a systematic evaluation framework for real-SR, namely SEAL, which assesses relative, distributed, and overall performance rather than relying solely on the absolute, average, and misleading evaluation strategies commonly used in current methods." Section 5 is titled EXPERIMENTS and contains numerous tables and figures showing empirical results.
Researcher Affiliation: Academia
    ¹The Hong Kong Polytechnic University, ²Shanghai AI Laboratory, ³Shanghai Jiao Tong University, ⁴University of Macau, ⁵Shenzhen Institute of Advanced Technology, CAS
Pseudocode: Yes
    Algorithm 1: Image degradation clustering
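The paper's Algorithm 1 clusters the space of image degradations to select representative parameters (the 100 settings later used to synthesize training sets). The exact procedure is given in the paper; as a rough illustration of the idea, the sketch below runs a generic Lloyd's k-means over degradation parameter vectors and returns, for each cluster, the real member nearest its centroid. The three-component vectors (blur sigma, noise level, JPEG quality) are illustrative assumptions, not the paper's parameterization.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two parameter vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(points):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def cluster_degradations(params, k, iters=50, seed=0):
    """Generic k-means over degradation parameter vectors (a sketch,
    not the paper's Algorithm 1). Returns k representatives: for each
    cluster, the actual member closest to the centroid, so every
    representative is a real, synthesizable degradation setting."""
    rng = random.Random(seed)
    centroids = rng.sample(params, k)
    for _ in range(iters):
        # assign each parameter vector to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in params:
            clusters[min(range(k), key=lambda i: dist2(p, centroids[i]))].append(p)
        # recompute centroids as cluster means (keep old centroid if empty)
        centroids = [mean(c) if c else centroids[i] for i, c in enumerate(clusters)]
    # final assignment, then pick the real member nearest each centroid
    clusters = [[] for _ in range(k)]
    for p in params:
        clusters[min(range(k), key=lambda i: dist2(p, centroids[i]))].append(p)
    return [min(c, key=lambda p: dist2(p, centroids[i])) if c else centroids[i]
            for i, c in enumerate(clusters)]

# Illustrative usage: cluster random (blur_sigma, noise_level, jpeg_quality)
# settings into 10 representative degradations.
rng = random.Random(1)
settings = [(rng.uniform(0.1, 3.0), rng.uniform(0, 30), rng.uniform(30, 95))
            for _ in range(200)]
representatives = cluster_degradations(settings, k=10)
```

In the paper's pipeline, each selected representative parameter set is then used to synthesize one training (or test) dataset, which is what makes the choice of "nearest real member" rather than the raw centroid convenient here.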
Open Source Code: Yes
    "The source code is available at https://github.com/XPixelGroup/SEAL"
Open Datasets: Yes
    "We take Set14 (Zeyde et al., 2010) and DIV2K val (Lim et al., 2017) to construct the test sets for systematic evaluation, denoted as Set14-SE and DIV2K val-SE, respectively." "We use the 100 representative degradation parameters to synthesize 100 training datasets based on DIV2K."
Dataset Splits: Yes
    "We take Set14 (Zeyde et al., 2010) and DIV2K val (Lim et al., 2017) to construct the test sets for systematic evaluation, denoted as Set14-SE and DIV2K val-SE, respectively." "We randomly add degradations to images in the DIV2K (Agustsson & Timofte, 2017) validation set to construct a single real-DIV2K val set."
Hardware Specification: No
    The paper does not explicitly describe the hardware used to run its experiments, such as GPU or CPU models, or cloud resources.
Software Dependencies: No
    The paper mentions the Adam optimizer but does not specify versions of any programming languages, libraries, or other software dependencies required to reproduce the experiments.
Experiment Setup: Yes
    "The models within the model zoo are initially pre-trained under the real-SR setting. Subsequently, they undergo a fine-tuning process consisting of a total of 2 × 10^5 iterations. The Adam (Kingma & Ba, 2014) optimizer with β1 = 0.9 and β2 = 0.99 is used for training. The initial learning rate is 2 × 10^-4. We adopt L1 loss to optimize the networks."
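To make the reported optimization settings concrete, here is a minimal pure-Python sketch of a single Adam update and the L1 loss, plugging in the hyperparameters quoted above (β1 = 0.9, β2 = 0.99, learning rate 2 × 10^-4). The scalar formulation and the eps value are illustrative assumptions; a real implementation would use a framework optimizer over tensors.

```python
def adam_step(param, grad, state, lr=2e-4, beta1=0.9, beta2=0.99, eps=1e-8):
    """One Adam update for a single scalar parameter.

    lr, beta1, beta2 match the paper's reported setup; eps is an
    assumed common default, not stated in the paper.
    """
    state["t"] += 1
    # exponential moving averages of the gradient and squared gradient
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad ** 2
    # bias-corrected estimates
    m_hat = state["m"] / (1 - beta1 ** state["t"])
    v_hat = state["v"] / (1 - beta2 ** state["t"])
    return param - lr * m_hat / (v_hat ** 0.5 + eps)

def l1_loss(pred, target):
    """Mean absolute error, the L1 loss the paper uses for optimization."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

# Illustrative usage: one optimizer step on a scalar parameter.
state = {"t": 0, "m": 0.0, "v": 0.0}
updated = adam_step(1.0, grad=0.5, state=state)
```

Note that on the first step Adam's bias correction makes the update magnitude approximately lr regardless of the gradient scale, which is why small initial learning rates such as 2 × 10^-4 are typical for fine-tuning.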