Real-World Image Super-Resolution as Multi-Task Learning

Authors: Wenlong Zhang, Xiaohui Li, Guangyuan Shi, Xiangyu Chen, Yu Qiao, Xiaoyun Zhang, Xiao-Ming Wu, Chao Dong

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that our method achieves significantly enhanced performance across a wide range of degradation scenarios. The source code is available at https://github.com/XPixelGroup/TGSR.
Researcher Affiliation | Academia | Wenlong Zhang (1,2), Xiaohui Li (2,3), Guangyuan Shi (1), Xiangyu Chen (2,4,5), Xiaoyun Zhang (3), Yu Qiao (2,5), Xiao-Ming Wu (1), Chao Dong (2,5). 1: The Hong Kong Polytechnic University; 2: Shanghai AI Laboratory; 3: Shanghai Jiao Tong University; 4: University of Macau; 5: Shenzhen Institute of Advanced Technology, CAS
Pseudocode | Yes | Algorithm 1: Degradation Task Grouping for Real-SR (a hedged Python sketch of this grouping loop follows the table)
Open Source Code | Yes | The source code is available at https://github.com/XPixelGroup/TGSR.
Open Datasets | Yes | We employ DIV2K [1], Flickr2K [1] and Outdoor Scene Training [40] datasets to implement our task grouping algorithm and train the TGSR network.
Dataset Splits | Yes | For evaluation, we use the DIV2K validation set to construct a DIV2K5G dataset consisting of 5 validation sets, one per degradation group identified by the task grouping algorithm, as shown in Fig. 3. Each validation set contains 100 image pairs. (A split-construction sketch also follows the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper mentions components such as Real-ESRGAN, the VGG19 network, and a U-Net discriminator, but does not specify software versions for programming languages, libraries, or frameworks (e.g., Python, PyTorch, or CUDA versions).
Experiment Setup | Yes | The single-task network is fine-tuned from the pre-trained Real-ESRNet model for 100 iterations. The performance indicator is computed as the average over the last 10 iterations, given the instability of the training procedure. ... we fine-tune the pre-trained Real-ESRNet for 1 × 10^4 iterations on all unsatisfactory tasks. ... The loss weights w1, w2, and w3 are set to 1, 1, and 0.1, respectively (a loss sketch follows the table).
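
To make Algorithm 1 concrete, here is a minimal Python sketch of the grouping loop as this table describes it: each candidate degradation task is scored by briefly fine-tuning from the pre-trained Real-ESRNet (100 iterations, indicator averaged over the last 10), and tasks whose score clears a threshold form the next "unsatisfactory" group before the survivors are re-scored. The helper `performance_indicator`, the threshold value, and the greedy loop structure are illustrative assumptions, not the authors' released code.

```python
import random

def performance_indicator(task: str, round_idx: int) -> float:
    """Stand-in for the paper's indicator: fine-tune a copy of the
    current model on `task` for 100 iterations and average the PSNR
    gain over the last 10. Faked here with a pseudo-random score so
    the sketch runs without any training code."""
    rng = random.Random(hash((task, round_idx)))
    return rng.uniform(0.0, 1.0)

def group_tasks(tasks, threshold=0.8, max_groups=5):
    """Greedy grouping: each round, tasks whose indicator clears
    `threshold` form the next 'unsatisfactory' group; the model would
    then be fine-tuned on that group, so the remaining tasks are
    re-scored in the following round."""
    groups, remaining = [], list(tasks)
    for round_idx in range(max_groups - 1):
        group = [t for t in remaining
                 if performance_indicator(t, round_idx) > threshold]
        if not group:              # nothing left to improve; stop early
            break
        groups.append(group)
        remaining = [t for t in remaining if t not in group]
    if remaining:                  # leftovers form the final group
        groups.append(remaining)
    return groups

if __name__ == "__main__":
    tasks = [f"degradation_{i:03d}" for i in range(300)]
    for i, g in enumerate(group_tasks(tasks), start=1):
        print(f"group {i}: {len(g)} tasks")
```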
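The DIV2K5G evaluation data can likewise be pictured as a small script: take the 100 DIV2K validation images and synthesize one LR/HR validation set per degradation group, giving 5 sets of 100 pairs. The `apply_degradation` helper below is a hypothetical stand-in for rendering a degradation sampled from each group; this is a plausible reconstruction under those assumptions, not the authors' pipeline.

```python
from pathlib import Path

def apply_degradation(hr_path: Path, group_id: int) -> bytes:
    """Hypothetical stand-in: render the HR image through a degradation
    sampled from group `group_id` (e.g. blur/noise/JPEG settings) and
    return the encoded LR image. Here it just copies the bytes."""
    return hr_path.read_bytes()

def build_div2k5g(hr_dir: Path, out_dir: Path, num_groups: int = 5) -> None:
    """Create one validation set per degradation group, pairing each of
    the 100 DIV2K validation HR images with a group-specific LR input."""
    hr_images = sorted(hr_dir.glob("*.png"))
    assert len(hr_images) == 100, "expected the 100-image DIV2K val set"
    for g in range(1, num_groups + 1):
        lr_dir = out_dir / f"group_{g}" / "LR"
        lr_dir.mkdir(parents=True, exist_ok=True)
        for hr in hr_images:
            (lr_dir / hr.name).write_bytes(apply_degradation(hr, g))
```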
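Finally, the reported weights w1 = w2 = 1 and w3 = 0.1 fit the usual Real-ESRGAN-style objective of a pixel L1 term, a VGG19 perceptual term, and an adversarial term from the U-Net discriminator. Mapping the three weights onto those three terms, as sketched below, is our assumption from that convention; the excerpt itself does not spell the correspondence out.

```python
import torch
import torch.nn.functional as F

# Assumed weight-to-term mapping (pixel, perceptual, adversarial).
w1, w2, w3 = 1.0, 1.0, 0.1

def total_loss(sr, hr, vgg_feats, d_fake):
    """Weighted GAN-SR objective: w1 * L1 + w2 * perceptual + w3 * GAN."""
    l_pix = F.l1_loss(sr, hr)                         # pixel-wise L1
    l_per = F.l1_loss(vgg_feats(sr), vgg_feats(hr))   # VGG19-feature L1
    # non-saturating GAN loss on the U-Net discriminator's logits
    l_gan = F.binary_cross_entropy_with_logits(
        d_fake, torch.ones_like(d_fake))
    return w1 * l_pix + w2 * l_per + w3 * l_gan

if __name__ == "__main__":
    sr, hr = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
    feats = lambda x: x.mean(dim=(2, 3))   # dummy feature extractor
    d_fake = torch.randn(1, 1, 64, 64)     # dummy discriminator logits
    print(total_loss(sr, hr, feats, d_fake).item())
```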