DeSRA: Detect and Delete the Artifacts of GAN-based Real-World Super-Resolution Models
Authors: Liangbin Xie, Xintao Wang, Xiangyu Chen, Gen Li, Ying Shan, Jiantao Zhou, Chao Dong
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We exploit two state-of-the-art GAN-SR models, Real-ESRGAN (Wang et al., 2021c) and LDL (Liang et al., 2022b), to validate the effectiveness of our method. We use the officially released model for each method to detect the GAN-inference artifacts. For finetuning, the training HR patch size is set to 256. The models are trained with 4 NVIDIA A100 GPUs with a total batch size of 48. We finetune the model only for 1000 iterations and the learning rate is 1e-4. |
| Researcher Affiliation | Collaboration | 1 State Key Laboratory of Internet of Things for Smart City, University of Macau; 2 Shenzhen Key Lab of Computer Vision and Pattern Recognition, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences; 3 ARC Lab, Tencent PCG; 4 Shanghai Artificial Intelligence Laboratory; 5 Platform Technologies, Tencent Online Video. Correspondence to: Chao Dong <chao.dong@siat.ac.cn>. |
| Pseudocode | No | The paper describes its method in detailed prose and explains the procedures, but it does not include any explicitly labeled pseudocode blocks or algorithms. |
| Open Source Code | Yes | The code will be available at https://github.com/TencentARC/DeSRA. |
| Open Datasets | Yes | Considering the diversity of both image content and degradations, we use the validation set of ImageNet-1K (Deng et al., 2009) as the real-world LR data. Then we choose 200 representative images with GAN-inference artifacts for each method to construct this GAN-SR artifact dataset. For the finetuning process, we further divide the dataset by using 50 pairs for training and 150 pairs for validation. The DIV2K training dataset is also mentioned in Appendix A.2 for calculating adjustment weights. |
| Dataset Splits | Yes | For the finetuning process, we further divide the dataset by using 50 pairs for training and 150 pairs for validation. |
| Hardware Specification | Yes | The models are trained with 4 NVIDIA A100 GPUs with a total batch size of 48. |
| Software Dependencies | No | The paper mentions using SegFormer as a segmentation model but does not specify version numbers for any software libraries, frameworks, or programming languages used in the implementation or experimentation. |
| Experiment Setup | Yes | For finetuning, the training HR patch size is set to 256. The models are trained with 4 NVIDIA A100 GPUs with a total batch size of 48. We finetune the model only for 1000 iterations and the learning rate is 1e-4. We empirically set the threshold to 0.7. (These settings are collected in the sketch below the table.) |
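
The reported training and dataset settings can be gathered into a single configuration sketch. The snippet below is a minimal illustration assembled only from the values quoted in the table; it is not the authors' released code, and the names `finetune_config` and `split_artifact_dataset` are hypothetical.

```python
# Minimal sketch of the reported DeSRA finetuning setup, assembled from the
# values quoted in the table above. NOT the authors' released code; all
# names here are hypothetical illustrations.

finetune_config = {
    "hr_patch_size": 256,       # training HR patch size
    "num_gpus": 4,              # NVIDIA A100 GPUs
    "total_batch_size": 48,     # summed over all GPUs (12 per GPU)
    "iterations": 1000,         # short finetuning schedule
    "learning_rate": 1e-4,
    "artifact_threshold": 0.7,  # empirically chosen detection threshold
}

def split_artifact_dataset(pairs):
    """Split the 200-pair GAN-SR artifact dataset (per SR method) into
    the reported 50 training / 150 validation pairs."""
    assert len(pairs) == 200, "each method contributes 200 annotated pairs"
    return pairs[:50], pairs[50:]
```

Note that the per-GPU batch size of 12 is simply the total batch size of 48 divided across the 4 GPUs; the paper reports only the total.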