Mitigating Artifacts in Real-World Video Super-Resolution Models
Authors: Liangbin Xie, Xintao Wang, Shuwei Shi, Jinjin Gu, Chao Dong, Ying Shan
AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments Setup Training Settings. We train our FastRealVSR on the REDS (Nah et al. 2019) dataset. The following degradation model is adopted to synthesize training data: ... The quantitative results on VideoLQ are shown in Tab. 3. |
| Researcher Affiliation | Collaboration | Liangbin Xie*1,2,3, Xintao Wang3, Shuwei Shi1,4, Jinjin Gu5,6, Chao Dong1,6, Ying Shan3 1 The Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality Technology, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences 2 University of Macau 3 ARC Lab, Tencent PCG 4 Shenzhen International Graduate School, Tsinghua University 5 The University of Sydney 6 Shanghai Artificial Intelligence Laboratory |
| Pseudocode | No | The paper includes flowcharts but does not contain any sections explicitly labeled as 'Pseudocode' or 'Algorithm', nor are there any structured, code-like algorithm blocks. |
| Open Source Code | Yes | Codes will be available at https://github.com/TencentARC/FastRealVSR. |
| Open Datasets | Yes | We train our FastRealVSR on the REDS (Nah et al. 2019) dataset. |
| Dataset Splits | No | The paper mentions using the REDS dataset for training and VideoLQ for testing but does not explicitly specify the training, validation, and test dataset splits (e.g., percentages or sample counts). |
| Hardware Specification | Yes | Runtime is computed with an output size of 720×1280, with an NVIDIA V100 GPU. |
| Software Dependencies | No | The paper mentions using PyTorch and the Adam optimizer but does not provide specific version numbers for these or any other ancillary software components used in the experiments. |
| Experiment Setup | Yes | In the first stage, we adopt the Unidirectional Recurrent Network (URN) shown in Fig. 6, and train it for 300K iterations with the L1 loss. The batch size and learning rate are set to 16 and 1×10⁻⁴. In the second stage, we equip URN with the proposed HSA module to get the network FastRealVSR. We employ the pre-trained MSE model for initialization. Then we train FastRealVSR for 70K iterations with a combination of L1 loss, perceptual loss (Johnson, Alahi, and Fei-Fei 2016) and GAN loss (Goodfellow et al. 2014), whose loss weights are set to 1, 1, 5×10⁻², respectively. |
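
The Experiment Setup row quotes a second training stage that combines L1, perceptual, and GAN losses with weights 1, 1, and 5×10⁻², using Adam at a learning rate of 1×10⁻⁴ and batch size 16. The sketch below illustrates how such a weighted loss combination could be wired up in PyTorch; it is not the authors' released code, and `generator`, `discriminator`, and `perceptual` are hypothetical placeholders for the paper's FastRealVSR network, its discriminator, and the VGG-based perceptual loss.

```python
# Minimal sketch (assumption: not the official FastRealVSR implementation) of the
# second-stage loss combination quoted above: L1 + perceptual + GAN losses with
# weights 1, 1, 5e-2, optimized with Adam at lr 1e-4 and batch size 16.
import torch
import torch.nn as nn

# Hypothetical stand-ins for the paper's actual modules
generator = nn.Conv2d(3, 3, 3, padding=1)      # placeholder for FastRealVSR (URN + HSA)
discriminator = nn.Conv2d(3, 1, 3, padding=1)  # placeholder discriminator
perceptual = nn.L1Loss()                       # placeholder for the VGG perceptual loss

l1_loss = nn.L1Loss()
bce = nn.BCEWithLogitsLoss()

# Loss weights quoted in the paper
w_l1, w_percep, w_gan = 1.0, 1.0, 5e-2

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

def train_step(lq, gt):
    """One generator/discriminator update on a batch of (LQ, GT) frames."""
    # Generator update: weighted sum of L1, perceptual, and adversarial terms
    sr = generator(lq)
    pred_fake = discriminator(sr)
    g_loss = (w_l1 * l1_loss(sr, gt)
              + w_percep * perceptual(sr, gt)
              + w_gan * bce(pred_fake, torch.ones_like(pred_fake)))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

    # Discriminator update: real frames vs. detached generator outputs
    pred_real = discriminator(gt)
    pred_fake = discriminator(sr.detach())
    d_loss = (bce(pred_real, torch.ones_like(pred_real))
              + bce(pred_fake, torch.zeros_like(pred_fake)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    return g_loss.item(), d_loss.item()

# Example usage with a batch of 16 synthetic 64x64 frames (batch size from the paper)
lq = torch.rand(16, 3, 64, 64)
gt = torch.rand(16, 3, 64, 64)
train_step(lq, gt)
```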