AnimeSR: Learning Real-World Super-Resolution Models for Animation Videos
Authors: Yanze Wu, Xintao Wang, Gen Li, Ying Shan
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 5 "Experiments": 5.1 Training Details; 5.2 Comparisons with Previous Methods; 5.3 Ablation Studies and Discussions |
| Researcher Affiliation | Industry | ¹ARC Lab, Tencent PCG; ²Platform Technologies, Tencent Online Video. {yanzewu, xintaowang, enochli, yingsshan}@tencent.com |
| Pseudocode | No | The paper describes methods in prose and with architectural diagrams, but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Codes and models are available at https://github.com/TencentARC/AnimeSR. |
| Open Datasets | Yes | We compare models trained with our AVC dataset and ATD-12K dataset. |
| Dataset Splits | No | The paper defines AVC-Train and AVC-Test sets, but does not explicitly mention a separate validation dataset split. |
| Hardware Specification | Yes | All the training is performed with PyTorch on four NVIDIA A100 GPUs in an internal cluster. |
| Software Dependencies | No | All the training is performed with PyTorch on four NVIDIA A100 GPUs in an internal cluster. No specific version numbers for software dependencies are provided. |
| Experiment Setup | Yes | We use the Adam optimizer [25] with a learning rate of 2×10⁻⁴ for the first stage and a learning rate of 1×10⁻⁴ for the second stage. We set the batch size per GPU, frame sequence length, and patch size of the HR frames to 4, 15, and 256, respectively. |
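The hyperparameters quoted in the Experiment Setup row can be collected into a small sketch. This is only an illustrative encoding of the values reported in the paper; the constant and function names are assumptions, not identifiers from the AnimeSR codebase.

```python
# Hyperparameters quoted from the paper's training details (Section 5.1).
# Names below are illustrative assumptions, not AnimeSR code.
STAGE_LRS = {1: 2e-4, 2: 1e-4}   # Adam learning rate for stages 1 and 2
BATCH_PER_GPU = 4                # batch size per GPU (4 GPUs total)
SEQ_LEN = 15                     # frames per training sequence
HR_PATCH = 256                   # HR patch size in pixels

def learning_rate(stage: int) -> float:
    """Return the reported Adam learning rate for a given training stage."""
    return STAGE_LRS[stage]
```

In a PyTorch setup these values would typically be passed to `torch.optim.Adam(model.parameters(), lr=learning_rate(stage))` and to the dataloader's batch/crop configuration.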