Arbitrary-Scale Video Super-resolution Guided by Dynamic Context
Authors: Cong Huang, Jiahao Li, Lei Chu, Dong Liu, Yan Lu
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate the superiority of our method in terms of performance and speed on arbitrary-scale VSR. We train our model on REDS (Nah et al. 2019) and Vimeo90K (Xue et al. 2019). For REDS (Nah et al. 2019), we use REDS4 as testset. |
| Researcher Affiliation | Collaboration | Cong Huang¹*, Jiahao Li², Lei Chu², Dong Liu¹, Yan Lu² (¹University of Science and Technology of China, ²Microsoft Research Asia) |
| Pseudocode | No | The paper provides figures illustrating the framework and equations, but no explicitly labeled 'Pseudocode' or 'Algorithm' blocks, and no step-by-step procedures formatted like code. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code or a link to a code repository for the described methodology. |
| Open Datasets | Yes | We train our model on REDS (Nah et al. 2019) and Vimeo90K (Xue et al. 2019). |
| Dataset Splits | No | The paper specifies the test sets used (REDS4, Vimeo-90K-T, Vid4, UDM10) but does not explicitly mention validation sets or specific training/validation/test splits by percentage or sample count. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments (e.g., GPU models, CPU types, or cloud instance specifications). |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions) that would be needed for reproducibility. |
| Experiment Setup | No | The paper defers implementation details to the appendix ('The implementation details and the result about BD degradation is in the appendix.'), but the appendix content is not provided. The main text does not contain specific hyperparameters, training configurations, or system-level settings. |