Residual Invertible Spatio-Temporal Network for Video Super-Resolution
Authors: Xiaobin Zhu, Zhuangzi Li, Xiao-Yu Zhang, Changsheng Li, Yaqi Liu, Ziyu Xue
AAAI 2019, pp. 5981–5988
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on public benchmark datasets demonstrate that RISTN outperforms the state-of-the-art methods. |
| Researcher Affiliation | Academia | Xiaobin Zhu (1,2), Zhuangzi Li (2), Xiao-Yu Zhang (3), Changsheng Li (4), Yaqi Liu (3), Ziyu Xue (5). Affiliations: (1) School of Computer and Communication Engineering, University of Science and Technology Beijing; (2) School of Computer and Information Engineering, Beijing Technology and Business University; (3) Institute of Information Engineering, Chinese Academy of Sciences; (4) University of Electronic Science and Technology of China; (5) Information Technology Institute, Academy of Broadcasting Science, NRTA, China |
| Pseudocode | No | The paper describes the architecture and processes using text and diagrams, but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include any explicit statements or links indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | In our approach, the randomly selected 50,000 images from ImageNet are adopted for the spatial network pre-training. ... The publicly available benchmark dataset Vid4 (Liu and Sun 2011) is used to demonstrate the performance of the RISTN. |
| Dataset Splits | No | The paper mentions that "5 consecutive video frames in each sequence are used for training" from the collected videos and that the publicly available benchmark dataset Vid4 (Liu and Sun 2011) is used to demonstrate performance. However, it does not provide explicit train/validation/test splits, such as percentages, sample counts, or references to predefined splits needed for reproduction. |
| Hardware Specification | Yes | Experiments are performed on an NVIDIA Titan Xp GPU. |
| Software Dependencies | No | The paper mentions using Adam for optimization but does not provide specific version numbers for software dependencies or libraries (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We randomly crop a 200×200 patch in each frame as the ground truth, and downsample it to 50×50 as the input LR patch for training. ... RISTN is optimized end-to-end by Adam with learning rate 0.0001. The λ of the L1 regular term is set to 5×10⁻⁷. Training is stopped at 400 epochs and the best model is selected for comparison. (A hypothetical code sketch of this setup follows the table.) |
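
The Experiment Setup row is concrete enough to sketch in code. Below is a minimal, hypothetical reconstruction of that training loop, assuming PyTorch (the paper names no framework and releases no code). The patch sizes, scale factor, Adam learning rate, λ, and epoch count come from the quoted text; the stand-in model, the synthetic data, the MSE reconstruction loss, and applying the L1 term to the weights are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_training_pair(frame: torch.Tensor, patch_size: int = 200, scale: int = 4):
    """Randomly crop a ground-truth HR patch and bicubically downsample it
    to the LR input, mirroring the 200x200 -> 50x50 setup quoted above."""
    _, h, w = frame.shape
    top = torch.randint(0, h - patch_size + 1, (1,)).item()
    left = torch.randint(0, w - patch_size + 1, (1,)).item()
    hr = frame[:, top:top + patch_size, left:left + patch_size]
    lr = F.interpolate(hr.unsqueeze(0), scale_factor=1.0 / scale,
                       mode="bicubic", align_corners=False).squeeze(0)
    return lr, hr

# Stand-in for the RISTN architecture, which the paper does not release;
# a single conv plus PixelShuffle keeps the sketch runnable at 4x scale.
model = nn.Sequential(
    nn.Conv2d(3, 3 * 4 ** 2, kernel_size=3, padding=1),
    nn.PixelShuffle(4),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # "learning rate 0.0001"
lambda_l1 = 5e-7  # λ of the L1 regular term; regularizing the weights is an assumption

for epoch in range(400):  # "training ... stopped when the training reaches 400 epochs"
    # Synthetic frame standing in for a real training video frame.
    frame = torch.rand(3, 256, 256)
    lr_patch, hr_patch = make_training_pair(frame)
    sr = model(lr_patch.unsqueeze(0))
    # The paper does not specify the reconstruction loss; MSE is assumed here.
    reg = sum(p.abs().sum() for p in model.parameters())
    loss = F.mse_loss(sr, hr_patch.unsqueeze(0)) + lambda_l1 * reg
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Per-frame random cropping is one reasonable reading of the quoted setup; the paper's actual batching and frame-sequence handling (it trains on 5 consecutive frames per sequence) is not specified in enough detail to reproduce here.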