Building an End-to-End Spatial-Temporal Convolutional Network for Video Super-Resolution
Authors: Jun Guo, Hongyang Chao
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The effectiveness of the proposed approach is highlighted on two datasets, where we observe substantial improvements relative to the state of the arts. ... In this section, we present experimental results to demonstrate the effectiveness of the proposed STCN for VSR. |
| Researcher Affiliation | Academia | Jun Guo and Hongyang Chao, School of Data and Computer Science, and SYSU-CMU Shunde International Joint Research Institute, Sun Yat-sen University, Guangzhou, People's Republic of China |
| Pseudocode | No | The paper describes its model architecture and components through text and mathematical formulations (e.g., equations for CNN layers and C-LSTM), but it does not present any structured pseudocode or algorithm blocks. (A hedged C-LSTM sketch is given after this table.) |
| Open Source Code | No | The paper does not provide any statement regarding the release of its source code, nor does it include a link to a code repository. |
| Open Datasets | Yes | To further increase credibility, we also conduct experiments on the larger Hollywood2 Scenes dataset (Marszalek, Laptev, and Schmid 2009). |
| Dataset Splits | Yes | The training set is built by collecting 160 video sequences from 26 high-quality 1080p HD video clips. ... Hollywood2 Scenes dataset... having 570 training sequences and 582 test sequences. ... For all experiments, the validation set is built by re-using frames that are trimmed from the training set. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, or cloud computing platforms) used for running the experiments. |
| Software Dependencies | No | The paper mentions using Adam for optimization but does not provide specific software dependencies like programming languages, libraries, or their version numbers. |
| Experiment Setup | Yes | We adopt Adam (Kingma and Ba 2014) to train our STCN. We begin with a step size 10⁻⁴ and then decrease it by a factor of 10 when the validation error stops improving. The step size of Adam is reduced twice prior to termination. ... We train a specific network with batch size 64 for each upscaling factor. ... The layer numbers of the spatial and temporal components, i.e., L_S and L_T, are set to 20 and 3, respectively. (A hedged sketch of this training setup follows the table.) |
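Since the paper specifies its temporal component only through equations rather than pseudocode, the following is a minimal, hypothetical sketch of a generic convolutional LSTM (C-LSTM) cell in PyTorch. The class name, channel counts, and kernel size are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Generic convolutional LSTM cell: all four gates are computed by one
    convolution over the concatenated input and previous hidden state."""

    def __init__(self, in_channels, hidden_channels, kernel_size=3):
        super().__init__()
        self.gates = nn.Conv2d(in_channels + hidden_channels,
                               4 * hidden_channels, kernel_size,
                               padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)  # cell update
        h = torch.sigmoid(o) * torch.tanh(c)                         # hidden output
        return h, c

# Usage on a single frame of a feature sequence (shapes are illustrative).
cell = ConvLSTMCell(in_channels=64, hidden_channels=64)
h = c = torch.zeros(1, 64, 32, 32)
h, c = cell(torch.randn(1, 64, 32, 32), (h, c))
```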
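The reported optimization details (Adam with an initial step size of 10⁻⁴, a tenfold decrease whenever validation error stops improving, and batch size 64) map naturally onto a standard training loop. The sketch below assumes PyTorch; the stand-in model, epoch count, scheduler patience, and dummy data are hypothetical placeholders, not details from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in network; the real STCN stacks L_S = 20 spatial
# convolutional layers and L_T = 3 temporal (C-LSTM) layers.
model = nn.Sequential(nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(64, 1, 3, padding=1))

# Adam with the reported initial step size of 1e-4.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Drop the step size by a factor of 10 when the monitored loss plateaus;
# the paper reports this happening twice before termination.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=5)

loss_fn = nn.MSELoss()
for epoch in range(100):                    # epoch count is not reported
    x = torch.randn(64, 1, 32, 32)          # batch size 64, dummy LR patches
    y = torch.randn(64, 1, 32, 32)          # dummy HR targets
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step(loss.item())             # validation error in practice
```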