Frame and Feature-Context Video Super-Resolution
Authors: Bo Yan, Chuming Lin, Weimin Tan
AAAI 2019, pp. 5597-5604 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive evaluations and comparisons demonstrate that our approach produces state-of-the-art results on a standard benchmark dataset, with advantages in terms of accuracy, efficiency, and visual quality over the existing approaches. |
| Researcher Affiliation | Academia | Bo Yan, Chuming Lin, Weimin Tan; School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University; {byan, cmlin17, wmtan14}@fudan.edu.cn |
| Pseudocode | Yes | This processing flow is summarized in Algorithm 1 ("Frame and Feature-Context Video Super-Resolution"). |
| Open Source Code | No | The paper does not provide concrete access to its source code, nor does it explicitly state that its code is open-source or available. |
| Open Datasets | Yes | Our training dataset consists of 2 high-resolution videos (4k, 60fps): Venice and Myanmar downloaded from harmonic (https://www.harmonicinc.com/free-4k-demo-footage/), and the standard Vid4 benchmark dataset (Liu and Sun 2011). |
| Dataset Splits | No | The paper mentions a training dataset and a benchmark dataset used for evaluation but does not specify explicit training, validation, and test splits with percentages or counts. |
| Hardware Specification | Yes | All experiments are carried out for 4x upscaling. We conduct our experiments on a machine with an Intel i7-7700k CPU and an Nvidia GTX 1080Ti GPU. |
| Software Dependencies | No | Our framework is implemented on the TensorFlow platform. (No version number is specified for TensorFlow or any other software dependency.) |
| Experiment Setup | Yes | The parameters are updated with an initial learning rate of 10^-4 before 300K iteration steps and changed to 10^-5 at the following 50K. The loss is minimized using the Adam optimizer (Kingma and Ba 2015) and back-propagated through both networks NET_L and NET_C as well as through time. |
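
The experiment-setup row pins down the optimization schedule, but no code is released, so the following is a minimal sketch of that schedule only, assuming the TensorFlow 2 Keras API (the paper names no TensorFlow version and predates TF2). The `net_l` and `net_c` stand-ins, the L2 loss, and the single-frame training step are placeholders for illustration, not the architectures, loss, or back-propagation-through-time procedure described in the paper.

```python
# Illustrative sketch: only the reported optimizer and learning-rate schedule come from
# the paper (Adam, 1e-4 for the first 300K steps, 1e-5 for the following 50K).
# Network bodies, loss, and data handling below are placeholders.
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries=[300_000], values=[1e-4, 1e-5])
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)

# Toy stand-ins for the paper's context network NET_C and local network NET_L.
net_c = tf.keras.Sequential([
    tf.keras.Input(shape=(None, None, 3)),
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
])
net_l = tf.keras.Sequential([
    tf.keras.Input(shape=(None, None, 64)),
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
    tf.keras.layers.UpSampling2D(size=4),   # 4x upscaling, as in the experiments
    tf.keras.layers.Conv2D(3, 3, padding="same"),
])

@tf.function
def train_step(lr_frame, hr_frame):
    """One joint update: gradients flow through both NET_C and NET_L stand-ins."""
    with tf.GradientTape() as tape:
        sr_frame = net_l(net_c(lr_frame))
        loss = tf.reduce_mean(tf.square(sr_frame - hr_frame))  # placeholder L2 loss
    variables = net_c.trainable_variables + net_l.trainable_variables
    optimizer.apply_gradients(zip(tape.gradient(loss, variables), variables))
    return loss
```

A call such as `train_step(tf.random.normal([1, 32, 32, 3]), tf.random.normal([1, 128, 128, 3]))` runs one 4x-upscaling update; the paper additionally back-propagates through time across consecutive frames, which this single-frame sketch does not reproduce.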