Quadratic Video Interpolation
Authors: Xiangyu Xu, Li Siyao, Wenxiu Sun, Qian Yin, Ming-Hsuan Yang
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we first provide implementation details of the proposed model, including training data, network structure, and hyper-parameters. We then present evaluation results of our algorithm with comparisons to the state-of-the-art methods on video datasets. The source code, data, and the trained models are available at: https://sites.google.com/view/xiangyuxu/qvi_nips19. |
| Researcher Affiliation | Collaboration | Xiangyu Xu (Carnegie Mellon University) xuxiangyu2014@gmail.com; Li Siyao (SenseTime Research) lisiyao1@sensetime.com; Wenxiu Sun (SenseTime Research) sunwenxiu@sensetime.com; Qian Yin (Beijing Normal University) yinqian@bnu.edu.cn; Ming-Hsuan Yang (University of California, Merced / Google) mhyang@ucmerced.edu |
| Pseudocode | No | The paper describes the algorithm steps in text and diagrams but does not include any formal 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | The source code, data, and the trained models are available at: https://sites.google.com/view/xiangyuxu/qvi_nips19. |
| Open Datasets | Yes | We evaluate our model with the state-of-the-art video interpolation approaches, including the phase-based method (Phase) [17], separable adaptive convolution (SepConv) [21], deep voxel flow (DVF) [14], and Super SloMo [9]. ... high frame rate video datasets such as GOPRO [18] and Adobe240 [30]. We also conduct experiments on the UCF101 [29] and DAVIS [23] datasets... |
| Dataset Splits | No | The paper describes the datasets used for training and testing, and how data is prepared (e.g., 'randomly crop 352×352 patches for training'), but it does not specify explicit training/validation/test splits with percentages or counts for a distinct validation set. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions using specific optimizers and network architectures (e.g., 'Adam optimizer', 'PWC-Net', 'U-Net') but does not specify version numbers for any software dependencies. |
| Experiment Setup | Yes | We initialize the learning rate as 10⁻⁴ and further decrease it by a factor of 0.1 at the end of the 100th and 150th epochs. The trade-off parameter λ of the loss function (Eq. 7) is set to 0.005. k in the activation function of δ is set to 10. In the flow reversal layer, we set the Gaussian standard deviation σ = 1. We first train the proposed network with the flow estimation module fixed for 200 epochs, and then finetune the whole system for another 40 epochs. (A hedged sketch of this two-stage schedule follows the table.) |
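The two-stage schedule reported above maps onto a standard PyTorch training loop. The sketch below is an illustrative reconstruction from the stated hyper-parameters only, not the authors' released code: `flow_module` and `synthesis_module` are hypothetical stand-ins for the PWC-Net flow estimator and U-Net synthesis network the paper uses, and the finetuning learning rate is not given in this section, so the value used for stage 2 is an assumption.

```python
from torch import nn, optim

# Hypothetical stand-ins; the paper uses PWC-Net for flow estimation
# and a U-Net for frame synthesis.
flow_module = nn.Conv2d(6, 4, 3, padding=1)       # stand-in for PWC-Net
synthesis_module = nn.Conv2d(4, 3, 3, padding=1)  # stand-in for the U-Net

LAMBDA = 0.005  # trade-off weight λ in the loss function (Eq. 7)
K = 10          # slope k in the activation function of δ
SIGMA = 1.0     # Gaussian standard deviation in the flow reversal layer

# Stage 1: 200 epochs with the flow estimation module fixed.
for p in flow_module.parameters():
    p.requires_grad = False

optimizer = optim.Adam(synthesis_module.parameters(), lr=1e-4)
# Decay the learning rate by 0.1 at the end of the 100th and 150th epochs.
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[100, 150], gamma=0.1)

for epoch in range(200):
    # ... one training pass over randomly cropped 352x352 patches goes here ...
    scheduler.step()

# Stage 2: unfreeze the flow module and finetune the whole system for 40 epochs.
for p in flow_module.parameters():
    p.requires_grad = True
finetune_optimizer = optim.Adam(
    list(flow_module.parameters()) + list(synthesis_module.parameters()),
    lr=1e-4,  # assumption: the finetuning rate is not specified in this section
)
for epoch in range(40):
    pass  # ... finetuning pass ...
```

Note that LAMBDA, K, and SIGMA are recorded here only to pin down the reported values; the loss function, blending activation, and flow reversal layer they parameterize are not implemented in this sketch.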