Video Frame Interpolation Based on Deformable Kernel Region
Authors: Haoyue Tian, Pan Gao, Xiaojiang Peng
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments are conducted on four datasets to demonstrate the superior performance of the proposed model in comparison to the state-of-the-art alternatives. |
| Researcher Affiliation | Academia | Haoyue Tian¹, Pan Gao¹, Xiaojiang Peng² — ¹College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics; ²College of Big Data and Internet, Shenzhen Technology University. {tianhy, pan.gao}@nuaa.edu.cn, xiaojiangp@gmail.com |
| Pseudocode | Yes | Algorithm 1: The Generation of Î(A) |
| Open Source Code | No | The paper states 'More qualitative results and ablation experiments are provided in supplementary material1. 1The supplementary material is available at http://arxiv.org/abs/2204.11396'. This link points to the paper itself on arXiv, not an external code repository or an explicit statement of code release for the methodology. |
| Open Datasets | Yes | We select Vimeo90k [Xue et al., 2019] as the dataset and divide it into two parts, which are used for training and validating our proposed model respectively. One part contains 64,600 triples as the training set, and the other part has 7,824 triples as the validation set, with a resolution of 448×256 per frame. |
| Dataset Splits | Yes | We select Vimeo90k [Xue et al., 2019] as the dataset and divide it into two parts, which are used for training and validating our proposed model respectively. One part contains 64,600 triples as the training set, and the other part has 7,824 triples as the validation set, with a resolution of 448×256 per frame. |
| Hardware Specification | Yes | During training, we set the batch size to 3, and deploy our experiment on an RTX 2080Ti GPU. |
| Software Dependencies | No | The paper mentions the use of PWC-Net and U-Net, and the AdaMax optimizer, but it does not specify versions for any software libraries or dependencies. |
| Experiment Setup | Yes | We adopt the AdaMax optimizer, where β1 and β2 are set as the default values 0.9 and 0.999, respectively. We set the initial learning rate to 0.002, and during training, if the validation loss does not decrease in 3 consecutive epochs, we reduce the learning rate by a factor of 0.2. ... During training, we set the batch size to 3, ... After training for around 100 epochs, the training loss has converged. |
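The training configuration quoted above (AdaMax with β1 = 0.9, β2 = 0.999, initial learning rate 0.002, and a reduce-on-plateau schedule with patience 3 and factor 0.2) can be sketched in PyTorch. This is a minimal illustration, not the authors' released code: the placeholder `Conv2d` stands in for their unreleased interpolation network, and the constant `val_loss` stands in for evaluation on the 7,824-triple validation set.

```python
# Hedged sketch of the paper's reported training setup; the network and
# validation loop are placeholders, not the authors' implementation.
import torch

model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)  # placeholder network

# AdaMax with the paper's reported hyperparameters.
optimizer = torch.optim.Adamax(model.parameters(), lr=0.002, betas=(0.9, 0.999))

# "If the validation loss does not decrease in 3 consecutive epochs,
# we reduce the learning rate by a factor of 0.2."
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.2, patience=3
)

# Per-epoch usage: step the scheduler with the epoch's validation loss.
for epoch in range(10):
    val_loss = 1.0  # placeholder; would come from the validation set
    scheduler.step(val_loss)
```

Because the placeholder validation loss never improves, the scheduler shrinks the learning rate below its initial 0.002 over these epochs, which is the behavior the paper describes.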