Deep Video Frame Interpolation Using Cyclic Frame Generation

Authors: Yu-Lun Liu, Yi-Tung Liao, Yen-Yu Lin, Yung-Yu Chuang

AAAI 2019

Reproducibility Variable | Result | LLM Response

Research Type | Experimental | "Both qualitative and quantitative experiments demonstrate that our model outperforms the state-of-the-art methods. The source codes of the proposed method and more experimental results will be available at https://github.com/alex04072000/CyclicGen." ... "We evaluate the performance of our method on three benchmarks, including the UCF101 dataset (Soomro, Zamir, and Shah 2012), a high-quality video, See You Again (Niklaus, Mai, and Liu 2017b), and the Middlebury optical flow dataset (Baker et al. 2011)."

Researcher Affiliation | Collaboration | Yu-Lun Liu,1,2,3 Yi-Tung Liao,1,2 Yen-Yu Lin,1 Yung-Yu Chuang1,2 (1Academia Sinica, 2National Taiwan University, 3MediaTek)

Pseudocode | No | The paper describes the steps of the approach but does not include any explicitly labeled pseudocode or algorithm blocks.

Open Source Code | Yes | "The source codes of the proposed method and more experimental results will be available at https://github.com/alex04072000/CyclicGen."

Open Datasets | Yes | "We train our model using the training set of the UCF101 dataset (Soomro, Zamir, and Shah 2012)." ... "We test the proposed network on several datasets, including UCF101 (Soomro, Zamir, and Shah 2012), the Middlebury flow benchmark (Baker et al. 2011), and a high-quality YouTube video: See You Again by Wiz Khalifa."

Dataset Splits | Yes | "The weights λc and λm are determined empirically using a validation set." (A hedged sketch of how these weights enter the loss follows the table.)

Hardware Specification | No | The paper does not mention any specific hardware details, such as GPU/CPU models or other computing infrastructure used for the experiments.

Software Dependencies | No | The paper mentions building on DVF and using the Adam optimizer but does not provide version numbers for any software dependencies or libraries.

Experiment Setup | Yes | "The batch size is set to 8, while the learning rate is fixed to 0.0001 during the first stage and reduced to 0.00001 during the second stage optimization." (A minimal training-loop sketch follows the table.)
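For context on the Dataset Splits row: a minimal sketch, assuming the cycle-consistency and motion-linearity terms combine linearly with the reconstruction loss under the weights λc and λm. The function name, term inputs, and default weight values are illustrative placeholders, not the authors' settings; the paper only states that the weights were tuned empirically on a validation set.

```python
import torch

def total_loss(l_r: torch.Tensor, l_c: torch.Tensor, l_m: torch.Tensor,
               lambda_c: float = 1.0, lambda_m: float = 0.1) -> torch.Tensor:
    """Weighted sum of reconstruction (l_r), cycle-consistency (l_c), and
    motion-linearity (l_m) terms. Weight defaults are assumed placeholders."""
    return l_r + lambda_c * l_c + lambda_m * l_m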
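And for the Experiment Setup row: a minimal sketch of the reported two-stage schedule (batch size 8, Adam, learning rate 1e-4 in the first stage and 1e-5 in the second). The toy model, synthetic data, loss, and epoch counts are assumptions; the released CyclicGen code is TensorFlow-based, so this PyTorch loop illustrates only the schedule, not the authors' implementation.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins: the real model is the CyclicGen network trained on UCF101.
model = nn.Conv2d(6, 3, kernel_size=3, padding=1)  # two stacked RGB frames in, one frame out
data = TensorDataset(torch.randn(32, 6, 64, 64), torch.randn(32, 3, 64, 64))
loader = DataLoader(data, batch_size=8, shuffle=True)  # batch size 8 per the paper
loss_fn = nn.L1Loss()  # placeholder reconstruction loss

# Two-stage optimization: lr 1e-4, then 1e-5 (epoch counts here are assumed).
for stage_lr, num_epochs in [(1e-4, 2), (1e-5, 1)]:
    optimizer = torch.optim.Adam(model.parameters(), lr=stage_lr)  # fresh Adam per stage
    for _ in range(num_epochs):
        for frames, target in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(frames), target)
            loss.backward()
            optimizer.step()
```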