Video Frame Interpolation without Temporal Priors

Authors: Youjian Zhang, Chaoyue Wang, Dacheng Tao

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we introduce the datasets we used for training and test, and the training configuration of our models. Then we compare the proposed framework with state-of-the-art methods both quantitative and qualitative. Finally, we carry out an ablation study of our proposed components.
Researcher Affiliation | Academia | Youjian Zhang, The University of Sydney, Australia (yzha0535@uni.sydney.edu.au); Chaoyue Wang, The University of Sydney, Australia (chaoyue.wang@sydney.edu.au); Dacheng Tao, The University of Sydney, Australia (dacheng.tao@sydney.edu.au)
Pseudocode | No | The paper describes its methods in prose and with diagrams, but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | Codes are available on https://github.com/yjzhang96/UTI-VFI.
Open Datasets | Yes | We apply the synthetic rule on both GoPro dataset [19] and Adobe240 dataset [30], and name these synthetic datasets as dataset-m-n. (A hedged sketch of this m-n synthesis rule appears below the table.)
Dataset Splits | No | The paper describes how new datasets are synthesized and used for training and testing, but it does not specify explicit training, validation, and test splits (percentages or sample counts). It mentions 'To train the key-states restoration network, we first train the network F1 for 200 epochs and jointly train the network F1 and F2 for another 200 epochs.' but provides no validation-set details.
Hardware Specification | Yes | In test phase, it takes 0.23s and 0.18s to run a single forward for key-states restoration network and interpolation network respectively via a NVIDIA GeForce GTX 1080 Ti graphic card. (A timing-measurement sketch appears below the table.)
Software Dependencies | No | The paper mentions using "Adam [15] solver" for optimization and "PWC-Net [31]" for optical flow estimation, but it does not provide specific version numbers for any software dependencies (e.g., Python, TensorFlow/PyTorch, or the Adam/PWC-Net implementations).
Experiment Setup | Yes | To train the key-states restoration network, we first train the network F1 for 200 epochs and jointly train the network F1 and F2 for another 200 epochs. To train the optical flow refinement network, 100 epochs are enough for convergence. We use Adam [15] solver for optimization, with β1 = 0.9, β2 = 0.999 and ϵ = 10^-8. The learning rate is set initially to 10^-4, and linearly decayed to 0. All weights are initialized using Xavier [8], and bias is initialized to 0.
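
The optimization recipe quoted in the Experiment Setup row maps directly onto standard PyTorch components. Below is a minimal sketch, assuming a PyTorch implementation (the excerpt does not name the framework); F1 and F2 are placeholder module names, and the choice of xavier_normal_ (vs. xavier_uniform_) and a per-epoch LambdaLR for the linear decay are assumptions, since the paper only states "Xavier" and "linearly decayed to 0".

    import torch
    import torch.nn as nn

    def init_weights(module):
        # Xavier weights, zero bias, as stated in the experiment setup.
        # xavier_normal_ is an assumption; the paper does not say normal vs. uniform.
        if isinstance(module, (nn.Conv2d, nn.ConvTranspose2d, nn.Linear)):
            nn.init.xavier_normal_(module.weight)
            if module.bias is not None:
                nn.init.zeros_(module.bias)

    def build_optimizer(params, total_epochs):
        # Adam with beta1=0.9, beta2=0.999, eps=1e-8; LR starts at 1e-4 and
        # decays linearly to 0 over total_epochs (call scheduler.step() once per epoch).
        optimizer = torch.optim.Adam(params, lr=1e-4, betas=(0.9, 0.999), eps=1e-8)
        scheduler = torch.optim.lr_scheduler.LambdaLR(
            optimizer, lr_lambda=lambda epoch: max(0.0, 1.0 - epoch / total_epochs))
        return optimizer, scheduler

    # Two-stage schedule for the key-states restoration network (placeholder modules):
    #   F1.apply(init_weights); F2.apply(init_weights)
    #   stage 1: train F1 alone for 200 epochs
    #   stage 2: train F1 and F2 jointly for another 200 epochs, e.g.
    #   opt, sched = build_optimizer(list(F1.parameters()) + list(F2.parameters()), 200)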
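The Open Datasets row only names the synthetic datasets; the synthesis rule itself is not reproduced in this excerpt. A common reading of the dataset-m-n convention in this line of work, and the one sketched below, is that each degraded frame is formed by averaging m consecutive sharp high-FPS frames (the exposure) and then skipping n frames before the next exposure. The function name and this exact rule are illustrative assumptions, not a verbatim transcription of the authors' pipeline.

    import numpy as np

    def synthesize_m_n(sharp_frames, m, n):
        """Assumed dataset-m-n rule: average m consecutive sharp frames to mimic
        one blurry exposure, then skip n frames before the next exposure.
        sharp_frames: sequence of H x W x 3 uint8 arrays from a high-FPS video."""
        blurry, key_states = [], []
        step = m + n
        for start in range(0, len(sharp_frames) - m + 1, step):
            window = np.stack(sharp_frames[start:start + m]).astype(np.float32)
            blurry.append(window.mean(axis=0).astype(np.uint8))
            # First/last sharp frames of the exposure serve as ground-truth key states.
            key_states.append((sharp_frames[start], sharp_frames[start + m - 1]))
        return blurry, key_states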
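The Hardware Specification row quotes per-forward runtimes (0.23s and 0.18s on a GTX 1080 Ti) without describing how they were measured. Below is a minimal PyTorch sketch of how such a number is typically obtained; the model and input are placeholders, not the authors' networks, and the explicit torch.cuda.synchronize() calls matter because CUDA kernels launch asynchronously.

    import time
    import torch

    def time_forward(model, dummy_input, warmup=5, runs=20):
        """Average wall-clock seconds for one forward pass on the GPU."""
        model = model.eval().cuda()
        dummy_input = dummy_input.cuda()
        with torch.no_grad():
            for _ in range(warmup):        # warm-up runs (cuDNN autotuning, caches)
                model(dummy_input)
            torch.cuda.synchronize()
            start = time.time()
            for _ in range(runs):
                model(dummy_input)
            torch.cuda.synchronize()
        return (time.time() - start) / runs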