SVFI: Spiking-Based Video Frame Interpolation for High-Speed Motion

Authors: Lujie Xia, Jing Zhao, Ruiqin Xiong, Tiejun Huang

AAAI 2023

Reproducibility Variable Result LLM Response
Research Type | Experimental | Experiments show SVFI outperforms the SOTA methods on a wide variety of datasets. For instance, in 7- and 15-frame-skip evaluations, it shows up to 5.58 dB and 6.56 dB improvements in terms of PSNR over the corresponding second-best methods, BMBC and DAIN.
Researcher Affiliation | Academia | Lujie Xia (1,2), Jing Zhao (1,2,3), Ruiqin Xiong (1,2)*, Tiejun Huang (1,2,4). (1) National Engineering Research Center of Visual Technology (NERCVT), Peking University; (2) Institute of Digital Media, School of Computer Science, Peking University; (3) National Computer Network Emergency Response Technical Team/Coordination Center of China; (4) Beijing Academy of Artificial Intelligence. {lujie.xia, jzhaopku, rqxiong, tjhuang}@pku.edu.cn
Pseudocode | No | The paper includes architectural diagrams (e.g., Figure 3) but does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/Bosserhead/SVFI
Open Datasets | Yes | Training Datasets. Vimeo90k (interpolation) (Xue et al. 2019) is used for training the proposed network. ... The High Speed Event and RGB camera (HS-ERGB) dataset (Tulyakov et al. 2021), which contains RGB frames and the corresponding event stream, is used as one of the testing datasets.
Dataset Splits | No | The paper mentions "Vimeo90k (interpolation)" for training and other datasets for testing, but it does not specify the explicit percentages, counts, or predefined training/validation/test splits required for reproducibility.
Hardware Specification | Yes | We train our network on two NVIDIA Tesla V100 GPUs
Software Dependencies | No | The paper states "All experiments of our work are implemented using the Pytorch framework" but does not specify version numbers for PyTorch or any other software libraries, compilers, or operating systems.
Experiment Setup | Yes | For training, we use the Adam optimizer (Kingma and Ba 2014) with default settings, and the batch size is set to 8. SVFI is trained for a total of 130 epochs with an initial learning rate of 10^-4, reduced by a factor of 2 at the 90th and 110th epochs. The hyperparameter λ of the loss function is set to 0.5.
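The reported schedule above can be sketched as a standard multi-step learning-rate decay. This is a minimal sketch, assuming the stated hyperparameters (Adam, batch size 8, 130 epochs, initial rate 1e-4 halved at epochs 90 and 110); the helper function and the PyTorch calls in the trailing comment are illustrative, not taken from the authors' repository.

```python
# Sketch of the reported training schedule: initial learning rate 1e-4,
# reduced by a factor of 2 at epochs 90 and 110 (130 epochs total).
# Illustrative only; not the authors' actual training code.

def learning_rate(epoch, base_lr=1e-4, milestones=(90, 110), gamma=0.5):
    """Return the learning rate in effect at the given epoch."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# Equivalent PyTorch configuration (model is a placeholder):
#   optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
#   scheduler = torch.optim.lr_scheduler.MultiStepLR(
#       optimizer, milestones=[90, 110], gamma=0.5)
```

With this schedule the rate stays at 1e-4 through epoch 89, drops to 5e-5 at epoch 90, and to 2.5e-5 at epoch 110 for the remainder of training.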