Temporal Adaptive Alignment Network for Deep Video Inpainting

Authors: Ruixin Liu, Zhenyu Weng, Yuesheng Zhu, Bairong Li

IJCAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Both quantitative and qualitative evaluation results show that our method significantly outperforms existing deep learning based methods. We conduct extensive experiments on Youtube-VOS [Xu et al., 2018] and DAVIS [Perazzi et al., 2016] datasets.
Researcher Affiliation | Academia | Ruixin Liu, Zhenyu Weng, Yuesheng Zhu and Bairong Li, Communication and Information Security Lab, Shenzhen Graduate School, Peking University
Pseudocode | No | The paper does not contain explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement or link for open-source code for the described methodology.
Open Datasets | Yes | We conduct extensive experiments on Youtube-VOS [Xu et al., 2018] and DAVIS [Perazzi et al., 2016] datasets.
Dataset Splits | Yes | The first is the Youtube-VOS [Xu et al., 2018] dataset... It contains 4,453 YouTube video clips and 94 object categories and is split into 3,471 for training, 474 for validation and 508 for testing.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU types, or memory) used for running experiments.
Software Dependencies | No | The paper mentions using the 'Adam optimizer' but does not provide specific version numbers for software dependencies or libraries.
Experiment Setup | Yes | We select five reference frames (Xt-4, Xt-2, Xt-1, Xt+2, Xt+4) and resize them to 256 × 256 as inputs when training the network. To accelerate the training process while reducing over-fitting, we initialize the parameters of our neural network using the initialization method in [He et al., 2015]. The Adam optimizer with an initial learning rate of 10^-4 is utilized, and we decay the learning rate by 0.1 every 1 million iterations. For our experiments, the loss term weights are adopted from [Liu et al., 2018].
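
For reference, below is a minimal sketch of this training configuration, assuming a PyTorch implementation (the paper does not state its framework). The stand-in model and the prepare_inputs helper are hypothetical placeholders; only the hyperparameters quoted above (He initialization, Adam with learning rate 10^-4, x0.1 decay every 1 million iterations, 256 × 256 inputs, the five reference-frame offsets) come from the paper.

```python
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF

# Reference-frame offsets around the target frame t, as quoted above.
REF_OFFSETS = (-4, -2, -1, 2, 4)

def init_weights(module: nn.Module) -> None:
    """He initialization [He et al., 2015] for convolutional layers."""
    if isinstance(module, (nn.Conv2d, nn.ConvTranspose2d)):
        nn.init.kaiming_normal_(module.weight, nonlinearity="relu")
        if module.bias is not None:
            nn.init.zeros_(module.bias)

def prepare_inputs(frames, t):
    """Hypothetical helper: pick the five reference frames around index t and resize each to 256 x 256."""
    refs = [TF.resize(frames[t + off], [256, 256]) for off in REF_OFFSETS]
    return torch.stack(refs, dim=0)  # shape: (5, C, 256, 256)

# Stand-in network; the paper's temporal adaptive alignment architecture is not reproduced here.
model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 3, kernel_size=3, padding=1),
)
model.apply(init_weights)

# Adam with initial learning rate 1e-4; decay by 0.1 every 1 million iterations
# (scheduler stepped once per training iteration). The loss-term weights adopted
# from [Liu et al., 2018] are not reproduced in this sketch.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1_000_000, gamma=0.1)
```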