WaveFormer: Wavelet Transformer for Noise-Robust Video Inpainting

Authors: Zhiliang Wu, Changchang Sun, Hanyu Xuan, Gaowen Liu, Yan Yan

AAAI 2024

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experiments validate the superior performance of our method over state-of-the-art baselines both qualitatively and quantitatively." |
| Researcher Affiliation | Collaboration | Zhiliang Wu (1), Changchang Sun (2), Hanyu Xuan (3)*, Gaowen Liu (4), Yan Yan (2). (1) CCAI, Zhejiang University, China; (2) Department of Computer Science, Illinois Institute of Technology, USA; (3) School of Big Data and Statistics, Anhui University, China; (4) Cisco Research, USA |
| Pseudocode | No | The paper describes the proposed method with text and figures (e.g., Figure 1) but does not include any explicit pseudocode blocks or algorithm listings. |
| Open Source Code | No | The paper does not explicitly state that its code is open-source, provide a link to a code repository, or mention code availability in supplementary materials for the described methodology. |
| Open Datasets | Yes | "Two most commonly-used datasets are taken to verify the effectiveness of the proposed method, including Youtube-vos dataset (Xu et al. 2018) and DAVIS dataset (Perazzi et al. 2016)." |
| Dataset Splits | Yes | "The former contains 3,471, 474 and 508 video clips in training, validation and test set, respectively. The latter is composed of 60 video clips for training and 90 video clips for testing." |
| Hardware Specification | Yes | "And the runtime is measured on a single Titan RTX GPU." |
| Software Dependencies | No | The paper does not specify version numbers for any key software components or libraries used in the implementation or experimentation. |
| Experiment Setup | Yes | "In real implementation, we empirically set these three parameters as 3, 5 and 0.01." |