Video Prediction via Example Guidance
Authors: Jingwei Xu, Huazhe Xu, Bingbing Ni, Xiaokang Yang, Trevor Darrell
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on several widely used datasets, including moving digit (Srivastava et al., 2015), robot arm motion (Finn et al., 2016), and human activity (Zhang et al., 2013). Considerable enhancement is observed both in quantitative and qualitative aspects. |
| Researcher Affiliation | Academia | ¹Shanghai Jiao Tong University, ²University of California, Berkeley. Correspondence to: Bingbing Ni <nibingbing@sjtu.edu.cn>. |
| Pseudocode | Yes | Algorithm 1 Example Guided Video Prediction (a hedged sketch of the retrieval step follows this table) |
| Open Source Code | Yes | Project Page: https://sites.google.com/view/vpeg-supp/home. |
| Open Datasets | Yes | We evaluate our model with three widely used video prediction datasets: (1) Moving Mnist (Srivastava et al., 2015), (2) Bair Robot Push (Ebert et al., 2017) and (3) Penn Action (Zhang et al., 2013). |
| Dataset Splits | No | The paper references datasets and mentions 'training' and 'testing' phases, but it does not specify train/validation/test splits (e.g., percentages, sample counts, or a citation to a standard split). |
| Hardware Specification | No | The paper does not specify the hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper discusses various models and architectures (e.g., Conv-LSTM, GAN, VAE, LSTM) but does not give version numbers for any software dependencies or libraries used in the implementation. |
| Experiment Setup | Yes | In all experiments we empirically set N = K = 5. |
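The Pseudocode and Experiment Setup rows together point at the method's core step: retrieve K example sequences similar to the observed frames and use them to guide prediction, with N = K = 5 in all experiments. Below is a minimal NumPy sketch of that retrieval step only, under the assumption that similarity is measured by L2 distance between per-sequence motion features; the names (`retrieve_guidance`, `pool_feats`, `query_feat`) and the feature representation are hypothetical stand-ins, not the authors' implementation (see Algorithm 1 in the paper for the actual procedure).

```python
import numpy as np

def retrieve_guidance(query_feat: np.ndarray,
                      pool_feats: np.ndarray,
                      k: int = 5) -> np.ndarray:
    """Return indices of the k training sequences whose motion features
    lie closest (L2 distance) to the query sequence's features.
    Hypothetical helper; the paper's actual similarity measure and
    feature extractor may differ."""
    dists = np.linalg.norm(pool_feats - query_feat, axis=1)
    return np.argsort(dists)[:k]

# Usage sketch with random stand-in features: one 128-d motion feature
# per training sequence, K = 5 retrieved guidance examples.
rng = np.random.default_rng(0)
pool_feats = rng.standard_normal((1000, 128))  # 1000 training sequences
query_feat = rng.standard_normal(128)          # observed sequence
guide_ids = retrieve_guidance(query_feat, pool_feats, k=5)
print(guide_ids)  # indices of the 5 most similar examples
```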