Time-Agnostic Prediction: Predicting Predictable Video Frames
Authors: Dinesh Jayaraman, Frederik Ebert, Alexei Efros, Sergey Levine
ICLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our approach for future and intermediate frame prediction across three robotic manipulation tasks. |
| Researcher Affiliation | Academia | Dinesh Jayaraman (UC Berkeley), Frederik Ebert (UC Berkeley), Alyosha Efros (UC Berkeley), Sergey Levine (UC Berkeley) |
| Pseudocode | No | The paper describes network architectures and loss functions in detail but does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | Note that more supplementary material, such as video examples, is hosted at: https://sites.google.com/view/ta-pred (The URL hosts video examples and related material, but not the source code for this paper's methodology or implementation.) |
| Open Datasets | Yes | Finally, we test on BAIR pushing (Ebert et al., 2017), a real-world dataset that is commonly used in visual prediction tasks. |
| Dataset Splits | No | 5% of the data is set aside for testing. (No explicit mention of a separate validation split or the percentage for training data). |
| Hardware Specification | Yes | The NVIDIA DGX-1 used for this research was donated by the NVIDIA Corporation. |
| Software Dependencies | No | The paper mentions software such as MuJoCo for data generation and implicitly uses PyTorch (via the DCGAN architecture referenced in Appendix C), but does not provide version numbers for any software dependency. |
| Experiment Setup | Yes | For this pretraining, we use learning rate 0.0001 for 10 epochs with batch size 64 and Adam optimizer. Thereafter, for training, we use learning rate 0.0001 for 200 epochs with batch size 64 and Adam optimizer. |
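For readers reconstructing the setup, the hyperparameters quoted in the Experiment Setup row can be collected into a small configuration sketch. The variable names below (`PRETRAIN_CFG`, `TRAIN_CFG`) are illustrative and not from the paper; the values are those reported above.

```python
# Hedged sketch of the reported optimization settings, gathered into configs.
# Names are illustrative; values come from the Experiment Setup row.
PRETRAIN_CFG = {
    "optimizer": "Adam",
    "learning_rate": 1e-4,  # reported as 0.0001
    "epochs": 10,
    "batch_size": 64,
}

TRAIN_CFG = {
    "optimizer": "Adam",
    "learning_rate": 1e-4,  # same learning rate is reused for training
    "epochs": 200,
    "batch_size": 64,
}
```

Only the optimizer, learning rate, epoch count, and batch size are stated; other details (weight decay, schedules, seeds) are not reported in this row.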