Video Prediction with Appearance and Motion Conditions
Authors: Yunseok Jang, Gunhee Kim, Yale Song
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our model using facial expression and human action datasets and report favorable results compared to existing methods. |
| Researcher Affiliation | Collaboration | Yunseok Jang 1,2, Gunhee Kim 2, Yale Song 3 [...] 1 University of Michigan, Ann Arbor; 2 Seoul National University; 3 Microsoft AI & Research. |
| Pseudocode | Yes | Algorithm 1 summarizes how we train our model. (A hedged training-step sketch follows the table.) |
| Open Source Code | Yes | The code is available at http://vision.snu.ac.kr/projects/amc-gan. |
| Open Datasets | Yes | We evaluate our approach on the MUG facial expression dataset (Aifanti et al., 2010) and the NATOPS human action dataset (Song et al., 2011). |
| Dataset Splits | Yes | We train the classifier on real training data, using roughly 10% for validation, and test it on generated videos from different methods. (A split sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers, such as programming languages, libraries, or frameworks with their respective versions. |
| Experiment Setup | Yes | We use the ADAM optimizer (Kingma & Ba, 2015) with learning rate 2e-4. For the cross-entropy losses, we adopt the label smoothing trick (Salimans et al., 2016) and apply a weight decay of 1e-5 per mini-batch (Arjovsky & Bottou, 2017). (A configuration sketch follows the table.) |
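
The paper's Algorithm 1 is not reproduced on this page. As a rough illustration only, the following is a minimal sketch of one conditional-GAN training step of the kind the paper describes, assuming hypothetical generator `G` and discriminator `D` modules that condition on an appearance input and a motion label. Every name and tensor shape here is an assumption, not the authors' code (which is linked above).

```python
import torch
import torch.nn.functional as F

def train_step(G, D, opt_g, opt_d, appearance, motion, real_video):
    """One adversarial update conditioned on appearance and motion.

    G, D, and all shapes are hypothetical stand-ins; Algorithm 1 in the
    paper is the authoritative description of the training procedure.
    """
    batch = real_video.size(0)
    noise = torch.randn(batch, 100, device=real_video.device)

    # Discriminator update: real videos vs. generated videos.
    fake_video = G(appearance, motion, noise).detach()
    d_real = D(real_video, appearance, motion)
    d_fake = D(fake_video, appearance, motion)
    # One-sided label smoothing (Salimans et al., 2016): real target 0.9.
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.full_like(d_real, 0.9))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: make generated videos look real to D.
    fake_video = G(appearance, motion, noise)
    d_out = D(fake_video, appearance, motion)
    g_loss = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```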
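The split quoted above holds out roughly 10% of the real training data to validate the evaluation classifier. A minimal sketch of such a split, with all names (`train_val_split`, `indices`) assumed for illustration:

```python
import numpy as np

def train_val_split(indices, val_frac=0.1, seed=0):
    """Hold out roughly `val_frac` of the real training clips for validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(indices)
    n_val = int(round(len(idx) * val_frac))
    return idx[n_val:], idx[:n_val]  # train indices, validation indices

train_idx, val_idx = train_val_split(np.arange(1000))
```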
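The only optimization details quoted are the ADAM learning rate (2e-4), label smoothing, and the 1e-5 weight decay. A minimal sketch of wiring these into an optimizer, assuming PyTorch (the paper does not name its software stack, per the Software Dependencies row):

```python
import torch

LR = 2e-4            # ADAM learning rate quoted in the paper
WEIGHT_DECAY = 1e-5  # weight decay applied per mini-batch
REAL_LABEL = 0.9     # one-sided label smoothing target (Salimans et al., 2016)

model = torch.nn.Linear(10, 1)  # stand-in for a generator/discriminator network
optimizer = torch.optim.Adam(model.parameters(), lr=LR, weight_decay=WEIGHT_DECAY)
```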