Auto-Conditioned Recurrent Networks for Extended Complex Human Motion Synthesis

Authors: Yi Zhou, Zimo Li, Shuangjiu Xiao, Chong He, Zeng Huang, Hao Li

ICLR 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our synthesized motion results on networks trained on each of four distinct subsets of the CMU motion capture database: martial arts, Indian dance, Indian/salsa hybrid, and walking. An anonymous video of the results can be found at https://youtu.be/FunMxjmDIQM. Quantitative results: Table 1 reports the prediction error, measured as Euclidean distance from the ground truth, for different motion styles at various time frames (see the metric sketch after this table). We compare against a 3-layer LSTM (LSTM-3LR), ERD (Fragkiadaki et al., 2015), the seq2seq framework of Martinez et al. (2017), and scheduled sampling (Bengio et al., 2015).
Researcher Affiliation | Collaboration | (1) University of Southern California; (2) Shanghai Jiao Tong University; (3) USC Institute for Creative Technologies; (4) Pinscreen
Pseudocode | No | The paper does not contain any explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor does it present structured steps in a code-like format.
Open Source Code | No | The paper does not provide any link to open-source code for the described methodology or explicitly state that the code is available.
Open Datasets | Yes | We use the publicly available CMU motion-capture dataset for our experiments.
Dataset Splits | No | The paper mentions using a 'test set' but does not provide specific details on how the dataset was split into training, validation, and test sets (e.g., percentages, sample counts, or predefined splits).
Hardware Specification | Yes | We train with a sequence length of 100 for 500,000 iterations using the ADAM backpropagation algorithm (Kingma & Ba, 2014) on an NVIDIA 1080 GPU for each dataset we experiment on.
Software Dependencies | No | We implement the training using the python caffe framework (Jia et al., 2014). The paper mentions the software 'python caffe framework' but does not provide specific version numbers for Python or Caffe.
Experiment Setup | Yes | We train the acLSTM with three fully connected layers with a memory size of 1024... In the main body of the paper, we set u = v = 5... We train with a sequence length of 100 for 500,000 iterations using the ADAM backpropagation algorithm... The initial learning rate is set to 0.0001.
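
The Experiment Setup row above states the auto-conditioning schedule (u = v = 5, three recurrent layers with memory size 1024, sequence length 100, Adam with learning rate 0.0001) only in prose. Below is a minimal PyTorch sketch of that training scheme, alternating u ground-truth frames with v self-generated frames as input. The paper's implementation used Caffe; the class name `ACLSTM`, the pose dimensionality, and the batch shapes here are illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn

class ACLSTM(nn.Module):
    """Auto-conditioned LSTM sketch: 3 recurrent layers, hidden size 1024."""
    def __init__(self, pose_dim, hidden=1024, layers=3):
        super().__init__()
        self.lstm = nn.LSTM(pose_dim, hidden, num_layers=layers, batch_first=True)
        self.out = nn.Linear(hidden, pose_dim)

    def step(self, frame, state):
        # frame: (batch, pose_dim); predict the next frame and carry the LSTM state
        h, state = self.lstm(frame.unsqueeze(1), state)
        return self.out(h.squeeze(1)), state

def ac_forward(model, gt_seq, u=5, v=5):
    """Alternate u ground-truth frames and v self-generated frames as input."""
    _, T, _ = gt_seq.shape
    state, preds = None, []
    prev_pred = gt_seq[:, 0]
    for t in range(T - 1):
        phase = t % (u + v)
        inp = gt_seq[:, t] if phase < u else prev_pred  # condition on own output
        prev_pred, state = model.step(inp, state)
        preds.append(prev_pred)
    return torch.stack(preds, dim=1)  # predicted frames 1..T-1

# Hyperparameters quoted in the row above: sequence length 100, Adam, lr 1e-4.
model = ACLSTM(pose_dim=57)                  # pose_dim is a placeholder
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
gt_seq = torch.randn(8, 100, 57)             # placeholder motion batch
loss = nn.functional.mse_loss(ac_forward(model, gt_seq), gt_seq[:, 1:])
opt.zero_grad()
loss.backward()
opt.step()
```

Feeding the model's own predictions back as input during training is the point of auto-conditioning: it exposes the network to its own drift, which the paper credits for stable long-horizon synthesis.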
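For the evaluation described in the Research Type row, the quantitative comparison is reported as Euclidean distance from the ground truth at fixed time frames. The sketch below shows one plausible way to compute such a per-frame error; the joint count and array shapes are illustrative assumptions, not the paper's exact evaluation code.

```python
import numpy as np

def prediction_error(pred, gt):
    """pred, gt: (frames, joints, 3) arrays of 3D joint positions.
    Returns the mean Euclidean distance to ground truth at each time frame."""
    per_joint = np.linalg.norm(pred - gt, axis=-1)  # (frames, joints)
    return per_joint.mean(axis=-1)                  # (frames,)

pred = np.random.rand(100, 19, 3)   # placeholder predicted motion
gt = np.random.rand(100, 19, 3)     # placeholder ground-truth motion
print(prediction_error(pred, gt)[:5])
```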