Few-shot Human Motion Prediction via Learning Novel Motion Dynamics

Authors: Chuanqi Zang, Mingtao Pei, Yu Kong

IJCAI 2020

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Experimental results show that our method achieves better performance over state-of-the-art methods in motion prediction." |
| Researcher Affiliation | Academia | 1. Beijing Laboratory of Intelligent Information Technology, Beijing Institute of Technology, China; 2. Golisano College of Computing and Information Sciences, Rochester Institute of Technology, USA |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | "We evaluate the effectiveness of our MoPredNet for few-shot motion prediction on two popular human motion datasets: 1) Human3.6M dataset [Ionescu et al., 2013] and 2) CMU MOCAP." |
| Dataset Splits | No | The paper describes training and testing phases and the data used in each, but does not explicitly provide a validation split (e.g., a percentage, sample count, or named validation set). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiments. |
| Experiment Setup | Yes | "In our Motion Prediction Network (MoPredNet), the pose encoder consists of a whole-past-sequence encoder and a masked-motion-sequence encoder, with the longest mask K set as 20 and the mask threshold γ set as 0.0666. ... The feature channels of each convolution layer are set as 16, 64, 128, 256, respectively. ... The pose decoder network contains three fully-connected layers with sizes of 512, 128, and 54, respectively. Leaky ReLU activation and dropout of 0.5 are both applied in the first two layers. ... We adopt the ADAM optimizer with initial learning rates γ1 = γ2 = 1e-4, γ3 = 1e-8. The initial sampling rate and decayed rate are set as 0.8 and 0.7, respectively. ... The input window is set as 50 frames (2s), and the output window is set to 25 frames (1s) for training..." |
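The pose decoder quoted in the Experiment Setup row (three fully-connected layers of size 512, 128, and 54, with Leaky ReLU and dropout 0.5 on the first two layers) can be sketched as a small numpy forward pass. This is a minimal sketch only: the 256-dimensional input width (matching the last convolutional channel count), the 0.01 Leaky ReLU slope, and the random weights are assumptions for illustration, not details stated by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def leaky_relu(x, slope=0.01):
    # Slope 0.01 is an assumed default; the paper does not specify it.
    return np.where(x > 0, x, slope * x)

def make_decoder(in_dim=256, sizes=(512, 128, 54)):
    """Random weights for a 3-layer MLP decoder: in_dim -> 512 -> 128 -> 54."""
    dims = (in_dim,) + tuple(sizes)
    return [(rng.standard_normal((d_in, d_out)) * 0.01, np.zeros(d_out))
            for d_in, d_out in zip(dims[:-1], dims[1:])]

def decode(features, layers, drop_rate=0.5, train=False):
    """Forward pass; dropout (rate 0.5) is applied only in training mode."""
    h = features
    for i, (W, b) in enumerate(layers):
        h = h @ W + b
        if i < len(layers) - 1:  # activation + dropout on the first two layers
            h = leaky_relu(h)
            if train:
                mask = rng.random(h.shape) >= drop_rate
                h = h * mask / (1.0 - drop_rate)  # inverted dropout scaling
    return h

layers = make_decoder()
pose = decode(rng.standard_normal((1, 256)), layers)
print(pose.shape)  # (1, 54): one 54-dimensional output pose vector
```

The final width of 54 matches the quoted last-layer size, which corresponds to a flattened pose vector in the paper's setup.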