Long-Term Human Motion Prediction by Modeling Motion Context and Enhancing Motion Dynamics

Authors: Yongyi Tang, Lin Ma, Wei Liu, Wei-Shi Zheng

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results show that the proposed model can promisingly forecast the human future movements, which yields superior performances over related state-of-the-art approaches."
Researcher Affiliation | Collaboration | Yongyi Tang (1), Lin Ma (2), Wei Liu (2), Wei-Shi Zheng (3). (1) School of Electronics and Information Technology, Sun Yat-sen University; (2) Tencent AI Lab; (3) School of Data and Computer Science, Sun Yat-sen University.
Pseudocode | No | The paper describes the proposed model using equations and diagrams but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper neither states that source code for the described methodology is released nor links to a code repository.
Open Datasets | Yes | "We conducted our experiments of human motion prediction on the H3.6m mocap Dataset [Ionescu et al., 2014], which is the largest human motion dataset for 3D body pose analysis."
Dataset Splits | No | The paper states that "5 subjects were selected for testing with the others for training", indicating a train/test split, but it does not describe a separate validation split.
Hardware Specification | No | The paper does not specify the hardware (e.g., CPU/GPU models, memory, or computing environment) used to run the experiments.
Software Dependencies | No | The paper names the algorithms and functions used (e.g., RNN, LSTM, GRU, ReLU, MSE loss, stochastic gradient descent) but does not list software dependencies with version numbers (e.g., PyTorch 1.x, TensorFlow 2.x, CUDA 11.x).
Experiment Setup | Yes | "Single layer of MHU with 1024 units was adopted in all our experiments. ... We used T = 30 observed frames for embedding to estimate future T = 10 frames. ... We used stochastic gradient descent with the momentum setting to 0.9. The learning rate was set to 0.05 decayed with factor of 0.95 for every 10,000 steps. And the gradient was clipped to a maximum L2-norm of 5. Batch size of 80 was used throughout our experiments."
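The quoted experiment setup pins down two reproducible pieces of the optimization schedule: a staircase learning-rate decay (base 0.05, factor 0.95 every 10,000 steps) and gradient clipping to a maximum L2-norm of 5. The sketch below is not the authors' code (which is not released); it is a minimal, framework-free illustration of those two rules, with all function names being our own.

```python
import math

def learning_rate(step, base_lr=0.05, decay=0.95, decay_every=10_000):
    """Staircase decay as quoted: lr = base_lr * decay**(step // decay_every)."""
    return base_lr * decay ** (step // decay_every)

def clip_by_l2_norm(grad, max_norm=5.0):
    """Rescale the gradient vector if its L2 norm exceeds max_norm."""
    norm = math.sqrt(sum(g * g for g in grad))
    if norm > max_norm:
        grad = [g * max_norm / norm for g in grad]
    return grad

# After 25,000 steps the rate has decayed twice: 0.05 * 0.95**2 = 0.045125.
print(learning_rate(25_000))
# A gradient of norm 13 is rescaled to norm 5; a small gradient passes through.
print(clip_by_l2_norm([3.0, 4.0, 12.0]))
print(clip_by_l2_norm([0.1, 0.2]))
```

Momentum (0.9) and batch size (80) would sit in the SGD update loop itself, which the paper does not detail further.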