On Retrospecting Human Dynamics with Attention

Authors: Minjing Dong, Chang Xu

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate the proposed algorithm on the largest and most challenging Human 3.6M dataset in the field. Experimental results demonstrate the necessity of investigating motion prediction in a self-audit manner and the effectiveness of the proposed algorithm in both short-term and long-term predictions."
Researcher Affiliation | Academia | "Minjing Dong and Chang Xu, School of Computer Science, Faculty of Engineering, University of Sydney, Australia; mdon0736@uni.sydney.edu.au, c.xu@sydney.edu.au"
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not include an unambiguous statement or link indicating that the source code for their methodology is publicly available.
Open Datasets | Yes | "In experiments, we followed previous works [Fragkiadaki et al., 2015; Martinez et al., 2017], and focused on the Human 3.6M dataset [Ionescu et al., 2014], which is currently the largest human motion dataset for 3D mocap data analysis."
Dataset Splits | No | The paper states, "we tested on subject 5 while the rest six subjects were used for training," but it does not explicitly describe a distinct validation split (e.g., percentages, counts, or a method for dividing the training data into train/validation).
Hardware Specification | Yes | "Our network was implemented using TensorFlow, and it takes 92ms per step on an NVIDIA Titan GPU."
Software Dependencies | No | The paper states "Our network was implemented using TensorFlow," but does not provide a specific version number for TensorFlow or other software dependencies.
Experiment Setup | Yes | "The hyper-parameter α in Eq. 7 is set to 0.5. ... We adopted a single gated recurrent unit with 1024 units. Momentum method was used to optimize the proposed algorithm and the learning rate is set to 0.005. The batch size is set to 16, and gradient clipping to maximum L2-norm of 5." (A hedged TensorFlow sketch of this configuration follows the table.)
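
The Experiment Setup row reports concrete hyper-parameters, so the following is a minimal TensorFlow 2 sketch of that training configuration, not the authors' code. The pose dimension, sequence length, and momentum coefficient are assumptions, and the paper's attention-based retrospection module and its Eq. 7 loss (α = 0.5) are not modeled here.

```python
import tensorflow as tf

# Dimensions below are assumptions for illustration; the paper does not report them in the quoted text.
POSE_DIM = 54        # assumed pose vector size per frame
SEQ_LEN = 50         # assumed number of conditioning frames
BATCH_SIZE = 16      # reported batch size
GRU_UNITS = 1024     # reported: "a single gated recurrent unit with 1024 units"

# Sequence model backbone: a single GRU followed by a linear readout to the
# pose dimension. The paper's attention mechanism is intentionally omitted.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, POSE_DIM)),
    tf.keras.layers.GRU(GRU_UNITS, return_sequences=True),
    tf.keras.layers.Dense(POSE_DIM),
])

# "Momentum method" with the reported learning rate of 0.005 and gradient
# clipping to a maximum L2-norm of 5. The momentum coefficient (0.9) is an
# assumption; clipnorm clips each gradient tensor individually, which only
# approximates the global-norm clipping the paper most likely used.
optimizer = tf.keras.optimizers.SGD(learning_rate=0.005, momentum=0.9, clipnorm=5.0)
model.compile(optimizer=optimizer, loss="mse")

# Dummy batch with the reported batch size, just to confirm shapes run end to end.
x = tf.random.normal((BATCH_SIZE, SEQ_LEN, POSE_DIM))
y = tf.random.normal((BATCH_SIZE, SEQ_LEN, POSE_DIM))
model.train_on_batch(x, y)
```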