Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
On Retrospecting Human Dynamics with Attention
Authors: Minjing Dong, Chang Xu
IJCAI 2019 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the proposed algorithm on the largest and most challenging Human 3.6M dataset in the field. Experimental results demonstrate the necessity of investigating motion prediction in a self-audit manner and the effectiveness of the proposed algorithm in both short-term and long-term predictions. |
| Researcher Affiliation | Academia | Minjing Dong and Chang Xu School of Computer Science, Faculty of Engineering, University of Sydney, Australia EMAIL, EMAIL |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include an unambiguous statement or link indicating that the source code for their methodology is publicly available. |
| Open Datasets | Yes | In experiments, we followed previous works [Fragkiadaki et al., 2015; Martinez et al., 2017], and focused on the Human 3.6M dataset [Ionescu et al., 2014], which is currently the largest human motion dataset for 3D mocap data analysis. |
| Dataset Splits | No | The paper states, 'we tested on subject 5 while the rest six subjects were used for training.' However, it does not explicitly provide details about a distinct validation set split (e.g., percentages, counts, or a clear method for splitting training data into train/validation). |
| Hardware Specification | Yes | Our network was implemented using TensorFlow, and it takes 92ms per step on an NVIDIA Titan GPU. |
| Software Dependencies | No | The paper states 'Our network was implemented using TensorFlow,' but does not provide a specific version number for TensorFlow or other software dependencies. |
| Experiment Setup | Yes | The hyperparameter α in Eq. 7 is set to 0.5. ... We adopted a single gated recurrent unit with 1024 units. Momentum method was used to optimize the proposed algorithm and the learning rate is set to 0.005. The batch size is set to 16, and gradient clipping to maximum L2-norm of 5. |
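The gradient clipping reported above can be sketched as follows. This is a minimal illustration, assuming the common global-L2-norm convention (as in TensorFlow's `clip_by_global_norm`); the constants are taken from the quoted setup, and the function name is hypothetical, not from the paper.

```python
import numpy as np

# Reported hyperparameters (see the quoted experiment setup).
LEARNING_RATE = 0.005
BATCH_SIZE = 16
GRU_UNITS = 1024
MAX_GRAD_NORM = 5.0  # maximum L2-norm for gradient clipping
ALPHA = 0.5          # weight of the hyperparameter in Eq. 7

def clip_by_l2_norm(grads, max_norm=MAX_GRAD_NORM):
    """Rescale a list of gradient arrays so their global L2 norm
    does not exceed max_norm; gradients below the threshold pass
    through unchanged."""
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if global_norm <= max_norm:
        return grads
    scale = max_norm / global_norm
    return [g * scale for g in grads]
```

For example, gradients with global norm 10 are scaled by 0.5 so the clipped norm equals the maximum of 5.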