Contact-aware Human Motion Forecasting

Authors: Wei Mao, Miaomiao Liu, Richard Hartley, Mathieu Salzmann

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Our approach outperforms the state-of-the-art human motion forecasting and human motion synthesis methods on both synthetic and real datasets.
Researcher Affiliation Collaboration 1Australian National University; 2CVLab, EPFL; 3ClearSpace, Switzerland
Pseudocode No The paper describes its methods in detail but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code Yes Our code is available at https://github.com/wei-mao-2019/ContAwareMotionPred.
Open Datasets Yes We evaluate our method on two datasets, GTA-IM [6] and PROX [11].
Dataset Splits No The paper specifies training and testing sets, for example: "We use 4 of the scenes as our training set... and the remaining 3 as our test set." However, it does not explicitly mention a validation split or its size/proportion.
Hardware Specification Yes The training of each network takes about 12 hours on a 24GB NVIDIA RTX 3090 Ti GPU.
Software Dependencies No Our models are implemented in PyTorch [24] and trained using the Adam [15] optimizer. The paper names the software but does not specify version numbers for PyTorch or other relevant libraries.
Experiment Setup Yes Both our contact prediction network and motion forecasting one are trained for 50 epochs with learning rates of 0.0005 and 0.001, respectively. ... For both datasets, the normalizing factor σ, the number of DCT coefficients L and the contact threshold ϵ are set to 0.2, 20 and 0.32, respectively. For the motion forecasting network, the loss weights (λ1, λ2, λ3) are set to (1.0, 1.0, 0.1) for both datasets.
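The hyperparameters quoted above can be collected into a single configuration sketch. This is a hedged illustration only: the variable names (`CONFIG`, `total_loss`) are hypothetical and do not come from the authors' released code; only the numeric values are taken from the paper's quoted setup.

```python
# Hypothetical configuration assembled from the paper's reported
# hyperparameters. Key names are illustrative, not the authors' own.
CONFIG = {
    "epochs": 50,            # both networks trained for 50 epochs
    "lr_contact": 0.0005,    # contact prediction network learning rate
    "lr_motion": 0.001,      # motion forecasting network learning rate
    "sigma": 0.2,            # normalizing factor
    "num_dct_coeffs": 20,    # number of DCT coefficients L
    "contact_threshold": 0.32,           # contact threshold epsilon
    "loss_weights": (1.0, 1.0, 0.1),     # (lambda1, lambda2, lambda3)
}


def total_loss(l1: float, l2: float, l3: float,
               weights=CONFIG["loss_weights"]) -> float:
    """Weighted sum of the three loss terms, as a plain-arithmetic sketch
    of how the reported weights (lambda1, lambda2, lambda3) would combine
    individual loss values."""
    w1, w2, w3 = weights
    return w1 * l1 + w2 * l2 + w3 * l3
```

For example, with all three loss terms equal to 1.0, the combined loss under the reported weights (1.0, 1.0, 0.1) is 2.1.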