BiHMP-GAN: Bidirectional 3D Human Motion Prediction GAN

Authors: Jogendra Nath Kundu, Maharshi Gor, R. Venkatesh Babu

AAAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section we describe experimental details of BiHMP-GAN along with analysis of both qualitative and quantitative results on two publicly available datasets; viz. a) Human 3.6M (Ionescu et al. 2014) and CMU MOCAP. The full pipeline of BiHMP-GAN is implemented in tensorflow with ADAM optimizer. We use a batch size of 32
Researcher Affiliation | Academia | Jogendra Nath Kundu, Maharshi Gor, R. Venkatesh Babu, Video Analytics Lab, Department of Computational and Data Sciences, Indian Institute of Science, Bangalore, India. jogendrak@iisc.ac.in, maharshigor18@gmail.com, venky@iisc.ac.in
Pseudocode | Yes | Algorithm 1: Training algorithm for BiHMP-GAN, with explicit enforcement of direct content loss. (A generic training-step sketch of such a content loss appears after the table.)
Open Source Code | No | The paper mentions using a 'publicly available implementation' for HP-GAN, but does not provide concrete access to its own source code for BiHMP-GAN.
Open Datasets | Yes | In this section we describe experimental details of BiHMP-GAN along with analysis of both qualitative and quantitative results on two publicly available datasets; viz. a) Human 3.6M (Ionescu et al. 2014) and CMU MOCAP.
Dataset Splits | No | The paper mentions following data selection criteria from a previous work but does not explicitly state the training, validation, and test dataset splits with percentages or counts.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions 'implemented in tensorflow' and 'ADAM optimizer' but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | Yes | We use a batch size of 32 with learning rate set at 0.00005. A single-layer LSTM (Chung et al. 2014) with 512 hidden units is incorporated as the recurrent architecture for the sequence encoder, decoder and bidirectional discriminator network. Following previous motion prediction works (Li et al. 2018; Martinez, Black, and Romero 2017), the length of the intrinsic past pose sequence is set to 50, i.e. 2 seconds of skeleton motion at 25 fps. Considering fair evaluation on long-term prediction, the length of the predicted motion sequence is set to 25. (A configuration sketch with these values appears after the table.)
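
Since the Experiment Setup row lists concrete hyperparameters, the following is a minimal configuration sketch in TensorFlow 2 / Keras. Only the batch size, the 0.00005 learning rate, the 512-unit single-layer LSTM and the 50/25-frame sequence lengths come from the quoted text; the pose dimensionality, the helper names (make_sequence_encoder, make_discriminator_backbone) and the exact network wiring are illustrative assumptions, not the authors' released configuration.

```python
# Minimal configuration sketch of the reported setup (TensorFlow 2 / Keras).
# Only batch size, learning rate, LSTM width and sequence lengths are taken
# from the paper; POSE_DIM and the wiring below are assumptions.
import tensorflow as tf

POSE_DIM = 54        # assumed per-frame pose vector size (not stated in the row)
PAST_LEN = 50        # 2 seconds of observed motion at 25 fps
PRED_LEN = 25        # long-term prediction horizon
BATCH_SIZE = 32
LEARNING_RATE = 5e-5

def make_sequence_encoder():
    """Single-layer LSTM with 512 hidden units over the observed past poses."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(PAST_LEN, POSE_DIM)),
        tf.keras.layers.LSTM(512),          # final hidden state as the sequence code
    ])

def make_discriminator_backbone():
    """Same 512-unit single-layer LSTM, run over the full past + predicted clip."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(PAST_LEN + PRED_LEN, POSE_DIM)),
        tf.keras.layers.LSTM(512),
        tf.keras.layers.Dense(1),           # real/fake logit
    ])

optimizer_g = tf.keras.optimizers.Adam(LEARNING_RATE)
optimizer_d = tf.keras.optimizers.Adam(LEARNING_RATE)
```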
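
Algorithm 1 itself is not reproduced in this report, so the block below is only a generic training-step sketch of how a GAN objective can be combined with an explicitly enforced content (reconstruction) loss. The latent size R_DIM, the weight LAMBDA_C, the regression of the random vector by the discriminator, and the generator/discriminator call signatures are all assumptions made for illustration; they should not be read as the authors' exact procedure.

```python
# Generic sketch of one adversarial training step with a direct content loss
# (TensorFlow 2). NOT a reproduction of Algorithm 1: losses, the regressed
# latent vector and all weights below are assumptions for illustration only.
import tensorflow as tf

R_DIM = 32           # assumed size of the random vector r
LAMBDA_C = 1.0       # assumed weight of the content (reconstruction) term

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def train_step(past, future, generator, discriminator, opt_g, opt_d):
    """past: [B, 50, pose_dim], future: [B, 25, pose_dim]; models are assumed
    to follow the call signatures used below."""
    r = tf.random.normal((tf.shape(past)[0], R_DIM))
    with tf.GradientTape() as tape_g, tf.GradientTape() as tape_d:
        fake_future = generator([past, r])

        # Assumed discriminator: scores real/fake and regresses the random vector.
        real_score, r_real = discriminator([past, future])
        fake_score, r_fake = discriminator([past, fake_future])

        d_loss = (bce(tf.ones_like(real_score), real_score)
                  + bce(tf.zeros_like(fake_score), fake_score)
                  + tf.reduce_mean(tf.square(r_fake - r)))   # recover r from fakes

        # Direct content loss: the future generated from the vector regressed
        # off the real sequence should reconstruct the ground truth.
        recon_future = generator([past, r_real])
        content_loss = tf.reduce_mean(tf.square(recon_future - future))

        g_loss = bce(tf.ones_like(fake_score), fake_score) + LAMBDA_C * content_loss

    opt_d.apply_gradients(zip(tape_d.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    opt_g.apply_gradients(zip(tape_g.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return d_loss, g_loss
```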