Human Motion Generation via Cross-Space Constrained Sampling

Authors: Zhongyue Huang, Jingwei Xu, Bingbing Ni

IJCAI 2018

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experimental results show that the proposed framework successfully generates novel human motion sequences with reasonable visual quality." |
| Researcher Affiliation | Academia | Zhongyue Huang, Jingwei Xu and Bingbing Ni, Shanghai Jiao Tong University, China ({116033910063, xjwxjw, nibingbing}@sjtu.edu.cn) |
| Pseudocode | Yes | "Algorithm 1 Optimization Algorithm" |
| Open Source Code | No | The paper does not explicitly state that source code is released, nor does it include a link to a code repository. |
| Open Datasets | Yes | "KTH Dataset. This dataset [Schuldt et al., 2004]... Human3.6M Dataset. This dataset [Ionescu et al., 2014]" |
| Dataset Splits | No | The paper specifies training and testing splits ("For KTH datasets, we use person 1-15 for training and 16-25 for testing") but does not mention a separate validation split; see the split sketch below. |
| Hardware Specification | No | The paper does not describe the hardware used for the experiments, such as GPU models, CPU types, or cloud-instance specifications. |
| Software Dependencies | No | The paper mentions software components such as the Adam solver, OpenPose, and ResNet-18 but does not give version numbers, which are necessary for reproducible software dependencies. |
| Experiment Setup | Yes | "All networks were trained using the Adam solver with a learning rate of 0.0001 and a batch size of 10. We set λ = γ = 10 and α = β = 1." See the training-setup sketch below. |
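The only split the paper reports is person-level: KTH subjects 1-15 for training and 16-25 for testing. The sketch below shows one way such a split could be expressed; the file-naming pattern follows the public KTH release, and the helper function and directory path are illustrative assumptions, not from the paper.

```python
# Minimal sketch of the person-level KTH split reported in the paper
# (subjects 1-15 for training, 16-25 for testing). The file-naming
# convention ("personXX_<action>_dY.avi") follows the public KTH release;
# the helper itself is an assumption for illustration only.
import re
from pathlib import Path

TRAIN_PERSONS = set(range(1, 16))   # persons 01-15
TEST_PERSONS = set(range(16, 26))   # persons 16-25

def split_kth(video_dir: str):
    """Partition KTH video files into train/test lists by subject ID."""
    train, test = [], []
    for path in sorted(Path(video_dir).glob("person*_*.avi")):
        match = re.match(r"person(\d+)_", path.name)
        if match is None:
            continue
        person_id = int(match.group(1))
        (train if person_id in TRAIN_PERSONS else test).append(path)
    return train, test

if __name__ == "__main__":
    train_files, test_files = split_kth("data/kth")  # assumed data directory
    print(f"{len(train_files)} training clips, {len(test_files)} test clips")
```

Note that no validation subset appears here because the paper does not define one; a reproduction would have to carve it out of the training subjects.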
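The reported training setup amounts to Adam with a learning rate of 0.0001, a batch size of 10, and loss weights λ = γ = 10 and α = β = 1. The following PyTorch-style sketch wires up only those stated hyperparameters; the network and the individual loss terms are placeholders, since the paper's architecture and exact objectives are not reproduced here.

```python
# Hedged sketch of the reported training configuration only:
# Adam, lr = 1e-4, batch size = 10, weights lambda = gamma = 10, alpha = beta = 1.
# The model and loss terms below are placeholders, not the paper's method.
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

LR, BATCH_SIZE = 1e-4, 10
LAMBDA, GAMMA, ALPHA, BETA = 10.0, 10.0, 1.0, 1.0

# Placeholder network standing in for the paper's generator.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 128))
optimizer = optim.Adam(model.parameters(), lr=LR)

# Dummy tensors so the loop runs end to end; replace with real motion features.
dataset = TensorDataset(torch.randn(100, 128), torch.randn(100, 128))
loader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True)

for inputs, targets in loader:
    outputs = model(inputs)
    # Placeholder loss terms combined with the paper's stated weights.
    recon_loss = nn.functional.mse_loss(outputs, targets)  # weighted by lambda
    adv_loss = outputs.pow(2).mean()                        # stands in for the gamma term
    reg_a = outputs.abs().mean()                            # stands in for the alpha term
    reg_b = inputs.abs().mean()                             # stands in for the beta term
    loss = LAMBDA * recon_loss + GAMMA * adv_loss + ALPHA * reg_a + BETA * reg_b
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because no hardware or library versions are given, anything beyond these four weight values and the optimizer settings would have to be guessed in a reproduction attempt.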