Universal Humanoid Motion Representations for Physics-Based Control

Authors: Zhengyi Luo, Jinkun Cao, Josh Merel, Alexander Winkler, Jing Huang, Kris M. Kitani, Weipeng Xu

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Table 1: Motion imitation result (*data cleaning) on AMASS train and test (11313 and 138 sequences). ... Table 2: Quantitative results on VR Controller tracking. We report result on AMASS and real-world dataset. ... Table 3: Ablation on various strategies of learning the motion representation. ... Figure 4: Training curves for each one of the generative tasks.
Researcher Affiliation | Collaboration | ¹Reality Labs Research, Meta; ²Carnegie Mellon University
Pseudocode | Yes | Algo 1: Learn PULSE and Downstream Tasks
Open Source Code | No | All code and models will be released for research purposes.
Open Datasets | Yes | For training PHC+, PULSE, and the VR controller policy, we use the cleaned AMASS training set. ... CMU MoCap (CMU, 2002) ... QuestSim (Winkler et al., 2022)
Dataset Splits | No | The paper refers to training and testing sets, but does not explicitly define a separate validation set or its split details.
Hardware Specification | No | The paper mentions that simulation is conducted in Isaac Gym, but does not specify hardware components such as GPU models, CPU types, or memory.
Software Dependencies | No | The paper mentions software such as Isaac Gym, along with specific optimization algorithms and activation functions, but does not provide version numbers for any software dependencies or libraries.
Experiment Setup | Yes | Simulation is conducted in Isaac Gym (Makoviychuk et al., 2021), where the policy is run at 30 Hz and the simulation at 60 Hz. ... Table 4: Hyperparameters for PHC+ and PULSE. ... PHC+: Batch Size 3072, Learning Rate 2×10⁻⁵, σ 0.05, γ 0.99, ε 0.2. PULSE: Batch Size 3072, Learning Rate 5×10⁻⁴, α 0.005, β 0.01, Latent size 32.
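The reported experiment setup can be collected into a small configuration sketch. This is a minimal, hypothetical Python rendering of the Table 4 hyperparameters and the stated control/simulation rates; the dictionary keys and constant names are illustrative, not the authors' actual configuration format.

```python
# Hyperparameters as reported in Table 4 of the paper.
# Key names are assumptions; only the numeric values come from the source.
PHC_PLUS_HPARAMS = {
    "batch_size": 3072,
    "learning_rate": 2e-5,
    "sigma": 0.05,    # σ: fixed action-noise standard deviation
    "gamma": 0.99,    # γ: discount factor
    "epsilon": 0.2,   # ε: PPO clip range
}

PULSE_HPARAMS = {
    "batch_size": 3072,
    "learning_rate": 5e-4,
    "alpha": 0.005,   # α: loss weight reported in Table 4
    "beta": 0.01,     # β: loss weight reported in Table 4
    "latent_size": 32,
}

# The policy runs at 30 Hz while Isaac Gym simulates at 60 Hz,
# i.e. two simulation steps per policy step (control decimation of 2).
SIM_HZ, POLICY_HZ = 60, 30
CONTROL_DECIMATION = SIM_HZ // POLICY_HZ
```

The 2:1 ratio of simulation to policy frequency is the only rate relationship stated in the paper; any richer scheduling logic would be an assumption.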