Learning to Sit: Synthesizing Human-Chair Interactions via Hierarchical Control
Authors: Yu-Wei Chao, Jimei Yang, Weifeng Chen, Jia Deng
AAAI 2021, pp. 5887–5895 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We experimentally demonstrate the strength of our approach over different non-hierarchical and hierarchical baselines. We adopt two different metrics to quantitatively evaluate the main task: (1) success rate and (2) minimum distance. We declare a success whenever the pelvis of the humanoid has been continuously in contact with the seat surface for 3.0 seconds. We report the success rate over 10,000 trials by spawning the humanoid at random locations. (See the success-rate sketch after the table.) |
| Researcher Affiliation | Collaboration | 1NVIDIA 2Adobe Research 3University of Michigan, Ann Arbor 4Princeton University |
| Pseudocode | No | The paper describes the algorithms used but does not provide specific pseudocode or algorithm blocks. |
| Open Source Code | No | The paper refers to open-source tools used in the research (OpenAI Roboschool and Baselines) but does not state that the code for the methodology described in this paper is publicly available. |
| Open Datasets | Yes | The mocap references for each subtask are collected from the CMU Graphics Lab Motion Capture Database. We extract relevant motion segments and retarget the motion to our humanoid model. [...] We use a randomly selected chair model from ShapeNet (Chang et al. 2015). |
| Dataset Splits | No | The paper describes experimental protocols (e.g., '10,000 trials by spawning the humanoid at random locations') and curriculum learning stages for training, but does not provide specific data splits (e.g., percentages or counts) for training, validation, and testing of a static dataset. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'Bullet Physics SDK (Coumans and Bai 2016–2019)', 'OpenAI Roboschool (Schulman et al. 2017)', and 'OpenAI Baselines (Dhariwal et al. 2017)' but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | The subtask and meta controller run at 60 Hz and 2 Hz, respectively, and the physics simulation runs at 240 Hz. [...] We empirically set ωp = 0.5, ωv = 0.05, αp = 1, and αv = 10. [...] To facilitate training, we propose a multi-stage training strategy inspired by curriculum learning (Zaremba and Sutskever 2014). As illustrated in Fig. 4, we begin by only spawning the humanoid on the front side of the chair (Zone 1). Once trained, we change the initial position to the lateral sides (Zone 2) and continue the training. Finally, we train the humanoid to start from the rear side (Zone 3). (See the control-loop sketch after the table.) |
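
As a concrete reading of the success criterion quoted in the Research Type row, the sketch below counts a trial as successful once the pelvis has been in continuous contact with the seat for 3.0 seconds, and averages over trials to get a success rate. This is a minimal sketch, not the authors' code: the `env` object, its `reset`/`step`/`pelvis_in_contact_with_seat` methods, and the 30-second episode budget are all assumptions for illustration.

```python
# Hedged sketch of the paper's success metric: continuous pelvis-seat
# contact for 3.0 s, success rate averaged over many random-spawn trials.
# `env` and its methods are hypothetical names, not the authors' API.

SIM_DT = 1.0 / 240.0        # physics runs at 240 Hz (per the paper)
HOLD_TIME = 3.0             # required continuous contact, in seconds
EPISODE_STEPS = 240 * 30    # 30 s episode budget (an assumption, not stated)

def run_trial(env):
    """One trial: succeed once contact has been held for 3.0 s straight."""
    env.reset()                        # spawns the humanoid at a random location
    contact_time = 0.0
    for _ in range(EPISODE_STEPS):
        env.step()
        if env.pelvis_in_contact_with_seat():
            contact_time += SIM_DT
            if contact_time >= HOLD_TIME:
                return True
        else:
            contact_time = 0.0         # contact must be continuous, so reset
    return False

def success_rate(env, n_trials=10_000):
    """Fraction of successful trials; the paper reports this over 10,000 trials."""
    return sum(run_trial(env) for _ in range(n_trials)) / n_trials
```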
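The timing figures in the Experiment Setup row imply a nested loop: with physics at 240 Hz, the subtask controller acts every 4 physics steps (60 Hz) and the meta controller decides every 120 physics steps (2 Hz). The sketch below shows that three-rate structure together with the curriculum's spawn-zone stages; `meta_ctrl`, `subtask_ctrls`, and `sim` are hypothetical interfaces, not names from the paper.

```python
# Hedged sketch of the three-rate hierarchy implied by the setup row:
# physics at 240 Hz, subtask controllers at 60 Hz, meta controller at 2 Hz.
# All object names and call signatures here are assumptions for illustration.

PHYSICS_HZ, SUBTASK_HZ, META_HZ = 240, 60, 2
SUBTASK_EVERY = PHYSICS_HZ // SUBTASK_HZ   # 4 physics steps per subtask action
META_EVERY = PHYSICS_HZ // META_HZ         # 120 physics steps per meta decision

# Curriculum stages from Fig. 4: spawn zones widen as training progresses.
CURRICULUM_ZONES = ["zone_1_front", "zone_2_lateral", "zone_3_rear"]

def rollout(meta_ctrl, subtask_ctrls, sim, horizon_steps):
    """Run one episode, ticking each controller at its own rate."""
    subtask_id, action = None, None
    for step in range(horizon_steps):
        obs = sim.observe()
        if step % META_EVERY == 0:
            subtask_id = meta_ctrl.select_subtask(obs)    # e.g. walk, turn, sit
        if step % SUBTASK_EVERY == 0:
            action = subtask_ctrls[subtask_id].act(obs)   # low-level joint action
        sim.step(action)                                   # 240 Hz physics tick
```

Because 240 is an integer multiple of both 60 and 2, the modulo checks keep all three loops phase-aligned without any interpolation between controller updates.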