Dynamics-regulated kinematic policy for egocentric pose estimation
Authors: Zhengyi Luo, Ryo Hachiuma, Ye Yuan, Kris Kitani
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our egocentric pose estimation method in both controlled laboratory settings and real-world scenarios. ... 4 Experiments |
| Researcher Affiliation | Academia | Zhengyi Luo¹ Ryo Hachiuma² Ye Yuan¹ Kris Kitani¹ — ¹Carnegie Mellon University, ²Keio University |
| Pseudocode | Yes | Algorithm 1 Learning kinematic policy via supervised learning. ... Algorithm 2 Learning kinematic policy via dynamics-regulated training. |
| Open Source Code | Yes | https://zhengyiluo.github.io/projects/kin_poly/ ... (3a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] |
| Open Datasets | No | As no public dataset contains synchronized ground-truth full-body pose, object pose, and egocentric videos with human-object interactions, we record two egocentric datasets: one inside a MoCap studio, another in the real world. ... The paper does not explicitly state that their recorded datasets are publicly available or provide access information for them. |
| Dataset Splits | No | We use an 80/20 train/test data split on this MoCap dataset. ... The paper specifies train/test splits but does not explicitly mention a separate validation split or its size. |
| Hardware Specification | Yes | The training process takes about 1 day on an RTX 2080-Ti with 35 CPU threads. |
| Software Dependencies | No | We use the free physics simulator MuJoCo [47] and run the simulation at 450 Hz. ... We employ Proximal Policy Optimization (PPO) [41] ... We use a Gated Recurrent Unit (GRU) [7] based network ... optical flow extractor [44] and ResNet [13]. ... off-the-shelf VIO method [16] ... Apple's ARKit [17] ... The paper mentions various software components and cites them, but it does not specify version numbers for any of these dependencies, which is required for reproducibility. |
| Experiment Setup | Yes | We use the free physics simulator MuJoCo [47] and run the simulation at 450 Hz. Our learned policy is run every 15 timesteps and assumes all visual inputs are at 30 Hz. The humanoid follows the kinematic and mesh definition of the SMPL model and has 25 bones and 76 DoF. ... At the beginning of each episode, a random fixed-length sequence (300 frames) is sampled from the dataset for training. ... We train our method and baselines on the training split (202 sequences) of our MoCap dataset. The training process takes about 1 day on an RTX 2080-Ti with 35 CPU threads. |
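The experiment-setup row implies a simple timing relation worth making explicit: a 450 Hz physics simulation with the policy acting once every 15 simulation steps yields a 30 Hz control rate, matching the assumed frame rate of the egocentric video input. A minimal sketch of this arithmetic (variable names are my own, not from the paper):

```python
# Hedged sketch, not the authors' code: timing relation between the MuJoCo
# simulation rate, the policy's control interval, and the video frame rate.
SIM_HZ = 450            # physics simulation frequency reported in the paper
POLICY_INTERVAL = 15    # the policy acts once every 15 simulation timesteps

policy_hz = SIM_HZ / POLICY_INTERVAL  # effective control frequency
assert policy_hz == 30  # lines up with the 30 Hz egocentric visual input
print(policy_hz)
```

This consistency check is one way to sanity-test a reimplementation: if the simulator timestep or control interval is configured differently, the policy's observations will no longer align with the 30 Hz video stream.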