Geometry-Driven Self-Supervised Method for 3D Human Pose Estimation
Authors: Yang Li, Kan Li, Shuai Jiang, Ziyue Zhang, Congzhentao Huang, Richard Yi Da Xu
AAAI 2020, pp. 11442-11449 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the effectiveness of our method on two popular 3D human pose datasets, Human3.6M and MPI-INF-3DHP. The results show that our method significantly outperforms recent weakly/self-supervised approaches. |
| Researcher Affiliation | Academia | Yang Li,1,2 Kan Li,1 Shuai Jiang,1,2 Ziyue Zhang,2 Congzhentao Huang,2 Richard Yi Da Xu2 1School of Computer Science and Technology, Beijing Institute of Technology, China 2Faculty of Engineering and Information Technology, University of Technology Sydney, Australia {yanglee, likan}@bit.edu.cn, {shuai.jiang-1, ziyue.zhang-2, congzhentao.huang}@student.uts.edu.au, yida.xu@uts.edu.au |
| Pseudocode | No | The paper describes the proposed method and training procedure in detail within the text and using diagrams, but it does not include any formal pseudocode blocks or algorithm listings. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | We perform extensive evaluations on two publicly available benchmarks. Human3.6M (H36M) (Ionescu et al. 2013) is one of the largest datasets for 3D human pose estimation... MPI-INF-3DHP (3DHP) (Mehta et al. 2017) is a recently proposed 3D pose dataset... |
| Dataset Splits | Yes | For the H36M dataset, we consider two popular evaluation protocols... we follow the standard protocol with 17-joint subset, use subjects S1, S5, S6, S7, S8 for training and S9, S11 for testing... We use the five chest-height cameras and the provided 17 joints (compatible with H36M) for training, and we use the official test set, which contains 2929 frames from six subjects performing seven actions, for evaluation. |
| Hardware Specification | No | The paper does not specify any particular GPU models, CPU models, or other hardware configurations used for running the experiments. It only mentions the use of 'the deep learning toolbox Pytorch'. |
| Software Dependencies | No | We implement our method using the deep learning toolbox Pytorch. The paper mentions Pytorch but does not provide a specific version number or other software dependencies with versions. |
| Experiment Setup | Yes | First, we pre-train the network using the Lpre-train loss. We use Adam as the optimizer and train the network for 20 epochs with learning rate 0.001. Next, the network is trained using the LT loss for 300 epochs. The learning rate starts from 0.001 and is dropped by a factor of 0.1 every 100 epochs. |
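Since the paper releases no code, the training schedule quoted above can only be sketched. The following is a minimal, hypothetical reconstruction of the learning-rate schedule for the main training stage (300 epochs, base rate 0.001, multiplied by 0.1 every 100 epochs); in PyTorch this would correspond to `torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.1)` wrapped around an Adam optimizer, but the function name and structure here are our own assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the step learning-rate schedule described in the
# paper's experiment setup; NOT the authors' code (which is not public).

def lr_at_epoch(epoch: int, base_lr: float = 1e-3,
                drop_every: int = 100, gamma: float = 0.1) -> float:
    """Learning rate at a given 0-indexed epoch of the 300-epoch main stage:
    base_lr for epochs 0-99, base_lr * 0.1 for 100-199, base_lr * 0.01 after."""
    return base_lr * (gamma ** (epoch // drop_every))

if __name__ == "__main__":
    # Print the rate at the boundaries of each 100-epoch segment.
    for e in (0, 99, 100, 199, 200, 299):
        print(e, lr_at_epoch(e))
```

The pre-training stage quoted in the table would use the same base rate (0.001) held constant for its 20 epochs, so no scheduler is needed there.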