Learning Basis Representation to Refine 3D Human Pose Estimations
Authors: Chunyu Wang, Haibo Qiu, Alan L. Yuille, Wenjun Zeng
AAAI 2019, pp. 8925-8932
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on benchmark datasets show that our approach obtains more legitimate poses over the baselines. |
| Researcher Affiliation | Collaboration | Microsoft Research Asia, Beijing, China; The Johns Hopkins University, Baltimore, MD 21218, USA |
| Pseudocode | No | No pseudocode or algorithm blocks are present. The methodology is described through text and mathematical formulations. |
| Open Source Code | No | No explicit statement about releasing their own source code or a link to it is provided. |
| Open Datasets | Yes | We evaluate our 3D pose refinement approach on two benchmark datasets: H36M (Ionescu et al. 2014) and MPI-INF-3DHP (Mehta et al. 2017). |
| Dataset Splits | Yes | Following the most common evaluation protocol (Zhou et al. 2017; Pavlakos et al. 2017), we use five subjects (i.e. S1, S5, S6, S7, S8) for training and two subjects (S9, S11) for testing. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments are provided. |
| Software Dependencies | No | Only “Pytorch” is mentioned, but no specific version number or other software dependencies with their versions are listed. |
| Experiment Setup | Yes | We learn 1000 bases for all the training poses. (See the sketch after the table.) |
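
The two concrete technical details captured in the table, the H36M subject split (S1, S5, S6, S7, S8 for training; S9, S11 for testing) and the 1000 learned bases, can be illustrated with a minimal sketch. This is not the authors' implementation: the paper mentions PyTorch and releases no code, so the sketch substitutes scikit-learn's `MiniBatchDictionaryLearning` as a stand-in for whatever basis-learning objective the paper actually optimizes, and `load_poses`, `N_JOINTS`, and the sparsity settings are hypothetical placeholders.

```python
# Minimal sketch (assumptions noted above, not the authors' method): learn an
# overcomplete dictionary of 3D-pose bases from the training subjects and
# refine an initial pose estimate by reconstructing it from a sparse
# combination of those bases.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

N_JOINTS = 17   # assumed H36M joint count
N_BASES = 1000  # "We learn 1000 bases for all the training poses."

# H36M protocol quoted in the table.
TRAIN_SUBJECTS = ["S1", "S5", "S6", "S7", "S8"]
TEST_SUBJECTS = ["S9", "S11"]

def load_poses(subjects):
    """Placeholder loader: returns (num_samples, N_JOINTS * 3) flattened 3D poses."""
    rng = np.random.default_rng(0)
    return rng.standard_normal((2000 * len(subjects), N_JOINTS * 3))

# 1) Learn the basis (dictionary) from the training poses.
train_poses = load_poses(TRAIN_SUBJECTS)
dico = MiniBatchDictionaryLearning(
    n_components=N_BASES,
    transform_algorithm="omp",       # sparse coding of new poses
    transform_n_nonzero_coefs=10,    # assumed sparsity level
    random_state=0,
)
dico.fit(train_poses)

# 2) Refine: express an initial (noisy) pose estimate as a sparse combination
#    of the learned bases; the reconstruction serves as the refined pose.
noisy_estimate = load_poses(TEST_SUBJECTS)[:1]
codes = dico.transform(noisy_estimate)   # sparse coefficients over the 1000 bases
refined = codes @ dico.components_       # refined pose, shape (1, N_JOINTS * 3)
```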