On the Calibration of Human Pose Estimation

Authors: Kerui Gu, Rongyu Chen, Xuanlong Yu, Angela Yao

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate pose estimation tasks on three benchmarks: MSCOCO (Lin et al., 2014), MPII (Andriluka et al., 2014), and MSCOCO-WholeBody (Jin et al., 2020). For the downstream tasks, we evaluate the 3D fitting task on 3DPW (Von Marcard et al., 2018). ... Results in Tab. 2 show that our simple yet effective method gives improvements across varying backbones, learning pipelines, and scoring functions.
Researcher Affiliation | Academia | 1) School of Computing, National University of Singapore; 2) U2IS, ENSTA Paris, IP Paris.
Pseudocode | Yes | Algorithm 1: CCNet Pseudocode, PyTorch-like
Open Source Code | No | The project page is at https://comp.nus.edu.sg/keruigu/calibrate_pose/project.html. While a project page is provided, it contains no explicit statement of source-code release for the described methodology, nor a direct link to a code repository.
Open Datasets | Yes | We evaluate pose estimation tasks on three benchmarks: MSCOCO (Lin et al., 2014), MPII (Andriluka et al., 2014), and MSCOCO-WholeBody (Jin et al., 2020).
Dataset Splits | Yes | MSCOCO consists of 250k person instances annotated with 17 keypoints. We evaluate the model with mAP over the standard 10 OKS thresholds. We also evaluate on MPII with the Percentage of Correct Keypoints (PCK) and on MSCOCO-WholeBody, which includes face and hand keypoints.
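The "mAP over the standard 10 OKS thresholds" metric follows the COCO keypoint protocol: Object Keypoint Similarity (OKS) scores a prediction per instance, and AP is averaged over thresholds 0.50:0.05:0.95. A minimal sketch of the standard COCO definition (variable names here are illustrative, not from the paper):

```python
import math

def oks(d, k, v, s):
    """OKS for one person instance, per the COCO keypoint protocol.

    d: per-keypoint Euclidean distances between prediction and ground truth
    k: per-keypoint falloff constants (COCO publishes these per joint)
    v: per-keypoint visibility flags (only labeled keypoints, v > 0, count)
    s: object scale (square root of the annotated segment area)
    """
    num = sum(math.exp(-di**2 / (2 * s**2 * ki**2))
              for di, ki, vi in zip(d, k, v) if vi > 0)
    den = sum(1 for vi in v if vi > 0)
    return num / den if den else 0.0

# The standard 10 OKS thresholds over which COCO mAP is averaged.
OKS_THRESHOLDS = [0.50 + 0.05 * i for i in range(10)]  # 0.50, 0.55, ..., 0.95
```

A perfect prediction (all distances zero) yields OKS = 1.0; a detection counts as correct at a given threshold when its OKS exceeds it.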
Hardware Specification | No | The paper does not provide specific details on the hardware (e.g., GPU/CPU models, memory) used to conduct the experiments.
Software Dependencies | No | The paper mentions 'PyTorch-like' pseudocode and the 'Adam (Kingma & Ba, 2014) optimizer' but does not specify version numbers for any software libraries or dependencies.
Experiment Setup | Yes | The initial learning rate is 1e-3, multiplied by 0.1 in the 9K-th step, and results are reported for 12K steps.
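The reported schedule is a plain step decay: a base learning rate of 1e-3, reduced by a factor of 0.1 at step 9K, over a 12K-step run. A minimal plain-Python sketch of that schedule (the paper trains with the Adam optimizer, omitted here; the function name and defaults are illustrative):

```python
def learning_rate(step, base_lr=1e-3, decay_step=9000, gamma=0.1):
    """Step-decay schedule: base_lr until decay_step, then base_lr * gamma."""
    return base_lr * (gamma if step >= decay_step else 1.0)

# Example: LR across the 12K-step run described in the paper.
schedule = [learning_rate(s) for s in range(12000)]
```

In a PyTorch training loop this would typically be expressed with a built-in step scheduler rather than written by hand.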