Dive Deeper Into Integral Pose Regression
Authors: Kerui Gu, Linlin Yang, Angela Yao
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our theoretical analysis, paired with empirical verification, shows that the shrinkage of heatmaps in integral regression is the main cause of its lower performance compared to detection. We verify the model experimentally and show that as samples shift from hard to easy, the activation region on the heatmap shrinks for both detection and integral regression methods. |
| Researcher Affiliation | Academia | National University of Singapore, Singapore; University of Bonn, Germany. {keruigu, yangll, ayao}@comp.nus.edu.sg |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to code repositories. |
| Open Datasets | Yes | We then apply the networks on the standard human pose estimation benchmark MSCOCO (Lin et al., 2014) using the same easy, medium and hard splits as Gu et al. (2021). |
| Dataset Splits | Yes | We then apply the networks on the standard human pose estimation benchmark MSCOCO (Lin et al., 2014) using the same easy, medium and hard splits as Gu et al. (2021). The easy samples are defined by 11-17 joints, <10% occlusion, or >128px input. The hard samples are those with either 1-5 joints present, > 50% occlusion, or 32-64px input. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments. |
| Software Dependencies | No | The paper mentions using PyTorch for autograd, but does not specify version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | For both networks, we use a Simple Baseline (SBL) (Xiao et al., 2018) architecture with a ResNet50 (He et al., 2016) backbone. Specifically, we apply a Kullback-Leibler Divergence (KLDiv) loss between the non-normalized heatmap Ĥ and a Gaussian heatmap with varying σ. Let us assume w_p and w_i are weights of the prior and integral losses, respectively. We consider the ratio w_p L_p : w_i λ L_i, where λ = 10^-2, to set the two to approximately equal magnitudes at the initial epoch. |
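
The experiment setup above describes a KLDiv loss between a predicted heatmap and a Gaussian target, combined with an integral loss via the weighting ratio w_p L_p : w_i λ L_i with λ = 10^-2. A minimal numpy sketch of that construction is shown below; the function names, the softmax normalization of the predicted heatmap, and the grid size are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gaussian_heatmap(size, center, sigma):
    """Gaussian target heatmap centered on a joint location (assumed form)."""
    ys, xs = np.mgrid[0:size, 0:size]
    g = np.exp(-((xs - center[0]) ** 2 + (ys - center[1]) ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()  # normalize so the map is a probability distribution

def kldiv_prior_loss(pred_logits, target, eps=1e-12):
    """KL divergence KL(target || softmax(pred)); softmax handling is an assumption."""
    p = np.exp(pred_logits - pred_logits.max())
    p = p / p.sum()
    return float(np.sum(target * (np.log(target + eps) - np.log(p + eps))))

def total_loss(L_p, L_i, w_p=1.0, w_i=1.0, lam=1e-2):
    """Combined objective with the ratio w_p*L_p : w_i*lam*L_i, lam = 10^-2."""
    return w_p * L_p + w_i * lam * L_i
```

With λ = 10^-2, an integral loss roughly 100x larger than the prior loss contributes a comparable magnitude at the initial epoch, which matches the stated motivation for the ratio.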