Learning 3-D Human Pose Estimation from Catadioptric Videos
Authors: Chenchen Liu, Yongzhi Li, Kangqi Ma, Duo Zhang, Peijun Bao, Yadong Mu
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We have conducted comprehensive experiments on cross-scenario pose estimation and visualization analysis. The results strongly demonstrate the usefulness of our proposed DBM human poses. |
| Researcher Affiliation | Academia | Peking University {liuchenchen, yongzhili, makq, zhduodyx, peijunbao, myd}@pku.edu.cn |
| Pseudocode | No | The paper describes algorithms and methods but does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any links or explicit statements about releasing source code for the described methodology. |
| Open Datasets | Yes | Our intelligently-collected DBM surpasses all others in terms of data scale. We next evaluate the quality of epipole estimation, which is crucial for 3-D keypoint refinement. Geometrically, in the mirrored image the position of the camera coincides with the epipole, as in Figure 3. We manually pick 100 videos in which the camera is visible, and mark the position of the camera as the ground truth of the epipole. Table 2 summarizes the Euclidean distance between the estimated and true epipole coordinates as the number of point pairs used in the computation is varied. More point pairs are observed to yield a more robust estimate. Other analyses, such as the distribution of 3-D keypoint quality under human skeleton priors (e.g., skeleton symmetry) and the influence of refinement on keypoint quality, are also relevant; we present more details in the supplemental materials. |
| Dataset Splits | Yes | After all the above data filtering operations, we divide the train/val/test set according to the ratio of 7:1:2, and get 124243/18114/33295 clips respectively. |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU or CPU models used for experiments. |
| Software Dependencies | No | The paper mentions models like FPN and HR-Net but does not provide specific software names with version numbers for reproducibility (e.g., Python, PyTorch, CUDA versions). |
| Experiment Setup | No | The paper states, "Hyperparameters such as learning rate and dropout ratio are the same as those in [Pavllo et al., 2019]." This indicates that specific hyperparameter values are not provided within this paper, but are referenced from another work. |
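As a quick sanity check on the reported dataset split, the clip counts quoted above can be compared against the stated 7:1:2 ratio. A minimal sketch (counts taken from the paper; variable names are illustrative):

```python
# Clip counts reported in the paper after data filtering.
counts = {"train": 124243, "val": 18114, "test": 33295}

total = sum(counts.values())
ratios = {split: n / total for split, n in counts.items()}

for split, r in ratios.items():
    print(f"{split}: {r:.3f}")
```

The computed fractions land close to the stated 0.7 / 0.1 / 0.2 split, with small deviations that are expected when partitioning a fixed number of clips.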