Joint Human Pose Estimation and Instance Segmentation with PosePlusSeg

Authors: Niaz Ahmad, Jawad Khan, Jeremy Yuhyun Kim, Youngmoon Lee

AAAI 2022

Reproducibility Variable | Result | LLM Response
--- | --- | ---
Research Type | Experimental | Experiments on the challenging COCO dataset demonstrate that PosePlusSeg copes better with challenging scenarios such as occlusions, entangled limbs, and overlapped people. PosePlusSeg outperforms state-of-the-art detection-based approaches, achieving 0.728 mAP for human pose estimation and 0.445 mAP for instance segmentation. We evaluate the performance of the PosePlusSeg model on the standard COCO keypoint dataset. Our model is trained end-to-end using the COCOPersons training set. Experiments and ablation studies are conducted on the COCO test and minival sets.
Researcher Affiliation | Academia | Niaz Ahmad, Jawad Khan, Jeremy Yuhyun Kim, Youngmoon Lee; Hanyang University, Ansan, South Korea; {niazahamd89, jkhanbk1, yuhyunkim, youngmoonlee}@hanyang.ac.kr
Pseudocode | No | The paper refers to various algorithms by name (e.g., the 'BHM algorithm' and the 'pose generator algorithm') but does not present any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code has been made available at https://github.com/RaiseLab/PosePlusSeg.
Open Datasets | Yes | Experiments on the challenging COCO dataset demonstrate that PosePlusSeg copes better with challenging scenarios such as occlusions, entangled limbs, and overlapped people. PosePlusSeg outperforms state-of-the-art detection-based approaches, achieving 0.728 mAP for human pose estimation and 0.445 mAP for instance segmentation. We evaluate the performance of the PosePlusSeg model on the standard COCO keypoint dataset. Our model is trained end-to-end using the COCOPersons training set. Experiments and ablation studies are conducted on the COCO test and minival sets. We evaluate the performance of PosePlusSeg using the COCO dataset (Lin et al. 2014). (See the evaluation sketch after the table.)
Dataset Splits | Yes | Experiments and ablation studies are conducted on the COCO test and minival sets.
Hardware Specification | Yes | The hyperparameters for training are: learning rate = 0.1e-4, image size = 401 × 401, and batch size = 2, implemented on one NVIDIA GeForce GTX 1080 Ti. Runtime figures from the paper's timing table: PosePlusSeg-RN152 pose, 28 ms (34 fps) on RTX; PosePlusSeg-RN152 instance segmentation, 29 ms (32 fps) on RTX; PosePlusSeg-RN152 pose & segmentation, 34 ms (28 fps) on RTX.
Software Dependencies | Yes | We conduct synchronous training for 500 epochs with stochastic gradient descent using TensorFlow 1.13.
Experiment Setup | Yes | The hyperparameters for training are: learning rate = 0.1e-4, image size = 401 × 401, and batch size = 2, implemented on one NVIDIA GeForce GTX 1080 Ti. We conduct synchronous training for 500 epochs with stochastic gradient descent using TensorFlow 1.13. (See the training-configuration sketch after the table.)
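
The paper reports the hyperparameters above but does not include code for the training loop. The following is a minimal sketch, assuming a TensorFlow 1.13 graph-mode workflow, of how the reported settings (learning rate 0.1e-4, 401 × 401 inputs, batch size 2, plain SGD, 500 epochs) fit together; `build_poseplusseg_loss` and the random batch are hypothetical placeholders, not the authors' model.

```python
# Minimal sketch of the reported training configuration, not the authors' code.
# Assumes TensorFlow 1.13 (1.x graph-mode API); build_poseplusseg_loss is a
# hypothetical stand-in for the actual pose + instance-segmentation losses.
import numpy as np
import tensorflow as tf

LEARNING_RATE = 0.1e-4   # learning rate reported in the paper
IMAGE_SIZE = 401         # 401 x 401 input resolution
BATCH_SIZE = 2
EPOCHS = 500             # synchronous training for 500 epochs

def build_poseplusseg_loss(images):
    """Hypothetical placeholder for the PosePlusSeg network and its joint loss."""
    features = tf.layers.flatten(images)
    return tf.reduce_mean(tf.layers.dense(features, 1))  # dummy scalar loss

images = tf.placeholder(tf.float32, [BATCH_SIZE, IMAGE_SIZE, IMAGE_SIZE, 3])
loss = build_poseplusseg_loss(images)

# Plain stochastic gradient descent, matching the optimizer named in the paper.
train_op = tf.train.GradientDescentOptimizer(LEARNING_RATE).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # One SGD step on a random batch, just to show the wiring; actual training
    # would loop over the COCOPersons training set for EPOCHS epochs.
    batch = np.random.rand(BATCH_SIZE, IMAGE_SIZE, IMAGE_SIZE, 3).astype("float32")
    sess.run(train_op, feed_dict={images: batch})
```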
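
The reported mAP figures follow the standard COCO evaluation protocol. The sketch below shows, under the assumption that predictions have already been exported in COCO result-JSON format, how such numbers are typically computed with pycocotools; the annotation and result file names are placeholders, not files shipped with the paper.

```python
# Hedged sketch of how the reported COCO keypoint mAP could be recomputed with
# pycocotools; this is not the authors' evaluation script, and both file paths
# below are placeholders.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

ANNOTATION_FILE = "annotations/person_keypoints_val2017.json"  # minival-style ground truth
RESULT_FILE = "poseplusseg_keypoint_results.json"              # hypothetical detections in COCO format

coco_gt = COCO(ANNOTATION_FILE)
coco_dt = coco_gt.loadRes(RESULT_FILE)

# iouType="keypoints" scores pose estimation; iouType="segm" would score the
# instance-segmentation output the same way (the paper reports 0.728 mAP for
# pose and 0.445 mAP for segmentation).
evaluator = COCOeval(coco_gt, coco_dt, iouType="keypoints")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # first printed line is the overall AP averaged over OKS thresholds
```

Note that local COCOeval runs apply only to the minival-style split; scores on the held-out COCO test split are obtained by submitting results to the official evaluation server.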