Temporal Constrained Feasible Subspace Learning for Human Pose Forecasting
Authors: Gaoang Wang, Mingli Song
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the proposed method on large-scale benchmarks, including Human3.6M, AMASS, and 3DPW. State-of-the-art performance has been achieved with the temporal constrained feasible solutions. |
| Researcher Affiliation | Academia | 1Zhejiang University-University of Illinois Urbana-Champaign Institute, Zhejiang University, China 2College of Computer Science and Technology, Zhejiang University, China |
| Pseudocode | No | The paper describes the proposed method conceptually and mathematically but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | Human3.6M [Ionescu et al., 2013] It is a large-scale dataset consisting of 3.6 million 3D human poses and corresponding images. ... AMASS [Mahmood et al., 2019] The Archive of Motion Capture as Surface Shapes (AMASS) dataset has been recently proposed with 18 existing MoCap datasets. ... 3DPW [von Marcard et al., 2018] The dataset consists of in-the-wild video sequences and 3D human poses captured by a moving camera. |
| Dataset Splits | Yes | Following the current literature [Mao et al., 2020; Mao et al., 2019; Martinez et al., 2017], we use subject 11 (S11) for validation, subject 5 (S5) for testing, and all the rest of the subjects for training. ... Following [Mao et al., 2020; Sofianos et al., 2021], we take 13 datasets from AMASS in the experiment, with 8 datasets for training, 4 for validation and 1 for testing. |
| Hardware Specification | Yes | One NVIDIA RTX 3090 GPU is used for training. |
| Software Dependencies | No | The paper states: "We use Pytorch for training the neural networks and use ADAM [Kingma and Ba, 2014] as the optimizer." However, it does not provide specific version numbers for PyTorch, ADAM, or any other software components. |
| Experiment Setup | Yes | The learning rate is set to 0.01 and decayed by a factor of 0.1 every 5 epochs after the 20th epoch. The batch size is set to 256. The maximum epoch is set to 50. The constraint L is set to 50. |
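The reported training schedule (learning rate 0.01, decayed by 0.1 every 5 epochs after the 20th epoch) can be sketched as a small helper. Since the paper releases no code, this is an illustrative reconstruction; the exact decay boundary (e.g. whether the first decay lands at epoch 20 or 25) is an assumption.

```python
def lr_at_epoch(epoch, base_lr=0.01, decay_start=20, decay_every=5, gamma=0.1):
    """Learning rate under the reported schedule: base 0.01, decayed by a
    factor of 0.1 every 5 epochs after the 20th epoch.
    Boundary handling (decay first applied at epoch 25) is assumed."""
    if epoch <= decay_start:
        return base_lr
    return base_lr * gamma ** ((epoch - decay_start) // decay_every)

# Other reported hyperparameters: batch size 256, max 50 epochs,
# Adam optimizer, constraint L = 50 (no PyTorch version stated).
schedule = {e: lr_at_epoch(e) for e in (1, 20, 25, 30, 50)}
```

In PyTorch this corresponds roughly to `torch.optim.Adam` wrapped in a `StepLR`-style scheduler that is only activated after epoch 20.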