Human Action Transfer Based on 3D Model Reconstruction

Authors: Shanyan Guan, Shuo Wen, Dexin Yang, Bingbing Ni, Wendong Zhang, Jun Tang, Xiaokang Yang (pp. 8352–8359)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We perform experiments on Human3.6M and HumanEva-I to evaluate the performance of the pose generator. Both qualitative and quantitative results show that our method outperforms methods based on generation in 2D."
Researcher Affiliation | Academia | ¹MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University; ²SJTU-UCLA Joint Research Center on Machine Perception and Inference
Pseudocode | No | The paper describes its methods in narrative text and figures but does not provide structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement or link indicating that its source code is publicly available.
Open Datasets | Yes | "We perform experiments on Human3.6M and HumanEva-I to evaluate the performance of the pose generator." The paper cites both public datasets: Human3.6M ("Large scale datasets and predictive methods for 3D human sensing in natural environments") and HumanEva ("Synchronized video and motion capture dataset and baseline algorithm for evaluation of articulated human motion").
Dataset Splits | No | The paper specifies training and test sets ("videos of five people (3 males and 2 females) as training set and videos of two people (1 male and 1 female) as test set") but does not mention or detail a separate validation split.
Hardware Specification | Yes | "Training typically took 15 hours on a GPU (TITAN X)."
Software Dependencies | No | "Our implementation is based on Pytorch (Paszke et al. 2017). We use Adam optimizer (Kingma and Ba 2014) with default setting in Pytorch." Although specific software is named, no version numbers are given for PyTorch, which reproducible software dependencies require.
Experiment Setup | Yes | "During training, λ2d is set to 10, and λθ is set to 1. We use Adam optimizer (Kingma and Ba 2014) with default setting in Pytorch. For shape generator, we freeze it during training. For pose generator, we use initial learning rate of 1 × 10⁻⁴ and decrease it by 0.1 per 3 epochs. Batch size is set to 64."
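The reported setup (Adam with PyTorch defaults, initial learning rate 1e-4 decayed every 3 epochs, loss weights λ2d = 10 and λθ = 1, batch size 64) can be sketched without framework dependencies. Since the authors released no code, everything below is an illustrative assumption: the function names, the multiplicative reading of "decrease it by 0.1" (the StepLR convention), and the form of the combined loss.

```python
# Illustrative sketch of the training schedule reported in the paper.
# No code was released, so names and the loss combination are assumptions.

BASE_LR = 1e-4        # reported initial learning rate for the pose generator
GAMMA = 0.1           # "decrease it by 0.1", read as a multiplicative factor
STEP_EPOCHS = 3       # decay applied every 3 epochs
LAMBDA_2D = 10.0      # reported weight of the 2D loss term
LAMBDA_THETA = 1.0    # reported weight of the pose-parameter (theta) loss term
BATCH_SIZE = 64       # reported batch size

def lr_at_epoch(epoch: int) -> float:
    """Step decay, equivalent to PyTorch's StepLR(step_size=3, gamma=0.1)."""
    return BASE_LR * GAMMA ** (epoch // STEP_EPOCHS)

def total_loss(loss_2d: float, loss_theta: float) -> float:
    """Assumed weighted sum of the 2D loss and the theta loss."""
    return LAMBDA_2D * loss_2d + LAMBDA_THETA * loss_theta
```

Under this reading, the learning rate is 1e-4 for epochs 0-2, 1e-5 for epochs 3-5, and so on; in a PyTorch implementation the same schedule would come from `torch.optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.1)`.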