Human MotionFormer: Transferring Human Motions with Vision Transformers
Authors: Hongyu Liu, Xintong Han, Chengbin Jin, Lihui Qian, Huawei Wei, Zhe Lin, Faqiang Wang, Haoye Dong, Yibing Song, Jia Xu, Qifeng Chen
ICLR 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that our Human MotionFormer sets the new state-of-the-art performance both qualitatively and quantitatively. Project page: https://github.com/KumapowerLIU/Human-MotionFormer |
| Researcher Affiliation | Collaboration | Hongyu Liu (1), Xintong Han (2), Chengbin Jin (2), Lihui Qian (2), Huawei Wei (3), Zhe Lin (2), Faqiang Wang (2), Haoye Dong (5), Yibing Song (4), Jia Xu (2), Qifeng Chen (1); (1) Hong Kong University of Science and Technology, (2) Huya Inc., (3) Tencent, (4) AI3 Institute, Fudan University, (5) Carnegie Mellon University |
| Pseudocode | Yes | We provide the pseudo-code of the training process in Algorithm 1. ... Algorithm 1 Training Process |
| Open Source Code | Yes | Project page: https://github.com/KumapowerLIU/Human-MotionFormer |
| Open Datasets | Yes | We use the solo-dance YouTube videos collected by Huang et al. (2021a) and the iPER (Liu et al., 2019b) dataset. |
| Dataset Splits | No | The paper mentions training on the YouTube videos collected by Huang et al. (2021a) and the iPER (Liu et al., 2019b) dataset, and refers to a 'test set', but it does not explicitly provide percentages, sample counts, or citations to predefined train/validation/test splits used for its own model training and evaluation. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions software like 'OpenPose (Cao et al., 2017)' and the Adam optimizer, but it does not provide specific version numbers for these or other software dependencies, nor for the programming language used. |
| Experiment Setup | Yes | Our model is optimized using the Adam optimizer with β1 = 0.0, β2 = 0.99, and an initial learning rate of 10^-4. We utilize the TTUR strategy (Heusel et al., 2017) to train our model. ... The MotionFormer is trained for 10 epochs, and the learning rate decays linearly after the 5-th epoch. ... We set the batch size to 4 |
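The reported training schedule can be sketched in plain Python. This is a minimal, hedged reconstruction: the paper states Adam with β1 = 0.0, β2 = 0.99, an initial learning rate of 10^-4, batch size 4, 10 epochs with linear decay after the 5th epoch, and the TTUR strategy (Heusel et al., 2017); the specific generator/discriminator rate split in `ttur_lrs` is an assumption, as the paper does not report its exact TTUR rates.

```python
# Hedged sketch of the training hyperparameters reported in the paper.
# The lr/2 vs. 2*lr TTUR split below is an assumption, not from the paper.

BASE_LR = 1e-4       # initial learning rate (paper)
TOTAL_EPOCHS = 10    # training length (paper)
DECAY_START = 5      # learning rate decays linearly after this epoch (paper)
BATCH_SIZE = 4       # batch size (paper)
ADAM_BETAS = (0.0, 0.99)  # Adam beta1, beta2 (paper)

def lr_at_epoch(epoch, base_lr=BASE_LR):
    """Constant lr up to and including DECAY_START, then linear decay to 0."""
    if epoch <= DECAY_START:
        return base_lr
    return base_lr * (TOTAL_EPOCHS - epoch) / (TOTAL_EPOCHS - DECAY_START)

def ttur_lrs(epoch):
    """Two time-scale update rule (Heusel et al., 2017): the discriminator
    is updated with a larger learning rate than the generator. The exact
    factors here (1/2 and 2) are illustrative assumptions."""
    lr = lr_at_epoch(epoch)
    return lr / 2, lr * 2  # (generator lr, discriminator lr)

schedule = [lr_at_epoch(e) for e in range(TOTAL_EPOCHS)]
```

In a PyTorch training loop, the same schedule would typically be realized with `torch.optim.Adam(params, lr=BASE_LR, betas=ADAM_BETAS)` plus a `LambdaLR` scheduler wrapping the decay rule above.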