MotionGPT: Finetuned LLMs Are General-Purpose Motion Generators
Authors: Yaqi Zhang, Di Huang, Bin Liu, Shixiang Tang, Yan Lu, Lu Chen, Lei Bai, Qi Chu, Nenghai Yu, Wanli Ouyang
AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on the HumanML3D (Guo et al. 2022a) and KIT-ML (Plappert, Mandery, and Asfour 2016) datasets, demonstrating MotionGPT has a strong ability for motion generation with multiple control conditions. Remarkably, MotionGPT achieves this with a significantly small set of training parameters (33M), and in less training time (about 4 hours, or just 10% of the time taken by other methods). |
| Researcher Affiliation | Collaboration | Yaqi Zhang¹˒², Di Huang³, Bin Liu¹˒²*, Shixiang Tang³, Yan Lu³, Lu Chen⁴, Lei Bai⁴, Qi Chu¹˒², Nenghai Yu¹˒², Wanli Ouyang⁴ — ¹School of Cyber Science and Technology, University of Science and Technology of China; ²CAS Key Laboratory of Electromagnetic Space Information; ³The University of Sydney; ⁴Shanghai AI Laboratory |
| Pseudocode | No | The paper does not contain explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor does it present structured steps in a code-like format. It provides mathematical formulas and general instruction templates, but these are not pseudocode. |
| Open Source Code | Yes | Visit our webpage at https://qiqiapink.github.io/MotionGPT/. |
| Open Datasets | Yes | We apply two widely-used datasets, HumanML3D (Guo et al. 2022a) and KIT-ML (Plappert, Mandery, and Asfour 2016), for evaluation. |
| Dataset Splits | Yes | Table 3: Evaluation of text-to-motion generation using different pre-trained LLaMA on HumanML3D validation set. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU models, CPU specifications, or memory. |
| Software Dependencies | No | The paper mentions software components such as 'LLMs', 'LoRA adaptation', and 'VQ-VAE', but does not provide specific version numbers for any software or libraries required to replicate the experiment. |
| Experiment Setup | No | The paper states 'More information about datasets, proposed new metrics, and implementation details are included in the supplementary material (Zhang et al. 2023b).' This indicates that specific experimental setup details such as hyperparameters are not present in the main text. |
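Since the paper reports only the trainable-parameter count (33M) for its LoRA-adapted LLaMA and defers hyperparameters to the supplementary material, the sketch below illustrates, purely as an assumption, what such a configuration might look like using the Hugging Face `peft` library. The backbone checkpoint, rank, alpha, dropout, and target modules are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch of a LoRA-adapted LLaMA backbone (not the authors' released code).
# Rank, alpha, dropout, and target modules are assumptions; the paper only reports
# ~33M trainable parameters and defers exact settings to its supplementary material.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")  # assumed backbone

lora_config = LoraConfig(
    r=16,                       # assumed LoRA rank
    lora_alpha=32,              # assumed scaling factor
    lora_dropout=0.05,          # assumed dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # prints trainable vs. total parameter counts
```

Calling `print_trainable_parameters()` is one way a reproducer could check whether a chosen rank and set of target modules lands near the 33M trainable parameters reported in the paper.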