DanceFormer: Music Conditioned 3D Dance Generation with Parametric Motion Transformer
Authors: Buyu Li, Yongchi Zhao, Zhelun Shi, Lu Sheng (pp. 1272–1279)
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that the proposed method, even trained on existing datasets, can generate fluent, performative, and music-matched 3D dances that surpass previous works quantitatively and qualitatively. Moreover, the proposed DanceFormer, together with the PhantomDance dataset, is seamlessly compatible with industrial animation software, thus facilitating adaptation to various downstream applications. |
| Researcher Affiliation | Collaboration | Buyu Li¹, Yongchi Zhao¹, Zhelun Shi², Lu Sheng²*; ¹Huiye Technology, ²College of Software, Beihang University |
| Pseudocode | No | The paper describes the model architecture and processes but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | More details of our PhantomDance dataset and the qualitative comparison with the other datasets can be seen on our project page1. (1https://huiye-tech.github.io/post/danceformer/) |
| Open Datasets | Yes | Furthermore, we propose a large-scale music-conditioned 3D dance dataset, called PhantomDance, that is accurately labeled by experienced animators rather than by reconstruction or motion capture. This dataset is named PhantomDance, and we will make it publicly available to facilitate future research. |
| Dataset Splits | No | Among them 900 pieces of music-dance pairs are used for model training, and the other 100 are split into the test set. (The paper mentions a "validation set" was used for a user study, but does not provide specific details on its size or how it was split from the main dataset for training/testing purposes.) |
| Hardware Specification | Yes | The DanceFormer is end-to-end trained using 4 TITAN Xp GPUs with a batch size of 8 on each GPU. |
| Software Dependencies | No | The paper mentions using the Adam optimizer and Kochanek-Bartels splines but does not specify version numbers for any software dependencies like programming languages, libraries, or frameworks. |
| Experiment Setup | Yes | We use the Adam optimizer with betas {0.5, 0.999} and a learning rate of 0.0002. The learning rate drops to 2e-5 and 2e-6 after 100k and 200k steps, respectively. The model is trained for 300k steps on AIST++ and 400k steps on PhantomDance. The dimension of features in DanceFormer is 256 unless otherwise specified. |
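
The learning-rate schedule in the quoted setup can be sketched as a simple step-decay function. This is a minimal, framework-free illustration for reproduction purposes; the function name and structure are assumptions, not the authors' code:

```python
def learning_rate(step: int) -> float:
    """Step-decay schedule as reported in the paper:
    2e-4 initially, dropping to 2e-5 after 100k steps
    and to 2e-6 after 200k steps.
    (Illustrative reconstruction, not the authors' implementation.)
    """
    if step < 100_000:
        return 2e-4
    elif step < 200_000:
        return 2e-5
    return 2e-6

# The paper pairs this schedule with Adam, betas {0.5, 0.999}; in a framework
# such as PyTorch this would correspond to something like
# torch.optim.Adam(params, lr=2e-4, betas=(0.5, 0.999)) plus a step-based
# scheduler (an assumption, since the paper does not name its framework).
```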