MoCaNet: Motion Retargeting In-the-Wild via Canonicalization Networks
Authors: Wentao Zhu, Zhuoqian Yang, Ziang Di, Wayne Wu, Yizhou Wang, Chen Change Loy
AAAI 2022, pp. 3617-3625 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we first evaluate and compare the motion retargeting performance on both in-the-wild and synthetic data. Then, we investigate the effects of canonicalization on the learned representations. We also conduct an ablation study to examine the effectiveness of each module. |
| Researcher Affiliation | Collaboration | 1 School of Computer Science, Peking University; 2 Shanghai AI Laboratory; 3 SenseTime Research; 4 Southeast University; 5 S-Lab, Nanyang Technological University |
| Pseudocode | No | The paper does not contain any sections explicitly labeled 'Pseudocode' or 'Algorithm', nor does it present structured steps in a code-like format. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing their code, nor does it provide a direct link to a code repository for the described methodology. |
| Open Datasets | Yes | The 'Ours' model is trained on the synthetic Mixamo training set, and the 'Ours (wild)' model is trained on Solo Dancer, a web-crawled video dataset (Yang et al. 2020). |
| Dataset Splits | No | The paper mentions a 'training set' and a 'held-out partition' for testing, but does not explicitly specify a validation set or provide detailed train/validation/test split percentages or counts. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running its experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper mentions using 'off-the-shelf human pose estimator' and 'robust 2D pose estimation algorithms' with citations, but does not specify software names with version numbers for dependencies. |
| Experiment Setup | No | The paper states 'We include the details of the neural network and the datasets in the appendix due to space limit', but the main text does not provide concrete hyperparameter values or detailed training configurations. |
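For context on the Dataset Splits finding above: the paper reports training on a Mixamo training set and testing on a held-out partition, but gives no split counts or percentages. The sketch below shows one common way such a character-level train/held-out split could be declared for a Mixamo-style dataset; the character names, ratio, and function are purely illustrative assumptions, not details taken from the paper.

```python
import random

def split_characters(characters, held_out_fraction=0.2, seed=0):
    """Partition character names into train and held-out test sets.

    Hypothetical helper: the MoCaNet paper does not specify how its
    held-out Mixamo partition was constructed, so the fraction and
    seeding scheme here are illustrative only.
    """
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = characters[:]
    rng.shuffle(shuffled)
    n_test = max(1, int(len(shuffled) * held_out_fraction))
    return shuffled[n_test:], shuffled[:n_test]

# Placeholder character names; a real Mixamo setup would list the
# actual rigged characters used for training and evaluation.
train_chars, test_chars = split_characters(
    ["Ty", "Malcolm", "Remy", "Claire", "Sporty-Granny"]
)
print("train:", train_chars)
print("test:", test_chars)
```

Splitting at the character level (rather than the clip level) matters for retargeting evaluation, since it tests transfer to unseen skeletons; whether the paper's held-out partition was built this way is not stated in the main text.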