Skinned Motion Retargeting with Dense Geometric Interaction Perception

Authors: Zijie Ye, Jia-Wei Liu, Jia Jia, Shikun Sun, Mike Zheng Shou

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on the public Mixamo dataset and our newly-collected ScanRet dataset demonstrate that MeshRet achieves state-of-the-art performance. Code available at https://github.com/abcyzj/MeshRet.
Researcher Affiliation | Academia | 1 Department of Computer Science and Technology, BNRist, Tsinghua University; 2 Key Laboratory of Pervasive Computing, Ministry of Education; 3 Show Lab, National University of Singapore
Pseudocode | Yes | Further details can be found in Algorithm 1.
Open Source Code | Yes | Code available at https://github.com/abcyzj/MeshRet.
Open Datasets | Yes | We trained and evaluated our method using the Mixamo dataset [2] and the newly curated ScanRet dataset. ... We downloaded 3,675 motion clips performed by 13 cartoon characters from the Mixamo dataset, while the ScanRet dataset consists of 8,298 clips executed by 100 human actors. ... Adobe. Mixamo. https://www.mixamo.com/. 2018.
Dataset Splits | No | The training set comprises 90% of the motion clips from both datasets, involving nine characters from Mixamo and 90 from ScanRet. ... Details regarding the train/test split for specific motion sequences and characters are provided in the code. The paper specifies a 90% training split and discusses test splits, but it does not explicitly define a separate validation set with its size or percentage.
Hardware Specification | Yes | We implemented our network using PyTorch [23], running on a machine equipped with an NVIDIA RTX A6000 GPU and an AMD EPYC 9654 CPU.
Software Dependencies | No | We implemented our network using PyTorch [23]... The paper mentions PyTorch but does not provide a specific version number, nor does it list other software dependencies with their versions.
Experiment Setup | Yes | The hyper-parameters λrec, λdmi, λadv, λef, and L were empirically set to 1.0, 5.0, 1.0, 1.0, and 20, respectively. ... We employed the Adam optimizer [13] with a learning rate of 10⁻⁴ to optimize our network. The training process required 36 epochs.
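
The character-level split described in the Dataset Splits row (nine of 13 Mixamo characters and 90 of 100 ScanRet actors assigned to training, roughly 90% of clips) could be sketched as follows. This is a hypothetical reconstruction: the actual assignment is stated to be provided in the authors' code, and the character names below are placeholders.

```python
import random

def split_characters(characters, n_train, seed=0):
    """Deterministically shuffle a character list and split it into
    train/test subsets of the reported sizes (assumed procedure)."""
    rng = random.Random(seed)
    shuffled = list(characters)
    rng.shuffle(shuffled)
    return shuffled[:n_train], shuffled[n_train:]

# Placeholder character identifiers; the real split is defined in the repo.
mixamo_train, mixamo_test = split_characters(
    [f"char_{i}" for i in range(13)], n_train=9)
scanret_train, scanret_test = split_characters(
    [f"actor_{i}" for i in range(100)], n_train=90)
```

A fixed seed keeps the split reproducible across runs, which matters here since no explicit validation set is reported.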
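
Since the Software Dependencies row notes that no package versions are pinned, a reproducer might record the environment themselves. A minimal sketch using only the standard library (`environment_report` is a hypothetical helper, not part of the authors' code):

```python
import platform
import importlib.metadata as md

def environment_report(packages=("torch",)):
    """Collect the Python version and installed versions of the given
    packages, marking any that are absent from the environment."""
    report = {"python": platform.python_version()}
    for pkg in packages:
        try:
            report[pkg] = md.version(pkg)
        except md.PackageNotFoundError:
            report[pkg] = "not installed"
    return report
```

Dumping such a report alongside results makes a run reproducible even when the original paper omits version numbers.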
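
The reported training configuration can be sketched as constants plus a loss combination. Note the additive weighting below is an assumption (the report only gives the weight values, learning rate, and epoch count); the role of L is defined in the paper itself.

```python
# Reported hyper-parameter values.
LAMBDA_REC, LAMBDA_DMI, LAMBDA_ADV, LAMBDA_EF = 1.0, 5.0, 1.0, 1.0
L = 20               # reported value; its exact role is defined in the paper
LEARNING_RATE = 1e-4  # Adam optimizer learning rate
EPOCHS = 36

def total_loss(l_rec, l_dmi, l_adv, l_ef):
    """Weighted sum of the individual loss terms (assumed combination)."""
    return (LAMBDA_REC * l_rec + LAMBDA_DMI * l_dmi
            + LAMBDA_ADV * l_adv + LAMBDA_EF * l_ef)
```

With unit losses this yields 1.0 + 5.0 + 1.0 + 1.0 = 8.0, showing that the DMI term dominates the objective at the reported weights.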