NeMF: Neural Motion Fields for Kinematic Animation
Authors: Chengan He, Jun Saito, James Zachary, Holly Rushmeier, Yi Zhou
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 4 ("Experiments"): We train our model on the AMASS dataset [27] for most of the experiments. After processing, we have roughly 20 hours of human motion sequences at 30 fps for training and testing. We additionally train our model for the reconstruction experiments on a quadruped motion dataset [47], which contains 30 minutes of dog motion capture at 60 fps. |
| Researcher Affiliation | Collaboration | Chengan He (Yale University, chengan.he@yale.edu), Jun Saito (Adobe Research, jsaito@adobe.com), James Zachary (Adobe Research, zachary@adobe.com), Holly Rushmeier (Yale University, holly.rushmeier@yale.edu), Yi Zhou (Adobe Research, yizho@adobe.com) |
| Pseudocode | No | The paper does not contain pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | 3. If you ran experiments... (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No] |
| Open Datasets | Yes | We train our model on the AMASS dataset [27] for most of the experiments. ... We additionally train our model for the reconstruction experiments on a quadruped motion dataset [47]... We also use AIST++ [23] to test motion in-betweening. |
| Dataset Splits | No | After processing, we have roughly 20 hours of human motion sequences at 30 fps for training and testing. The paper mentions training and testing but does not specify split percentages or whether a validation set was used. |
| Hardware Specification | No | The paper states 'See supplemental' for compute resources but does not provide specific hardware details in the main text. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers in the main text. |
| Experiment Setup | Yes | Similar to NeRF [32], we train an MLP with positional encoding of t to fit a given motion sequence. The reconstruction loss is expressed as a weighted sum of the above terms with weighting factors λ_rot, λ_ori, and λ_pos. We set L = 7 throughout our experiments to balance the trade-off. The total loss is L = L_rec + λ_KL · L_KL with weight λ_KL. |
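
For concreteness, the quoted setup (a NeRF-style MLP over positionally encoded time t, trained with a reconstruction loss plus a KL term) can be sketched as below. This is a minimal illustration assuming a PyTorch implementation, not the authors' code: the layer widths, the 24-joint × 6D-rotation output size, and all λ weights are placeholders; only the L = 7 frequency bands comes from the paper's text.

```python
import math
import torch
import torch.nn as nn


class PositionalEncoding(nn.Module):
    """NeRF-style positional encoding of a scalar time t with L frequency bands."""

    def __init__(self, num_bands: int = 7):  # L = 7, per the quoted setup
        super().__init__()
        freqs = (2.0 ** torch.arange(num_bands)) * math.pi
        self.register_buffer("freqs", freqs)

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: (batch, 1) normalized frame time in [0, 1] -> (batch, 2 * num_bands)
        angles = t * self.freqs
        return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)


class MotionFieldMLP(nn.Module):
    """MLP from encoded time to pose parameters.

    The hidden width and the output size (24 joints x 6D rotations) are
    illustrative placeholders, not values reported in the paper.
    """

    def __init__(self, num_bands: int = 7, out_dim: int = 24 * 6, hidden: int = 256):
        super().__init__()
        self.encode = PositionalEncoding(num_bands)
        self.mlp = nn.Sequential(
            nn.Linear(2 * num_bands, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        return self.mlp(self.encode(t))


def total_loss(l_rot, l_ori, l_pos, l_kl,
               lambda_rot=1.0, lambda_ori=1.0, lambda_pos=1.0, lambda_kl=1e-3):
    """L = L_rec + lambda_KL * L_KL, where L_rec is a weighted sum of the
    rotation, orientation, and position terms. All lambda values here are
    arbitrary placeholders; the paper's main text does not state them."""
    l_rec = lambda_rot * l_rot + lambda_ori * l_ori + lambda_pos * l_pos
    return l_rec + lambda_kl * l_kl
```

As a usage check, `MotionFieldMLP()(torch.rand(32, 1))` yields a (32, 144) batch of pose parameters; in the paper's variational setting the KL term would come from the latent motion prior, which is omitted from this sketch.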