A Unified Masked Autoencoder with Patchified Skeletons for Motion Synthesis
Authors: Esteve Valls Mascaró, Hyemin Ahn, Dongheui Lee
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that our model successfully forecasts human motion on the Human3.6M dataset while achieving state-of-the-art results in motion inbetweening on the LaFAN1 dataset for long transition periods. |
| Researcher Affiliation | Academia | Esteve Valls Mascaró (1), Hyemin Ahn (2), Dongheui Lee (1,3); 1 Technische Universität Wien (TU Wien), 2 Ulsan National Institute of Science & Technology (UNIST), 3 German Aerospace Center (DLR) |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions a project page (https://evm7.github.io/UNIMASKM-page/) for additional qualitative results but does not explicitly state that the source code for the methodology is being released or provide a direct link to a code repository. |
| Open Datasets | Yes | Human motion forecasting has been mainly addressed in the Human3.6M dataset (Ionescu et al. 2014). This dataset includes 3.6 million 3D poses of humans performing 15 daily activities. Human motion inbetweening is evaluated on the LaFAN1 dataset (Harvey et al. 2020a). This dataset contains 496,672 motions sampled at 30Hz and captured in a MOCAP studio. |
| Dataset Splits | No | The paper refers to "test subject S5" for Human3.6M and describes the input-output format for the motion prediction tasks, but it does not explicitly provide percentages, sample counts, or a specific methodology for the training, validation, and test splits of the datasets used. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, memory, or specific cloud instance types used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names with version numbers (e.g., Python 3.8, PyTorch 1.9). |
| Experiment Setup | No | While the paper mentions training aspects like curriculum learning and masking probabilities, it does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed system-level training settings. |