Bringing Motion Taxonomies to Continuous Domains via GPLVM on Hyperbolic Manifolds

Authors: Noémie Jaquier, Leonel Rozo, Miguel González-Duque, Viacheslav Borovitskiy, Tamim Asfour

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate our model on three different human motion taxonomies to learn hyperbolic embeddings that faithfully preserve the original graph structure. We show that our model properly encodes unseen data from existing or new taxonomy categories, and outperforms its Euclidean and VAE-based counterparts.
Researcher Affiliation | Collaboration | 1 Karlsruhe Institute of Technology, 2 Bosch Center for Artificial Intelligence, 3 University of Copenhagen, 4 ETH Zurich.
Pseudocode | Yes | Corresponding algorithms are provided in App. D.
Open Source Code | Yes | The source code and video accompanying the paper are available at https://sites.google.com/view/gphlvm/.
Open Datasets | Yes | For all experiments, we used human recordings from the KIT Whole-Body Human Motion Database. We use grasp data from subjects 2122, 2123, 2125, 2177. The considered human recordings consist of a human grasping an object on a table, lifting it, and placing it back.
Dataset Splits | No | The paper does not provide specific train/validation/test dataset splits in terms of exact percentages or absolute sample counts. It describes strategies for evaluating generalization to "unseen poses" and "unseen classes" rather than standard data splits.
Hardware Specification | Yes | The implementations are fully developed in Python, and the runtime measurements were taken using a standard laptop with 32 GB RAM, an Intel Xeon CPU E3-1505M v6 processor, and Ubuntu 20.04 LTS.
Software Dependencies | No | The paper mentions "fully developed on Python" but does not specify version numbers for Python or any other key software libraries or frameworks used in the experiments (e.g., PyTorch, TensorFlow, specific solvers).
Experiment Setup | Yes | Table 7 reports the hyperparameters used for the experiments described in Sec. 5. We used the hyperbolic SE kernels of Sec. 3.1 for the GPHLVMs, and the classical SE kernel for the Euclidean models. For training the back-constrained GPHLVM and GPLVM, we used a Gamma prior Gamma(α, β) with shape α and rate β on the lengthscale κ of the kernels.
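The Gamma prior on the kernel lengthscale mentioned in the experiment setup can be sketched in plain Python. This is a minimal illustration of the rate-parameterized Gamma log-density only; the shape and rate values below are illustrative placeholders, not the settings from the paper's Table 7.

```python
import math

def gamma_log_prior(kappa, alpha, beta):
    """Log-density of Gamma(shape=alpha, rate=beta) evaluated at kappa > 0.

    Sketch of the prior placed on the kernel lengthscale kappa when
    training the back-constrained GPHLVM/GPLVM; alpha and beta here
    are illustrative, not the paper's values.
    """
    return (alpha * math.log(beta) - math.lgamma(alpha)
            + (alpha - 1.0) * math.log(kappa) - beta * kappa)

# In MAP-style training, this log-prior term would be added to the GP
# marginal log-likelihood before optimizing the kernel hyperparameters.
log_prior = gamma_log_prior(0.5, alpha=2.0, beta=2.0)  # = ln(2) - 1
```

Note that the rate parameterization used here (density proportional to x^(α−1) e^(−βx)) differs from libraries that parameterize Gamma by shape and scale, where scale = 1/β.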