Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Leveraging Demonstrations with Latent Space Priors
Authors: Jonas Gehring, Deepak Gopinath, Jungdam Won, Andreas Krause, Gabriel Synnaeve, Nicolas Usunier
TMLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results confirm that latent space priors provide significant gains in learning speed and final performance. We benchmark our approach on a set of challenging sparse-reward environments with a complex, simulated humanoid, and on offline RL benchmarks for navigation and object manipulation. |
| Researcher Affiliation | Collaboration | Jonas Gehring (Meta AI (FAIR) and ETH Zürich); Deepak Gopinath and Jungdam Won (Meta AI (FAIR), Pittsburgh, PA); Andreas Krause (ETH Zürich); Gabriel Synnaeve and Nicolas Usunier (Meta AI (FAIR), France) |
| Pseudocode | Yes | Algorithm 1 Model Predictive Control with Latent Space Priors |
| Open Source Code | Yes | Videos, source code and pre-trained models available at https://facebookresearch.github.io/latent-space-priors. |
| Open Datasets | Yes | Our demonstration dataset originates from the CMU Graphics Lab Motion Capture Database, which we access through the AMASS corpus (Mahmood et al., 2019). We evaluate latent space priors on two offline RL datasets: maze navigation with a quadruped Ant robot and object manipulation with a kitchen robot (Fu et al., 2021). |
| Dataset Splits | No | The paper describes using mini-batches for training (e.g., "mini-batches of 128 trajectories with a window length of 256") and names specific D4RL datasets (e.g., "antmaze-medium-diverse-v1") for pretraining and evaluation. While D4RL datasets have inherent splits, the paper does not explicitly state the exact percentages, sample counts, or citations describing how these datasets are partitioned into training, validation, and test sets. |
| Hardware Specification | Yes | We train demonstration autoencoders on 2 GPUs and sequence models on a single GPU. For low-level policy training, we use 16 NVidia V100 GPUs and 160 CPUs; training on 10B samples takes approximately 5 days. |
| Software Dependencies | Yes | We implement models and training algorithms in PyTorch (Paszke et al., 2019). |
| Experiment Setup | Yes | Learning rate: 0.0001; Batch size: 256; Replay buffer size: 5M; Samples between gradient steps: 300; Gradient steps: 50; Warmup samples: 10,000; Initial temperature coefficient: 1; Number of actors: 5 |
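The hyperparameters quoted in the Experiment Setup row can be collected into a minimal configuration sketch for anyone attempting a reproduction; the key names below are illustrative, not taken from the authors' released code:

```python
# Hypothetical configuration dict assembling the hyperparameters quoted
# from the paper's Experiment Setup table. Key names are our own.
config = {
    "learning_rate": 1e-4,
    "batch_size": 256,
    "replay_buffer_size": 5_000_000,
    "samples_between_gradient_steps": 300,
    "gradient_steps": 50,
    "warmup_samples": 10_000,
    "initial_temperature": 1.0,
    "num_actors": 5,
}

# Derived quantity: gradient updates performed per collected environment
# sample (50 updates every 300 samples).
updates_per_sample = config["gradient_steps"] / config["samples_between_gradient_steps"]
print(f"{updates_per_sample:.3f} gradient updates per collected sample")
```

Spelling out the update-to-sample ratio this way makes it easier to spot replay-ratio mismatches when comparing a reimplementation against the reported training budget.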