Variational Empowerment as Representation Learning for Goal-Conditioned Reinforcement Learning
Authors: Jongwook Choi, Archit Sharma, Honglak Lee, Sergey Levine, Shixiang Shane Gu
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through principled mathematical derivations and careful experimental studies, our work lays a novel foundation from which to evaluate, analyze, and develop representation learning techniques in goal-based RL. |
| Researcher Affiliation | Collaboration | Jongwook Choi (1), Archit Sharma (2), Honglak Lee (1, 3), Sergey Levine (4, 5), Shixiang Shane Gu (4). Work done while an intern at Google. (1) University of Michigan; (2) Stanford University; (3) LG AI Research; (4) Google Research; (5) University of California, Berkeley. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any statements or links indicating that source code for the described methodology is publicly available. |
| Open Datasets | Yes | We evaluate the performance of several variants of VGCRL on standard locomotion tasks (Brockman et al., 2016). |
| Dataset Splits | No | The paper mentions evaluating performance on standard locomotion tasks but does not specify concrete training, validation, or test data splits. |
| Hardware Specification | No | The paper mentions using MuJoCo for simulations but does not provide any specific details about the hardware (e.g., CPU or GPU models, memory) used for running the experiments or training. |
| Software Dependencies | No | The paper mentions software used for simulation (e.g., MuJoCo) but does not list specific library versions or other dependencies. |
| Experiment Setup | Yes | Evaluation of Latent Goal-Reaching Metric on MuJoCo control suites, after a total of 10M environment steps of training. |