Implicit Generative Modeling for Efficient Exploration
Authors: Neale Ratzlaff, Qinxun Bai, Li Fuxin, Wei Xu
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In our experiments, we compare our approach with several state-of-the-art intrinsic reward-based exploration approaches, including two recent approaches that also leverage the uncertainty in dynamic models. Experiments show that our implementation consistently outperforms competing methods regarding data efficiency in exploration. In this section we conduct experiments to compare our approach to the existing state-of-the-art in efficient exploration with intrinsic rewards to illustrate the following: An agent with an implicit posterior over dynamic models explores more effectively and efficiently than agents using a single model or a static ensemble. Agents seeking external reward find better policies when initialized from powerful exploration policies. Our ablation studies show that the better the exploration policy used as an initialization, the better the downstream task policy can learn. |
| Researcher Affiliation | Collaboration | (1) Department of Electrical Engineering and Computer Science, Oregon State University, Corvallis, Oregon, USA; (2) Horizon Robotics, Cupertino, California, USA. |
| Pseudocode | Yes | Algorithm 1: "Exploration with an Implicit Distribution" (a hedged sketch of this loop is given after the table). |
| Open Source Code | No | The paper mentions: "We use the codebase of MAX as a basis and implement Ours, ICM, and Disagreement intrinsic rewards under the same framework." (footnote 1). This indicates the authors built upon existing code but does not provide a direct statement or link to a release of their own implementation. |
| Open Datasets | Yes | We evaluate on three challenging exploration tasks and compare with three state-of-the-art intrinsic reward-based methods, two of which are also based on uncertainty in dynamic models. The paper uses standard reinforcement learning environments (e.g., NChain, Acrobot, Ant Maze, Robotic Manipulation, Half Cheetah), all of which are publicly accessible simulators for research purposes. |
| Dataset Splits | No | The paper does not provide specific percentages or counts for training/validation/test dataset splits. It describes the training process in terms of environment steps and episodes, and how the internal dynamic model is trained, but not overall data splits for the experimental evaluation. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as CPU models, GPU models, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions "SAC v1 (Haarnoja et al., 2018)" as the model-free RL algorithm and refers to using "the codebase of MAX as a basis" but does not specify version numbers for these or any other software libraries/dependencies. |
| Experiment Setup | No | The paper states: "shared hyper-parameters follow the MAX default settings. For all experiments except for the Chain environment, we use SAC v1 (Haarnoja et al., 2018) as the model-free RL algorithm used to train the exploration policies." It also mentions fixing the number of models sampled from the generator at m = 32 and training for "10K environment steps". However, it defers explicit hyperparameter values to external sources (the "MAX default settings" and the "recommended settings for Half Cheetah given in the original SAC v1 method") rather than detailing them in the paper itself. |
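For readers reconstructing the setup, the sketch below illustrates, under loose assumptions, the kind of exploration loop that the Pseudocode and Experiment Setup rows describe: m dynamics models are sampled from an implicit generator, an intrinsic reward is computed from the spread (disagreement) of their next-state predictions, and an off-the-shelf RL algorithm (SAC v1 in the paper) is trained on that reward. This is a minimal sketch, not the authors' implementation: the linear stand-in models, the `sample_dynamics_models` and `intrinsic_reward` helpers, and the trace-of-covariance uncertainty measure are all hypothetical placeholders.

```python
import numpy as np

# Hypothetical sketch in the spirit of Algorithm 1 ("Exploration with an
# Implicit Distribution"). All components are simplified stand-ins.

rng = np.random.default_rng(0)

STATE_DIM, ACTION_DIM, NUM_MODELS = 4, 2, 32  # m = 32 as reported in the table


def sample_dynamics_models(num_models):
    """Stand-in for sampling dynamics-model parameters from an implicit generator.

    Here each "model" is a random linear map s' = A s + B a; the actual method
    draws full network weights from a learned generator.
    """
    return [
        (rng.normal(size=(STATE_DIM, STATE_DIM)) * 0.1 + np.eye(STATE_DIM),
         rng.normal(size=(STATE_DIM, ACTION_DIM)) * 0.1)
        for _ in range(num_models)
    ]


def intrinsic_reward(models, state, action):
    """Uncertainty-based intrinsic reward: spread of the models' next-state predictions.

    The trace of the prediction covariance is a common choice; the paper's exact
    uncertainty measure may differ.
    """
    preds = np.stack([A @ state + B @ action for A, B in models])
    return float(np.trace(np.cov(preds, rowvar=False)))


def explore(num_steps=10):
    """One stylized exploration rollout: act, score novelty, then (re)train the policy.

    The policy and environment are mocked with random actions and random
    transitions; in the paper the exploration policy is trained with SAC v1.
    """
    models = sample_dynamics_models(NUM_MODELS)
    state = rng.normal(size=STATE_DIM)
    for step in range(num_steps):
        action = rng.normal(size=ACTION_DIM)               # placeholder for policy(state)
        r_int = intrinsic_reward(models, state, action)
        state = state + 0.1 * rng.normal(size=STATE_DIM)   # placeholder environment step
        print(f"step {step:2d}  intrinsic reward = {r_int:.3f}")
        # In the full method: store (s, a, s') in a buffer, update the generator
        # on the collected dynamics data, and update the SAC policy on r_int.


if __name__ == "__main__":
    explore()
```

The property this sketch tries to convey is that the intrinsic reward depends only on disagreement among the sampled dynamics models, so the exploration policy is rewarded for visiting transitions about which the (implicit) posterior over dynamics is still uncertain.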