CURIOUS: Intrinsically Motivated Modular Multi-Goal Reinforcement Learning

Authors: Cédric Colas, Pierre Fournier, Mohamed Chetouani, Olivier Sigaud, Pierre-Yves Oudeyer

ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments conducted in a new modular-goal robotic environment show the resulting developmental self-organization of a learning curriculum, and demonstrate properties of robustness to distracting goals, forgetting and changes in body properties.
Researcher Affiliation | Academia | 1. Flowers Team, Inria and Ensta ParisTech, FR; 2. ISIR, Sorbonne Univ., Paris, FR.
Pseudocode | No | The detailed algorithm is given in the supplementary document.
Open Source Code | Yes | Links. The environment, code and video of the CURIOUS agent are made available at https://github.com/flowersteam/curious.
Open Datasets | No | The paper describes creating a "new simulated environment adapted from the Open AI Gym suite" (Modular Goal Fetch Arm) rather than using a pre-existing publicly available dataset with specific access information.
Dataset Splits | No | The paper describes a simulated reinforcement learning environment and training procedure rather than a fixed dataset, and does not report train/validation/test splits.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used to run the simulations or train the models.
Software Dependencies | No | The paper mentions "Open AI Gym suite", "DDPG", "TD3", and "DQN" but does not specify version numbers for these software components or libraries.
Experiment Setup | Yes | The agent controls the position of its gripper and the gripper opening (4D). ... p = 0.8 ... $p_{eval} = 0.1$ ... $\epsilon$-greedy strategy for exploration. ... $LP_{M_i}(n^{(i)}_{eval}) = C_{M_i}(n^{(i)}_{eval}) - C_{M_i}(n^{(i)}_{eval} - l)$. ... precision parameter $\epsilon_{reach}$. A minimal sketch of this learning-progress-based module selection is given below the table.
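
To make the quoted setup concrete, here is a minimal Python sketch of learning-progress-based module selection mixed with uniform exploration, which is what the LP formula and the probabilities above refer to. The function names, the use of the absolute learning progress, the window handling, and the example numbers are illustrative assumptions rather than the authors' implementation; the actual code is available in the linked repository.

```python
import numpy as np

def learning_progress(competence_history, window):
    """LP(n) = C(n) - C(n - window), following the quoted formula.
    The absolute value is taken here (an assumption) so that both
    progress and forgetting can drive module selection."""
    if len(competence_history) <= window:
        return 0.0
    return abs(competence_history[-1] - competence_history[-1 - window])


def select_module(lp_per_module, p_active=0.8, rng=np.random):
    """With probability `p_active`, sample a module proportionally to its
    learning progress (active selection); otherwise sample uniformly at
    random (exploration). `p_active=0.8` mirrors the p = 0.8 quoted above
    but is otherwise an illustrative choice."""
    lp = np.asarray(lp_per_module, dtype=float)
    n = len(lp)
    if rng.random() < p_active and lp.sum() > 0:
        probs = lp / lp.sum()
    else:
        probs = np.full(n, 1.0 / n)
    return int(rng.choice(n, p=probs))


# Example with made-up competence histories for three goal modules.
histories = [[0.1, 0.2, 0.5], [0.0, 0.0, 0.0], [0.8, 0.8, 0.8]]
lps = [learning_progress(h, window=2) for h in histories]
print("LP per module:", lps)          # -> [0.4, 0.0, 0.0]
print("selected module:", select_module(lps))
```

In this sketch, mixing proportional and uniform sampling keeps every module reachable even when its measured learning progress is zero, which is the usual motivation for an exploration term in curriculum-style goal selection.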