Constrained Intrinsic Motivation for Reinforcement Learning

Authors: Xiang Zheng, Xingjun Ma, Chao Shen, Cong Wang

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In various MuJoCo robotics environments, we empirically show that CIM for RFPT greatly surpasses fifteen IM methods for unsupervised skill discovery in terms of skill diversity, state coverage, and fine-tuning performance.
Researcher Affiliation | Academia | Xiang Zheng (City University of Hong Kong), Xingjun Ma (Fudan University), Chao Shen (Xi'an Jiaotong University), and Cong Wang (City University of Hong Kong)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at https://github.com/x-zheng16/CIM.
Open Datasets | Yes | We evaluate our adaptive coefficient τ_k^CIM for EIM in two navigation tasks (PointMaze UMaze and AntMaze UMaze) in D4RL [Fu et al., 2020], and four sparse-reward tasks (Sparse HalfCheetah, Sparse Ant, Sparse Humanoid Standup, and Sparse GridWorld). See the environment-setup sketch after this table.
Dataset Splits | No | The paper does not provide specific dataset split information for validation.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers, such as Python 3.8 or CPLEX 12.4) needed to replicate the experiment.
Experiment Setup | Yes | 4.1 Experimental Setup. We evaluate our intrinsic bonus r_I^CIM for RFPT tasks on four Gymnasium environments, including two locomotion environments (Ant and Humanoid) and two manipulation environments (FetchPush and FetchSlide). See the reward-free rollout sketch after this table.
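
The D4RL navigation tasks quoted under Open Datasets can be instantiated from the public D4RL registry. A minimal setup sketch, assuming the standard registry IDs maze2d-umaze-v1 and antmaze-umaze-v0 as counterparts of the paper's PointMaze UMaze and AntMaze UMaze (the exact IDs used by the paper's code may differ); note that D4RL builds on the older gym API:

    # Environment-setup sketch; the env IDs are assumptions, not taken from the paper.
    import gym
    import d4rl  # noqa: F401 -- importing d4rl registers its environments with gym

    point_maze = gym.make("maze2d-umaze-v1")  # assumed PointMaze UMaze counterpart
    ant_maze = gym.make("antmaze-umaze-v0")   # assumed AntMaze UMaze counterpart

    # Old gym API: reset() returns only the observation, step() a 4-tuple.
    obs = point_maze.reset()
    obs, extrinsic_reward, done, info = point_maze.step(point_maze.action_space.sample())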
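
The four RFPT environments quoted under Experiment Setup are available through Gymnasium and Gymnasium-Robotics. A minimal reward-free rollout sketch, assuming version suffixes -v4/-v2 (the paper does not pin versions) and a random placeholder policy standing in for the CIM agent:

    # Reward-free pre-training style rollout; the version suffixes and the random
    # policy are illustrative assumptions, not the paper's configuration.
    import gymnasium as gym
    import gymnasium_robotics  # noqa: F401 -- registers the Fetch tasks on import
    # On newer Gymnasium releases, gym.register_envs(gymnasium_robotics) may be needed.

    for env_id in ["Ant-v4", "Humanoid-v4", "FetchPush-v2", "FetchSlide-v2"]:
        env = gym.make(env_id)
        obs, info = env.reset(seed=0)
        for _ in range(100):
            action = env.action_space.sample()  # placeholder for the CIM policy
            obs, extrinsic_reward, terminated, truncated, info = env.step(action)
            # In RFPT the extrinsic reward is discarded; the agent is trained on
            # an intrinsic bonus r_I^CIM only (left abstract here).
            if terminated or truncated:
                obs, info = env.reset()
        env.close()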