Invariant Action Effect Model for Reinforcement Learning

Authors: Zheng-Mao Zhu, Shengyi Jiang, Yu-Ren Liu, Yang Yu, Kun Zhang

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "The extensive experiments on two benchmarks, i.e. Grid-World and Atari, show that the representations learned by IAEM preserve the invariance of action effects. Moreover, with the invariant action effect, IAEM can accelerate the learning process by 1.6x, rapidly generalize to new environments by finetuning on a few components, and outperform other dynamics-based representation methods by 1.4x in limited steps." (An illustrative sketch of the action-effect idea follows this table.)
Researcher Affiliation | Academia | 1 National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, Jiangsu, China; 2 Peng Cheng Laboratory, Shenzhen, Guangdong, China; 3 Department of Philosophy, Carnegie Mellon University, Pittsburgh, PA, United States
Pseudocode | Yes | Algorithm 1: Invariant Action Effect Model (IAEM)
Open Source Code | No | The paper does not provide an explicit statement or a link to open-source code for the described methodology.
Open Datasets | Yes | "We evaluate the performance, sample efficiency, and the generalization ability of IAEM on two widely-used benchmarks: Grid-World and Atari games."
Dataset Splits | No | The paper does not provide specific dataset split information (e.g., exact percentages, sample counts, or citations to predefined splits) for training, validation, and test sets.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | "For the network architecture in Atari, we use a state-of-the-art DQN baseline dopamine (Castro et al. 2018)." (No specific version number for Dopamine or other key libraries is mentioned; a typical Dopamine launch sketch follows this table.)
Experiment Setup | Yes | "Implementation details and hyperparameter values of IAEM are summarized in the appendix A."
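
The abstract excerpt in the Research Type row centers on the "invariance of action effects". As a reading aid only, here is a minimal sketch, assuming the action effect is modeled as the difference between consecutive latent states and that invariance is encouraged by pulling effects of the same action toward their mean. The encoder, the loss, and all shapes are illustrative assumptions and are not taken from the paper's Algorithm 1.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: a simple encoder whose latent difference
# z' - z is treated as the "effect" of the action taken between s and s'.
class Encoder(nn.Module):
    def __init__(self, obs_dim, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, obs):
        return self.net(obs)

def action_effect(encoder, obs, next_obs):
    # The "action effect" is taken here to be the change in latent state.
    return encoder(next_obs) - encoder(obs)

def invariance_loss(effects, actions):
    # Hypothetical invariance penalty: transitions that share the same
    # discrete action should have similar effect vectors, so each effect
    # is penalized for deviating from the mean effect of its action.
    loss = 0.0
    for a in actions.unique():
        group = effects[actions == a]
        loss = loss + ((group - group.mean(dim=0, keepdim=True)) ** 2).mean()
    return loss

# Usage on a random batch (shapes are placeholders):
obs = torch.randn(32, 10)
next_obs = torch.randn(32, 10)
actions = torch.randint(0, 4, (32,))
enc = Encoder(obs_dim=10)
effects = action_effect(enc, obs, next_obs)
print(invariance_loss(effects, actions))
```

In this toy form the effect vectors of a given action collapse toward a single prototype, which is one plausible way to read "invariant action effect"; the actual IAEM objective and architecture are described in the paper and its appendix.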
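
The Software Dependencies row points to the Dopamine DQN baseline (Castro et al. 2018) without a pinned version. For context, the snippet below sketches the standard way a Dopamine experiment is launched; the base directory and gin configuration path are placeholders, and nothing here reproduces IAEM itself.

```python
# Sketch of a standard Dopamine launch script (not IAEM-specific).
# Assumes `pip install dopamine-rl`; paths and gin bindings are placeholders.
from dopamine.discrete_domains import run_experiment

BASE_DIR = '/tmp/dopamine_runs/dqn'                   # checkpoints and logs
GIN_FILES = ['dopamine/agents/dqn/configs/dqn.gin']   # stock DQN config

def main():
    # Load the gin configuration that defines the agent and environment,
    # then build and run Dopamine's standard train/eval loop.
    run_experiment.load_gin_configs(GIN_FILES, gin_bindings=[])
    runner = run_experiment.create_runner(BASE_DIR)
    runner.run_experiment()

if __name__ == '__main__':
    main()
```

Because the paper does not record a Dopamine version, anyone reproducing the Atari results would still need to fix a concrete release (and the matching Atari ROM/gym versions) themselves.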