Universal Agent for Disentangling Environments and Tasks

Authors: Jiayuan Mao, Honghua Dong, Joseph J. Lim

ICLR 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "The extensive results in simulators indicate that our method can efficiently separate and learn two independent units, and also adapt to a new task more efficiently than the state-of-the-art methods." |
| Researcher Affiliation | Academia | Jiayuan Mao & Honghua Dong: The Institute for Theoretical Computer Science, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China ({mjy14,dhh14}@mails.tsinghua.edu.cn). Joseph J. Lim: Department of Computer Science, University of Southern California, Los Angeles, USA (limjj@usc.edu). |
| Pseudocode | No | The paper does not contain any sections or figures explicitly labeled "Pseudocode" or "Algorithm", nor does it present structured steps in a code-like format. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code for the methodology described, nor does it provide any links to a code repository. |
| Open Datasets | Yes | "Lava world (Figure 3) is a famous 2D maze in reinforcement learning."; "We tested our framework on a subset of Atari 2600 games, a challenging RL testbed that presents agents with a high-dimensional visual input (210 × 160 RGB frames) and a diverse set of tasks which are even difficult for human players." (See the environment sketch after this table.) |
| Dataset Splits | No | The paper does not explicitly specify training, validation, and test splits (e.g., percentages, sample counts, or references to predefined splits) that would be needed for reproduction. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU models, CPU types, memory amounts) used to run the experiments. |
| Software Dependencies | No | The paper mentions algorithms and network architectures (e.g., "batched A3C", "CNN", "MLP", "PPO") but does not provide specific software names with version numbers for libraries, frameworks, or programming languages (e.g., "PyTorch 1.9", "Python 3.8", "CUDA 11.1") needed to replicate the experiments. |
| Experiment Setup | Yes | "The discount factor is chosen as γ = 0.99 for all our experiments."; "To preserve historical information, we concatenate four consecutive frames in channel dimension as input to our network (as state s)."; "In particular, the max distance between starting state and final state is initialized as one at the beginning of the training process. After every K iterations of updating, the max distance is increased by one." (See the setup sketch after this table.) |
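For the Open Datasets row, the following is a minimal sketch of how the Atari 2600 testbed quoted above might be instantiated. It assumes the classic OpenAI Gym API (pre-0.26) with the Atari extras installed; the specific game is a hypothetical choice, since the paper is quoted only on the testbed in general.

```python
# Minimal sketch, not from the paper: instantiate one Atari 2600 game via the
# classic OpenAI Gym API (pre-0.26; `pip install gym[atari]`). The game choice
# (Seaquest) is hypothetical.
import gym

env = gym.make("SeaquestNoFrameskip-v4")
print(env.observation_space.shape)  # (210, 160, 3): the raw RGB frames
print(env.action_space)             # the game's discrete action set

obs = env.reset()                                              # first raw frame
obs, reward, done, info = env.step(env.action_space.sample())  # one random step
env.close()
```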
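The Experiment Setup row quotes three concrete details: γ = 0.99, four-frame concatenation along the channel dimension, and a curriculum that raises the maximum start-to-goal distance by one every K iterations. The sketch below restates these in code, assuming NumPy and the 210 × 160 RGB frame size from above; the values of `K` and the iteration count are placeholders, since the quoted text does not fix them.

```python
# Minimal sketch of the quoted setup details, assuming NumPy and 210 x 160 RGB
# frames. K and NUM_ITERATIONS are placeholders, not values from the paper.
from collections import deque
import numpy as np

GAMMA = 0.99           # discount factor used for all experiments (quoted)
K = 1000               # placeholder: iterations between curriculum increments
NUM_ITERATIONS = 5000  # placeholder training length

def make_state(frames):
    """Concatenate four consecutive frames along the channel dimension."""
    assert len(frames) == 4
    return np.concatenate(frames, axis=-1)  # 4 x (210, 160, 3) -> (210, 160, 12)

frames = deque(maxlen=4)
for _ in range(4):
    frames.append(np.zeros((210, 160, 3), dtype=np.uint8))  # dummy frames
print(make_state(list(frames)).shape)  # (210, 160, 12)

max_distance = 1  # curriculum: start and goal begin one step apart
for iteration in range(1, NUM_ITERATIONS + 1):
    # ... one training update of the agent would run here ...
    if iteration % K == 0:
        max_distance += 1  # widen the allowed start-goal distance by one
```

The stacked (210, 160, 12) array plays the role of the state s in the quoted description; everything beyond the quoted excerpts (update rule, rollout collection) is omitted rather than guessed.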