Same State, Different Task: Continual Reinforcement Learning without Interference

Authors: Samuel Kessler, Jack Parker-Holder, Philip Ball, Stefan Zohren, Stephen J. Roberts

AAAI 2022, pp. 7143-7151 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We show in multiple RL environments that existing replay-based CL methods fail, while OWL is able to achieve close to optimal performance when training sequentially.
Researcher Affiliation | Academia | University of Oxford {skessler, jackph, ball, zohren, sjrob}@robots.ox.ac.uk
Pseudocode | Yes | Algorithm 1: OWL: Training
Open Source Code | Yes | Code is available at https://github.com/skezle/owl.
Open Datasets | Yes | Pendulum-v0 environment (Brockman et al. 2016)
Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment.
Experiment Setup | No | The paper mentions some general training durations (e.g., 'switching every 20,000 environment steps', '1M steps') but does not provide specific experimental setup details such as concrete hyperparameter values or optimizer settings in the main text.
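For context on the environment noted under Open Datasets, the sketch below shows how Pendulum-v0 might be instantiated with OpenAI Gym (Brockman et al. 2016) and how a task switch every 20,000 environment steps could be scheduled, as quoted in the Experiment Setup row. This is a minimal sketch, not the authors' implementation (see the linked repository for that); the task count, the random-action placeholder policy, and the legacy Gym reset/step API are assumptions.

```python
# Minimal sketch (not the authors' code): create Pendulum-v0 from OpenAI Gym
# and cycle through tasks every 20,000 environment steps, as stated in the
# table above. How each task modifies the environment is not specified in
# this page and is left as a placeholder comment.
import gym

TASK_SWITCH_EVERY = 20_000   # environment steps between task switches (quoted above)
TOTAL_STEPS = 1_000_000      # "1M steps" quoted in the Experiment Setup row
NUM_TASKS = 2                # hypothetical task count, for illustration only

env = gym.make("Pendulum-v0")  # requires a legacy Gym version that still ships Pendulum-v0
obs = env.reset()
task_id = 0

for step in range(TOTAL_STEPS):
    if step > 0 and step % TASK_SWITCH_EVERY == 0:
        task_id = (task_id + 1) % NUM_TASKS
        # Task-specific environment changes would be applied here;
        # the paper's actual task definitions are not reproduced in this sketch.
        obs = env.reset()
    action = env.action_space.sample()          # placeholder policy (random actions)
    obs, reward, done, info = env.step(action)  # legacy 4-tuple step API
    if done:
        obs = env.reset()

env.close()
```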