Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

The Value-Improvement Path: Towards Better Representations for Reinforcement Learning

Authors: Will Dabney, André Barreto, Mark Rowland, Robert Dadashi, John Quan, Marc G. Bellemare, David Silver (pp. 7160-7168)

AAAI 2021 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To test our hypothesis empirically, we augmented a standard deep RL agent with an auxiliary task of learning the value-improvement path. In a study of Atari 2600 games, the augmented agent achieved approximately double the mean and median performance of the baseline agent. Our goal in this section is to empirically study the effect of the previously discussed auxiliary tasks on the quality of the learned representation. For these experiments, we use the Atari-57 benchmark from the Arcade Learning Environment (Bellemare et al. 2013, ALE).
Researcher Affiliation | Industry | 1 DeepMind, 2 Google Research
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | Yes | For these experiments, we use the Atari-57 benchmark from the Arcade Learning Environment (Bellemare et al. 2013, ALE). (A hedged environment-loading sketch follows the table.)
Dataset Splits | No | The paper mentions evaluating on a 'held-out set of transitions' but does not specify exact percentages or counts for train/validation/test splits, nor does it cite a predefined standard split for reproduction.
Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU types, or cloud computing resources used for running the experiments.
Software Dependencies | No | The paper mentions algorithms like 'Double DQN' and 'Rainbow agent' but does not provide specific software library names with version numbers (e.g., PyTorch 1.9, TensorFlow 2.x) or other ancillary software dependencies with versions.
Experiment Setup | Yes | While training each agent for 200 million environment frames, we saved the current network every 2 million frames. Each auxiliary task is trained as a linear function of the last hidden layer of the neural network used by Double DQN. We generated the cumulants for Cumulant Values and Cumulant Policies using a random network (details in Appendix C). (A hedged setup sketch follows the table.)
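
The Open Datasets row above cites the Atari-57 benchmark from the Arcade Learning Environment. Since the paper names no software stack (see the Software Dependencies row), the snippet below is only a minimal sketch of one common way to load an ALE game, assuming the gymnasium and ale-py packages; the environment ID and loop are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch (assumption): loading an Atari-57 game through the ALE using the
# gymnasium + ale-py packages. The paper does not state which software it used.
import gymnasium as gym
import ale_py

gym.register_envs(ale_py)  # registers the "ALE/*" environment IDs on recent gymnasium versions

# "ALE/Breakout-v5" is an illustrative pick; Atari-57 covers 57 such games.
env = gym.make("ALE/Breakout-v5")

obs, info = env.reset(seed=0)
for _ in range(100):
    action = env.action_space.sample()  # random policy, just to exercise the interaction loop
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```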
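
The Experiment Setup row describes auxiliary predictions trained as linear functions of the last hidden layer of a Double DQN network, cumulants generated by a random network, and checkpoints saved every 2 million of 200 million frames. The sketch below illustrates that wiring in PyTorch; the framework choice, layer sizes, cumulant dimensionality, and environment constants are assumptions made here, and the paper's actual cumulant construction is in its Appendix C.

```python
# Hedged sketch of the described setup: a Double-DQN-style network whose auxiliary
# predictions are linear functions of the last hidden layer, plus a fixed random
# network that generates cumulants. All sizes below are illustrative assumptions.
import torch
import torch.nn as nn

TOTAL_FRAMES = 200_000_000       # "200 million environment frames"
SAVE_EVERY_FRAMES = 2_000_000    # "saved the current network every 2 million frames"

class AugmentedDQNNetwork(nn.Module):
    """Atari-style torso; Q-head plus a linear auxiliary head on the last hidden layer."""
    def __init__(self, num_actions: int, hidden_dim: int = 512, num_cumulants: int = 16):
        super().__init__()
        self.torso = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, hidden_dim), nn.ReLU(),   # last hidden layer
        )
        self.q_head = nn.Linear(hidden_dim, num_actions)    # main action-value estimates
        # Auxiliary task as a *linear* function of the last hidden layer,
        # e.g. one predicted value per random cumulant (dimensionality assumed).
        self.aux_head = nn.Linear(hidden_dim, num_cumulants)

    def forward(self, frames: torch.Tensor):
        features = self.torso(frames)
        return self.q_head(features), self.aux_head(features)

# Fixed random network producing cumulants; its weights are frozen, never trained.
# The paper's exact construction is in its Appendix C; this is only a placeholder.
def make_random_cumulant_net(num_cumulants: int = 16) -> nn.Module:
    net = nn.Sequential(
        nn.Conv2d(4, 16, kernel_size=8, stride=4), nn.ReLU(),
        nn.Flatten(),
        nn.Linear(16 * 20 * 20, num_cumulants),
    )
    for p in net.parameters():
        p.requires_grad = False
    return net

if __name__ == "__main__":
    net = AugmentedDQNNetwork(num_actions=18)
    cumulant_net = make_random_cumulant_net()
    frames = torch.zeros(32, 4, 84, 84)          # dummy batch of stacked Atari frames
    q_values, aux_preds = net(frames)
    cumulants = cumulant_net(frames)
    print(q_values.shape, aux_preds.shape, cumulants.shape)  # (32, 18) (32, 16) (32, 16)
```

A full agent would combine the Double DQN TD loss on the Q-head with a regression loss pairing the auxiliary head against the frozen cumulant network's outputs; the row above does not quote a loss weighting, so that detail is omitted here.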