Policy Gradients Incorporating the Future

Authors: David Venuto, Elaine Lau, Doina Precup, Ofir Nachum

ICLR 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We now provide a wide array of empirical evaluations of our method, PGIF, encompassing tasks with delayed rewards, sparse rewards, online access to the environment, offline access to the environment, and partial observability. In the appendix, we include further demonstrations of PGIF applied to the challenging AntMaze environment (Sec. H) with substantial performance improvements, online RL with full observability (Sec. E) with improvements over a SAC baseline, and numerous ablation analyses (Sec. F, I) identifying the components of PGIF that are responsible for performance.
Researcher Affiliation | Collaboration | David Venuto1,2, Elaine Lau2, Doina Precup1,2,3, Ofir Nachum4; 1Mila, 2McGill University, 3DeepMind, 4Google Brain; david.venuto@mail.mcgill.ca
Pseudocode | Yes | Algorithm 1: PGIF Algorithm with State-Action Value Function Estimation (an illustrative, non-authoritative sketch of a related policy-gradient update is given after this table).
Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository in the main text.
Open Datasets | Yes | For these offline MuJoCo tasks, we examine D4RL datasets classified as medium (where the training of the agent is ended after achieving a 'medium' level performance) and medium-expert (where medium and expert data is mixed) (Fu et al., 2020). We examine the Umbrella Length task from BSUITE (Osband et al., 2020), a task involving a long sequential episode... We continue to the Gym-MiniGrid (Chevalier-Boisvert et al., 2018) set of partially-observable environments... This set of experiments uses the MuJoCo robotics simulator (Todorov et al., 2012) suite of continuous control tasks. (An illustrative example of loading a D4RL dataset follows the table.)
Dataset Splits | No | The paper mentions evaluating over '5 random seeds' but does not provide specific percentages, counts, or detailed methodology for train/validation/test splits needed for reproduction.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory, or cloud instance types) used for running the experiments.
Software Dependencies | No | The paper mentions various algorithms and models (e.g., PPO, SAC, BRAC, RNN, transformer) but does not provide specific version numbers for software dependencies or libraries used in the implementation.
Experiment Setup | Yes | We show the hyper-parameters for each experiment in the Appendix D.
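
For the pseudocode row above, the following is a minimal, hypothetical sketch of a policy-gradient update that uses a learned state-action value function as its critic, written in PyTorch as a point of reference only. It is not the paper's Algorithm 1: PGIF's future-conditioning components are omitted, and every module name, dimension, and learning rate is an illustrative assumption.

```python
# Hypothetical sketch: policy-gradient step with a learned state-action value critic.
# NOT the paper's Algorithm 1; PGIF's future/hindsight conditioning is omitted and
# all names, dimensions, and learning rates are assumptions for illustration.
import torch
import torch.nn as nn

obs_dim, act_dim = 8, 2  # assumed dimensions

policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
q_fn = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.Tanh(), nn.Linear(64, 1))
pi_opt = torch.optim.Adam(policy.parameters(), lr=3e-4)
q_opt = torch.optim.Adam(q_fn.parameters(), lr=3e-4)

def update(obs, act, ret):
    """One gradient step on a batch of (observation, action, return) tensors
    collected from sampled trajectories."""
    # Critic: regress Q(s, a) toward the observed Monte-Carlo return.
    q_pred = q_fn(torch.cat([obs, act], dim=-1)).squeeze(-1)
    q_loss = ((q_pred - ret) ** 2).mean()
    q_opt.zero_grad(); q_loss.backward(); q_opt.step()

    # Actor: REINFORCE-style gradient weighted by the (detached) value estimate.
    mean = policy(obs)
    dist = torch.distributions.Normal(mean, torch.ones_like(mean))
    log_prob = dist.log_prob(act).sum(dim=-1)
    pi_loss = -(log_prob * q_pred.detach()).mean()
    pi_opt.zero_grad(); pi_loss.backward(); pi_opt.step()
    return q_loss.item(), pi_loss.item()

# Example call with random data of the assumed shapes:
update(torch.randn(32, obs_dim), torch.randn(32, act_dim), torch.randn(32))
```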
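
For the open-datasets row, here is a minimal sketch of loading one of the cited D4RL offline datasets with the d4rl package. The environment string and version suffix are assumptions; the paper does not state which D4RL dataset versions were used.

```python
# Hypothetical example of loading an offline D4RL dataset of the kind the paper
# evaluates on. The environment/version string below is an assumption.
import gym
import d4rl  # importing d4rl registers the offline datasets with gym

env = gym.make("halfcheetah-medium-v2")  # 'medium' dataset; version suffix assumed
data = d4rl.qlearning_dataset(env)       # dict with observations, actions, rewards, terminals
print(data["observations"].shape, data["actions"].shape, data["rewards"].shape)
```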