Making Deep Q-learning methods robust to time discretization

Authors: Corentin Tallec, Léonard Blier, Yann Ollivier

ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirically, we find that Q-learning-based approaches such as Deep Q-learning (Mnih et al., 2015) and Deep Deterministic Policy Gradient (Lillicrap et al., 2015) collapse with small time steps. Formally, we prove that Q-learning does not exist in continuous time. We detail a principled way to build an off-policy RL algorithm that yields similar performance over a wide range of time discretizations, and confirm this robustness empirically. We empirically show that standard Q-learning methods are not robust to changes in time discretization, exhibiting degraded performance, while our algorithm demonstrates substantial robustness. (A short sketch of the collapse argument is given below the table.)
Researcher Affiliation | Collaboration | 1 TAckling the Underspecified, Université Paris Sud; 2 Facebook Artificial Intelligence Research. Correspondence to: Corentin Tallec <corentin.tallec@inria.fr>.
Pseudocode | Yes | Algorithm 1 DAU (Continuous actions). (A hedged code sketch of this update is given below the table.)
Open Source Code | No | The paper does not provide concrete access to source code (specific repository link, explicit code release statement, or code in supplementary materials) for the methodology described in this paper.
Open Datasets | Yes | We benchmark DAU against DDPG on classic control benchmarks: Pendulum, Cartpole, Bipedal Walker, Ant, and Half-Cheetah environments from OpenAI Gym (Fig. 2). (An environment-setup sketch is given below the table.)
Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper only mentions "RMSProp" without a version number and does not list other software dependencies with specific version numbers.
Experiment Setup | Yes | In all setups, we use the algorithms described in Alg. 1 and Supplementary Alg. 1. The variants of DDPG and DQN used are described in the Supplementary, as well as all hyperparameters. For all setups, quantitative results are averaged over five runs. Learning rates need only be scaled as √δt instead of δt. With a fixed batch size, RMSProp will multiply gradients by a factor O(√δt). (A hyperparameter-scaling sketch is given below the table.)
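
The collapse claim quoted in the Research Type row can be made concrete with a short derivation. The fragment below is a paraphrase of the paper's argument as I understand it, in my own notation (it assumes amsmath/amssymb); it is a sketch, not the paper's exact statement or proof.

```latex
% With time step \delta t, per-step reward r(s,a)\,\delta t, and discount \gamma^{\delta t},
% the Q-function of a policy \pi satisfies
\begin{align*}
Q^{\pi}_{\delta t}(s,a)
  &= r(s,a)\,\delta t
   + \gamma^{\delta t}\,
     \mathbb{E}_{s' \sim P_{\delta t}(\cdot \mid s,a)}\!\left[ V^{\pi}_{\delta t}(s') \right] \\
  &= V^{\pi}_{\delta t}(s) + O(\delta t),
\end{align*}
% using r(s,a)\,\delta t = O(\delta t), \gamma^{\delta t} = 1 + O(\delta t), and the fact
% that s' stays O(\delta t)-close to s for smooth dynamics. Hence
% Q^{\pi}_{\delta t} \to V^{\pi} as \delta t \to 0: the action dependence vanishes and
% greedy action selection becomes meaningless. DAU's advantage parametrization
%   Q_{\delta t}(s,a) = V(s) + \delta t\, A(s,a)
% keeps the action-dependent term at a scale that survives \delta t \to 0.
```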
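The Pseudocode row points to Algorithm 1 (DAU, continuous actions). Below is a minimal PyTorch-style sketch of what one DAU update might look like under the parametrization Q(s,a) = V(s) + δt·A(s,a) with a deterministic actor; the network sizes, the 1/δt loss scaling, the δt-scaled learning rates, and names such as DAUSketch, lr_v, lr_a, and lr_pi are my assumptions, not the authors' reference implementation.

```python
# Minimal PyTorch-style sketch of one DAU (Deep Advantage Updating) update for
# continuous actions. Illustration only; sizes, scalings, and names are assumptions.
import torch
import torch.nn as nn


def mlp(in_dim, out_dim, hidden=256):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))


class DAUSketch:
    def __init__(self, s_dim, a_dim, dt, gamma=0.99, lr_v=1e-3, lr_a=1e-3, lr_pi=1e-4):
        self.dt = dt
        self.gamma_dt = gamma ** dt                      # per-step discount gamma^{dt}
        self.V = mlp(s_dim, 1)                           # state value V(s)
        self.A = mlp(s_dim + a_dim, 1)                   # unnormalized advantage A(s, a)
        self.pi = nn.Sequential(mlp(s_dim, a_dim), nn.Tanh())   # deterministic actor
        self.V_tgt = mlp(s_dim, 1)
        self.V_tgt.load_state_dict(self.V.state_dict())  # target net (Polyak updates omitted)
        # dt-scaled learning rates for illustration; the paper discusses how RMSProp
        # changes the required scaling.
        self.opt_v = torch.optim.RMSprop(self.V.parameters(), lr=lr_v * dt)
        self.opt_a = torch.optim.RMSprop(self.A.parameters(), lr=lr_a * dt)
        self.opt_pi = torch.optim.RMSprop(self.pi.parameters(), lr=lr_pi * dt)

    def adv(self, s, a):
        # Normalize so the advantage of the actor's own action is zero.
        a_pi = self.pi(s).detach()
        return self.A(torch.cat([s, a], -1)) - self.A(torch.cat([s, a_pi], -1))

    def update(self, s, a, r, s_next):
        # s: [batch, s_dim], a: [batch, a_dim], r: reward rate of shape [batch, 1].
        # Critic: regress V(s) + dt * A(s, a) onto r * dt + gamma^{dt} * V_tgt(s'),
        # with the residual rescaled by 1/dt so its magnitude stays O(1) in dt.
        target = r * self.dt + self.gamma_dt * self.V_tgt(s_next).detach()
        delta = (target - self.V(s) - self.dt * self.adv(s, a)) / self.dt
        critic_loss = (delta ** 2).mean()
        self.opt_v.zero_grad()
        self.opt_a.zero_grad()
        critic_loss.backward()
        self.opt_v.step()
        self.opt_a.step()
        # Actor: ascend the advantage of the action the policy would take.
        actor_loss = -self.A(torch.cat([s, self.pi(s)], -1)).mean()
        self.opt_pi.zero_grad()
        actor_loss.backward()
        self.opt_pi.step()
```

A full agent would additionally need a replay buffer, exploration noise, and Polyak or periodic updates of the target network V_tgt.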
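The Open Datasets row lists the OpenAI Gym benchmarks. The snippet below is one plausible way to instantiate them and to coarsen the effective control time step with an action-repeat wrapper; the Gym IDs and versions, and the wrapper itself, are my assumptions, and the paper's own protocol for varying δt is described in its supplementary material.

```python
# Hedged sketch: instantiate the benchmark tasks and emulate a coarser control
# time step by repeating actions. Uses the classic Gym step API
# (obs, reward, done, info); IDs/versions are assumptions.
import gym

ENV_IDS = ["Pendulum-v0", "CartPole-v1", "BipedalWalker-v2",
           "Ant-v2", "HalfCheetah-v2"]   # the MuJoCo tasks require mujoco-py


class ActionRepeat(gym.Wrapper):
    """Repeat each action k times, multiplying the effective control time step by k."""

    def __init__(self, env, k):
        super().__init__(env)
        assert k >= 1
        self.k = k

    def step(self, action):
        total_reward, done, info = 0.0, False, {}
        for _ in range(self.k):
            obs, reward, done, info = self.env.step(action)
            total_reward += reward
            if done:
                break
        return obs, total_reward, done, info


env = ActionRepeat(gym.make("Pendulum-v0"), k=2)   # coarser control than the default dt
obs = env.reset()
```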
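The Experiment Setup row touches on how hyperparameters change with δt. The sketch below illustrates the general recipe of defining quantities in physical time and deriving per-step values from them (discount γ^δt, a δt-scaled step size, and temporally coherent Ornstein-Uhlenbeck exploration noise); the function names and base values are my assumptions, and the plain δt scaling of the learning rate deliberately ignores the RMSProp correction the quoted excerpt refers to.

```python
# Hedged sketch of the "define hyperparameters in physical time" recipe.
import numpy as np


def per_step_hyperparams(dt, gamma_per_sec=0.99, lr_per_sec=1e-3):
    """Per-step hyperparameters derived from quantities defined per unit of physical time."""
    return {
        "gamma": gamma_per_sec ** dt,  # discount accumulated over one step of length dt
        "lr": lr_per_sec * dt,         # plain-SGD prescription: step size proportional to dt
    }


def ou_exploration_noise(n_steps, dt, theta=7.5, sigma=1.0, a_dim=1, seed=0):
    """Euler-Maruyama simulation of an Ornstein-Uhlenbeck process:
    dx = -theta * x * dt + sigma * sqrt(dt) * dW. Its statistics over physical
    time do not depend on dt, keeping exploration comparable across discretizations."""
    rng = np.random.default_rng(seed)
    x = np.zeros(a_dim)
    out = []
    for _ in range(n_steps):
        x = x - theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(a_dim)
        out.append(x.copy())
    return np.array(out)


print(per_step_hyperparams(0.05))  # Pendulum's default time step
print(per_step_hyperparams(0.01))  # finer discretization: smaller per-step quantities
```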