Representations for Stable Off-Policy Reinforcement Learning

Authors: Dibya Ghosh, Marc G. Bellemare

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conclude by empirically demonstrating that these stable representations can be learned using stochastic gradient descent, opening the door to improved techniques for representation learning with deep networks. We complement our theoretical results with an experimental evaluation, focusing on the following questions: How closely do the theoretical conditions we describe match stability requirements in practice? Can stable representations be learned using samples? Can they be learned using neural networks?
Researcher Affiliation | Collaboration | Dibya Ghosh (Google Research) and Marc G. Bellemare (Google Research). Correspondence to: Dibya Ghosh <dibya.ghosh@berkeley.edu>.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide a statement or link for the open-source code of the methodology described.
Open Datasets | Yes | We conduct our study in the four-room domain (Sutton et al., 1999).
Dataset Splits | Yes | We used 50k transitions for training and 10k for evaluation.
Hardware Specification | No | The paper does not specify the exact hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions the Adam optimizer and ReLU networks but does not specify the software libraries or versions required to run the experiments.
Experiment Setup | Yes | Neural Network Experiments. We used a simple feed-forward network with 2 hidden layers with 128 nodes. All layers use ReLU activations, and the training was performed using Adam optimizer with a learning rate of 1e-4. The input to the network is a one-hot encoding of the (state, action) pair.
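
To make the reported setup concrete, the sketch below implements a network matching the quoted description. It is a minimal illustration rather than the authors' code: the choice of PyTorch, the state and action counts for the four-room domain, the MSE regression target, and the placeholder transition buffer are assumptions, while the architecture (two 128-node ReLU hidden layers), the Adam learning rate of 1e-4, the one-hot (state, action) input, and the 50k/10k train/evaluation split come from the excerpts above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed sizes for the four-room domain (not stated in the excerpts):
# 104 reachable grid cells and 4 actions.
NUM_STATES, NUM_ACTIONS = 104, 4
INPUT_DIM = NUM_STATES * NUM_ACTIONS  # one-hot (state, action) input

# Feed-forward network: 2 hidden layers of 128 nodes, ReLU activations.
model = nn.Sequential(
    nn.Linear(INPUT_DIM, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),  # scalar output head (assumed: a value estimate)
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # learning rate from the paper


def one_hot(state: int, action: int) -> torch.Tensor:
    """One-hot encoding of a (state, action) pair."""
    x = torch.zeros(INPUT_DIM)
    x[state * NUM_ACTIONS + action] = 1.0
    return x


def train(transitions, targets, num_train=50_000):
    """Train on the first 50k transitions; the remaining 10k are held out for evaluation.

    `transitions` is a hypothetical list of (state, action) pairs and `targets`
    the corresponding regression targets; the paper does not release such data.
    """
    for (state, action), target in zip(transitions[:num_train], targets[:num_train]):
        pred = model(one_hot(state, action))
        loss = F.mse_loss(pred, torch.tensor([float(target)]))  # assumed MSE objective
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```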