Understanding and Diagnosing Deep Reinforcement Learning
Authors: Ezgi Korkmaz
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through experiments in the Arcade Learning Environment (ALE), we demonstrate the effectiveness of our technique for identifying correlated directions of instability, and for measuring how sample shifts remold the set of sensitive directions in the neural policy landscape. |
| Researcher Affiliation | Academia | University College London (UCL). Correspondence to: Ezgi Korkmaz <ezgikorkmazmail@gmail.com>. |
| Pseudocode | Yes | Algorithm 1 RA-NLD: Robustness Analysis via Non Lipschitz Directions in the Deep Neural Policy Manifold (a generic sensitivity-probing sketch follows the table). |
| Open Source Code | No | No concrete access to source code for the methodology described in this paper is provided. |
| Open Datasets | Yes | Through experiments in the Arcade Learning Environment (ALE), we demonstrate the effectiveness of our technique for identifying correlated directions of instability... (see the ALE state-collection sketch after the table). |
| Dataset Splits | No | The set of states S is collected over 10 episodes. |
| Hardware Specification | No | No specific hardware details (like GPU models, CPU types, or memory) used for running experiments are provided in the paper. |
| Software Dependencies | No | The paper mentions algorithms used (Double Deep Q-Network, State-Adversarial Double Deep Q-Network) but does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | The adversarial perturbation hyperparameters are: for the Carlini & Wagner formulation, κ = 10, learning rate 0.01, and initial constant 10; for the elastic-net regularization formulation, β = 0.0001, learning rate 0.1, and maximum iterations 300; for Nesterov momentum, ϵ = 0.001 and decay factor 0.1 (a hedged Carlini & Wagner loss sketch follows the table). |
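
The paper's pseudocode for RA-NLD is not reproduced on this page, and the sketch below is not that algorithm. It is only a generic illustration of the notion the algorithm's name points at: probing how sharply a Q-network's output changes along a candidate direction in state space, where a large finite-difference ratio at a small step size flags a locally non-smooth (non-Lipschitz-like) direction. All names (`q_network`, `state`, `direction`) are hypothetical stand-ins.

```python
# Generic illustration only -- NOT the paper's RA-NLD algorithm.
# Estimates how sharply a Q-network's output moves along a unit direction
# via a finite-difference ratio; large ratios at small epsilon suggest a
# locally non-smooth direction. All names here are hypothetical.
import torch

def directional_sensitivity(q_network, state, direction, epsilon=1e-3):
    """Finite-difference estimate of max_a |Q(s + eps*d, a) - Q(s, a)| / eps."""
    direction = direction / direction.norm()  # probe along a unit vector
    with torch.no_grad():
        q_base = q_network(state)
        q_shift = q_network(state + epsilon * direction)
    return (q_shift - q_base).abs().max().item() / epsilon
```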
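
On the Open Datasets and Dataset Splits rows: ALE games are openly available, and the quoted split detail is simply a state set gathered over 10 episodes. A minimal sketch of such a collection loop, assuming the Gymnasium ALE bindings (`gymnasium` plus `ale-py`) and a random policy as a placeholder for the paper's trained neural policy:

```python
# Minimal sketch: gather a state set S over 10 episodes in ALE.
# Assumes gymnasium + ale-py are installed; the game choice and the
# random placeholder policy are assumptions, not the paper's setup.
import gymnasium as gym
import ale_py

gym.register_envs(ale_py)  # registers ALE environments (Gymnasium >= 1.0)
env = gym.make("ALE/Pong-v5")

states = []
for _ in range(10):  # "The set of states S is collected over 10 episodes."
    obs, _info = env.reset()
    done = False
    while not done:
        states.append(obs)
        action = env.action_space.sample()  # placeholder for the trained policy
        obs, _reward, terminated, truncated, _info = env.step(action)
        done = terminated or truncated
env.close()
print(f"|S| = {len(states)} states from 10 episodes")
```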
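
On the Experiment Setup row: the Carlini & Wagner formulation minimizes an L2 penalty on the perturbation plus a logit-margin term clipped at the confidence parameter κ. Below is a hedged PyTorch sketch of that objective using the reported values (κ = 10, initial constant 10, learning rate 0.01); `q_network`, `state`, and `target_action` are hypothetical stand-ins, not the paper's code.

```python
# Hedged sketch of the Carlini & Wagner objective with the reported
# hyperparameters (kappa = 10, initial constant c = 10, lr = 0.01).
# All names are hypothetical stand-ins, not the paper's implementation.
import torch

def cw_loss(q_network, state, delta, target_action, kappa=10.0, c=10.0):
    """||delta||_2^2 + c * max(max_{a != t} Q_a - Q_t, -kappa)."""
    q_values = q_network(state + delta)
    target_q = q_values[target_action]
    mask = torch.arange(q_values.numel()) != target_action
    margin = torch.clamp(q_values[mask].max() - target_q, min=-kappa)
    return (delta ** 2).sum() + c * margin

# Usage sketch with the reported learning rate:
# delta = torch.zeros_like(state, requires_grad=True)
# opt = torch.optim.Adam([delta], lr=0.01)
# for _ in range(steps):
#     opt.zero_grad()
#     cw_loss(q_network, state, delta, target_action).backward()
#     opt.step()
```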