Adversarial Robust Deep Reinforcement Learning Requires Redefining Robustness
Authors: Ezgi Korkmaz
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper states: 'We run multiple experiments in the Arcade Learning Environment (ALE).' |
| Researcher Affiliation | Academia | The paper only lists the author name 'Ezgi Korkmaz' without any institutional affiliation or email domain. |
| Pseudocode | Yes | The paper presents 'Algorithm 1: Probing Neural Manifold with High-sensitivity Directions within Perceptual Similarity' (the general idea is sketched below the table). |
| Open Source Code | No | The paper does not provide any statement or link indicating the availability of its own source code. |
| Open Datasets | Yes | The Arcade Learning Environment (Bellemare et al. 2013); a minimal evaluation-loop sketch appears below the table. |
| Dataset Splits | No | The paper states that results are from '10 independent runs' but does not specify explicit training, validation, or test dataset splits. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments. |
| Software Dependencies | No | The paper mentions several algorithms and frameworks, such as 'Deep Q-Network', 'prioritized experience replay', 'SA-DDQN', 'RADIAL', and 'OpenAI Gym', but does not specify version numbers for any of them. |
| Experiment Setup | No | The paper describes the general training methods and natural perturbation parameters (e.g., brightness/contrast values, rotation degrees; illustrated below the table), but does not provide the hyperparameters of the deep reinforcement learning training itself, such as learning rate, batch size, or optimizer settings. |
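
The paper's pseudocode, Algorithm 1, probes the policy's neural manifold along high-sensitivity directions while staying within a perceptual-similarity bound. Since no source code is released, the following is only a minimal sketch of the general idea, assuming a PyTorch Q-network: the gradient-sign step and the L-infinity budget `epsilon` are stand-ins for the paper's actual sensitivity directions and perceptual-similarity constraint.

```python
# Minimal sketch of probing along a high-sensitivity direction, assuming a
# PyTorch Q-network. This is an illustration, not the paper's Algorithm 1:
# the gradient-sign step and the L-infinity clamp are stand-ins for the
# paper's sensitivity directions and perceptual-similarity bound.
import torch

def high_sensitivity_probe(q_network: torch.nn.Module,
                           state: torch.Tensor,
                           epsilon: float = 0.01) -> torch.Tensor:
    """Return a probed state shifted along the direction that most changes max-Q."""
    s = state.clone().detach().requires_grad_(True)
    q_network(s).max().backward()   # sensitivity of the greedy action's Q-value
    direction = s.grad.sign()       # per-pixel highest-sensitivity direction
    probed = (s + epsilon * direction).clamp(0.0, 1.0)
    return probed.detach()
```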
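For context on the experimental setting, ALE experiments are typically driven through the OpenAI Gym Atari interface the paper cites. Below is a minimal evaluation-loop sketch; the game ID, the `policy` callable, and the classic Gym step API are assumptions, since the paper releases no code.

```python
# Minimal ALE evaluation-loop sketch via OpenAI Gym (the interface the paper
# cites). The game ID and the `policy` callable are illustrative assumptions,
# and the classic (pre-0.26) Gym reset/step API is assumed.
import gym

def evaluate(policy, game_id: str = "PongNoFrameskip-v4", episodes: int = 10):
    """Roll out `policy` for several episodes and return per-episode returns."""
    env = gym.make(game_id)
    returns = []
    for _ in range(episodes):
        obs = env.reset()
        done, total = False, 0.0
        while not done:
            action = policy(obs)   # illustrative: maps observation -> action
            obs, reward, done, _info = env.step(action)
            total += reward
        returns.append(total)
    env.close()
    return returns
```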
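Finally, the experiment-setup row notes that the paper reports natural perturbation parameters such as brightness/contrast values and rotation degrees. The sketch below illustrates how such perturbations could be applied to an observation frame; the function name, defaults, and the [0, 1] frame range are assumptions rather than the paper's implementation.

```python
# Hedged sketch of the natural observation perturbations the paper reports
# (brightness/contrast shifts, rotations). Parameter defaults and the [0, 1]
# frame range are assumptions; the paper's exact values and pipeline may differ.
import numpy as np
from scipy.ndimage import rotate

def perturb_frame(frame: np.ndarray,
                  brightness: float = 0.0,
                  contrast: float = 1.0,
                  angle_deg: float = 0.0) -> np.ndarray:
    """Apply contrast scaling, a brightness offset, and a rotation to a frame in [0, 1]."""
    out = np.clip(contrast * frame + brightness, 0.0, 1.0)
    if angle_deg:
        out = rotate(out, angle=angle_deg, reshape=False, mode="nearest")
    return np.clip(out, 0.0, 1.0)
```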