Risk-Averse Model Uncertainty for Distributionally Robust Safe Reinforcement Learning
Authors: James Queeney, Mouhacine Benosman
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "In experiments on continuous control tasks with safety constraints, we demonstrate that our framework produces robust performance and safety at deployment time across a range of perturbed test environments." (An illustrative evaluation sketch follows the table.) |
| Researcher Affiliation | Collaboration | James Queeney, Division of Systems Engineering, Boston University (jqueeney@bu.edu); Mouhacine Benosman, Mitsubishi Electric Research Laboratories (benosman@merl.com) |
| Pseudocode | Yes | Algorithm 1 Risk-Averse Model Uncertainty for Safe RL |
| Open Source Code | Yes | Code is publicly available at https://github.com/jqueeney/robust-safe-rl. |
| Open Datasets | Yes | we conduct experiments on 5 continuous control tasks with safety constraints from the Real-World RL Suite [18, 19] |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) for training, validation, and testing. |
| Hardware Specification | Yes | All experiments were run on a Linux cluster with 2.9 GHz Intel Gold processors and NVIDIA A40 and A100 GPUs. |
| Software Dependencies | No | The paper names implementation details such as ELU activations and a multivariate Gaussian policy, but does not specify the versions of the underlying deep learning frameworks or libraries used. (See the policy-network sketch below the table.) |
| Experiment Setup | Yes | "Table 5: Network architectures and algorithm hyperparameters used in experiments" (A hypothetical configuration sketch follows the table.) |
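
The paper's experimental claim above is that a trained policy is evaluated on a range of perturbed test environments, recording both return and safety cost. The sketch below illustrates that protocol only; `make_env`, its `perturbation` argument, and the gym-style `reset`/`step` interface with a per-step `"cost"` entry in `info` are assumptions for illustration, not the paper's actual code or the Real-World RL Suite API.

```python
# Illustrative deployment-time evaluation across perturbed test environments.
# `policy` maps an observation to an action; `make_env` is a hypothetical
# constructor taking a perturbation magnitude.
import numpy as np


def evaluate(policy, make_env, perturbations, episodes=10, horizon=1000):
    """Mean episode return and accumulated safety cost per perturbation level."""
    results = {}
    for p in perturbations:
        env = make_env(perturbation=p)  # hypothetical constructor
        returns, costs = [], []
        for _ in range(episodes):
            obs = env.reset()
            ep_ret, ep_cost, done, t = 0.0, 0.0, False, 0
            while not done and t < horizon:
                action = policy(obs)
                obs, reward, done, info = env.step(action)
                ep_ret += reward
                ep_cost += info.get("cost", 0.0)  # per-step constraint cost
                t += 1
            returns.append(ep_ret)
            costs.append(ep_cost)
        results[p] = (np.mean(returns), np.mean(costs))
    return results


# Example: sweep the perturbation magnitude from nominal (0.0) to large (1.0).
# results = evaluate(policy, make_env, perturbations=np.linspace(0.0, 1.0, 5))
```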
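The Software Dependencies row notes that the paper names ELU activations and a multivariate Gaussian policy without pinning library versions. The following is a minimal sketch of how those two components are commonly realized in PyTorch; the choice of PyTorch, the hidden layer sizes, the diagonal covariance, and the state-independent log standard deviation are all assumptions, not details taken from the paper.

```python
# Minimal sketch: diagonal Gaussian policy with ELU hidden activations (PyTorch).
import torch
import torch.nn as nn
from torch.distributions import Normal, Independent


class GaussianPolicy(nn.Module):
    """State-conditioned diagonal Gaussian policy with ELU hidden activations."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.mean_net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ELU(),
            nn.Linear(hidden, act_dim),
        )
        # State-independent log standard deviation (a common parameterization).
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def forward(self, obs: torch.Tensor) -> Independent:
        mean = self.mean_net(obs)
        std = self.log_std.exp().expand_as(mean)
        # Independent(Normal, 1) yields a multivariate Gaussian with diagonal covariance.
        return Independent(Normal(mean, std), 1)


# Example: sample an action and its log-probability for one observation.
policy = GaussianPolicy(obs_dim=8, act_dim=2)
dist = policy(torch.randn(1, 8))
action = dist.sample()
log_prob = dist.log_prob(action)
```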
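The Experiment Setup row points to the paper's Table 5 of network architectures and hyperparameters. The sketch below shows one way such settings could be collected into a single config for reproducibility; every value and key name is a placeholder for illustration, not a number or name reported in the paper.

```python
# Hypothetical experiment configuration; all values are placeholders,
# not the settings from the paper's Table 5.
config = {
    "env": {
        "suite": "Real-World RL Suite",   # tasks with safety constraints
        "task": "placeholder_task",
    },
    "networks": {
        "policy_hidden_sizes": (64, 64),  # placeholder
        "critic_hidden_sizes": (64, 64),  # placeholder
        "activation": "elu",
    },
    "training": {
        "discount": 0.99,          # placeholder
        "learning_rate": 3e-4,     # placeholder
        "batch_size": 256,         # placeholder
        "total_steps": 1_000_000,  # placeholder
    },
    "seed": 0,
}
```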