Dynamic Safe Interruptibility for Decentralized Multi-Agent Reinforcement Learning
Authors: El Mahdi El Mhamdi, Rachid Guerraoui, Hadrien Hendrikx, Alexandre Maurer
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | The paper introduces dynamic safe interruptibility, an alternative definition better suited to decentralized learning problems, and studies this notion in two learning frameworks: joint action learners and independent learners. It gives realistic sufficient conditions on the learning algorithm to enable dynamic safe interruptibility in the case of joint action learners, yet shows that these conditions are not sufficient for independent learners. It then shows that if agents can detect interruptions, it is possible to prune the observations to ensure dynamic safe interruptibility even for independent learners (an illustrative sketch of this pruning idea follows the table). |
| Researcher Affiliation | Academia | El Mahdi El Mhamdi (EPFL, Switzerland, elmahdi.elmhamdi@epfl.ch); Rachid Guerraoui (EPFL, Switzerland, rachid.guerraoui@epfl.ch); Hadrien Hendrikx (Ecole Polytechnique, France, hadrien.hendrikx@gmail.com); Alexandre Maurer (EPFL, Switzerland, alexandre.maurer@epfl.ch) |
| Pseudocode | No | The paper contains mathematical definitions, theorems, and proofs, but it does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that source code for the described methodology is publicly available. |
| Open Datasets | No | The paper focuses on theoretical definitions and proofs within multi-agent reinforcement learning frameworks, using conceptual examples (like self-driving cars and matrix games) rather than actual datasets. Therefore, it does not mention or provide access to any specific training dataset. |
| Dataset Splits | No | The paper is theoretical and does not involve empirical experiments with data. Therefore, there is no mention of dataset splits for training, validation, or testing. |
| Hardware Specification | No | This is a theoretical paper and does not describe any computational experiments. Therefore, no hardware specifications are mentioned. |
| Software Dependencies | No | This paper is theoretical and discusses concepts, definitions, and mathematical proofs within reinforcement learning. It does not describe any specific software implementations or dependencies with version numbers. |
| Experiment Setup | No | This is a theoretical paper that defines concepts and provides proofs; it does not describe an empirical experimental setup, hyperparameters, or training configurations. |
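
The paper itself contains no pseudocode or code, so the following is only a minimal sketch of the observation-pruning idea mentioned in the Research Type row, not the authors' algorithm. It assumes a standard tabular Q-learning setup for an independent learner and a hypothetical `interrupted` flag that the environment exposes when any agent is interrupted; transitions gathered during interruptions are simply dropped so they cannot bias the learned Q-values.

```python
import random
from collections import defaultdict


class InterruptionAwareQLearner:
    """Illustrative independent Q-learner that prunes interrupted observations."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)  # Q[(state, action)] -> estimated value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Epsilon-greedy exploration over this agent's own actions only.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state, interrupted):
        # Pruning step: if an interruption was detected during this transition,
        # discard the sample instead of learning from it.
        if interrupted:
            return
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
```

The `interrupted` flag and the exact pruning rule are assumptions for illustration; the paper states its conditions abstractly as properties of the learning update rather than as an implementation.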