Robust Multi-Agent Reinforcement Learning via Minimax Deep Deterministic Policy Gradient
Authors: Shihui Li, Yi Wu, Xinyue Cui, Honghua Dong, Fei Fang, Stuart Russell
AAAI 2019, pp. 4213–4220 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically evaluate our M3DDPG algorithm in four mixed cooperative and competitive multi-agent environments, and the agents trained by our method significantly outperform existing baselines. |
| Researcher Affiliation | Academia | Carnegie Mellon University, {shihuil,feifang}@cmu.edu; University of California, Berkeley, {jxwuyi,russell}@eecs.berkeley.edu; Tsinghua University, {cuixy14,dhh14}@mails.tsinghua.edu.cn |
| Pseudocode | Yes | Algorithm 1: Minimax Multi-Agent Deep Deterministic Policy Gradient (M3DDPG) for N agents |
| Open Source Code | No | The paper does not include an unambiguous statement about releasing source code for the described methodology or a direct link to a code repository. |
| Open Datasets | No | The paper mentions using 'particle-world environments' adopted from a cited paper (Lowe et al. 2017), but does not provide concrete access information (link, DOI, repository, or explicit dataset citation) for a publicly available or open dataset. |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning into training, validation, or test sets. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment. |
| Experiment Setup | Yes | The perturbation step size α is selected by grid search over {0.1, 0.01, 0.001} (see the hedged sketch after this table). |
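The paper's central algorithmic change (Algorithm 1) is a one-step worst-case perturbation of the other agents' actions inside a MADDPG-style centralized critic update, scaled by the step size α that the authors grid-search over 0.1, 0.01, and 0.001. Since no source code is referenced, the following is only a minimal PyTorch sketch of that perturbation step under assumed interfaces: the names `critic`, `obs`, `actions`, `agent_idx`, and `alpha` are illustrative and not taken from the paper or any released implementation.

```python
# Hedged sketch (not the authors' code): one-step adversarial perturbation of the
# other agents' actions before evaluating agent i's centralized critic, in the
# spirit of M3DDPG's local minimax approximation.
import torch

def perturb_other_actions(critic, obs, actions, agent_idx, alpha=0.01):
    """Return the joint action list where every agent j != agent_idx takes a single
    gradient step that decreases agent `agent_idx`'s Q-value (worst case for agent i).

    critic    -- assumed centralized Q-network: (obs, concatenated joint actions) -> Q per sample
    obs       -- joint observation tensor, shape (batch, obs_dim)
    actions   -- list of per-agent action tensors, each of shape (batch, act_dim)
    agent_idx -- index i of the agent whose critic/actor update uses the perturbed actions
    alpha     -- perturbation step size (the paper grid-searches 0.1, 0.01, 0.001)
    """
    # Only the other agents' actions need gradients for the perturbation.
    actions = [a.detach().requires_grad_(j != agent_idx) for j, a in enumerate(actions)]
    q = critic(obs, torch.cat(actions, dim=-1)).sum()
    others = [a for j, a in enumerate(actions) if j != agent_idx]
    grads = torch.autograd.grad(q, others)

    perturbed, grad_iter = [], iter(grads)
    for j, a in enumerate(actions):
        if j == agent_idx:
            perturbed.append(a.detach())
        else:
            # Step against the gradient of Q_i: the locally worst case for agent i.
            perturbed.append((a - alpha * next(grad_iter)).detach())
    return perturbed
```

In a full training loop this perturbation would be applied to the sampled (or target-policy) actions before computing the critic target and the policy gradient, which is how the minimax objective is approximated without solving an inner optimization exactly; the surrounding replay-buffer and target-network machinery is the standard MADDPG setup and is omitted here.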