Coordinated Proximal Policy Optimization
Authors: Zifan Wu, Chao Yu, Deheng Ye, Junge Zhang, Haiyin Piao, Hankz Hankui Zhuo
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We present Coordinated Proximal Policy Optimization (CoPPO), an algorithm that extends the original Proximal Policy Optimization (PPO) to the multi-agent setting. [...] Finally, we demonstrate that CoPPO outperforms several strong baselines and is competitive with the latest multi-agent PPO method (i.e., MAPPO) under typical multi-agent settings, including cooperative matrix games and the StarCraft II micromanagement tasks. ... In this section, we evaluate CoPPO on a modified matrix penalty game and the StarCraft Multi-Agent Challenge (SMAC) (Samvelyan et al., 2019). |
| Researcher Affiliation | Collaboration | Zifan Wu, School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China, wuzf5@mail2.sysu.edu.cn; Chao Yu, School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China, yuchao3@mail.sysu.edu.cn; Deheng Ye, Tencent AI Lab, Shenzhen, China, dericye@tencent.com; Junge Zhang, Institute of Automation, Chinese Academy of Sciences, Beijing, China, jgzhang@nlpr.ia.ac.cn; Haiyin Piao, School of Electronic and Information, Northwestern Polytechnical University, Xi'an, China, haiyinpiao@mail.nwpu.edu.cn; Hankz Hankui Zhuo, School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China, zhuohank@mail.sysu.edu.cn |
| Pseudocode | Yes | The overall CoPPO algorithm with the double clipping trick is shown in Appendix B. |
| Open Source Code | No | The paper does not contain any statement about releasing source code or provide a link to a code repository. |
| Open Datasets | Yes | In this section, we evaluate CoPPO on a modified matrix penalty game and the StarCraft Multi-Agent Challenge (SMAC) (Samvelyan et al., 2019). |
| Dataset Splits | Yes | The win rates are tested over 32 evaluation episodes after each training iteration. ... The hyperparameters and other implementation details are described in Appendix C.1. ... The hyperparameter settings and other implementation details are presented in Appendix C.2. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running the experiments. |
| Software Dependencies | No | The paper mentions that "The implementation of these baselines follows the original versions" and that "our implementation is built on the one of MAPPO", with details in Appendix C.2. However, it does not specify any software or library names with their version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | The hyperparameters and other implementation details are described in Appendix C.1. ... The hyperparameter settings and other implementation details are presented in Appendix C.2. |
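The table notes that CoPPO extends PPO's clipped objective to the multi-agent setting (with a double clipping trick detailed in the paper's Appendix B). As background only, here is a minimal sketch of the standard single-agent PPO clipped surrogate that CoPPO builds on; the function name and NumPy formulation are illustrative, not taken from the paper.

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Per-sample PPO clipped surrogate: min(r*A, clip(r, 1-eps, 1+eps)*A).

    `ratio` is the probability ratio pi_new(a|s) / pi_old(a|s) and
    `advantage` the estimated advantage; `eps` is the clip range.
    This is the base objective that multi-agent variants such as
    CoPPO extend, not the paper's multi-agent formulation itself.
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Taking the minimum removes the incentive to push the ratio
    # outside the [1-eps, 1+eps] trust region.
    return np.minimum(unclipped, clipped)

# A ratio above 1+eps is clipped when the advantage is positive...
print(ppo_clip_objective(1.5, 1.0))   # -> 1.2
# ...and a ratio below 1-eps is clipped when the advantage is negative.
print(ppo_clip_objective(0.5, -1.0))  # -> -0.8
```

In practice the objective is averaged over a minibatch and maximized by gradient ascent; the clipping is what keeps each policy update close to the data-collecting policy.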