Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
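The validation described in the notice — comparing LLM-produced labels against a manually labeled dataset — can be sketched as a simple per-variable agreement check. This is a hypothetical illustration only: the function name and toy data below are not from the actual pipeline in [1].

```python
# Hypothetical sketch of validating LLM labels for one reproducibility
# variable (e.g. "Open Source Code": Yes/No) against manual ground truth.
# Function name and data are illustrative, not the pipeline from [1].

def agreement_metrics(llm_labels, manual_labels):
    """Fraction of papers where the LLM label matches the manual label."""
    assert len(llm_labels) == len(manual_labels)
    correct = sum(a == b for a, b in zip(llm_labels, manual_labels))
    return correct / len(manual_labels)

# Toy example: five papers labeled for one variable.
llm    = ["Yes", "No", "Yes", "Yes", "No"]
manual = ["Yes", "No", "No",  "Yes", "No"]
print(agreement_metrics(llm, manual))  # 0.8
```

In practice such a check would be run per variable, since accuracy can differ sharply between easy variables (e.g. Research Type) and subtler ones (e.g. Dataset Splits).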

Coordinated Proximal Policy Optimization

Authors: Zifan Wu, Chao Yu, Deheng Ye, Junge Zhang, Haiyin Piao, Hankz Hankui Zhuo

NeurIPS 2021

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We present Coordinated Proximal Policy Optimization (CoPPO), an algorithm that extends the original Proximal Policy Optimization (PPO) to the multi-agent setting. [...] Finally, we demonstrate that CoPPO outperforms several strong baselines and is competitive with the latest multi-agent PPO method (i.e. MAPPO) under typical multi-agent settings, including cooperative matrix games and the StarCraft II micromanagement tasks. ... In this section, we evaluate CoPPO on a modified matrix penalty game and the StarCraft Multi-Agent Challenge (SMAC) (Samvelyan et al., 2019)." |
| Researcher Affiliation | Collaboration | Zifan Wu, School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China (EMAIL); Chao Yu, School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China (EMAIL); Deheng Ye, Tencent AI Lab, Shenzhen, China (EMAIL); Junge Zhang, Institute of Automation, Chinese Academy of Sciences, Beijing, China (EMAIL); Haiyin Piao, School of Electronic and Information, Northwestern Polytechnical University, Xi'an, China (EMAIL); Hankz Hankui Zhuo, School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China (EMAIL) |
| Pseudocode | Yes | "The overall CoPPO algorithm with the double clipping trick is shown in Appendix B." |
| Open Source Code | No | The paper does not contain any statement about releasing source code or provide a link to a code repository. |
| Open Datasets | Yes | "In this section, we evaluate CoPPO on a modified matrix penalty game and the StarCraft Multi-Agent Challenge (SMAC) (Samvelyan et al., 2019)." |
| Dataset Splits | Yes | "The win rates are tested over 32 evaluation episodes after each training iteration. ... The hyperparameters and other implementation details are described in Appendix C.1. ... The hyperparameter settings and other implementation details are presented in Appendix C.2." |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running the experiments. |
| Software Dependencies | No | The paper mentions that "The implementation of these baselines follows the original versions" and that "our implementation is built on the one of MAPPO", with details in Appendix C.2. However, it does not specify any software or library names with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | "The hyperparameters and other implementation details are described in Appendix C.1. ... The hyperparameter settings and other implementation details are presented in Appendix C.2." |
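For background on the method the table describes: CoPPO extends the standard PPO clipped surrogate objective (the paper's own double-clipping variant is in its Appendix B and is not reproduced here). A minimal sketch of the standard single-agent PPO clip, with illustrative values for epsilon and the toy inputs:

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Standard PPO clipped surrogate (elementwise):
    min(r * A, clip(r, 1 - eps, 1 + eps) * A).

    This is the vanilla single-agent objective, not CoPPO's
    double-clipping trick; eps=0.2 is a common illustrative choice.
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.minimum(unclipped, clipped)

# A probability ratio above 1 + eps with a positive advantage is
# clipped, capping the incentive to move the policy too far.
print(ppo_clip_objective(np.array([1.5]), np.array([1.0])))  # [1.2]
```

The `min` keeps the objective pessimistic: for positive advantages the clipped term caps the gain, while for negative advantages the unclipped term still penalizes large ratios.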