Efficient Multi-agent Communication via Self-supervised Information Aggregation
Authors: Cong Guan, Feng Chen, Lei Yuan, Chenghe Wang, Hao Yin, Zongzhang Zhang, Yang Yu
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results demonstrate that our method significantly outperforms strong baselines on multiple cooperative MARL tasks for various task settings. |
| Researcher Affiliation | Collaboration | Cong Guan1, Feng Chen1, Lei Yuan1,2, Chenghe Wang1, Hao Yin1, Zongzhang Zhang1, Yang Yu1,2 — 1 National Key Laboratory for Novel Software Technology, Nanjing University; 2 Polixir Technologies |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The codes are available at https://github.com/chenf-ai/MASIA |
| Open Datasets | Yes | To evaluate our method, we conduct extensive experiments on various cooperative multi-agent benchmarks, including Hallway [48], Level-Based Foraging [34], Traffic Junction [4], and two maps from the StarCraft Multi-Agent Challenge (SMAC) [48]. |
| Dataset Splits | No | The paper evaluates on benchmarks and reports "Median Test Win Rate %", implying a test set, but does not explicitly detail training, validation, and test dataset splits (e.g., specific percentages or sample counts). |
| Hardware Specification | No | The paper states that hardware specifications are presented in Appendix A.3, but this appendix is not provided in the given text. The main body of the paper does not specify the exact hardware used (e.g., specific GPU models, CPU models, or cloud instances). |
| Software Dependencies | Yes | Our experiments are all based on the PyMARL framework, which uses SC2.4.6.2.6923. |
| Experiment Setup | Yes | Details about benchmarks, network architecture, and hyper-parameter choices of our method are presented in Appendices A.1 and A.3, respectively. |