Learning Efficient and Robust Multi-Agent Communication via Graph Information Bottleneck

Authors: Shifei Ding, Wei Du, Ling Ding, Lili Guo, Jian Zhang

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical results demonstrate that MAGI is more robust and efficient than state-of-the-art GNN-based MARL methods. In this section, we conduct various experiments on two complicated environments, the StarCraft II Multi-Agent Challenge (SMAC) (Samvelyan et al. 2019) and MAgent (Zheng et al. 2018), to answer: Q1: Are MAGI and other GNN-based MACRL methods vulnerable to adversarial attacks and noise? Q2: Can the MAGI module improve the robustness of communication learning under adversarial attacks and noise? Q3: Can MAGI scale to large-scale multi-agent settings? Q4: Which components contribute to the performance of MAGI? Q5: How do λ and η influence the performance of MAGI? Q6: Can MAGI perform well under more complicated adversarial attack methods?
Researcher Affiliation | Academia | Shifei Ding (1,2), Wei Du (1,*), Ling Ding (3,*), Lili Guo (1,2), Jian Zhang (1,2). 1 School of Computer Science and Technology, China University of Mining and Technology, Xuzhou 221116, China; 2 Mine Digitization Engineering Research Center of the Ministry of Education of the People's Republic of China, Xuzhou 221116, China; 3 College of Intelligence and Computing, Tianjin University, Tianjin 300350, China. dingsf@cumt.edu.cn, 1394471165@qq.com, dltjdx2022@tju.edu.cn, liliguo@cumt.edu.cn, zhangjian10231209@cumt.edu.cn
Pseudocode | Yes | Algorithm 1: MAGI
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | In this section, we conduct various experiments on two complicated environments, the StarCraft II Multi-Agent Challenge (SMAC) (Samvelyan et al. 2019) and MAgent (Zheng et al. 2018).
Dataset Splits | No | The paper mentions 'training' and 'testing' phases but does not provide specific train/validation/test split percentages, absolute sample counts for each split, or references to predefined splits for reproduction.
Hardware Specification | No | The paper does not specify the hardware used to run the experiments (e.g., GPU/CPU models, memory, or cloud instance types).
Software Dependencies | No | The paper does not list any software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions, or other libraries).
Experiment Setup | Yes | We set λ = 0.1 based on experimental results, which can be found in the Ablations section. We inject Gaussian noise η · φ · ε for each agent feature φ, where ε ~ N(0, 1) and η denotes the feature noise ratio. We evaluate the robustness of GNN-based MACRL methods with noise ratios η ∈ {0.5, 1, 1.5}. As shown in Table 3, across λ ∈ {0.05, 0.10, 0.15}, MAGI achieves the best performance with λ = 0.15 in the MMM3 scenario and with λ = 0.10 in the other scenarios. Thus, for consistency, we set λ = 0.10 for all scenarios.
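To make the quoted noise model concrete, below is a minimal sketch of the feature perturbation, assuming φ denotes the agent's own feature vector (i.e., multiplicative noise scaled by the feature); the function name, shapes, and NumPy setup are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def inject_feature_noise(features: np.ndarray, eta: float,
                         rng: np.random.Generator | None = None) -> np.ndarray:
    """Perturb per-agent features as feature + eta * phi * eps, with eps ~ N(0, 1).

    Assumption: phi is the feature itself. eta is the feature noise ratio,
    which the paper evaluates at eta in {0.5, 1, 1.5}.
    """
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal(features.shape)  # i.i.d. standard Gaussian per entry
    return features + eta * features * eps

# Usage: 8 agents with 32-dimensional features, noise ratio eta = 1.0
obs = np.random.default_rng(0).standard_normal((8, 32))
noisy_obs = inject_feature_noise(obs, eta=1.0)
```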