Contrastive Identity-Aware Learning for Multi-Agent Value Decomposition

Authors: Shunyu Liu, Yihe Zhou, Jie Song, Tongya Zheng, Kaixuan Chen, Tongtian Zhu, Zunlei Feng, Mingli Song

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on the SMAC benchmarks and across different VD backbones demonstrate that the proposed method yields results superior to the state-of-the-art counterparts.
Researcher Affiliation | Academia | Zhejiang University; liushunyu@zju.edu.cn, yihezhou@zju.edu.cn, sjie@zju.edu.cn, tyzheng@zju.edu.cn, chenkx@zju.edu.cn, raiden@zju.edu.cn, zunleifeng@zju.edu.cn, brooksong@zju.edu.cn
Pseudocode | Yes | To make the proposed CIA clearer to readers, we provide the pseudocode in Appendix A.
Open Source Code | Yes | Our code is available at https://github.com/liushunyu/CIA.
Open Datasets | Yes | We conduct experiments on the didactic game and the StarCraft II micromanagement challenge. ... The StarCraft Multi-Agent Challenge (SMAC) (Samvelyan et al. 2019) has become a commonly used benchmark for evaluating state-of-the-art MARL methods.
Dataset Splits | No | The paper uses the SMAC benchmark but does not explicitly provide training, validation, and test dataset splits (e.g., percentages or counts for each split).
Hardware Specification | No | The paper does not provide specific details about the hardware used for the experiments, such as CPU or GPU models.
Software Dependencies | No | The paper mentions the Python MARL framework (PyMARL) but does not specify its version or any other software dependencies with version numbers, apart from the game environment version SC2.4.10.
Experiment Setup | Yes | The detailed hyperparameters are given in Appendix B, where the common training parameters across different methods are consistent.
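For readers unfamiliar with the benchmark referenced above, SMAC scenarios are exposed through the smac Python package via the StarCraft2Env class. The sketch below is a minimal random-policy rollout adapted from the standard SMAC usage pattern; the 3s5z map is chosen purely for illustration and is not a claim about the exact scenarios or hyperparameters evaluated in the paper.

```python
import numpy as np
from smac.env import StarCraft2Env

# Load an illustrative SMAC map (the paper's exact scenario list is in its experiments section).
env = StarCraft2Env(map_name="3s5z")
env_info = env.get_env_info()
n_agents = env_info["n_agents"]

env.reset()
terminated = False
episode_reward = 0.0
while not terminated:
    obs = env.get_obs()      # per-agent local observations
    state = env.get_state()  # global state used by centralized value decomposition
    actions = []
    for agent_id in range(n_agents):
        # Sample uniformly from the currently available actions for each agent.
        avail_actions = env.get_avail_agent_actions(agent_id)
        avail_ids = np.nonzero(avail_actions)[0]
        actions.append(np.random.choice(avail_ids))
    reward, terminated, _ = env.step(actions)
    episode_reward += reward
env.close()
print("episode reward:", episode_reward)
```

Training runs in the linked repository follow the PyMARL command-line convention (e.g., python src/main.py --config=<algorithm> --env-config=sc2 with env_args.map_name=<map>); the exact configuration names and the hyperparameters should be taken from the repository README and the paper's Appendix B, respectively.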