Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

GTDE: Grouped Training with Decentralized Execution for Multi-agent Actor-Critic

Authors: Mengxian Li, Qi Wang, Yongjun Xu

AAAI 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical results show that in a cooperative environment with 495 agents, GTDE increased the total reward by an average of 382% compared to the baseline. In a competitive environment with 64 agents, GTDE achieved a 100% win rate against the baseline. ... We evaluate GTDE on the StarCraft Multi-Agent Challenge V2 (SMACv2) (Ellis et al. 2022) with 20 agents, the Battle scenario with 64 agents, and the Gather scenario with 495 agents (Zheng et al. 2018). Experiments have demonstrated that with an increasing number of agents, GTDE outperforms both DTDE and CTDE.
Researcher Affiliation | Academia | Mengxian Li (1,2), Qi Wang (1,2)*, Yongjun Xu (1,2); (1) Institute of Computing Technology, Chinese Academy of Sciences; (2) University of Chinese Academy of Sciences. EMAIL, EMAIL
Pseudocode | No | The paper describes the methodology conceptually and mathematically, but does not include a section or figure labeled "Pseudocode" or "Algorithm" with structured, code-like steps.
Open Source Code | Yes | Code: https://github.com/lemonsinx/GTDE
Open Datasets | Yes | We evaluate GTDE on the StarCraft Multi-Agent Challenge V2 (SMACv2) (Ellis et al. 2022) with 20 agents, the Battle scenario with 64 agents, and the Gather scenario with 495 agents (Zheng et al. 2018). ... MAgent. https://github.com/Farama-Foundation/MAgent. GitHub repository.
Dataset Splits | No | The paper specifies training steps and testing rounds for each environment but does not describe static dataset splits (e.g., percentages or sample counts for training, validation, and test sets) as supervised learning typically would. For instance, it states: "For SMACv2, Battle, and Gather, we trained 10M, 2000, and 1200 environment steps, respectively." and "we conducted 200 rounds of testing on all models", but these refer to interaction durations or evaluation runs rather than predefined dataset partitions.
Hardware Specification | Yes | All algorithms were trained using an NVIDIA GeForce RTX 4090 GPU.
Software Dependencies | No | The paper does not explicitly list software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup | Yes | For a fair comparison, all basic hyperparameters are set to be consistent. For SMACv2, Battle, and Gather, we trained 10M, 2000, and 1200 environment steps, respectively. For each scenario, we use 5 different random seeds for all algorithms. Additional detailed hyperparameters are addressed in Appendix B.