SMIX(λ): Enhancing Centralized Value Functions for Cooperative Multi-Agent Reinforcement Learning
Authors: Chao Wen, Xinghu Yao, Yuhui Wang, Xiaoyang Tan
AAAI 2020, pp. 7301-7308
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on the StarCraft Multi-Agent Challenge (SMAC) benchmark show that the proposed SMIX(λ) algorithm outperforms several state-of-the-art MARL methods by a large margin, and that it can be used as a general tool to improve the overall performance of a CTDE-type method by enhancing the evaluation quality of its CVF. |
| Researcher Affiliation | Academia | College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics; MIIT Key Laboratory of Pattern Analysis and Machine Intelligence; Collaborative Innovation Center of Novel Software Technology and Industrialization; Nanjing 211106, China. {chaowen, xinghuyao, y.wang, x.tan}@nuaa.edu.cn |
| Pseudocode | No | The provided text does not contain structured pseudocode or algorithm blocks. It states that 'The general training procedure for SMIX(λ) is provided in the Supplementary', but that supplementary content is not included in the provided paper text. |
| Open Source Code | Yes | We open-source our code at: https://github.com/chaovven/SMIX. |
| Open Datasets | Yes | We evaluate the algorithms on the StarCraft Multi-Agent Challenge (SMAC) (Samvelyan et al. 2019) benchmark, which provides a set of rich and challenging cooperative scenarios. Refer to Samvelyan et al. (2019) for full details of the environment. |
| Dataset Splits | No | The paper mentions a 'training phase' and evaluation on a 'test win rate' after training, but does not explicitly provide details about a separate validation set or split for hyperparameter tuning or early stopping during training. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models, memory, or cloud instance types used for running the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions, or other libraries). |
| Experiment Setup | Yes | SMIX(λ) adopts the same architecture as QMIX (Rashid et al. 2018), except that SMIX(λ) performs the centralized value function estimation with λ-return (λ = 0.8) calculated from a batch of 32 episodes. The batch is sampled uniformly from a replay buffer that stores the most recent 1500 episodes. We run 4 episodes simultaneously. |
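
The experiment-setup details quoted above can be made concrete with a short sketch. The snippet below is a minimal, hypothetical illustration (not the authors' released implementation; see the GitHub repository linked above) of how λ-return targets with λ = 0.8 might be computed over a replayed batch of 32 episodes. All array names, shapes, and the discount factor γ = 0.99 are assumptions made for illustration only.

```python
# Minimal sketch (not the authors' released code): backward computation of
# lambda-return targets for a batch of replayed episodes, using the settings
# quoted above (lambda = 0.8, batch of 32 episodes). Array names, shapes, and
# gamma = 0.99 are illustrative assumptions.
import numpy as np

def lambda_return_targets(rewards, q_next, terminated, gamma=0.99, lam=0.8):
    """All inputs have shape [batch, T].

    q_next[:, t] is the centralized (mixed) target Q-value for step t + 1.
    The backward recursion
        G_t = r_t + gamma * ((1 - lam) * q_next_t + lam * G_{t+1})
    yields the usual exponentially weighted mixture of n-step returns.
    """
    batch, T = rewards.shape
    targets = np.zeros((batch, T))
    next_return = q_next[:, -1]  # bootstrap from the final value estimate
    for t in reversed(range(T)):
        mask = 1.0 - terminated[:, t]  # no bootstrapping past terminal steps
        next_return = rewards[:, t] + gamma * mask * (
            (1.0 - lam) * q_next[:, t] + lam * next_return
        )
        targets[:, t] = next_return
    return targets

# Toy usage with the reported batch size of 32 episodes.
rng = np.random.default_rng(0)
B, T = 32, 60
y = lambda_return_targets(rng.normal(size=(B, T)),
                          rng.normal(size=(B, T)),
                          np.zeros((B, T)))
print(y.shape)  # (32, 60)
```

In the setup the paper describes, targets of this form would take the place of the one-step TD target used by QMIX's mixing network; the recursion shown is the standard backward formulation of the λ-return, G_t^λ = r_t + γ[(1 − λ)Q(s_{t+1}) + λG_{t+1}^λ].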