Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Asynchronous Credit Assignment for Multi-Agent Reinforcement Learning
Authors: Yongheng Liang, Hejun Wu, Haitao Wang, Hao Cai
IJCAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that our framework consistently outperforms state-of-the-art MARL methods on challenging tasks while providing improved interpretability for asynchronous cooperation. Extensive experimental results show that MVD achieves considerable performance improvements in complex scenarios and provides easy-to-understand interaction processes among asynchronous decisions. We run experiments across multiple benchmarks, focusing on three key aspects of our framework in asynchronous cooperation: the necessity of additional computation, effectiveness against baselines, and generalization in complex tasks. |
| Researcher Affiliation | Academia | (1) School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China; (2) Guangdong Key Laboratory of Big Data Analysis and Processing, Guangzhou, Guangdong, China; (3) College of Mathematics and Computer Science, Shantou University, Shantou, China. EMAIL, EMAIL, EMAIL |
| Pseudocode | Yes | The pseudo-code for MVD is in Appendix D. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing the source code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We evaluate MVD on a modified asynchronous variant of the classic MARL benchmark SMAC [Samvelyan et al., 2019], along with two prominent asynchronous benchmarks: Overcooked [Wang et al., 2020b] and POAC [Yao et al., 2021]. |
| Dataset Splits | No | The paper mentions evaluating on benchmarks like SMAC, Overcooked, and POAC, but does not provide specific details on how the datasets were split into training, validation, or test sets within the main text. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with their version numbers required to replicate the experiments. |
| Experiment Setup | No | The paper refers to "Details of benchmarks, baselines, and our MVD are provided in Appendix E" but the main text itself does not contain specific experimental setup details such as hyperparameter values, batch sizes, learning rates, or optimizer settings. |