DeCOM: Decomposed Policy for Constrained Cooperative Multi-Agent Reinforcement Learning

Authors: Zhaoxing Yang, Haiming Jin, Rong Ding, Haoyi You, Guiyun Fan, Xinbing Wang, Chenghu Zhou

AAAI 2023

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Finally, we conduct extensive experiments to show the effectiveness of DeCOM with various types of costs in both moderate-scale and large-scale (with 500 agents) environments that originate from real-world applications." |
| Researcher Affiliation | Academia | Shanghai Jiao Tong University, Shanghai, China. {yiannis, jinhaiming, dingrong, yuri-you, fgy726, xwang8}@sjtu.edu.cn, zhouchsjtu@gmail.com |
| Pseudocode | Yes | "Algorithm 1: Training Algorithm of DeCOM" |
| Open Source Code | No | The paper does not provide an explicit statement of code release or a link to a source code repository for the described methodology. |
| Open Datasets | Yes | "CLFM is built with a public city-scale dataset that contains approximately 1 million orders from November 1 to November 30, 2016 in Chengdu, China." Data source: DiDi Chuxing GAIA Open Dataset Initiative (https://gaia.didichuxing.com). |
| Dataset Splits | No | The paper mentions using datasets for experiments but does not provide specific training/validation/test splits (percentages, counts, or references to predefined splits). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory) used to run its experiments. |
| Software Dependencies | No | The paper mentions software components such as "MADDPG critics" and "Mean-Field critics" but does not provide version numbers for any software dependencies, libraries, or solvers. |
| Experiment Setup | Yes | "We set λ in CTC-safe, CDSN and CLFM as 1, and 0.01 in CTC-fair. Due to space limit, we put more discussions about choosing λ and detailed training curves in Appendix B.5." |