Learning Fair Policies in Decentralized Cooperative Multi-Agent Reinforcement Learning

Authors: Matthieu Zimmer, Claire Glanois, Umer Siddique, Paul Weng

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In experiments, we demonstrate the importance of the two sub-networks for fair optimization. Our overall approach is general as it can accommodate any (sub)differentiable welfare function. Therefore, it is compatible with various notions of fairness that have been proposed in the literature (e.g., lexicographic maximin, generalized Gini social welfare function, proportional fairness). Our method is generic and can be implemented in various MARL settings: centralized training and decentralized execution, or fully decentralized. Finally, we experimentally validate our approach in various domains and show that it can perform much better than previous methods, both in terms of efficiency and equity.
Researcher Affiliation | Academia | UM-SJTU Joint Institute, Shanghai Jiao Tong University, China; Department of Automation, Shanghai Jiao Tong University, Shanghai, China.
Pseudocode | Yes | Algorithm 1: SOTO algorithm in CLDE scenario
Open Source Code | Yes | The detailed hyperparameters are provided in Appendix D.1 and available online: https://gitlab.com/AAAL/DFRL
Open Datasets | Yes | To test our algorithms, we carried out experiments in three different domains (detailed descriptions are available in the appendix): Matthew Effect (Jiang & Lu, 2019), distributed traffic light control (Lopez et al., 2018), and distributed data center control (Ruffy et al., 2019).
Dataset Splits | No | The paper references its evaluation domains (e.g., the Matthew Effect) and distinguishes training from evaluation runs, but it does not explicitly state training/validation/test split percentages or sample counts in the main text. Hyperparameter details are deferred to Appendix D.1, which may or may not include full split information.
Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used to run the experiments; it only mentions the Grid5000 testbed without further specification.
Software Dependencies | No | The paper mentions using the Proximal Policy Optimization (PPO) algorithm and the Adam optimizer, but it does not specify software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions, or versions of other libraries); a sketch of how such information could be reported is given after the table.
Experiment Setup | Yes | The detailed hyperparameters are provided in Appendix D.1 and available online: https://gitlab.com/AAAL/DFRL
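
The "Research Type" row above quotes the paper's claim that the approach accommodates any (sub)differentiable welfare function, naming the generalized Gini social welfare function (GGF) as one example. The following sketch is purely illustrative and not taken from the authors' repository: it assumes a PyTorch implementation (no framework is named in the quoted text), and the function name ggf_welfare and the decay parameter omega are hypothetical choices. It only shows why such a welfare term is compatible with gradient-based training: sorting is piecewise linear, so the objective is subdifferentiable in the per-agent utilities.

import torch

def ggf_welfare(utilities: torch.Tensor, omega: float = 0.5) -> torch.Tensor:
    """Generalized Gini social welfare of a vector of per-agent utilities.

    utilities: shape (n_agents,), e.g. each agent's (discounted) return.
    omega: decay in (0, 1); the weights omega**0 > omega**1 > ... put more
        weight on the worst-off agents, which is what encodes fairness.
    """
    sorted_utils, _ = torch.sort(utilities)  # ascending: worst-off agent first
    weights = omega ** torch.arange(utilities.shape[0], dtype=utilities.dtype)
    return torch.dot(weights, sorted_utils)  # subdifferentiable in `utilities`

# Toy check: gradients flow through the sort, so the welfare can serve as a
# training objective; the largest gradient lands on the smallest utility.
u = torch.tensor([1.0, 4.0, 2.0], requires_grad=True)
ggf_welfare(u).backward()
print(u.grad)  # tensor([1.0000, 0.2500, 0.5000])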
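
The "Hardware Specification" and "Software Dependencies" rows note that GPU/CPU models, memory amounts, and library versions are not reported. As a generic illustration of what such a report could contain (this is not part of the authors' GitLab repository, and PyTorch is assumed only for the sake of example), a training script could dump its runtime environment next to the experiment logs:

import json
import platform
import sys

import torch  # assumed framework for illustration; the paper names only PPO and Adam

# Collect the hardware and software details that the checklist flags as missing.
env_report = {
    "python": sys.version.split()[0],
    "platform": platform.platform(),
    "processor": platform.processor(),
    "torch": torch.__version__,
    "cuda_available": torch.cuda.is_available(),
    "gpu": torch.cuda.get_device_name(0) if torch.cuda.is_available() else None,
}

# Saving this alongside the training logs would answer both checklist items above.
with open("env_report.json", "w") as f:
    json.dump(env_report, f, indent=2)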