Learning Fairness in Multi-Agent Systems
Authors: Jiechuan Jiang, Zongqing Lu
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, we show that FEN easily learns both fairness and efficiency and significantly outperforms baselines in a variety of multi-agent scenarios. |
| Researcher Affiliation | Academia | Jiechuan Jiang, Peking University, jiechuan.jiang@pku.edu.cn; Zongqing Lu, Peking University, zongqing.lu@pku.edu.cn |
| Pseudocode | Yes | Algorithm 1 FEN training |
| Open Source Code | Yes | Moreover, the code of FEN is at https://github.com/PKU-AI-Edge/FEN. |
| Open Datasets | No | The paper describes custom-designed scenarios (job scheduling, the Matthew effect, manufacturing plant) rather than providing concrete access information (link, DOI, repository, formal citation) for publicly available or open datasets. |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, or detailed splitting methodology) for training, validation, or testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions that PPO is used for training but does not provide specific software names with version numbers for ancillary libraries. |
| Experiment Setup | Yes | The basic hyperparameters are all the same for FEN and the baselines, which are summarized in the Appendix. The details about the experimental setting of each scenario are also available in the Appendix. |