Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
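The validation step the notice describes can be illustrated with a minimal sketch: comparing automated labels against a manually labeled reference set and reporting simple agreement. The function name and labels here are hypothetical placeholders; the actual metrics and methodology are in [1].

```python
def accuracy(predicted, reference):
    """Fraction of variables where the automated label matches the manual one."""
    assert len(predicted) == len(reference), "label lists must align"
    matches = sum(p == r for p, r in zip(predicted, reference))
    return matches / len(reference)

# Illustrative only: automated vs. manual labels for five reproducibility variables
auto   = ["Yes", "No", "Yes", "Yes", "No"]
manual = ["Yes", "No", "No",  "Yes", "No"]
print(accuracy(auto, manual))  # 4 of 5 labels agree -> 0.8
```

Per-variable accuracy (rather than a single pooled number) is typically more informative, since some variables are harder to classify than others.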
Learning Fairness in Multi-Agent Systems
Authors: Jiechuan Jiang, Zongqing Lu
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, we show that FEN easily learns both fairness and efficiency and significantly outperforms baselines in a variety of multi-agent scenarios. |
| Researcher Affiliation | Academia | Jiechuan Jiang, Peking University; Zongqing Lu, Peking University |
| Pseudocode | Yes | Algorithm 1 FEN training |
| Open Source Code | Yes | Moreover, the code of FEN is at https://github.com/PKU-AI-Edge/FEN. |
| Open Datasets | No | The paper uses custom-designed scenarios (job scheduling, the Matthew effect, a manufacturing plant) rather than publicly available or open datasets, and provides no concrete access information (link, DOI, repository, or formal citation). |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, or detailed splitting methodology) for training, validation, or testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions that PPO is used for training but does not name specific ancillary software or libraries with version numbers. |
| Experiment Setup | Yes | The basic hyperparameters are all the same for FEN and the baselines, which are summarized in Appendix. The details about the experimental setting of each scenario are also available in Appendix. |