PettingZoo: Gym for Multi-Agent Reinforcement Learning

Authors: J Terry, Benjamin Black, Nathaniel Grammel, Mario Jayakumar, Ananth Hari, Ryan Sullivan, Luis S Santos, Clemens Dieffendahl, Caroline Horsch, Rodrigo Perez-Vicente, Niall Williams, Yashas Lokesh, Praveen Ravi

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We argue, in part through case studies on major problems in popular MARL environments, that the popular game models are poor conceptual models of games commonly used in MARL and accordingly can promote confusing bugs that are hard to detect, and that the AEC games model addresses these problems. We experimentally validate this performance improvement in Appendix A.1, showing that on average this change resulted in up to a 22% improvement in the expected reward of a learned policy.
Researcher Affiliation | Collaboration | Authors: J. K. Terry (j.k.terry@swarmlabs.com), Benjamin Black (benjamin.black@swarmlabs.com), Nathaniel Grammel (ngrammel@umd.edu), Mario Jayakumar (mariojay@umd.edu), Ananth Hari (ahari1@umd.edu), Ryan Sullivan (ryan.sullivan@swarmlabs.com), Luis Santos (lss@umd.edu), Rodrigo Perez (rlazcano@umd.edu), Caroline Horsch (caroline.horsch@swarmlabs.com), Clemens Dieffendahl (dieffendahl@campus.tu-berlin.de), Niall L. Williams (niallw@umd.edu), Yashas Lokesh (yashloke@umd.edu), Praveen Ravi (pravi@umd.edu). Affiliations: Swarm Labs; Department of Computer Science, University of Maryland, College Park; Department of Electrical and Computer Engineering, University of Maryland, College Park; Department of Mechanical Engineering, University of Maryland, College Park; Maryland Robotics Center, University of Maryland, College Park; Faculty of Electrical Engineering and Computer Science, Technical University of Berlin.
Pseudocode | No | The paper includes code examples for API usage, but it does not provide formal pseudocode or clearly labeled algorithm blocks.
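For reference, the API examples in the paper follow PettingZoo's AEC agent-iteration pattern. The sketch below is a minimal illustration of that loop, assuming a recent PettingZoo release in which env.last() returns five values (older releases returned four, without a separate truncation flag); it is not code from the paper itself.

```python
# Minimal sketch of the AEC interaction loop, assuming a recent PettingZoo
# release (requires the butterfly extra: pip install "pettingzoo[butterfly]").
from pettingzoo.butterfly import pistonball_v6

env = pistonball_v6.env()
env.reset(seed=42)

for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()
    if termination or truncation:
        action = None  # finished agents must be stepped with a None action
    else:
        action = env.action_space(agent).sample()  # stand-in for a learned policy
    env.step(action)

env.close()
```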
Open Source Code | Yes | The PettingZoo library can be installed via 'pip install pettingzoo', the documentation is available at https://www.pettingzoo.ml, and the repository is available at https://github.com/Farama-Foundation/PettingZoo.
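As a quick post-install sanity check, the installed package version can be printed; the __version__ attribute is assumed here and has been present in recent releases.

```python
# Hypothetical sanity check after `pip install pettingzoo`;
# assumes the package exposes a __version__ attribute, as recent releases do.
import pettingzoo

print(pettingzoo.__version__)
```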
Open Datasets | Yes | Similar to Gym's default environments, PettingZoo includes 63 environments. Half of the included environment classes (MPE, MAgent, and SISL), despite their popularity, existed as unmaintained research-grade code, have not been available for installation via pip, and have required large amounts of maintenance to run at all before our cleanup and maintainership. We additionally included multiplayer Atari games from Terry and Black [2020], Butterfly environments which are original and of our own creation, and popular classic board and card game environments. All default environments included are surveyed in depth in Appendix B.
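To make the environment families concrete, the sketch below instantiates one environment from several of them. The module paths are taken from the PettingZoo package layout, but the version suffixes (e.g. simple_spread_v3) are assumptions that vary between releases, and each family requires its optional extra (e.g. pip install "pettingzoo[mpe,sisl,classic,butterfly]").

```python
# Illustrative sketch: create one environment from several of the families
# named above. Version suffixes are assumptions and change between releases;
# the relevant extras must be installed.
from pettingzoo.mpe import simple_spread_v3
from pettingzoo.sisl import waterworld_v4
from pettingzoo.classic import connect_four_v3
from pettingzoo.butterfly import knights_archers_zombies_v10

for make_env in (
    simple_spread_v3.env,
    waterworld_v4.env,
    connect_four_v3.env,
    knights_archers_zombies_v10.env,
):
    env = make_env()
    env.reset(seed=0)
    print(env.metadata["name"], len(env.possible_agents), "agents")
    env.close()
```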
Dataset Splits | No | The paper describes experimental validation and discusses environments, but it does not specify concrete train/validation/test dataset splits, percentages, or cross-validation methodologies for its experiments.
Hardware Specification | No | The paper does not provide any specific details regarding the hardware (e.g., CPU or GPU models, memory, or cloud instance types) used to run its experiments.
Software Dependencies | No | The paper mentions that PettingZoo can be installed via 'pip install pettingzoo' and lists several learning libraries that support it. However, it does not specify concrete version numbers for these, or any other, software dependencies required for reproducing its own experiments.
Experiment Setup | No | The paper describes the design of the PettingZoo library and discusses its benefits, including experimental validation in Appendix A.1. However, it does not provide specific experimental setup details such as hyperparameter values, training configurations, or system-level settings within the main body of the paper.