Exploring the Benefits of Teams in Multiagent Learning

Authors: David Radke, Kate Larson, Tim Brecht

IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Through an extensive empirical evaluation, we show how our model of teams helps agents develop globally beneficial pro-social behavior despite short-term incentives to defect. As a result, agents in these teams achieve higher rewards in complex domains than when the interests of all agents are aligned, and autonomously learn more efficient combinations of roles when compared with common interest scenarios.
Researcher Affiliation | Academia | David Radke, Kate Larson and Tim Brecht, David R. Cheriton School of Computer Science, University of Waterloo, {dtradke, kate.larson, brecht}@uwaterloo.ca
Pseudocode | No | No structured pseudocode or algorithm blocks are present in the paper.
Open Source Code | Yes | Code: https://github.com/Dtradke/Teams_IPD
Open Datasets | Yes | We implement our model of teams in the Iterated Prisoner's Dilemma (IPD) [Rapoport, 1974] and the Cleanup domain [Vinitsky et al., 2019]. ... Cleanup [Vinitsky et al., 2019] ... An open source implementation of sequential social dilemma games. https://github.com/eugenevinitsky/sequential_social_dilemma_games/issues/182, 2019. (A minimal IPD payoff sketch follows the table.)
Dataset Splits | No | No specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) is provided, as the experiments are in a reinforcement learning setup where data is generated through interaction rather than pre-partitioned.
Hardware Specification | No | No specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) are provided for running the experiments.
Software Dependencies | No | The paper mentions specific algorithms (Deep Q-Learning, PPO) but does not list any specific software packages with version numbers, such as programming languages, libraries, or solvers.
Experiment Setup | Yes | In the IPD, each experiment lasts 1.0 x 10^6 episodes where N = 30 agents learn using Deep Q-Learning [Mnih et al., 2015]. An episode is defined by a set of agent interactions where each agent is paired with another agent and plays an instance of the Prisoner's Dilemma. ... Each experiment is repeated five times. ... We fix the cost (c) at 1, and let the benefit (b) be 2, 5, or 10. ... In Cleanup, similar to previous work [Hughes et al., 2018; McKee et al., 2020; Jaques et al., 2019], we experiment with N = 6 agents. Our agents use the Proximal Policy Optimization (PPO) [Schulman et al., 2017] RL algorithm for 1.6 x 10^8 environmental timesteps (each episode is 1,000 timesteps). Agent observability is limited to a 15 x 15 RGB window. Teammates share the same color and optimize for TRi calculated at each environmental timestep. Each experiment is repeated for eight trials.
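
The Open Datasets and Experiment Setup rows describe an IPD with a fixed cost c = 1 and a benefit b of 2, 5, or 10. The minimal Python sketch below illustrates a cost/benefit (donation-game) Prisoner's Dilemma payoff consistent with that parameterization; the function name and exact payoff form are assumptions for illustration and are not taken from the authors' code.

def donation_game_payoff(my_action, partner_action, c=1.0, b=2.0):
    """Pairwise payoff for a cost/benefit Prisoner's Dilemma.

    A cooperator ("C") pays cost c and grants benefit b to its partner;
    a defector ("D") pays nothing. With b > c > 0, mutual cooperation
    beats mutual defection, yet defecting dominates any single round.
    """
    payoff = 0.0
    if my_action == "C":
        payoff -= c          # cooperating costs c
    if partner_action == "C":
        payoff += b          # benefit received if the partner cooperates
    return payoff

# Example with b = 2: mutual cooperation yields b - c = 1 per agent,
# while a unilateral defector earns b = 2 against a cooperator.
print(donation_game_payoff("C", "C", b=2))  # 1.0
print(donation_game_payoff("D", "C", b=2))  # 2.0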
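
The Experiment Setup row can also be condensed into a configuration summary. The dictionary keys below are hypothetical names chosen for readability; they do not mirror the released repository, and only the values quoted above come from the paper.

# Hypothetical configuration summary of the reported experiments.
IPD_EXPERIMENT = {
    "agents": 30,
    "algorithm": "Deep Q-Learning",     # Mnih et al., 2015
    "episodes": int(1.0e6),             # each episode pairs agents for one PD game
    "cost_c": 1,
    "benefit_b": [2, 5, 10],            # one benefit value per experiment
    "trials": 5,
}

CLEANUP_EXPERIMENT = {
    "agents": 6,
    "algorithm": "PPO",                 # Schulman et al., 2017
    "env_timesteps": int(1.6e8),
    "episode_length": 1000,
    "observation_window": (15, 15, 3),  # 15 x 15 RGB window
    "team_reward": "TR_i, computed at each environmental timestep",
    "trials": 8,
}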