Evaluating and Rewarding Teamwork Using Cooperative Game Abstractions

Authors: Tom Yan, Christian Kroer, Alexander Peysakhovich

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We apply our methods to study teams of artificial RL agents as well as real-world teams from professional sports. Empirically, we first validate the usefulness of CGAs in artificial RL environments, in which we can verify the predictions of CGAs on counterfactual teams. Then, we model real-world data from the NBA, for which we do not have ground truth, using CGAs and show that its predictions are consistent with expert knowledge and various metrics of team strength and player value.
Researcher Affiliation | Collaboration | Tom Yan (Carnegie Mellon University; Facebook AI Research; tyyan@cmu.edu), Christian Kroer (Columbia University; Facebook Core Data Science; christian.kroer@columbia.edu), Alexander Peysakhovich (Facebook AI Research; alexpeys@fb.com)
Pseudocode | No | The paper contains no sections labeled 'Pseudocode' or 'Algorithm', nor does it present any structured algorithmic steps.
Open Source Code | No | The paper links to external resources used in the experiments (the OpenAI particle environment and Kaggle NBA data) but provides no statement or link for the authors' own source code implementing the described methodology.
Open Datasets | Yes | "We generate team performance data from the OpenAI particle environment [23]" (https://github.com/openai/multiagent-particle-envs) and "We collect the last 6 seasons of NBA games (a total of 7380 games) from Kaggle along with the publicly available box scores" (https://www.kaggle.com/drgilermo/nba-players-stats)
Dataset Splits | Yes | The paper reports two splits: "The train/validation/test split is 50/10/40" and "We split the dataset randomly into 80 percent training, 10 percent validation, and 10 percent test subsets."
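The random 80/10/10 split quoted above can be sketched as follows. This is a minimal illustration, not the authors' code; the dataset size (7380 games) is taken from the Open Datasets row, and the fixed seed is an assumption added for reproducibility.

```python
import random

def split_dataset(items, train_frac=0.8, val_frac=0.1, seed=0):
    """Randomly shuffle items and cut them into train/validation/test subsets."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(train_frac * n)
    n_val = int(val_frac * n)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

# 7380 NBA games, as reported in the paper's dataset description.
train, val, test = split_dataset(range(7380))
print(len(train), len(val), len(test))  # 5904 738 738
```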
Hardware Specification | No | The paper does not specify the hardware (GPU model, CPU type, or memory) used to run its experiments.
Software Dependencies | No | The paper mentions taking parameters from the OpenAI GitHub repo but names no software dependencies or libraries with version numbers.
Experiment Setup | Yes | "Then we learn v̂ such that it minimizes the negative log likelihood using standard batch SGD with learning rate 0.001. Because basketball teams are of a fixed size (only one set of sizes), we use L2 regularization to choose one among the many possible sets of model parameters. We set hyperparameters by optimizing the loss on the validation set."
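The setup quoted above (negative log likelihood minimized by batch SGD with learning rate 0.001, plus L2 regularization) can be sketched with a toy model. This is an assumption-laden illustration, not the paper's implementation: it uses synthetic data, assumes a team's value is the sum of per-player weights, and models win probability with a logistic link between the two teams' value difference.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_players, n_games, lr, l2 = 20, 4000, 0.001, 1e-3

# Synthetic data (hypothetical): each game pits two disjoint 5-player teams;
# team value is the sum of player skills, and team A wins with
# probability sigmoid(v(A) - v(B)).
true_skill = rng.normal(0.0, 1.0, n_players)
lineups = np.stack([rng.choice(n_players, 10, replace=False) for _ in range(n_games)])
team_a, team_b = lineups[:, :5], lineups[:, 5:]
p_win = sigmoid(true_skill[team_a].sum(1) - true_skill[team_b].sum(1))
y = (rng.random(n_games) < p_win).astype(float)

# Signed membership features: x[g, i] = +1 if player i is on team A, -1 on B.
x = np.zeros((n_games, n_players))
np.put_along_axis(x, team_a, 1.0, axis=1)
np.put_along_axis(x, team_b, -1.0, axis=1)

# Batch SGD on the L2-regularized negative log likelihood, learning rate 0.001.
w = np.zeros(n_players)
for epoch in range(500):
    for start in range(0, n_games, 64):
        xb, yb = x[start:start + 64], y[start:start + 64]
        grad = xb.T @ (sigmoid(xb @ w) - yb) / len(yb) + l2 * w
        w -= lr * grad

# The recovered weights should correlate strongly with the true skills.
corr = np.corrcoef(w, true_skill)[0, 1]
print(round(corr, 2))
```

The L2 term plays the role described in the quote: with fixed-size teams many parameter vectors fit the data equally well, and the regularizer selects one of them.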