Neural Payoff Machines: Predicting Fair and Stable Payoff Allocations Among Team Members

Authors: Daphne Cornelisse, Thomas Rood, Yoram Bachrach, Mateusz Malinowski, Tal Kachman

NeurIPS 2022

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Our empirical evaluation shows that the predictions for the various solutions (the Shapley value, Banzhaf index and Least-Core) accurately reflect the true game theoretic solutions on previously unobserved games. Furthermore, the resulting model can generalize even to games that are very far from the training distribution or with more players than the games in the training set." (See the solution-concept sketches below.) |
| Researcher Affiliation | Collaboration | Daphne Cornelisse (1), Thomas Rood (1), Mateusz Malinowski (3), Yoram Bachrach (3), Tal Kachman (1, 2). Affiliations: (1) Department of Artificial Intelligence, Radboud University, Netherlands; (2) Donders Institute for Brain, Cognition and Behaviour, Radboud University, Netherlands; (3) DeepMind, UK. |
| Pseudocode | No | The paper includes diagrams of the data generation and model architecture but does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include any explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | "We take the Melbourne Housing dataset [39] and obtain 9000 instances by 13 features after preprocessing steps (encoding the categorical features and standardization). ... Here, we consider two well-known datasets: the classical UCI Bank Marketing dataset [35] (17 features, 11,162 observations) and the Melbourne Housing dataset [39] (13 features, 34,857 observations)." (A preprocessing sketch follows the table.) |
| Dataset Splits | No | The paper mentions partitioning the dataset into train and test sets, but it does not specify a validation set or the precise percentages/counts needed to reproduce the partitioning. |
| Hardware Specification | No | The paper does not provide hardware details (such as GPU/CPU models or memory amounts) for the machines used to run its experiments. |
| Software Dependencies | No | The paper mentions using the SHAP package (Kernel Explainer) but does not specify its version or list any other software dependencies with version numbers. |
| Experiment Setup | Yes | "During training we minimize the Mean square error (MSE) between the true and predicted solutions. For each increment, we train a model for 100 epochs and test it on the remainder of the unseen instances." (A minimal training-loop sketch follows the table.) |
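
As context for the solution concepts named in the Research Type row, here is a minimal sketch that computes the Shapley value and the (non-normalized) Banzhaf value exactly by coalition enumeration. The 3-player weighted voting game, its weights, and its quota are illustrative assumptions, not games from the paper; the paper's network predicts these quantities for unseen games rather than enumerating them.

```python
from itertools import combinations
from math import factorial

# Hypothetical 3-player weighted voting game: a coalition "wins" (value 1)
# if its combined weight meets the quota. Weights and quota are assumptions.
players = (0, 1, 2)
weights = {0: 4, 1: 3, 2: 2}
quota = 5
n = len(players)

def v(coalition):
    """Characteristic function: 1.0 for winning coalitions, 0.0 otherwise."""
    return 1.0 if sum(weights[p] for p in coalition) >= quota else 0.0

def shapley(i):
    # phi_i = sum over coalitions S not containing i of
    #         |S|! (n - |S| - 1)! / n! * (v(S + {i}) - v(S))
    others = [p for p in players if p != i]
    total = 0.0
    for k in range(len(others) + 1):
        for S in combinations(others, k):
            coeff = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += coeff * (v(S + (i,)) - v(S))
    return total

def banzhaf(i):
    # Average marginal contribution over all 2^(n-1) coalitions of the others.
    others = [p for p in players if p != i]
    total = sum(v(S + (i,)) - v(S)
                for k in range(len(others) + 1)
                for S in combinations(others, k))
    return total / 2 ** (n - 1)

for p in players:
    print(f"player {p}: Shapley = {shapley(p):.3f}, Banzhaf = {banzhaf(p):.3f}")
```

In this toy game every two-player coalition wins and no singleton does, so the players are effectively symmetric and each receives a Shapley value of 1/3.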
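
The Least-Core, the third solution concept, can be computed exactly as a linear program: find a payoff vector x and the smallest epsilon such that every coalition S receives x(S) >= v(S) - epsilon while the grand coalition's value is fully distributed. The paper does not say which solver it uses to obtain ground-truth solutions; the sketch below assumes scipy.optimize.linprog and reuses the toy game from the previous block.

```python
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

# Same hypothetical weighted voting game as above.
players = (0, 1, 2)
weights = {0: 4, 1: 3, 2: 2}
quota = 5
n = len(players)

def v(coalition):
    return 1.0 if sum(weights[p] for p in coalition) >= quota else 0.0

# Decision variables: [x_0, ..., x_{n-1}, epsilon]; objective: minimize epsilon.
c = np.zeros(n + 1)
c[-1] = 1.0

# One inequality per proper nonempty coalition S:
#   x(S) >= v(S) - epsilon   <=>   -x(S) - epsilon <= -v(S)
A_ub, b_ub = [], []
for k in range(1, n):
    for S in combinations(players, k):
        row = np.zeros(n + 1)
        row[list(S)] = -1.0
        row[-1] = -1.0
        A_ub.append(row)
        b_ub.append(-v(S))

# Efficiency: x(N) = v(N), with epsilon excluded from the equality.
A_eq = np.ones((1, n + 1))
A_eq[0, -1] = 0.0
b_eq = [v(players)]

# epsilon may be negative when the core is nonempty, so leave it unbounded.
res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * (n + 1))
print("least-core payoffs:", np.round(res.x[:n], 3), "epsilon:", round(res.x[-1], 3))
```

For this game the unique optimum is the equal split (1/3, 1/3, 1/3) with epsilon = 1/3: every two-player coalition wins, so no allocation can give each pair its full value of 1.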
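
For the Open Datasets row, the paper describes the preprocessing only as "encoding the categorical features and standardization". Here is a generic sketch of such a pipeline; the file path, the column detection, and the choice of OneHotEncoder/StandardScaler are all assumptions, not details taken from the paper.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical local copy of the Melbourne Housing dataset.
df = pd.read_csv("melbourne_housing.csv")

categorical = df.select_dtypes(include="object").columns
numeric = df.select_dtypes(exclude="object").columns

# Encode categorical features; standardize numeric ones.
preprocess = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
    ("num", StandardScaler(), numeric),
])
X = preprocess.fit_transform(df)  # feature matrix for the downstream experiments
```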
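
For the Experiment Setup row, a minimal sketch of the quoted training procedure: regress the true game-theoretic solutions with an MSE loss for 100 epochs. The architecture, optimizer, learning rate, and tensor shapes are assumptions; the paper's actual model and game representations are not reproduced here.

```python
import torch
import torch.nn as nn

n_players = 10                     # assumed game size
model = nn.Sequential(             # placeholder for the paper's architecture
    nn.Linear(n_players, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, n_players),     # predicted payoff vector (e.g. Shapley values)
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-ins for generated games and their exact game-theoretic solutions.
games = torch.randn(1024, n_players)
targets = torch.randn(1024, n_players)

for epoch in range(100):           # "we train a model for 100 epochs"
    optimizer.zero_grad()
    loss = loss_fn(model(games), targets)
    loss.backward()
    optimizer.step()
```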