A Regression Approach for Modeling Games With Many Symmetric Players
Authors: Bryce Wiedenbeck, Fengjun Yang, Michael P. Wellman
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate experimentally that the combination of learned utility functions and expected payoff estimation allows us to efficiently identify approximate equilibria of large games using sparse payoff data. In all of our experiments, we generate random large games, which we represent compactly as action-graph games with additive function nodes (Jiang, Leyton-Brown, and Bhat 2011). This ensures that we can compare the results of various approximation methods to a ground truth, checking the expected payoff of mixed strategies and the regret of approximate equilibria. (See the regret sketch after this table.) |
| Researcher Affiliation | Academia | Bryce Wiedenbeck Swarthmore College bwieden1@swarthmore.edu Fengjun Yang Swarthmore College fyang1@swarthmore.edu Michael P. Wellman University of Michigan wellman@umich.edu |
| Pseudocode | No | The paper describes methods verbally and mathematically but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The text does not contain any statements about code availability or links to a code repository. |
| Open Datasets | No | In all of our experiments, we generate random large games, which we represent compactly as action-graph games with additive function nodes (Jiang, Leyton-Brown, and Bhat 2011). No link, DOI, or specific repository is provided for these generated games. |
| Dataset Splits | No | The paper does not specify explicit training/validation/test dataset splits, percentages, or sample counts needed for reproduction. It mentions "Data Selection" but focuses on the characteristics of selected profiles, not dataset splits. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models used for running experiments. |
| Software Dependencies | No | The neural network used hidden layers of 32, 16, 8, and 4 nodes, with sigmoid activation functions, 0.2 dropout probability and the Adam optimizer. No version numbers for software or libraries are provided. |
| Experiment Setup | Yes | The neural network used hidden layers of 32, 16, 8, and 4 nodes, with sigmoid activation functions, 0.2 dropout probability, and the Adam optimizer. The RBF kernel has an important hyperparameter l, the length-scale over which the function is expected to vary. This can be estimated by MLE, but in our experiments, we found it important to constrain l to the range [1, \|P\|], and that a length scale close to these bounds was sometimes evidence of a poor fit. (See the network and kernel sketches after this table.) |
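
The regret check quoted in the Research Type row can be made concrete. The sketch below is not the paper's code: it computes regret for a symmetric two-player normal-form game, where the regret of a symmetric mixed strategy is its best pure-strategy deviation payoff minus its own expected payoff. The rock-paper-scissors matrix is purely illustrative.

```python
# Toy sketch (not the paper's method) of the regret check mentioned in the
# quoted evidence: for a symmetric 2-player game with payoff matrix A, the
# regret of a symmetric mixed strategy sigma is the best-response payoff
# against sigma minus the expected payoff of playing sigma itself.
import numpy as np

def regret(A: np.ndarray, sigma: np.ndarray) -> float:
    expected = A @ sigma            # payoff of each pure strategy vs. sigma
    return float(expected.max() - sigma @ expected)

# Rock-paper-scissors: the uniform mixture is an exact equilibrium.
A = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)
print(regret(A, np.ones(3) / 3))  # ~0.0
```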
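
The Experiment Setup row reports enough hyperparameters to reconstruct the network's shape. The Keras sketch below is a hedged reconstruction: the paper gives the hidden-layer widths (32, 16, 8, 4), sigmoid activations, 0.2 dropout probability, and the Adam optimizer, but the input encoding, the placement of dropout, the output head, and the loss are assumptions, not statements from the paper.

```python
# Hedged reconstruction of the payoff-regression network from the reported
# hyperparameters. Input dimension, dropout placement, the scalar output
# head, and the MSE loss are assumptions not stated in the paper.
import tensorflow as tf

def build_payoff_model(input_dim: int) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(input_dim,)),
        tf.keras.layers.Dense(32, activation="sigmoid"),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(16, activation="sigmoid"),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(8, activation="sigmoid"),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(4, activation="sigmoid"),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(1),  # scalar payoff estimate (assumed head)
    ])
    model.compile(optimizer="adam", loss="mse")
    return model
```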
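
The RBF length-scale constraint can likewise be expressed directly. A minimal scikit-learn sketch follows, assuming the paper's Gaussian process regression maps onto `GaussianProcessRegressor`; `num_players` stands in for |P|, and the training data is fabricated for illustration only.

```python
# Minimal sketch of GP regression with an RBF kernel whose length scale is
# constrained to [1, |P|], as the quoted evidence describes. The use of
# scikit-learn and the toy data are assumptions, not the paper's setup.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

num_players = 100  # |P|: illustrative player count, not from the paper
kernel = RBF(length_scale=10.0, length_scale_bounds=(1.0, float(num_players)))
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)

# X: profiles encoded as feature vectors; y: observed payoffs (toy data).
rng = np.random.default_rng(0)
X = rng.integers(0, num_players, size=(50, 3)).astype(float)
y = np.sin(X).sum(axis=1) + 0.1 * rng.standard_normal(50)
gp.fit(X, y)  # MLE fit of the length scale within the constrained bounds

# Per the paper, a fitted length scale pinned near 1 or |P| can be
# evidence of a poor fit.
print(gp.kernel_.length_scale)
```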