Structure Learning for Approximate Solution of Many-Player Games
Authors: Zun Li, Michael P. Wellman
AAAI 2020, pp. 2119–2127
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We experimentally demonstrate the efficacy of both methods in reaching quality solutions and uncovering hidden structure, on both perfectly and approximately structured game instances. |
| Researcher Affiliation | Academia | Zun Li, Michael P. Wellman University of Michigan, Ann Arbor {lizun, wellman}@umich.edu |
| Pseudocode | Yes | Algorithm 1: K-Roles ... Algorithm 2: Greedy Graphical Game Learning |
| Open Source Code | No | The paper does not provide any links to source code, nor does it explicitly state that source code for the described methodology is available. |
| Open Datasets | No | For games with perfect role structure, we generate the cluster assignment from a uniform distribution. For games with an underlying graphical model, we generate a directed random graph with expected number of neighbors 5. ... The paper describes generating its own datasets but does not provide access information (e.g., links or citations for public availability). A generation sketch follows the table. |
| Dataset Splits | Yes | We maintain a data buffer of size 1000, and query 100 data points as Dval in each iteration. ... We represent each agent by its individual point deviation payoffs based on the current model and the validation data set Dval |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions machine learning techniques and algorithms (e.g., k-means, hierarchical clustering, linear regression, multilayer perceptron, random forest, gradient boosting) but does not provide specific version numbers for any software dependencies or libraries used. |
| Experiment Setup | Yes | For regression of deviation function approximators, we use a neural network with two hidden layers of sizes 32 and 16. ... First we try hierarchical agent clustering with p = 2. If any returned cluster is of size below 20, we discard the result and apply k-means clustering instead. ... We set κ̂ = 6 if M = 2 and κ̂ = 4 when M = 3. A setup sketch follows the table. |
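
The synthetic game instances in the Open Datasets row are described only in prose. Below is a minimal sketch, assuming NumPy and an Erdős–Rényi-style edge model for the directed graph; the function names, random seed, and edge-probability formula are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative seed

def random_role_assignment(n_agents, n_roles):
    """Perfect role structure: draw each agent's role uniformly at random."""
    return rng.integers(0, n_roles, size=n_agents)

def random_directed_graph(n_agents, expected_neighbors=5):
    """Directed random graph: include each possible edge independently with
    probability expected_neighbors / (n_agents - 1), so each agent has
    roughly 5 expected neighbors, as reported in the paper."""
    p = expected_neighbors / (n_agents - 1)
    adjacency = rng.random((n_agents, n_agents)) < p
    np.fill_diagonal(adjacency, False)  # no self-loops
    return adjacency
```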
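
The Experiment Setup row reports concrete hyperparameters but no libraries or versions. The sketch below is a hypothetical reconstruction using scikit-learn stand-ins: the MLP sizes (32, 16) and the size-20 fallback threshold come from the paper's description, while the estimator classes, `max_iter`, and `n_init` values are assumptions, and the paper's hierarchical-clustering parameter p = 2 is not modeled here.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.neural_network import MLPRegressor

def fit_deviation_regressor(X, y):
    """Deviation-payoff regressor: a neural network with two hidden layers
    of sizes 32 and 16, matching the reported architecture."""
    model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000)
    model.fit(X, y)
    return model

def cluster_agents(features, n_roles, min_size=20):
    """Agent clustering: try hierarchical (agglomerative) clustering first;
    if any returned cluster has fewer than 20 members, discard the result
    and apply k-means instead, as described in the experiment setup."""
    labels = AgglomerativeClustering(n_clusters=n_roles).fit_predict(features)
    if np.bincount(labels, minlength=n_roles).min() < min_size:
        labels = KMeans(n_clusters=n_roles, n_init=10).fit_predict(features)
    return labels
```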