Efficient Computation of Approximate Equilibria in Discrete Colonel Blotto Games
Authors: Dong Quan Vu, Patrick Loiseau, Alonso Silva
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform numerical experiments that show that the proposed strategy provides a fast and good approximation to the equilibrium even for moderate numbers of battlefields. Our experiments run on a computer with an Intel core i5-7500U 2.60GHz processor and 8GB of RAM. |
| Researcher Affiliation | Collaboration | 1 Nokia Bell Labs, Nokia Paris-Saclay, 91620 Nozay, France; 2 Univ. Grenoble Alpes, CNRS, Inria, Grenoble INP, LIG, 38000 Grenoble, France; 3 Max Planck Institute for Software Systems (MPI-SWS), Saarbrücken, Germany |
| Pseudocode | Yes | Algorithm 1: DIU strategy generation algorithm. Algorithm 2: Dynamic programming algorithm searching for player B's best response (tie-breaking rule α = 0). |
| Open Source Code | Yes | Our code for these experiments can be found at https://github.com/dongquan11/Approx_discrete_Blotto |
| Open Datasets | No | For each set of values n, m, p, we independently generate a value for each battlefield uniformly distributed in [vmin, vmax], with vmin = 1 and vmax = 8. The paper describes synthetic data generation but does not provide access information for a public dataset. |
| Dataset Splits | No | The paper describes generating game instances and running simulations, but it does not specify train/validation/test splits for a dataset. |
| Hardware Specification | Yes | Our experiments run on a computer with an Intel core i5-7500U 2.60GHz processor and 8GB of RAM. |
| Software Dependencies | No | We construct several numerical experiments using R. The paper mentions the software 'R' but does not specify a version number or any other software dependencies with versions. |
| Experiment Setup | Yes | In all the experiments, we keep α = 0 and λ = p/m fixed, thus the values of m and p always have the same growth rate (up to the multiplicative constant λ); and we vary n and m. For each set of values n, m, p, we independently generate a value for each battlefield uniformly distributed in [vmin, vmax], with vmin = 1 and vmax = 8. Then, for each instance of n, m, p, (v1, v2, . . . , vn) we run the simulation 3 times, each time computing the eCDFs with K = 10n to ensure that the evaluation of ε is not affected, running Algorithm 2 with these eCDFs to compute ε, and taking the average of the results. |
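
The instance-generation step quoted above can be sketched as follows. This is a minimal illustration, not the authors' code: the function name `generate_instance` and the choice of continuous uniform sampling are assumptions; the paper only states that battlefield values are drawn independently and uniformly from [vmin, vmax] with λ = p/m held fixed.

```python
import random

def generate_instance(n, m, lam, vmin=1.0, vmax=8.0, seed=None):
    """Generate one synthetic Blotto instance per the described setup.

    n    -- number of battlefields
    m    -- player A's budget
    lam  -- fixed ratio lambda = p/m, so player B's budget is p = lam * m
    Battlefield values are drawn i.i.d. uniformly from [vmin, vmax]
    (continuous sampling is an assumption; the paper does not specify).
    """
    rng = random.Random(seed)
    values = [rng.uniform(vmin, vmax) for _ in range(n)]
    p = int(lam * m)  # budgets grow at the same rate, up to the constant lambda
    return values, m, p

# Example: 10 battlefields, budgets m = 20 and p = lam * m = 10.
values, m, p = generate_instance(n=10, m=20, lam=0.5, seed=42)
```

With λ fixed, varying m scales both budgets together, matching the paper's statement that m and p always have the same growth rate.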