Regret-Based Pruning in Extensive-Form Games
Authors: Noam Brown, Tuomas Sandholm
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show an order of magnitude speed improvement, and the relative speed improvement increases with the size of the game. |
| Researcher Affiliation | Academia | Noam Brown, Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213, noamb@cmu.edu; Tuomas Sandholm, Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213, sandholm@cs.cmu.edu |
| Pseudocode | No | The paper describes the algorithms and modifications textually but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any links to open-source code or explicit statements about code availability. |
| Open Datasets | Yes | We tested our algorithm on standard Leduc Hold'em [12] and a scaled-up variant of it featuring more actions. |
| Dataset Splits | No | The paper mentions using standard Leduc Hold'em and a scaled-up variant for testing, but it does not specify any training, validation, or test dataset splits in terms of percentages or sample counts, as would be typical for empirical evaluation on datasets. |
| Hardware Specification | No | The paper acknowledges "XSEDE computing resources provided by the Pittsburgh Supercomputing Center" but does not specify any particular hardware details such as GPU models, CPU types, or memory configurations used for the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python 3.8, TensorFlow 2.x) that would be needed to reproduce the experiments. |
| Experiment Setup | No | The paper discusses aspects of the algorithm, such as the pruning threshold, but does not provide specific hyperparameters or system-level training settings like learning rates, batch sizes, or optimizer configurations that are commonly detailed in experimental setups. |