The Efficiency of the HyperPlay Technique Over Random Sampling
Authors: Michael Schofield, Michael Thielscher
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We design experiments to expose the worst aspect of HyperPlay and look at the impact on competitive gameplay as well as remedies for any shortcomings. We present the experimental results along with comments that explain and highlight without drawing any conclusions. |
| Researcher Affiliation | Academia | Michael Schofield and Michael Thielscher School of Computer Science and Engineering UNSW Australia {mschofield, mit}@cse.unsw.edu.au The second author is also affiliated with the University of Western Sydney. |
| Pseudocode | No | The paper describes theoretical basis and experimental designs with mathematical equations, but it does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any statements about making its source code openly available or provide links to a code repository for the described methodology. |
| Open Datasets | Yes | The basket of games chosen for experiments was drawn from the games available within the GGP community, and from the newly converted security games. Additionally we supplement the games available in the GGP community by introducing GDL-II versions of two popular security games, the Transit game and the Border Protection game. Figure 1: A sample of the GDL-II description of the Transit security game highlighting the key aspects of the game. Tiltyard, Dresden game server. http://ggpserver.general-gameplaying.de/ggpserver/. Accessed: 2016-08-30. |
| Dataset Splits | No | The paper describes experiments conducted by playing games and analyzing performance across rounds, such as 'playing a batch of games and recording the states visited in each round', but it does not specify explicit train/validation/test splits with percentages or sample counts in the traditional machine-learning sense. |
| Hardware Specification | Yes | When indicative times are given they are for computation performed by a single agent on an Intel Core i7-2600 @ 3.4GHz in a single thread. |
| Software Dependencies | No | The paper refers to the Game Description Language (GDL) and GDL-II specification, and mentions general game playing agents, but it does not specify any software dependencies with version numbers (e.g., programming languages, libraries, or frameworks with their respective versions) that would be needed to reproduce the experiments. |
| Experiment Setup | Yes | The resources for each role are set so that it plays at well below the optimal level. This ensures good variety in the game-play and a broad base for the calculation of the statistic. Batch sizes were calculated to give statistically meaningful results. Generally each experiment had a batch size of 1000 games, or 10,000 observations. The resources for each player are set so that the player is competitive within a realistic time constraint based on the game complexity and the common competition times. |
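The paper states only that batch sizes (typically 1000 games, or 10,000 observations) were "calculated to give statistically meaningful results" without giving a formula. As a rough illustration of what such a batch size buys, the sketch below computes a normal-approximation 95% confidence half-width for a win-rate estimate from a batch of Bernoulli trials; the function name and the use of the 1.96 z-value are our assumptions, not the authors' method:

```python
import math

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """Normal-approximation confidence half-width for a proportion
    estimated from n independent Bernoulli trials (z=1.96 -> ~95%)."""
    return z * math.sqrt(p_hat * (1.0 - p_hat) / n)

# Worst case (p_hat = 0.5) for a 1000-game batch:
print(round(margin_of_error(0.5, 1000), 3))  # ~0.031, i.e. about +/-3 points
```

Under these assumptions, a 1000-game batch pins a win rate down to roughly plus or minus three percentage points in the worst case, which is consistent with the paper's claim that its batches are large enough to be statistically meaningful.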