Real-Time Navigation in Classical Platform Games via Skill Reuse
Authors: Michael Dann, Fabio Zambetta, John Thangarajah
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper states: 'We evaluated SVP on randomly generated navigation puzzles in Infinite Mario.' |
| Researcher Affiliation | Academia | School of Science, RMIT University, Australia |
| Pseudocode | No | No pseudocode or clearly labeled algorithm blocks were found in the paper. |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code for the described methodology. |
| Open Datasets | No | The paper states that 'we modified the game’s level generator to create structures of the type shown in Figure 5' and that evaluation uses 'randomly generated navigation puzzles in Infinite Mario', but it does not provide concrete access information (link, DOI, or citation) for the specific levels generated for the experiments. |
| Dataset Splits | No | The paper does not provide explicit training, validation, or test dataset splits with percentages or counts. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models or memory) for the machines used to run its experiments. |
| Software Dependencies | No | The paper describes the algorithms and techniques used (Q-Learning, experience replay; see the sketch after this table) and references other papers, but it does not list specific software dependencies with version numbers. |
| Experiment Setup | Yes | Other training parameters were guided by Mnih et al., then hand-tuned to the values in Table 1: learning rate 2.5 × 10^-4; action length 4 frames; experience cache size [value not captured] frames; exploration policy ϵ-greedy; target net refresh rate 10,000 frames; momentum 0.95; TD error separation 3 actions; minimum cache population for training 50,000 frames; ϵ decay schedule hyperbolic, 1.0 → 0.05; training time 2 × 10^8 frames. (A configuration sketch of these values follows the table.) |
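
The Table 1 values quoted in the Experiment Setup row can be gathered into a small configuration sketch. The dictionary keys and the exact form of the hyperbolic ϵ schedule below are assumptions: the paper only states a hyperbolic decay from 1.0 to 0.05, not the formula, and the experience cache size was not captured from the extraction.

```python
# Hypothetical reconstruction of the Table 1 settings quoted above.
# Key names are illustrative; the experience cache size is omitted
# because its value was not captured.
TABLE_1 = {
    "learning_rate": 2.5e-4,
    "action_length_frames": 4,
    "exploration_policy": "epsilon-greedy",
    "target_net_refresh_frames": 10_000,
    "momentum": 0.95,
    "td_error_separation_actions": 3,
    "min_cache_population_frames": 50_000,
    "epsilon_decay": ("hyperbolic", 1.0, 0.05),
    "training_frames": 200_000_000,   # 2 x 10^8
}


def hyperbolic_epsilon(frame, eps_start=1.0, eps_end=0.05,
                       total_frames=TABLE_1["training_frames"]):
    """One plausible hyperbolic anneal: eps(t) = eps_start / (1 + k*t),
    with k chosen so the schedule reaches eps_end exactly at total_frames
    and clamps there. The paper does not give the exact formula."""
    k = (eps_start / eps_end - 1.0) / total_frames
    return max(eps_end, eps_start / (1.0 + k * frame))
```

For example, `hyperbolic_epsilon(0)` returns 1.0 and `hyperbolic_epsilon(TABLE_1["training_frames"])` returns 0.05, matching the endpoints listed in Table 1.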
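
For context on the techniques named in the Software Dependencies row, below is a minimal sketch of ϵ-greedy Q-learning with an experience replay cache and a periodically refreshed target network. It uses a linear Q-function over feature vectors to stay short; the feature and action counts, discount factor, batch size, replay capacity, and all identifiers are illustrative assumptions, not the authors' implementation (which presumably trains a neural network with the momentum value listed in Table 1).

```python
import random
from collections import deque

import numpy as np

# Assumed problem sizes and discount factor (not given in the paper).
N_FEATURES, N_ACTIONS = 32, 6
GAMMA = 0.99
LEARNING_RATE = 2.5e-4                  # from Table 1
TARGET_REFRESH_FRAMES = 10_000          # from Table 1

replay = deque(maxlen=500_000)          # experience cache (capacity assumed)
weights = np.zeros((N_ACTIONS, N_FEATURES))   # linear Q-function approximator
target_weights = weights.copy()               # frozen copy used for TD targets


def q_values(w, state):
    """Q(s, a) for every action under weight matrix w."""
    return w @ state


def act(state, epsilon):
    """Epsilon-greedy action selection over the current Q estimates."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    return int(np.argmax(q_values(weights, state)))


def train_step(frame, batch_size=32):
    """One replay update; refresh the target network on schedule."""
    global target_weights
    if len(replay) < batch_size:
        return
    for s, a, r, s_next, done in random.sample(replay, batch_size):
        # Bootstrapped one-step TD target from the frozen target network.
        target = r if done else r + GAMMA * np.max(q_values(target_weights, s_next))
        td_error = target - q_values(weights, s)[a]
        weights[a] += LEARNING_RATE * td_error * s   # gradient step, linear case
    if frame % TARGET_REFRESH_FRAMES == 0:
        target_weights = weights.copy()
```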