AIBIRDS: The Angry Birds Artificial Intelligence Competition
Authors: Jochen Renz
Venue: AAAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We also summarise some highlights of past competitions, describe which methods were successful, and give an outlook to proposed variants of the competition. The performance of agents is improving significantly, as measured in the Human vs Machine challenge. In 2013, agents were clearly better than beginners, while in 2014 the best agents were already in the top third of human players. Interestingly, the two best teams in 2014 were both new teams, so newcomers have a good chance of doing well and are encouraged to participate. We also benchmarked all teams using the standard game levels. |
| Researcher Affiliation | Academia | Jochen Renz, Research School of Computer Science, The Australian National University, jochen.renz@anu.edu.au |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper mentions 'aibirds.org' and 'aibirds.org/snap' which are websites related to the competition and its Snap! implementation. It also describes modules provided to participants (e.g., computer vision module, trajectory planning module) and a 'Naive Agent'. However, it does not explicitly state that the authors are releasing source code for the methodology or analysis presented in *this specific paper*, nor does it provide a direct link to such code. |
| Open Datasets | No | The paper refers to 'new game levels' and 'standard game levels' within the competition context, but it does not provide concrete access information (e.g., specific link, DOI, repository name, formal citation with authors/year) for a publicly available or open dataset used for training in the traditional machine learning sense. The levels are part of the live game or competition environment. |
| Dataset Splits | No | The paper mentions 'unknown game levels' that participants 'have to solve within a given time limit' and discusses agents being 'benchmarked' and 'ranked'. However, it does not provide specific dataset split information (percentages, sample counts, citations to predefined splits, or detailed splitting methodology) for training, validation, and test sets. It describes a competition environment where agents play levels, rather than a fixed dataset split for model evaluation. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, memory amounts, or detailed computer specifications) used for running any experiments or analysis discussed within the paper. |
| Software Dependencies | No | The paper mentions some software or platforms related to the competition itself, such as 'Box2D (box2d.org)', 'Chrome browser extension', and 'Snap! (snap.berkeley.edu)'. However, it does not provide specific version numbers for these or any other ancillary software dependencies (e.g., programming language versions, specific libraries with versions) that would be needed to replicate any analysis or methods presented in this paper. |
| Experiment Setup | No | The paper describes the general competition setup, such as agents interacting with a server via a 'fixed communication protocol' and receiving 'screenshots' and submitting 'actions'. It also mentions that participants are provided with a 'computer vision module' and a 'trajectory planning module'. However, it does not provide specific experimental setup details like concrete hyperparameter values, training configurations, or system-level settings for any models or methods developed by the authors of *this paper*. A hedged, illustrative sketch of the screenshot-in, action-out agent loop follows the table. |
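
To make the interaction model described in the Experiment Setup row concrete, the following is a minimal sketch of such an agent loop: the server supplies a screenshot, a vision step extracts objects, a trajectory step chooses a shot, and the action is submitted back. All names here (`Shot`, `detect_objects`, `plan_trajectory`, `play_level`) are hypothetical placeholders for illustration only; they are not the actual AIBIRDS framework API or the provided competition modules.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Shot:
    """A single slingshot action: release angle (degrees) and tap time (ms)."""
    angle: float
    tap_time_ms: int


def detect_objects(screenshot: bytes) -> List[Tuple[str, int, int]]:
    """Stand-in for the competition's computer vision module:
    returns (object_type, x, y) triples found in the screenshot."""
    # Hypothetical fixed output so the sketch runs without a real game.
    return [("slingshot", 80, 300), ("pig", 420, 310)]


def plan_trajectory(target: Tuple[int, int]) -> Shot:
    """Stand-in for the competition's trajectory planning module:
    picks a release angle and tap time aimed at the target."""
    x, y = target  # a real planner would fit a parabola through (x, y)
    return Shot(angle=45.0, tap_time_ms=1200)


def play_level(get_screenshot: Callable[[], bytes],
               submit_shot: Callable[[Shot], None],
               max_shots: int = 10) -> None:
    """Naive-agent-style loop: keep shooting at the first detected pig
    until no pigs remain or the shot budget is exhausted."""
    for _ in range(max_shots):
        objects = detect_objects(get_screenshot())
        pigs = [(x, y) for kind, x, y in objects if kind == "pig"]
        if not pigs:
            break  # no pigs left: the level is presumably solved
        submit_shot(plan_trajectory(pigs[0]))


if __name__ == "__main__":
    # Dummy server hooks so the sketch is self-contained.
    play_level(get_screenshot=lambda: b"",
               submit_shot=lambda shot: print("submitted:", shot))
```

In the real competition the equivalent loop runs over the published communication protocol against the competition server; the stubs above only mirror that structure so the sketch is self-contained and runnable.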