Asymmetric Action Abstractions for Multi-Unit Control in Adversarial Real-Time Games

Authors: Rubens Moraes, Levi Lelis

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirical results on combat scenarios that arise in a real-time strategy game show that our search algorithms are able to substantially outperform state-of-the-art approaches."
Researcher Affiliation | Academia | Rubens O. Moraes, Levi H. S. Lelis, Departamento de Informática, Universidade Federal de Viçosa, Brazil {rubens.moraes, levi.lelis}@ufv.br
Pseudocode | Yes | Algorithm 1: Portfolio Greedy Search
Open Source Code | No | The paper links to SparCraft (github.com/davechurchill/ualbertabot/tree/master/SparCraft), which is the testbed/simulation environment used, not open-source code for the authors' specific methods (GAB/SAB) described in the paper.
Open Datasets | No | The paper runs its experiments in the SparCraft simulation environment, detailing combat configurations and unit types. It defines the simulated environment's parameters rather than using or providing access to a pre-existing, publicly available dataset.
Dataset Splits | No | The paper describes simulation setups and evaluates performance over 'matches'; as a simulation-based study rather than one built on a pre-collected dataset, it specifies no training, validation, or test splits.
Hardware Specification | Yes | "All experiments are run on 2.66 GHz CPUs."
Software Dependencies | No | The paper mentions using SparCraft as a testbed but does not specify version numbers for SparCraft or for any other software dependencies or libraries used in the implementation.
Experiment Setup | Yes | "We use P = {NOKAV, Kiter} and a time limit of 40 milliseconds for planning in all experiments. We use the Ψ function described by Churchill et al. (2012). Instead of evaluating state s directly with LTD2, our Ψ simulates the game forward from s for 100 state transition steps until reaching a state s′; we then use the LTD2-value of s′ as the Ψ-value of s. The game is simulated from s according to the NOKAV script for both players."
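The playout-based evaluation Ψ quoted above (roll the state forward under the NOKAV script, then score the resulting state with LTD2) can be sketched as follows. This is a minimal, hypothetical illustration, not SparCraft's actual API: the `Unit` fields, the toy `nokav_step` transition, and all function names are assumptions. Only the LTD2 form (sum of sqrt(hp) × damage-per-frame, own units minus the opponent's) and the 100-step playout horizon come from the paper.

```python
import math
from dataclasses import dataclass

@dataclass
class Unit:
    hp: float     # remaining hit points
    dpf: float    # damage per frame
    owner: int    # 0 or 1

def ltd2(units, player):
    """LTD2 evaluation: sum of sqrt(hp) * dpf over `player`'s units,
    minus the same sum over the opponent's units."""
    score = 0.0
    for u in units:
        v = math.sqrt(u.hp) * u.dpf
        score += v if u.owner == player else -v
    return score

def nokav_step(units):
    """Toy stand-in for one NOKAV ('no overkill, attack value') transition:
    each side focuses its total damage on the opposing unit with the
    lowest hp. Dead units are removed at the end of the step."""
    for side in (0, 1):
        attackers = [u for u in units if u.owner == side and u.hp > 0]
        targets = [u for u in units if u.owner != side and u.hp > 0]
        if attackers and targets:
            target = min(targets, key=lambda u: u.hp)
            target.hp -= sum(a.dpf for a in attackers)
    return [u for u in units if u.hp > 0]

def psi(units, player, steps=100):
    """Ψ: simulate `steps` transitions under the script for both players
    (stopping early if one side is eliminated), then score with LTD2."""
    for _ in range(steps):
        units = nokav_step(units)
        if not any(u.owner == 0 for u in units) or \
           not any(u.owner == 1 for u in units):
            break
    return ltd2(units, player)
```

For example, with two identical units against one, the playout eliminates the lone unit and `psi` returns a positive score for the two-unit side. The design point of Ψ over raw LTD2 is that the forward playout credits positional and targeting advantages that a static material count misses.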