End-to-End Game-Focused Learning of Adversary Behavior in Security Games

Authors: Andrew Perrault, Bryan Wilder, Eric Ewing, Aditya Mate, Bistra Dilkina, Milind Tambe

AAAI 2020, pp. 1378-1386 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We test our approach on a combination of synthetic and human subject data and show that game-focused learning outperforms a two-stage approach in settings where the amount of data available is small and when there is wide variation in the adversary's values for the targets." (A sketch of the game-focused vs. two-stage distinction appears below the table.) |
| Researcher Affiliation | Academia | "(1) Center for Research on Computation and Society, Harvard; (2) Center for Artificial Intelligence in Society, University of Southern California" |
| Pseudocode | No | "The paper describes its approach and flow with diagrams but does not include any pseudocode or algorithm blocks." |
| Open Source Code | No | "The paper does not contain any statement about releasing source code or providing a link to it." |
| Open Datasets | Yes | "We use data from human subject experiments performed by Nguyen et al. (2013)." |
| Dataset Splits | Yes | "Game-tuned two-stage (2S-GT) is a regularized approach that aims to maximize the defender's expected utility when the amount of data is small. It uses Dropout (Srivastava et al. 2014) and a validation set for early stopping." (A sketch of this validation-based early stopping appears below.) |
| Hardware Specification | No | "The paper does not specify any hardware details such as GPU/CPU models, memory, or specific computing environments used for experiments." |
| Software Dependencies | No | "The paper mentions implementing neural networks and using gradient descent, but does not provide specific software dependencies or version numbers (e.g., Python, TensorFlow, PyTorch versions)." |
| Experiment Setup | Yes | "Unless it is varied in an experiment, the parameters are: 1. Number of targets \|T\| ∈ {8, 24}. 2. Features per target \|y\|/\|T\| = 100. 3. Number of training games \|Dtrain\| = 50. ... 6. We fix the attacker's weight on defender coverage to be w = 4 (see Eq. 2)..." (The quoted defaults are collected in a config sketch below.) |
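To make the "Research Type" quote concrete, here is a minimal sketch (not the authors' code) contrasting the two losses it names. Assumptions: a linear attractiveness model, an attacker that quantal-responds to coverage with weight w (suggested by the paper's Eq. 2), and a softmax "soft coverage" rule so the defender's decision stays differentiable. The paper itself differentiates through an equilibrium computation; the toy allocation rule here is a stand-in for illustration only.

```python
import torch
import torch.nn as nn

n_targets, n_features = 8, 100
w = 4.0  # attacker's weight on defender coverage

model = nn.Linear(n_features, 1)  # features -> per-target attractiveness

def attack_probs(attractiveness, coverage):
    # Quantal response: attack probability falls as coverage rises.
    return torch.softmax(attractiveness - w * coverage, dim=-1)

def two_stage_loss(features, past_coverage, attacked_target):
    # Two-stage (2S): fit the adversary model by likelihood alone; the
    # defender's optimization happens separately afterward.
    attractiveness = model(features).squeeze(-1)
    probs = attack_probs(attractiveness, past_coverage)
    return -torch.log(probs[attacked_target])

def game_focused_loss(features, target_values):
    # Game-focused (GF): choose (soft) coverage against the predicted
    # adversary and minimize the defender's expected loss directly, so
    # prediction errors only matter insofar as they hurt the decision.
    attractiveness = model(features).squeeze(-1)
    coverage = torch.softmax(attractiveness + target_values, dim=-1)  # toy rule
    probs = attack_probs(attractiveness, coverage)
    return (probs * target_values * (1.0 - coverage)).sum()

# Gradients flow from the defender's utility back into the predictive model:
features = torch.randn(n_targets, n_features)
values = torch.rand(n_targets)
game_focused_loss(features, values).backward()
```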
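The "Dataset Splits" row implies a split only through the quoted mention of a validation set for early stopping in 2S-GT. Below is a minimal sketch of that regularization, assuming a PyTorch-style training loop (the paper names no framework); the architecture, dropout rate, optimizer, patience, and epoch cap are illustrative assumptions, not values from the paper.

```python
import copy
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(100, 64), nn.ReLU(),
    nn.Dropout(p=0.5),              # Dropout (Srivastava et al. 2014)
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters())

def fit(train_loader, val_loader, loss_fn, patience=10, max_epochs=200):
    best_val, best_state, stall = float("inf"), None, 0
    for _ in range(max_epochs):
        model.train()                           # enable dropout for training
        for x, y in train_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        model.eval()                            # disable dropout for validation
        with torch.no_grad():
            val = sum(loss_fn(model(x), y).item() for x, y in val_loader)
        if val < best_val:
            best_val, stall = val, 0
            best_state = copy.deepcopy(model.state_dict())
        else:
            stall += 1
            if stall >= patience:               # early stopping on validation loss
                break
    model.load_state_dict(best_state)           # restore best validation checkpoint
    return model
```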
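For quick reference, the defaults quoted in the "Experiment Setup" row can be collected into a configuration mapping. Only the parameters quoted above are filled in; items elided by "..." in the quote (3-5 and the remainder of 6) are deliberately omitted rather than guessed.

```python
# Defaults quoted from the paper's experiment setup (names are illustrative).
DEFAULTS = {
    "num_targets": (8, 24),         # |T|, varied between 8 and 24
    "features_per_target": 100,     # |y| / |T|
    "num_training_games": 50,       # |D_train|
    "attacker_coverage_weight": 4,  # w in Eq. 2 of the paper
}
```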