Imitative Attacker Deception in Stackelberg Security Games

Authors: Thanh Nguyen, Haifeng Xu

Venue: IJCAI 2019

Reproducibility Variable | Result | Evidence (LLM Response)
Research Type | Experimental | "Our experiments illustrate significant defender loss due to imitative attacker deception, suggesting the potential side effect of learning from the attacker." (Abstract); "We evaluate the solution quality of our proposed deceptive algorithm." (Section 5, Experiments)
Researcher Affiliation | Academia | University of Oregon; Harvard University
Pseudocode | No | The paper presents mathematical formulations (a MINLP) but does not include structured pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions using GAMUT (http://gamut.stanford.edu/), a third-party tool, but provides no link to or statement about its own source code for the described methodology.
Open Datasets | No | The paper states that games are "generated... using the covariance game generator, GAMUT (http://gamut.stanford.edu/)" but provides no concrete access (link, DOI, or citation) to a publicly available dataset.
Dataset Splits | No | The paper states "Each data point in our results is averaged over 250 different games" but gives no details on training, validation, or test splits, nor any cross-validation setup.
Hardware Specification | No | The paper does not report the hardware (e.g., CPU/GPU models, memory) used to run its experiments.
Software Dependencies | No | The paper mentions the covariance game generator GAMUT but lists no software or library names with version numbers that would be needed to replicate the experiment.
Experiment Setup | Yes | "Each data point in our results is averaged over 250 different games (50 games per covariance value). Finally, we consider two scenarios: (i) small deceptive payoff space with an interval size of I = 1.0; and (ii) large space with I = 2.0."
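The Experiment Setup row pins down the averaging protocol but not the generator parameters. The sketch below is a minimal harness illustrating that protocol under stated assumptions: a 5-point covariance grid (implied by 250 total games at 50 per value, though the paper does not list the grid), a correlated-Gaussian payoff draw as a crude stand-in for GAMUT's covariance game generator, a hypothetical defender_loss stub in place of the paper's MINLP evaluation, and an arbitrary choice of 5 actions per player. None of these placeholders come from the paper itself.

```python
import math
import random
import statistics

# Assumed 5-point covariance grid (250 games / 50 per value); the paper
# does not specify the actual values used.
COVARIANCE_VALUES = [-1.0, -0.5, 0.0, 0.5, 1.0]
GAMES_PER_VALUE = 50          # "50 games per covariance value"
INTERVAL_SIZES = [1.0, 2.0]   # small vs. large deceptive payoff space (I)

def generate_covariance_game(r, n_actions, rng):
    """Crude stand-in for GAMUT's covariance game generator
    (http://gamut.stanford.edu/): per-outcome defender/attacker payoffs
    drawn as Gaussians with correlation r."""
    game = {}
    for i in range(n_actions):
        for j in range(n_actions):
            z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
            u_defender = z1
            u_attacker = r * z1 + math.sqrt(max(0.0, 1.0 - r * r)) * z2
            game[(i, j)] = (u_defender, u_attacker)
    return game

def defender_loss(game, interval_size):
    """Hypothetical stub for the paper's MINLP-based evaluation, in which
    deceptive payoffs are restricted to an interval of size I around the
    true payoffs. Returns a placeholder so the harness runs end to end."""
    return 0.0

def run_experiment(seed=0):
    rng = random.Random(seed)
    results = {}
    for interval in INTERVAL_SIZES:
        losses = [
            defender_loss(generate_covariance_game(r, 5, rng), interval)
            for r in COVARIANCE_VALUES
            for _ in range(GAMES_PER_VALUE)
        ]
        # Each reported data point is the average over all 250 games.
        results[interval] = statistics.mean(losses)
    return results

if __name__ == "__main__":
    print(run_experiment())
```

Replacing defender_loss with a real solver for the paper's MINLP is the missing piece that the No entries above (pseudocode, source code, dependencies) make hard to reconstruct.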