Partial Adversarial Behavior Deception in Security Games

Authors: Thanh H. Nguyen, Arunesh Sinha, He He

IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct a comprehensive set of experiments, showing a significant benefit for the attacker and loss for the defender due to attacker deception." and (Section 6, Experiments) "We analyze the impact of the attacker deception on: (i) the deceptive attacker's utility benefit; (ii) the defender's utility loss; and (iii) the defender's learning outcome."
Researcher Affiliation | Academia | Thanh H. Nguyen (University of Oregon), Arunesh Sinha (Singapore Management University), He He (University of Oregon)
Pseudocode | No | The paper describes algorithms (GOSAQ, GAMBO) but does not present them in a structured pseudocode block or a clearly labeled algorithm figure.
Open Source Code | No | The paper does not link to a code repository or make any statement about releasing source code for the described methodology.
Open Datasets | No | "We use the game generator, GAMUT (http://gamut.stanford.edu) to generate player payoffs. In creating the training dataset, for each game, we generate M = 5 different defense strategies uniformly at random. For each generated strategy x^m, we sample 50 attacks (i.e., sum_i n_i^m = 50) for the boundedly rational attacker with respect to its λ." The paper does not provide a link, DOI, repository name, or formal citation for a publicly available dataset; GAMUT is a game generator, not a dataset.
Dataset Splits | No | The paper describes creating a training dataset (M = 5 defense strategies per game, with 50 attacks sampled per strategy), but it does not specify any training/validation/test splits needed for reproducibility (percentages, absolute counts, or references to predefined splits).
Hardware Specification | No | The paper does not report the hardware used to run the experiments, such as CPU/GPU models, memory, or cloud instances.
Software Dependencies | No | The paper mentions the GAMUT game generator but does not give its version number, nor does it list any other software libraries, frameworks, or solvers with version numbers.
Experiment Setup | Yes | "In creating the training dataset, for each game, we generate M = 5 different defense strategies uniformly at random. For each generated strategy x^m, we sample 50 attacks (i.e., sum_i n_i^m = 50) for the boundedly rational attacker with respect to its λ. We plot the experiment results in three cases (the x-axis in the plotted figures): (i) varying the λ of the boundedly rational attacker; (ii) varying the percentage of deceptive attacks (f/(f+1)); and (iii) varying the resource-target ratio (K/T)."
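The quoted setup (M = 5 random defense strategies per game, 50 attacks sampled per strategy from a boundedly rational attacker parameterized by λ) can be sketched as follows. This is a minimal illustration, not the authors' code: the target count, resource count, payoff ranges, and the use of a quantal-response (logit) attack model are assumptions chosen to make the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 8          # number of targets (illustrative choice)
K = 3          # number of defender resources (illustrative choice)
M = 5          # defense strategies per game, as in the paper
N_ATTACKS = 50 # sampled attacks per strategy, as in the paper
LAM = 1.0      # bounded-rationality parameter λ (varied in the paper)

# Hypothetical attacker payoffs: a reward if the attacked target is
# uncovered, a penalty if it is covered.
reward = rng.uniform(1.0, 10.0, T)
penalty = rng.uniform(-10.0, -1.0, T)

def quantal_response(x, lam):
    """Attack distribution under a quantal-response (logit) model.

    x: per-target coverage probabilities; lam: rationality parameter.
    """
    u = x * penalty + (1.0 - x) * reward   # expected attacker utility
    z = np.exp(lam * (u - u.max()))        # shift by max for stability
    return z / z.sum()

training_data = []
for m in range(M):
    # Random defense strategy: coverage probabilities scaled so that
    # total coverage uses the K resources (clipped to valid [0, 1]).
    x = rng.uniform(0.0, 1.0, T)
    x = np.minimum(1.0, K * x / x.sum())
    q = quantal_response(x, LAM)
    # n_i^m: attack counts per target, with sum_i n_i^m = 50.
    n = rng.multinomial(N_ATTACKS, q)
    training_data.append((x, n))
```

Each `(x, n)` pair corresponds to one defense strategy x^m and its observed attack counts, which is the form of training data the assessed paper describes.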