Behavioral Learning in Security Games: Threat of Multi-Step Manipulative Attacks

Authors: Thanh H. Nguyen, Arunesh Sinha

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we present extensive experimental results on the impact of such misleading attacks, showing a significant benefit for the attacker and loss for the defender."
Researcher Affiliation | Academia | Thanh H. Nguyen (University of Oregon), Arunesh Sinha (Rutgers University); thanhhng@cs.uoregon.edu, arunesh.sinha@rutgers.edu
Pseudocode | Yes | "Algorithm 1: Compute the gradient dx_t/dθ_t"
Open Source Code | No | The paper contains no explicit statement that the source code for the described methodology will be made publicly available, and it provides no link to a code repository.
Open Datasets | No | The paper generates the games for its experiments with Gambit (McKelvey, McLennan, and Turocy 2016) using random payoffs, rather than drawing on a pre-existing, publicly available dataset with concrete access information such as a link or repository.
Dataset Splits | No | The paper mentions 'train', 'validation', and 'test' only in describing the defender's internal learning process (i.e., how the defender would learn the attacker's behavior); it does not specify any dataset splits (e.g., percentages or sample counts) for its own experimental evaluation of the proposed method.
Hardware Specification | Yes | "Our experiments are conducted on a High Performance Computing (HPC) cluster, with dual E5-2690v4 (28 cores) processors and 128 GB memory."
Software Dependencies | No | The paper states 'We use Matlab to implement our algorithms' and mentions Gambit, but it gives version numbers for neither, which reproducible software-dependency information requires.
Experiment Setup | Yes | "In our games, the maximum number of attacks at each time step is limited to K = 50." Each data point is averaged over 60 games.