Deception in Finitely Repeated Security Games

Authors: Thanh H. Nguyen, Yongzhao Wang, Arunesh Sinha, Michael P. Wellman (pp. 2133-2140)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Computational experiments illuminate conditions conducive to strategic deception, and quantify benefits to the attacker. Finally, we present a detailed experimental analysis of strategic deception, showing how various game factors affect the tendency for the attacker to deviate from myopic best responses to mislead the defender."
Researcher Affiliation | Academia | 1: University of Oregon, thanhhng@cs.uoregon.edu; 2: University of Michigan, {wangyzh,arunesh,wellman}@umich.edu
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | No | "In our experiments, the players' rewards and penalties are generated uniformly at random in the range [1, 10] and [-10, -1], respectively."
Dataset Splits | No | The paper describes generating game instances randomly for its experiments but does not provide dataset split information (e.g., train/validation/test percentages or counts) for reproduction.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used to run its experiments.
Software Dependencies | No | The paper does not list ancillary software dependencies with version numbers.
Experiment Setup | Yes | "In our experiments, the players' rewards and penalties are generated uniformly at random in the range [1, 10] and [-10, -1], respectively. We analyze games with number of attacker types |Λ| = 2, number of targets |N| ∈ {4, 6, 8, 10, 12}, and number of time steps |T| ∈ {2, 3}. In our third experiment, we vary the number of defender resources in 2-step games with 2 attacker types."
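The quoted experiment setup can be sketched as a random game-instance generator. This is a minimal illustration, not the authors' code: it assumes payoffs are drawn independently per target and per attacker type, and the function and field names are hypothetical.

```python
import random

def generate_game(num_targets, num_attacker_types, seed=None):
    """Draw random payoffs per the paper's stated ranges:
    rewards uniform in [1, 10], penalties uniform in [-10, -1].
    Returns one payoff table per attacker type."""
    rng = random.Random(seed)
    game = []
    for _ in range(num_attacker_types):
        payoffs = []
        for _ in range(num_targets):
            payoffs.append({
                "attacker_reward": rng.uniform(1, 10),
                "attacker_penalty": rng.uniform(-10, -1),
                "defender_reward": rng.uniform(1, 10),
                "defender_penalty": rng.uniform(-10, -1),
            })
        game.append(payoffs)
    return game

# Sweep the reported grid: |Lambda| = 2 attacker types,
# |N| in {4, 6, 8, 10, 12} targets (time steps |T| in {2, 3}
# would govern the repeated-game horizon, not payoff generation).
instances = {n: generate_game(n, num_attacker_types=2, seed=0)
             for n in (4, 6, 8, 10, 12)}
```

Each experiment would then be run over many such independently drawn instances, with the number of defender resources varied as a separate parameter.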