Three Strategies to Success: Learning Adversary Models in Security Games
Authors: Nika Haghtalab, Fei Fang, Thanh H. Nguyen, Arunesh Sinha, Ariel D. Procaccia, Milind Tambe
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We also validate our approach using experiments on real and synthetic data. We show, via experiments on synthetic data, that a realistic number of samples for each of three strategies suffices to accurately learn the adversary model under generalized SUQR. We also show, using experiments on human subject data, that our main theoretical result provides guidance on selecting strategies to use for learning. |
| Researcher Affiliation | Academia | Nika Haghtalab (Carnegie Mellon U., nhaghtal@cs.cmu.edu); Fei Fang (U. Southern California, feifang@usc.edu); Thanh H. Nguyen (U. Southern California, thanhhng@usc.edu); Arunesh Sinha (U. Southern California, aruneshs@usc.edu); Ariel D. Procaccia (Carnegie Mellon U., arielpro@cs.cmu.edu); Milind Tambe (U. Southern California, tambe@usc.edu) |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide open-source code for the methodology described. |
| Open Datasets | Yes | In addition to testing on synthetic data, we tested on human subject data collected by Nguyen et al. [2013] (data set 1) and Kar et al. [2015] (data set 2). |
| Dataset Splits | No | The paper mentions generating a test set for synthetic data and selecting training sets from human subject data, but it does not specify explicit training/validation/test splits for either experimental setup; in particular, no validation set is described. |
| Hardware Specification | No | The paper does not provide any specific hardware details used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | For each set of true parameter values, we generated 20 different sets of three defender strategies, each with a total coverage probability of 3 (i.e., there are three defender resources). The three defender strategies in each set are chosen to be sufficiently different such that the minimum difference in coverage probability between the strategies is at least 0.12. The attack samples are drawn from corresponding attack probability distributions with the number of samples ranging from 500 to 10000. For each payoff structure, we selected two sets of three strategies such that λ is maximized and minimized, respectively. We consider these two sets as the two different training sets for learning the utility parameters, wt and ct. |
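
To make the quoted experiment setup concrete, the sketch below simulates the synthetic-data pipeline under one plausible reading of the generalized SUQR model: each target t has subjective-utility parameters w_t and c_t, and the attack probability on t is proportional to exp(c_t - w_t * x_t), where x_t is the defender's coverage on t. The functional form, the number of targets, the parameter ranges, and the interpretation of the 0.12 "minimum difference" condition (every pair of strategies differs by at least 0.12 on some target) are assumptions rather than details confirmed by the paper, and the function and variable names are hypothetical.

```python
# Hypothetical sketch of the synthetic-data experiment under a generalized
# SUQR adversary with per-target parameters w_t, c_t (functional form assumed).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T = 8                   # number of targets (illustrative; not stated in the excerpt)
K = 3                   # three defender strategies per set
TOTAL_COVERAGE = 3.0    # "total coverage probability of 3" = three resources

def suqr_attack_probs(x, w, c):
    """Attack distribution q_t proportional to exp(c_t - w_t * x_t)."""
    logits = c - w * x
    logits -= logits.max()              # numerical stability
    e = np.exp(logits)
    return e / e.sum()

def random_strategy():
    """Coverage vector with entries in [0, 1] summing to TOTAL_COVERAGE."""
    while True:
        x = rng.dirichlet(np.ones(T)) * TOTAL_COVERAGE
        if np.all(x <= 1.0):
            return x

def strategy_set(min_diff=0.12):
    """K strategies; every pair differs by >= min_diff on at least one target
    (one possible reading of the paper's 'minimum difference' condition)."""
    while True:
        S = [random_strategy() for _ in range(K)]
        pair_gaps = [np.abs(a - b).max()
                     for i, a in enumerate(S) for b in S[i + 1:]]
        if min(pair_gaps) >= min_diff:
            return np.array(S)

# Ground-truth parameters and simulated attacks against each deployed strategy.
w_true = rng.uniform(1.0, 10.0, size=T)      # illustrative parameter ranges
c_true = rng.uniform(-1.0, 1.0, size=T)
X = strategy_set()
n_samples = 1000                             # the paper varies this from 500 to 10000
counts = np.array([
    rng.multinomial(n_samples, suqr_attack_probs(x, w_true, c_true))
    for x in X
])

def neg_log_likelihood(theta):
    """Multinomial negative log-likelihood of the observed attacks."""
    w, c = theta[:T], theta[T:]
    nll = 0.0
    for x, n in zip(X, counts):
        nll -= np.dot(n, np.log(suqr_attack_probs(x, w, c) + 1e-12))
    return nll

# Maximum-likelihood estimate of (w, c); note that c is identified only up to
# an additive constant, since shifting every c_t leaves the softmax unchanged.
theta0 = np.concatenate([np.ones(T), np.zeros(T)])
fit = minimize(neg_log_likelihood, theta0, method="L-BFGS-B")
w_hat, c_hat = fit.x[:T], fit.x[T:]
print("recovered w:", np.round(w_hat, 2))
```

Sweeping `n_samples` over the 500 to 10,000 range quoted above and repeating across the 20 generated strategy sets would approximate the kind of sample-complexity comparison the paper reports; the paper's actual estimator and parameterization may differ from this sketch.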