Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Three Strategies to Success: Learning Adversary Models in Security Games
Authors: Nika Haghtalab, Fei Fang, Thanh H. Nguyen, Arunesh Sinha, Ariel D. Procaccia, Milind Tambe
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show, via experiments on synthetic data, that a realistic number of samples for each of three strategies suffices to accurately learn the adversary model under generalized SUQR. We also show, using experiments on human subject data, that our main theoretical result provides guidance on selecting strategies to use for learning. |
| Researcher Affiliation | Academia | Nika Haghtalab, Carnegie Mellon U., EMAIL; Fei Fang, U. Southern California, EMAIL; Thanh H. Nguyen, U. Southern California, EMAIL; Arunesh Sinha, U. Southern California, EMAIL; Ariel D. Procaccia, Carnegie Mellon U., EMAIL; Milind Tambe, U. Southern California, EMAIL |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide open-source code for the methodology described. |
| Open Datasets | Yes | In addition to testing on synthetic data, we tested on human subject data collected by Nguyen et al. [2013] (data set 1) and Kar et al. [2015] (data set 2). |
| Dataset Splits | No | The paper mentions generating a test set for synthetic data and selecting training sets from human subject data, but it does not specify explicit training/validation/test splits, particularly for validation sets, for either experimental setup. |
| Hardware Specification | No | The paper does not provide any specific hardware details used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | For each set of true parameter values, we generated 20 different sets of three defender strategies, each with a total coverage probability of 3 (i.e., there are three defender resources). The three defender strategies in each set are chosen to be sufficiently different such that the minimum difference in coverage probability between the strategies is at least 0.12. The attack samples are drawn from corresponding attack probability distributions with the number of samples ranging from 500 to 10000. For each payoff structure, we selected two sets of three strategies such that λ is maximized and minimized, respectively. We consider these two sets as the two different training sets for learning the utility parameters, wt and ct. |
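The experiment-setup procedure quoted above can be sketched in code. The following is a minimal, hedged illustration, not the authors' implementation: it generates three defender coverage vectors whose probabilities sum to 3 (three resources), checks the minimum pairwise coverage gap against the 0.12 threshold, and draws attack samples from a softmax (SUQR-style) attack distribution. All function names and the specific utility parameterization (`w`, `c`, the reward vector) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_strategy(n_targets=8, resources=3.0):
    """Random coverage vector over targets summing to `resources` (before capping at 1)."""
    x = rng.dirichlet(np.ones(n_targets)) * resources
    return np.clip(x, 0.0, 1.0)  # coverage probability per target is at most 1

def min_pairwise_diff(strategies):
    """Smallest per-target coverage gap between any pair of strategies."""
    gaps = []
    for i in range(len(strategies)):
        for j in range(i + 1, len(strategies)):
            gaps.append(np.abs(strategies[i] - strategies[j]).min())
    return min(gaps)

def suqr_attack_probs(coverage, w=-8.0, c=0.5, reward=None):
    """SUQR-style attack distribution: softmax of a linear utility in
    coverage and target reward (illustrative parameter values)."""
    if reward is None:
        reward = np.ones_like(coverage)
    q = w * coverage + c * reward
    e = np.exp(q - q.max())  # subtract max for numerical stability
    return e / e.sum()

# Generate a set of three sufficiently different strategies, as in the paper's setup.
strategies = [random_strategy() for _ in range(3)]
sufficiently_different = min_pairwise_diff(strategies) >= 0.12

# Draw attack samples (the paper uses 500 to 10000) from the induced distribution.
probs = suqr_attack_probs(strategies[0])
samples = rng.choice(len(probs), size=500, p=probs)
```

In the paper's actual pipeline, sets failing the 0.12 difference check would be regenerated, and samples from all three strategies feed the parameter estimation for `wt` and `ct`; the sketch only shows the sampling side.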