Regret-Based Optimization and Preference Elicitation for Stackelberg Security Games with Uncertainty
Authors: Thanh Nguyen, Amulya Yadav, Bo An, Milind Tambe, Craig Boutilier
AAAI 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results validate the effectiveness of our approaches. |
| Researcher Affiliation | Academia | 1. University of Southern California, Los Angeles, CA 90089 ({thanhhng, amulyaya, tambe}@usc.edu); 2. Nanyang Technological University, Singapore 639798 (boan@ntu.edu.sg); 3. University of Toronto, Canada M5S 3H5 (cebly@cs.toronto.edu) |
| Pseudocode | Yes | Algorithm 1: Constraint-generation (MIRAGE) |
| Open Source Code | No | The paper does not provide concrete access to its source code, nor does it explicitly state that the code is available. |
| Open Datasets | No | The paper states that games were 'generated using GAMUT' and describes the generation process, but it does not reference or provide access to a publicly available, pre-existing dataset. |
| Dataset Splits | No | The paper describes how game instances were randomly generated and evaluated, but it does not specify explicit training, validation, or test dataset splits. |
| Hardware Specification | Yes | All experiments were run on a 2.83GHz Intel processor with 4GB of RAM |
| Software Dependencies | Yes | using CPLEX 12.3 for LP/MILPs and KNITRO 8.0.0.z for nonlinear optimization. |
| Experiment Setup | Yes | Upper and lower bounds for payoff intervals are generated randomly from [−14, −1] for penalties and [1, 14] for rewards, with the difference between the upper and lower bound (i.e., interval size) exactly 2 (this gives payoff uncertainty of roughly 30%). All results are averaged over 120 instances (20 games per covariance value) and use eight defender resources unless otherwise specified. (An illustrative sketch of this payoff-interval generation appears below the table.) |
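
The Experiment Setup row describes how payoff intervals were randomly generated for each game instance. The following is a minimal sketch of that generation step, assuming one reward interval and one penalty interval per target; the function name, structure, and seeding are illustrative assumptions, not taken from the paper or its code.

```python
import random

def generate_payoff_intervals(num_targets, interval_size=2.0, seed=None):
    """Illustrative sketch: draw per-target payoff intervals with
    rewards in [1, 14], penalties in [-14, -1], and each interval
    exactly `interval_size` wide, as described in the experiment setup.
    Names and structure are assumptions for illustration only."""
    rng = random.Random(seed)
    intervals = []
    for _ in range(num_targets):
        # Lower bound of the reward interval; upper bound is lower + interval_size.
        reward_lo = rng.uniform(1, 14 - interval_size)
        # Lower bound of the penalty interval (penalties are negative).
        penalty_lo = rng.uniform(-14, -1 - interval_size)
        intervals.append({
            "reward": (reward_lo, reward_lo + interval_size),
            "penalty": (penalty_lo, penalty_lo + interval_size),
        })
    return intervals

if __name__ == "__main__":
    # Example: three targets with a fixed seed for repeatability.
    for interval in generate_payoff_intervals(num_targets=3, seed=0):
        print(interval)
```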