Explicit Defense Actions Against Test-Set Attacks
Authors: Scott Alfeld, Xiaojin Zhu, Paul Barford
AAAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Using these methods, we perform an empirical investigation of optimal defense actions for a particular class of linear models (autoregressive forecasters) and find that for ten real-world futures markets, the optimal defense action reduces Bob's loss by between 78 and 97%. (See the illustrative AR-forecaster sketch after the table.) |
| Researcher Affiliation | Collaboration | Scott Alfeld, Xiaojin Zhu, Paul Barford; Department of Computer Sciences, University of Wisconsin-Madison, Madison, WI 53706, USA; comScore, Inc., 11950 Democracy Drive, Suite 600, Reston, VA 20190, USA. |
| Pseudocode | No | The paper does not contain explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about open-source code availability or a link to a repository. |
| Open Datasets | Yes | Data is freely available from www.quandl.com. Identification codes for individual datasets are provided in Figure 1. |
| Dataset Splits | No | The paper does not explicitly state training/validation/test dataset splits. |
| Hardware Specification | No | The paper does not provide an explicit specification of the hardware used for its experiments. |
| Software Dependencies | Yes | All figures were made with Matplotlib (Hunter 2007) v 1.5.1. |
| Experiment Setup | No | The paper mentions some specific experimental settings but does not provide a complete description of the experiment setup. |
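
The Research Type row quotes the paper's use of autoregressive forecasters on futures-market data. As a point of reference for that model class, below is a minimal sketch of an AR(p) forecaster fit by ordinary least squares on a synthetic series. The function names, the synthetic data, and the evaluation here are assumptions for illustration only; they are not the paper's implementation and do not reproduce its defense-action computation.

```python
# Illustrative AR(p) forecaster fit by ordinary least squares.
# NOT the paper's code; data and names are assumptions for illustration.
import numpy as np

def fit_ar(series, p):
    """Estimate AR(p) coefficients (ordered oldest lag to most recent) by least squares."""
    n = len(series)
    X = np.column_stack([series[i:n - p + i] for i in range(p)])
    y = series[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def forecast_one_step(series, coeffs):
    """Predict the next value from the last p observations."""
    p = len(coeffs)
    return float(series[-p:] @ coeffs)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in for a price series: AR(2) with Gaussian noise.
    n = 500
    series = np.zeros(n)
    for t in range(2, n):
        series[t] = 0.5 * series[t - 1] + 0.3 * series[t - 2] + rng.normal(scale=0.1)
    coeffs = fit_ar(series, p=2)
    print("estimated AR coefficients (lag-2, lag-1):", coeffs)
    print("one-step forecast:", forecast_one_step(series, coeffs))
```

The least-squares fit is chosen here only because it is the simplest way to obtain AR coefficients in a self-contained example; the paper's defense-action analysis operates on top of such forecasters rather than on how they are fit.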