Attacking Data Transforming Learners at Training Time

Authors: Scott Alfeld, Ara Vartanian, Lucas Newman-Johnson, Benjamin I.P. Rubinstein

AAAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We focus our empirical experiments on the setting where Bob learns an autoregressive model. ... Experiments were coded using NumPy (Oliphant 2006) and run on the Google Compute Engine platform. Results are shown in Table 1.
Researcher Affiliation | Academia | 1 Department of Computer Science, Amherst College; 2 Department of Computer Sciences, University of Wisconsin-Madison; 3 School of Computing and Information Systems, University of Melbourne
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described, nor does it explicitly state that the code is open source or available for download.
Open Datasets | No | The paper uses 'synthetic data' generated by the authors, stating: 'To generate the synthetic series of daily values we generated 100 models of degree d = 7 (one week) with the following sampling procedure: We sampled model θ from N(0, I_d) and tested it for stationarity.' (See the sampling sketch below the table.)
Dataset Splits | No | The paper generates synthetic data and describes the process by which Bob trains his model, but it does not provide specific dataset-split information (exact percentages, sample counts, or detailed splitting methodology) for the experimental evaluation of Alice's attacks.
Hardware Specification | No | The paper states, 'Experiments were coded using NumPy (Oliphant 2006) and run on the Google Compute Engine platform.' This names a platform but lacks specific hardware details such as exact CPU/GPU models or memory amounts.
Software Dependencies | No | The paper mentions 'Experiments were coded using NumPy (Oliphant 2006)' but does not provide version numbers for NumPy or any other ancillary software components.
Experiment Setup | Yes | Each attacker ran projected gradient descent with step size η = 0.1 and terminated when the greatest (absolute) difference between Alice's loss on the current iteration and any of the past 10 iterations was less than 1/1000. (See the descent-loop sketch below the table.)
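
The quoted sampling procedure under Open Datasets says only that each model θ was drawn from N(0, I_d) and then "tested for stationarity", without saying how the test was performed. The sketch below is one illustrative reading in Python, assuming the usual AR(d) stationarity check via the eigenvalues of the companion matrix; the function names and the rejection-sampling loop are assumptions, not the authors' code.

import numpy as np

def is_stationary(theta):
    # Assumed test: an AR(d) model with coefficient vector theta is
    # treated as stationary iff all eigenvalues of its companion matrix
    # lie strictly inside the unit circle.
    d = len(theta)
    companion = np.zeros((d, d))
    companion[0, :] = theta
    companion[1:, :-1] = np.eye(d - 1)
    return bool(np.all(np.abs(np.linalg.eigvals(companion)) < 1.0))

def sample_stationary_ar_models(n_models=100, d=7, seed=0):
    # Rejection sampling as described in the quote: draw theta ~ N(0, I_d)
    # and keep it only if it passes the (assumed) stationarity test.
    rng = np.random.default_rng(seed)
    models = []
    while len(models) < n_models:
        theta = rng.standard_normal(d)
        if is_stationary(theta):
            models.append(theta)
    return models

How each accepted θ was then unrolled into a daily-valued series is not covered by the excerpt, so the sketch stops at model sampling.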
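
The Experiment Setup row specifies the attacker's optimizer only by its step size and stopping rule. The loop below is a generic projected-gradient-descent sketch in Python that wires those two details together; loss, grad, and project are placeholder callables for Alice's objective, its gradient, and the projection onto her feasible set, none of which are given in the excerpt.

import numpy as np

def projected_gradient_descent(x0, loss, grad, project,
                               eta=0.1, window=10, tol=1e-3, max_iters=10_000):
    # Stopping rule from the excerpt: terminate once the current loss
    # differs from every one of the previous `window` losses by less
    # than `tol` in absolute value.
    x = project(np.asarray(x0, dtype=float))
    past_losses = []
    for _ in range(max_iters):
        x = project(x - eta * grad(x))
        current = loss(x)
        if len(past_losses) >= window and \
                max(abs(current - prev) for prev in past_losses[-window:]) < tol:
            break
        past_losses.append(current)
    return x

# Toy usage (not from the paper): minimize ||x - 1||^2 over the box [0, 0.5]^3.
x_star = projected_gradient_descent(
    x0=np.zeros(3),
    loss=lambda x: float(np.sum((x - 1.0) ** 2)),
    grad=lambda x: 2.0 * (x - 1.0),
    project=lambda x: np.clip(x, 0.0, 0.5),
)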