Using Machine Teaching to Identify Optimal Training-Set Attacks on Machine Learners
Authors: Shike Mei, Xiaojin Zhu
AAAI 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper's "Experiments" section opens: "Using the procedure developed in the previous section, we present empirical experiments on training-set attacks." |
| Researcher Affiliation | Academia | Shike Mei and Xiaojin Zhu Department of Computer Sciences, University of Wisconsin-Madison, Madison WI 53706, USA {mei, jerryzhu}@cs.wisc.edu |
| Pseudocode | No | The paper describes the general gradient-descent procedure in prose but does not include any structured pseudocode or algorithm blocks (a hedged sketch of such a procedure appears after this table). |
| Open Source Code | No | The paper does not provide any concrete access to source code (specific repository link, explicit code release statement, or code in supplementary materials) for the methodology described in this paper. |
| Open Datasets | Yes | Each experiment draws its clean data D0 from a public source: "D0 is the wine quality data set (Cortez et al. 2009)"; "D0 is the Spambase data set (Bache and Lichman 2013)"; "D0 comes from the Wisconsin State Climatology Office, and consists of annual number of frozen days for Lake Mendota in Midwest USA from 1900 to 2000." |
| Dataset Splits | No | The paper mentions cross-validation for setting regularization parameters but does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) for the main experiments. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper names the "LIBLINEAR SVM implementation" and "logistic regression in the LIBLINEAR package" as tools used, but gives no version numbers for these or any other ancillary software components, which reproducibility would require (see the learner-configuration sketch after this table). |
| Experiment Setup | Yes | One experiment reports: "The regularization parameter C in the learner is set to 1... We let the attacker effort function be Eq (17) with the weight λ = 0.1. The step length αt of gradient defined in Eq (9) is set to αt = 0.5/t." Another reports: "...regularization parameter C = 0.01 set separately by cross validation. ...with the weight λ = 0.01. The step length αt of gradient defined in Eq (9) is set to αt = 1/t." These values are echoed in the sketch below. |
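
To make the quoted gradient procedure and hyperparameters concrete, the following is a minimal sketch of a bilevel training-set attack in the paper's spirit, not a reconstruction of its implementation. The learner here is a ridge-style linear regression solved in closed form; the synthetic data, the attacker target `w_star`, and the choice to poison only the labels are assumptions made for illustration. The effort weight λ = 0.1, learner regularization C = 1, and step length αt = 0.5/t follow the values quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y0 = X @ w_true + 0.1 * rng.normal(size=n)      # clean training labels (the D0 above)

C, lam = 1.0, 0.1                               # learner regularization C and effort weight lambda
w_star = np.array([-1.0, 2.0, -0.5])            # attacker's desired model (assumed for illustration)

# Lower level solved in closed form: the learner maps labels to weights linearly,
# w_hat(y) = M @ y with M = (X^T X + I/C)^{-1} X^T  (ridge-style regression).
M = np.linalg.solve(X.T @ X + np.eye(d) / C, X.T)

# Upper level: gradient descent on the poisoned labels y, minimizing
#   ||w_hat(y) - w_star||^2 + lam * ||y - y0||^2   (attacker risk + effort).
y = y0.copy()
for t in range(1, 201):
    grad = 2 * M.T @ (M @ y - w_star) + 2 * lam * (y - y0)
    y -= (0.5 / t) * grad                       # quoted step length: alpha_t = 0.5 / t

print("model on clean data:   ", M @ y0)
print("model on poisoned data:", M @ y)
```

Because this learner's weights are linear in the labels, the gradient through the lower-level training problem is exact; for learners without a closed form, such as SVMs or logistic regression, the paper obtains this gradient by differentiating the learner's KKT conditions.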
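On the dependency gap noted in the table: the paper's learners were trained with LIBLINEAR, whose version is unrecorded. The snippet below uses scikit-learn's liblinear-backed estimators as an assumed stand-in (the pairing of the quoted C values with the SVM and logistic-regression learners is likewise an assumption), and shows one way to log the exact library version alongside a run.

```python
import sklearn
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression

# Assumed stand-ins for the paper's LIBLINEAR-trained learners.
svm = LinearSVC(C=1.0)                                   # quoted: "C in the learner is set to 1"
logreg = LogisticRegression(C=0.01, solver="liblinear")  # quoted: "C = 0.01 set separately by cross validation"

# Recording exact versions with each run is the detail the paper omits.
print("scikit-learn version:", sklearn.__version__)
```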