What Distributions are Robust to Indiscriminate Poisoning Attacks for Linear Learners?

Authors: Fnu Suya, Xiao Zhang, Yuan Tian, David Evans

Venue: NeurIPS 2023

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Empirically, we find the discovered learning task properties and the gained theoretical insights largely explain the drastic difference in attack performance observed for state-of-the-art indiscriminate poisoning attacks on linear models across benchmark datasets (Section 6). Figure 1 shows the highest error from the tested poisoning attacks (they perform similarly in most cases) on linear SVM. |
| Researcher Affiliation | Academia | Fnu Suya^1, Xiao Zhang^2, Yuan Tian^3, David Evans^1. ^1 University of Virginia, ^2 CISPA Helmholtz Center for Information Security, ^3 University of California, Los Angeles. suya@virginia.edu, xiao.zhang@cispa.de, yuant@ucla.edu, evans@virginia.edu |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide a statement or link indicating that code for the described methodology is open-sourced. |
| Open Datasets | Yes | We evaluate the state-of-the-art data poisoning attacks for linear models... on benchmark datasets including different MNIST [28] digit pairs... and other benchmark datasets used in prior evaluations including Dogfish [24], Enron [36] and Adult [22, 46]. (See the data-loading sketch after this table.) |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, or citations to predefined splits) for train/validation/test sets. |
| Hardware Specification | No | The paper mentions that 'The poisoning attacks can also be done on a laptop, except the Influence Attack [25], whose computation can be accelerated using GPUs,' but does not provide specific hardware models or detailed specifications. |
| Software Dependencies | No | The paper references 'Scikit-learn: Machine learning in Python' [38] but does not specify a version number or other software dependencies with their versions. |
| Experiment Setup | Yes | We choose 3% as the poisoning rate following previous works [45, 25, 31, 32]. Appendix D.1 provides details on the experimental setup. ... The regularization parameter λ for training the linear models (SVM and LR) are configured as follows: λ = 0.09 for MNIST digit pairs, Adult, Dogfish, SVM for Enron; λ = 0.01 for IMDB, LR for Enron. (See the configuration sketch after this table.) |
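The benchmark tasks above are binary classification problems built from public datasets. A minimal sketch of how an MNIST digit-pair task can be constructed is shown below; the specific digit pair (1 vs. 7), the use of `fetch_openml`, and the pixel scaling are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: building an MNIST digit-pair binary task for a linear learner.
# The digit pair (1 vs. 7) and the use of fetch_openml are illustrative
# assumptions; the paper does not prescribe a loading procedure.
import numpy as np
from sklearn.datasets import fetch_openml

def load_mnist_pair(digit_a=1, digit_b=7):
    X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
    y = y.astype(int)
    mask = (y == digit_a) | (y == digit_b)
    X, y = X[mask] / 255.0, y[mask]          # scale pixels to [0, 1]
    labels = np.where(y == digit_a, -1, 1)   # binary labels in {-1, +1}
    return X, labels
```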
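The Experiment Setup row quotes concrete values (3% poisoning rate, λ = 0.09 or 0.01 per dataset). A minimal configuration sketch follows, assuming the paper's λ maps onto scikit-learn's `SGDClassifier` parameter `alpha` (both weight an L2 penalty added to the average loss); that mapping and the `max_iter` value are assumptions, not stated in the quoted text.

```python
# Minimal sketch of the reported training configuration. Assumes the paper's
# regularization parameter lambda corresponds to SGDClassifier's `alpha`
# (L2 penalty weight on the averaged objective); this mapping is an assumption.
from sklearn.linear_model import SGDClassifier

# Per-task lambda values quoted in the Experiment Setup row.
LAMBDA = {"mnist_pair": 0.09, "adult": 0.09, "dogfish": 0.09,
          "enron_svm": 0.09, "imdb": 0.01, "enron_lr": 0.01}

def make_model(task, learner="svm"):
    # hinge loss gives a linear SVM; log_loss gives logistic regression
    loss = "hinge" if learner == "svm" else "log_loss"
    return SGDClassifier(loss=loss, penalty="l2", alpha=LAMBDA[task],
                         max_iter=2000)

def n_poison(n_train, rate=0.03):
    # 3% poisoning rate used in the paper, following prior work [45, 25, 31, 32]
    return int(rate * n_train)
```

For example, `make_model("enron_lr", learner="lr")` yields a logistic regression with λ = 0.01, matching the quoted per-dataset settings under the stated assumption.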