Adversarial Examples Make Strong Poisons

Authors: Liam Fowl, Micah Goldblum, Ping-yeh Chiang, Jonas Geiping, Wojciech Czaja, Tom Goldstein

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate how poisoning attacks based on adversarial examples impact the performance of trained models. We conduct experiments on various datasets and models, including CIFAR-10 with ResNet-18 and PreActResNet-18, and MNIST with LeNet-5. Our results show that adversarial examples are effective for data poisoning, significantly degrading the target model’s performance even with small poisoning rates.
Researcher Affiliation | Academia | Liam Fowl, Micah Goldblum, Ping-yeh Chiang, Jonas Geiping, Wojciech Czaja, Tom Goldstein (University of Maryland, College Park)
Pseudocode | Yes | Algorithm 1: Adversarial Poisoning Attack (APA); a minimal PGD-based sketch of this crafting step appears below the table.
Open Source Code | No | The paper does not contain any explicit statement about releasing the source code for the methodology, nor does it provide a direct link to a code repository.
Open Datasets | Yes | We conduct experiments on CIFAR-10 [Krizhevsky et al., 2009] and MNIST [LeCun et al., 1998].
Dataset Splits | Yes | For CIFAR-10 and MNIST, we follow the standard training/testing split; a loading sketch appears below the table.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU types, or memory) used for running its experiments. It only mentions general training without specifying the computational environment.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python version, PyTorch version, specific library versions). It only mentions general frameworks or tools without version details.
Experiment Setup | Yes | For CIFAR-10, we train ResNet-18 and PreActResNet-18 for 200 epochs using SGD with a momentum of 0.9 and a weight decay of 5e-4. The learning rate is initialized to 0.1 and decayed by a factor of 0.1 at epochs 100 and 150. For MNIST, we train LeNet-5 for 100 epochs using the Adam optimizer with a learning rate of 1e-3. The batch size for all experiments is 128. For PGD attacks, we use ε=0.03 for MNIST and ε=8/255 for CIFAR-10, with step size 2/255 and 7 iterations for both. A training sketch based on these hyperparameters appears below the table.
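
The Open Datasets and Dataset Splits rows state that the experiments use CIFAR-10 and MNIST with their standard train/test splits. A minimal loading sketch via torchvision follows; the "data" directory and the bare ToTensor transform are assumptions, not details taken from the paper.

```python
# Minimal sketch of the standard CIFAR-10 / MNIST train-test splits via torchvision.
# The "data" root directory and the plain ToTensor transform are assumptions.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

cifar_train = datasets.CIFAR10("data", train=True, download=True, transform=to_tensor)
cifar_test = datasets.CIFAR10("data", train=False, download=True, transform=to_tensor)
mnist_train = datasets.MNIST("data", train=True, download=True, transform=to_tensor)
mnist_test = datasets.MNIST("data", train=False, download=True, transform=to_tensor)
```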
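
The Pseudocode row refers to Algorithm 1, an adversarial poisoning attack that crafts adversarial perturbations for the training data against a pretrained crafting model and releases the perturbed images as poisons. The PyTorch sketch below illustrates that idea with a generic untargeted PGD loop using the ε, step size, and iteration count quoted in the Experiment Setup row; the function name craft_poisons, the untargeted loss, and the choice to keep the original labels are assumptions rather than the paper's exact procedure.

```python
# Hypothetical sketch of PGD-based poison crafting against a pretrained model.
# Hyperparameter defaults mirror the CIFAR-10 values quoted in the Experiment
# Setup row; the untargeted loss and retained labels are illustrative assumptions.
import torch
import torch.nn.functional as F

def craft_poisons(model, images, labels, eps=8 / 255, step=2 / 255, iters=7):
    """Perturb clean images within an L-infinity ball of radius eps so that the
    crafting model misclassifies them; return the perturbed images with their
    original labels as the poisoned training batch."""
    model.eval()
    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(iters):
        loss = F.cross_entropy(model(images + delta), labels)
        loss.backward()
        # Gradient-ascent step on the loss (move away from the correct label),
        # then project back into the eps-ball and the valid pixel range [0, 1].
        delta.data = (delta.data + step * delta.grad.sign()).clamp(-eps, eps)
        delta.data = (images + delta.data).clamp(0, 1) - images
        delta.grad.zero_()
    return (images + delta).detach(), labels
```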
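
The Experiment Setup row specifies the CIFAR-10 training schedule: SGD with momentum 0.9 and weight decay 5e-4, learning rate 0.1 decayed by a factor of 0.1 at epochs 100 and 150, 200 epochs, batch size 128. A minimal training sketch under those settings follows; the function name train_victim and the device handling are assumptions.

```python
# Sketch of the CIFAR-10 training schedule from the Experiment Setup row.
# Model construction and data loading are assumed to happen elsewhere.
from torch import nn, optim

def train_victim(model, train_loader, device="cuda", epochs=200):
    model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
    # Decay the learning rate by a factor of 0.1 at epochs 100 and 150.
    scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[100, 150], gamma=0.1)
    for _ in range(epochs):
        model.train()
        for inputs, targets in train_loader:
            inputs, targets = inputs.to(device), targets.to(device)
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            loss.backward()
            optimizer.step()
        scheduler.step()
    return model
```

In a poisoning evaluation along the lines described above, train_loader would be built from the output of a crafting step such as the craft_poisons sketch, using the batch size of 128 quoted in the setup; this pipeline is an assumed composition of the sketches, not the paper's released code.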