Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks

Authors: Ali Shafahi, W. Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, Tom Goldstein

NeurIPS 2018

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We demonstrate our method by generating poisoned frog images from the CIFAR dataset and using them to manipulate image classifiers." |
| Researcher Affiliation | Academia | Ali Shafahi (University of Maryland, ashafahi@cs.umd.edu); W. Ronny Huang (University of Maryland, wrhuang@umd.edu); Mahyar Najibi (University of Maryland, najibi@cs.umd.edu); Octavian Suciu (University of Maryland, osuciu@umiacs.umd.edu); Christoph Studer (Cornell University, studer@cornell.edu); Tudor Dumitras (University of Maryland, tudor@umiacs.umd.edu); Tom Goldstein (University of Maryland, tomg@cs.umd.edu) |
| Pseudocode | Yes | "Algorithm 1 Poisoning Example Generation" (a sketch of this procedure follows the table) |
| Open Source Code | Yes | "The code is available at https://github.com/ashafahi/inceptionv3-transferLearn-poison" |
| Open Datasets | Yes | "We demonstrate our method by generating poisoned frog images from the CIFAR dataset and using them to manipulate image classifiers." |
| Dataset Splits | No | The paper describes its training and test sets but does not mention a separate validation split. |
| Hardware Specification | No | The paper does not report the hardware (e.g., GPU/CPU models or memory) used to run its experiments. |
| Software Dependencies | No | The paper does not specify the software versions or dependencies required to replicate its experiments. |
| Experiment Setup | Yes | "We use the Adam optimizer with learning rate of 0.01 to train the network for 100 epochs." (a training sketch follows the table) |
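
The pseudocode row refers to the paper's Algorithm 1 (Poisoning Example Generation), a forward-backward splitting loop that collides the poison's feature representation with the target instance while a proximal step keeps the poison visually close to a base-class image. The PyTorch sketch below illustrates that loop under stated assumptions: the feature extractor `f`, the tensor shapes, and the default values of `beta`, `lr`, and `max_iters` are placeholders rather than values fixed by the paper.

```python
import torch

def generate_poison(f, target, base, beta=0.25, lr=0.01, max_iters=1000):
    """Sketch of feature-collision poison generation (cf. Algorithm 1).

    f      : feature extractor (e.g., the penultimate layer of the victim net)
    target : target instance the attacker wants misclassified at test time
    base   : base-class instance the poison must visually resemble
    beta   : trade-off between feature collision and visual similarity
    """
    target_feats = f(target).detach()
    poison = base.clone()
    for _ in range(max_iters):
        poison.requires_grad_(True)
        # Forward step: gradient descent on the feature-space distance
        # between the poison and the target instance.
        feat_loss = (f(poison) - target_feats).pow(2).sum()
        grad, = torch.autograd.grad(feat_loss, poison)
        x_hat = poison.detach() - lr * grad
        # Backward (proximal) step: closed-form minimizer of the
        # beta * ||p - b||^2 penalty, pulling the poison back toward the
        # base image so it keeps its clean label in input space.
        poison = (x_hat + lr * beta * base) / (1 + lr * beta)
    return poison
```

The closed-form backward step is what distinguishes this procedure from plain gradient descent on a weighted sum of the two objectives.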
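
The experiment-setup row quotes only the optimizer settings (Adam, learning rate 0.01, 100 epochs). A minimal training sketch with those settings follows; the stand-in network, batch size, and data pipeline are illustrative assumptions, and the authors' released TensorFlow code remains the authoritative reference.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Stand-in victim network (hypothetical; the paper's CIFAR experiments use
# larger architectures such as a scaled-down AlexNet).
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 10),
)

train_set = datasets.CIFAR10("data", train=True, download=True,
                             transform=transforms.ToTensor())
loader = DataLoader(train_set, batch_size=128, shuffle=True)

# Settings quoted in the paper: Adam, learning rate 0.01, 100 epochs.
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

for epoch in range(100):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```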