Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks

Authors: Ali Shafahi, W. Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, Tom Goldstein

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate our method by generating poisoned frog images from the CIFAR dataset and using them to manipulate image classifiers."
Researcher Affiliation | Academia | Ali Shafahi, W. Ronny Huang, Mahyar Najibi, Octavian Suciu, Tudor Dumitras, Tom Goldstein (University of Maryland); Christoph Studer (Cornell University)
Pseudocode | Yes | "Algorithm 1: Poisoning Example Generation"
Open Source Code | Yes | "The code is available at https://github.com/ashafahi/inceptionv3-transferLearn-poison"
Open Datasets | Yes | "We demonstrate our method by generating poisoned frog images from the CIFAR dataset and using them to manipulate image classifiers."
Dataset Splits | No | The paper details its training and test sets but does not explicitly describe a separate validation split.
Hardware Specification | No | The paper does not report the specific hardware (e.g., GPU/CPU models, processor types, or memory amounts) used for its experiments.
Software Dependencies | No | The paper does not specify the software versions or dependencies required to replicate the experiments.
Experiment Setup | Yes | "We use the Adam optimizer with learning rate of 0.01 to train the network for 100 epochs."
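The Pseudocode row above refers to the paper's Algorithm 1, which generates a poison by "feature collision": optimizing an image so its feature representation matches the target's while the image itself stays close to a base-class image. Below is a minimal, hedged sketch of that forward/backward-splitting loop. It is not the authors' code: a toy linear map `W` stands in for a real network's penultimate-layer features, and `make_poison`, `beta`, `lam`, and `n_iters` are illustrative names and values, not taken from the paper.

```python
import numpy as np

def make_poison(W, target, base, beta=0.1, lam=0.1, n_iters=500):
    """Sketch of feature-collision poisoning with a linear feature map f(x) = W @ x.

    Minimizes ||f(x) - f(target)||^2 + beta * ||x - base||^2 by alternating a
    gradient (forward) step on the feature term with a proximal (backward)
    step that pulls x back toward the base image.
    """
    x = base.copy()
    f_target = W @ target
    for _ in range(n_iters):
        # Forward step: gradient descent on the feature-space distance.
        grad = 2.0 * W.T @ (W @ x - f_target)
        x = x - lam * grad
        # Backward (proximal) step: stay visually close to the base image.
        x = (x + lam * beta * base) / (1.0 + lam * beta)
    return x

# Toy demonstration with random data (all shapes/values illustrative).
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((8, 32))   # stand-in feature extractor
target = rng.standard_normal(32)         # target-class instance
base = rng.standard_normal(32)           # base-class instance
poison = make_poison(W, target, base)

print("feature gap, poison vs. base:",
      np.linalg.norm(W @ poison - W @ target),
      np.linalg.norm(W @ base - W @ target))
```

After the loop, the poison's features sit much closer to the target's than the base image's do, while the poison itself remains nearer to the base in input space, which is the clean-label property the algorithm relies on.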