Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attack

Authors: Tian Yu Liu, Yu Yang, Baharan Mirzasoleiman

NeurIPS 2022

Reproducibility Variable | Result | LLM Response

Research Type | Experimental | "Through extensive experiments, we show that our light-weight method renders state-of-the-art visually imperceptible poisoning attacks, including Gradient Matching [10], Bullseye Polytope [2], Feature Collision [32], and Sleeper Agent [34] ineffective, with only a slight decrease in the performance."

Researcher Affiliation | Academia | Tian Yu Liu, Department of Computer Science, University of California, Los Angeles (tianyu@cs.ucla.edu); Yu Yang, Department of Computer Science, University of California, Los Angeles (yuyang@cs.ucla.edu); Baharan Mirzasoleiman, Department of Computer Science, University of California, Los Angeles (baharan@cs.ucla.edu)

Pseudocode | Yes | "The pseudocode can be found in Alg. 1."

Open Source Code | Yes | "Our code can be found at https://github.com/tianyu139/friendly-noise"

Open Datasets | Yes | "Following the works of [10, 11, 31], we evaluate our method primarily on CIFAR-10, ResNet-18."

Dataset Splits | No | The paper mentions training on CIFAR-10 and evaluating test accuracy, but does not explicitly specify the training/validation/test splits by percentage, sample counts, or splitting methodology.

Hardware Specification | Yes | "We run all experiments and timings on an NVIDIA A40 GPU."

Software Dependencies | No | The paper does not provide specific version numbers for software dependencies such as deep learning frameworks or libraries.

Experiment Setup | Yes | "For all models trained from scratch, we use a learning rate starting at 0.1 and decaying by a factor of 10 at epochs 30, 50, and 70. ... When applying our method, we clamp the generated friendly perturbations using ζ = 16, and add bounded random noise. For the random noise component, we set µ = 16 in our experiments. We optimize friendly perturbations using SGD with momentum 0.9 and Nesterov acceleration, perform a hyperparameter search along LR = {10, 20, 50, 100} and λ = {1, 10}, and optimize each batch of 128 samples for 20 epochs."
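The Experiment Setup row describes an optimization loop: friendly perturbations are trained with SGD (momentum 0.9, Nesterov), clamped to ζ = 16, and combined with uniform random noise bounded by µ = 16. A minimal PyTorch sketch of such a loop is below. This is an illustrative reconstruction, not the authors' code: the function name `friendly_perturb`, the KL-consistency-minus-magnitude loss, and the assumption that images lie in [0, 1] (so ζ = µ = 16 become 16/255) are all assumptions; the authors' repository should be consulted for the exact objective.

```python
import torch

def friendly_perturb(model, images, zeta=16 / 255, mu=16 / 255,
                     lr=10, lam=1, epochs=20):
    """Hypothetical sketch of the setup described above: optimize a
    'friendly' perturbation that barely changes the model's outputs,
    clamp it to the zeta bound, then add bounded uniform noise (mu)."""
    model.eval()
    with torch.no_grad():
        clean_out = model(images)  # reference outputs on clean images

    delta = torch.zeros_like(images, requires_grad=True)
    # SGD with momentum 0.9 and Nesterov acceleration, as reported
    opt = torch.optim.SGD([delta], lr=lr, momentum=0.9, nesterov=True)

    for _ in range(epochs):
        opt.zero_grad()
        out = model(images + delta)
        # keep perturbed predictions close to the clean ones (weight lam)...
        consistency = torch.nn.functional.kl_div(
            out.log_softmax(dim=1), clean_out.softmax(dim=1),
            reduction="batchmean")
        # ...while encouraging large perturbation magnitude (assumed form)
        loss = lam * consistency - delta.abs().mean()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-zeta, zeta)  # clamp to the zeta bound

    # bounded uniform random noise component in [-mu, mu]
    rand = (torch.rand_like(images) * 2 - 1) * mu
    return delta.detach() + rand
```

In this sketch the reported hyperparameter search would correspond to trying `lr` in {10, 20, 50, 100} and `lam` in {1, 10} on batches of 128 samples for 20 epochs.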