Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attack
Authors: Tian Yu Liu, Yu Yang, Baharan Mirzasoleiman
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments, we show that our light-weight method renders state-of-the-art visually imperceptible poisoning attacks, including Gradient Matching [10], Bullseye Polytope [2], Feature Collision [32], and Sleeper Agent [34] ineffective, with only a slight decrease in the performance. |
| Researcher Affiliation | Academia | Tian Yu Liu Department of Computer Science University of California, Los Angeles EMAIL Yu Yang Department of Computer Science University of California, Los Angeles EMAIL Baharan Mirzasoleiman Department of Computer Science University of California, Los Angeles EMAIL |
| Pseudocode | Yes | The pseudocode can be found in Alg.1. |
| Open Source Code | Yes | Our code can be found at https://github.com/tianyu139/friendly-noise |
| Open Datasets | Yes | Following the works of [10, 11, 31], we evaluate our method primarily on CIFAR-10, ResNet-18. |
| Dataset Splits | No | The paper mentions training on CIFAR-10 and evaluating test accuracy, but does not explicitly specify the training/validation/test dataset splits by percentage, sample counts, or detailed splitting methodology. |
| Hardware Specification | Yes | We run all experiments and timings on an NVIDIA A40 GPU. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies such as deep learning frameworks or libraries. |
| Experiment Setup | Yes | For all models trained from scratch, we use a learning rate starting at 0.1 and decaying by a factor of 10 at epochs 30, 50, and 70. ... When applying our method, we clamp the generated friendly perturbations using ζ = 16, and add bounded random noise. For the random noise component, we set µ = 16 in our experiments. We optimize friendly perturbations using SGD with momentum 0.9 and Nesterov acceleration, perform a hyperparameter search along LR = {10, 20, 50, 100} and λ = {1, 10}, and optimize each batch of 128 samples for 20 epochs. |
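The noise application step quoted above (clamp the learned friendly perturbations at ζ = 16, then add random noise bounded by µ = 16) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function name `apply_friendly_noise`, the uniform noise distribution, and the [0, 255] pixel range are assumptions made for the sketch.

```python
import numpy as np

def apply_friendly_noise(images, perturbations, zeta=16.0, mu=16.0, seed=0):
    """Sketch of the quoted setup: clamp friendly perturbations to
    [-zeta, zeta], add random noise bounded by mu, and keep pixels valid.
    Uniform noise and the [0, 255] range are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    friendly = np.clip(perturbations, -zeta, zeta)        # zeta-clamp (ζ = 16)
    noise = rng.uniform(-mu, mu, size=images.shape)       # bounded noise (µ = 16)
    return np.clip(images + friendly + noise, 0.0, 255.0)

# Toy batch of two CIFAR-10-sized images (3x32x32), mid-gray pixels
images = np.full((2, 3, 32, 32), 128.0)
perts = np.full_like(images, 40.0)  # exceeds zeta, so it is clamped to 16
out = apply_friendly_noise(images, perts)
```

With the perturbation clamped to 16 and noise drawn from [-16, 16], each output pixel differs from its input by at most ζ + µ = 32, which is the bound the clamping parameters enforce.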