Sharpness-Aware Data Poisoning Attack

Authors: Pengfei He, Han Xu, Jie Ren, Yingqian Cui, Shenglai Zeng, Hui Liu, Charu C. Aggarwal, Jiliang Tang

ICLR 2024

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Extensive experiments demonstrate that SAPA offers a general and principled strategy that significantly enhances numerous poisoning attacks against various types of re-training uncertainty." |
| Researcher Affiliation | Collaboration | Department of Computer Science and Engineering, Michigan State University; IBM T. J. Watson Research Center, New York |
| Pseudocode | Yes | Algorithm 1: Error-min+SAPA |
| Open Source Code | Yes | "Code is available at https://github.com/PengfeiHePower/SAPA" |
| Open Datasets | Yes | "Throughout this section, we focus on image classification tasks on the benchmark datasets CIFAR10 and CIFAR100. Meanwhile, we provide additional empirical results on the SVHN dataset in Appendix D." |
| Dataset Splits | No | The paper mentions training and testing sets (e.g., "perturbed training set", "clean test dataset") but does not explicitly describe a validation split. |
| Hardware Specification | No | The paper does not report hardware details such as GPU models, CPU types, or memory used for the experiments. |
| Software Dependencies | No | The paper names optimizers (SGD, Adam) and model architectures (ResNet, VGG, MobileNet, ViT) but gives no software libraries with version numbers (e.g., PyTorch 1.9, TensorFlow 2.x). |
| Experiment Setup | Yes | "The model is randomly initialized and re-trained from scratch via SGD for 160 epochs, with an initial learning rate of 0.1 decayed by 0.1 at epochs 80 and 120." |
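
For concreteness, the quoted retraining recipe translates directly into a standard PyTorch schedule: SGD for 160 epochs, initial learning rate 0.1, decayed by a factor of 0.1 at epochs 80 and 120. The following is a minimal sketch under stated assumptions; the stand-in model and data, and the momentum and weight-decay values, are illustrative choices not given in the excerpt.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in model and data (assumptions); the paper re-trains
# architectures such as ResNet on CIFAR10/CIFAR100.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
train_loader = DataLoader(
    TensorDataset(torch.randn(256, 3, 32, 32), torch.randint(0, 10, (256,))),
    batch_size=128, shuffle=True)

# Quoted schedule: SGD, 160 epochs, lr 0.1, decayed by 0.1 at epochs 80 and 120.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)  # momentum/wd assumed
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[80, 120], gamma=0.1)
criterion = nn.CrossEntropyLoss()

for epoch in range(160):
    for x, y in train_loader:
        optimizer.zero_grad()
        criterion(model(x), y).backward()
        optimizer.step()
    scheduler.step()  # apply the per-epoch learning-rate decay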
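```

The Algorithm 1 row above refers to the paper's Error-min+SAPA procedure, which is not reproduced here. As a rough illustration of the sharpness-aware ingredient it builds on, the sketch below computes the gradient of a SAM-style sharpness-aware loss with respect to the inputs: perturb the weights toward the direction of steepest loss ascent within a small radius, then differentiate the loss at the perturbed weights. The function name `sapa_input_grad`, the radius `rho`, and the exact SAM formulation are assumptions for illustration, not the paper's API or its Algorithm 1.

```python
import torch
import torch.nn.functional as F

def sapa_input_grad(model, x, y, rho=0.05):
    """Gradient of a SAM-style sharpness-aware loss w.r.t. the inputs.

    Illustrative sketch (assumed SAM-style formulation), not the
    paper's exact Error-min+SAPA algorithm.
    """
    params = [p for p in model.parameters() if p.requires_grad]

    # 1) Ascent direction on the weights: g = grad_w L(w; x, y).
    loss = F.cross_entropy(model(x), y)
    grads = torch.autograd.grad(loss, params)
    grad_norm = torch.norm(torch.stack([g.norm(2) for g in grads])) + 1e-12
    eps = [rho * g / grad_norm for g in grads]  # worst-case weight shift

    # 2) Evaluate the loss at w + eps and differentiate w.r.t. the inputs.
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.add_(e)
    x_adv = x.clone().detach().requires_grad_(True)
    sharp_loss = F.cross_entropy(model(x_adv), y)
    (x_grad,) = torch.autograd.grad(sharp_loss, x_adv)

    # 3) Restore the original weights.
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)
    return x_grad
```

Roughly speaking, an error-minimizing attack would pass the perturbed training batch as `x` and step the poison perturbation against this gradient (decreasing the sharpness-aware loss) subject to the attack's perturbation budget.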