Efficient Availability Attacks against Supervised and Contrastive Learning Simultaneously

Authors: Yihan Wang, Yifan Zhu, Xiao-Shan Gao

NeurIPS 2024

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | On experimental side, we evaluate the standard supervised learning algorithm and four representative contrastive learning algorithms: SimCLR [5], MoCo [7], BYOL [15], and SimSiam [6]. Our proposed AUE and AAP attacks achieve the state-of-the-art worst-case unlearnability on CIFAR-10/100 and Tiny/Mini-ImageNet datasets (see Section 5.2).
Researcher Affiliation | Academia | Yihan Wang, Yifan Zhu, Xiao-Shan Gao; Academy of Mathematics and Systems Science, Chinese Academy of Sciences; University of Chinese Academy of Sciences. {yihanwang, zhuyifan}@amss.ac.cn, xgao@mmrc.iss.ac.cn
Pseudocode | Yes | Algorithm 1 (Augmented Unlearnable Examples, AUE) and Algorithm 2 (Augmented Adversarial Poisoning, AAP).
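The AUE algorithm referenced above alternates between training a reference model on poisoned data and updating per-example poisons by PGD so as to minimize the training loss (error-minimizing noise). The following is a minimal illustrative sketch of that alternating loop only, not the paper's implementation: a linear least-squares model stands in for the network, and the epsilon, step sizes, and iteration counts are small illustrative assumptions rather than the paper's values.

```python
import numpy as np

# Sketch of an AUE-style alternating loop: (1) a model step on poisoned
# data, (2) a PGD step on the poisons that *minimizes* the training loss.
# Linear least squares replaces the network; all constants are toy values.
rng = np.random.default_rng(0)
n, d, k = 32, 10, 3
X = rng.normal(size=(n, d))
Y = np.eye(k)[rng.integers(0, k, size=n)]   # one-hot labels
W = np.zeros((d, k))                        # stand-in "model"
delta = np.zeros_like(X)                    # per-example poison
eps, alpha_delta, alpha_theta = 0.5, 0.1, 0.01

def loss_and_grads(W, X, Y):
    R = X @ W - Y                           # residuals, shape (n, k)
    loss = float((R ** 2).mean())
    gW = 2 * X.T @ R / R.size               # d(loss)/dW
    gX = 2 * R @ W.T / R.size               # d(loss)/d(input)
    return loss, gW, gX

for epoch in range(20):
    # model step: gradient descent on the currently poisoned data
    _, gW, _ = loss_and_grads(W, X + delta, Y)
    W -= alpha_theta * gW
    # poison step: signed-gradient PGD minimizing the loss, projected to the L-inf ball
    for _ in range(5):
        _, _, gX = loss_and_grads(W, X + delta, Y)
        delta = np.clip(delta - alpha_delta * np.sign(gX), -eps, eps)

clean_loss, _, _ = loss_and_grads(W, X, Y)
final_loss, _, _ = loss_and_grads(W, X + delta, Y)
print(f"clean loss {clean_loss:.4f}, poisoned loss {final_loss:.4f}")
```

Because the poisons descend the same loss the model is trained on, the poisoned-data loss typically ends up below the clean-data loss, which is what makes the examples "unlearnable": the model sees artificially easy data and learns little that transfers.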
Open Source Code | Yes | The code is available at https://github.com/EhanW/AUE-AAP.
Open Datasets | Yes | We conduct experiments on CIFAR-10/100 [26], Tiny-ImageNet [27], modified Mini-ImageNet [49], and ImageNet-100 [38].
Dataset Splits | No | The paper describes training and test sets for CIFAR-10/100, Tiny-ImageNet, and Mini-ImageNet, but does not explicitly provide details for a separate validation split.
Hardware Specification | Yes | For CIFAR-10/100 and Tiny/Mini-ImageNet, experiments are conducted on a single NVIDIA GeForce RTX 3090 GPU. For ImageNet-100, experiments are conducted on a single NVIDIA A800 GPU.
Software Dependencies | No | We leverage differentiable augmentation modules in Kornia [37], a differentiable computer vision library for PyTorch. (No version numbers are provided for Kornia or PyTorch.)
Experiment Setup | Yes | AUE. We train the reference model for T = 60 epochs with an SGD optimizer and a cosine annealing learning-rate scheduler. The batch size of training data is 128. The initial learning rate αθ is 0.1, weight decay is 10^-4, and momentum is 0.9. In each epoch, we update the model for Tθ = 391 iterations and update poisons for Tδ = 391 iterations. ... The PGD process for noise generation takes Tp = 5 steps with step size αδ = 0.8/255. The augmentation strength is s = 0.6 for CIFAR-10 and s = 1.0 for CIFAR-100, Tiny-ImageNet, Mini-ImageNet, and ImageNet-100.
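The cosine annealing learning-rate schedule quoted above has a simple closed form. A minimal sketch, assuming the standard formulation (decay from the initial rate to zero over T epochs, no warm restarts; the paper does not spell out its exact variant):

```python
import math

def cosine_annealing_lr(epoch, total_epochs=60, lr0=0.1, lr_min=0.0):
    """Cosine annealing: starts at lr0 and decays smoothly to lr_min at total_epochs.

    Matches the reported setup (T = 60 epochs, initial learning rate 0.1);
    lr_min = 0 is an assumption, since no floor is stated.
    """
    return lr_min + 0.5 * (lr0 - lr_min) * (1 + math.cos(math.pi * epoch / total_epochs))

# Learning rate at each of the 60 epochs, plus the final value.
schedule = [cosine_annealing_lr(t) for t in range(61)]
```

In PyTorch this corresponds to `torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=60)` attached to an SGD optimizer configured with the reported lr = 0.1, momentum = 0.9, and weight_decay = 1e-4.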