Transferable Unlearnable Examples

Authors: Jie Ren, Han Xu, Yuxuan Wan, Xingjun Ma, Lichao Sun, Jiliang Tang

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate the transferability of the unlearnable examples crafted by our proposed method across training settings and datasets. The implementation of our method is available at https://github.com/renjie3/TUE.
Researcher Affiliation | Academia | Jie Ren (Michigan State University, renjie3@msu.edu); Han Xu (Michigan State University, xuhan1@msu.edu); Yuxuan Wan (Michigan State University, wanyuxua@msu.edu); Xingjun Ma (Fudan University, xingjunma@fudan.edu.cn); Lichao Sun (Lehigh University, lis221@lehigh.edu); Jiliang Tang (Michigan State University, tangjili@msu.edu)
Pseudocode | No | The paper describes the optimization process with equations (Eq. 6) and prose, but it does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | The implementation of our method is available at https://github.com/renjie3/TUE.
Open Datasets | Yes | Datasets. The datasets include CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009), which contain 50,000 training images and 10,000 test images, and SVHN (Netzer et al., 2011), which contains 73,257 training images of ten classes and 26,032 test images.
Dataset Splits | Yes | Datasets. The datasets include CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009), which contain 50,000 training images and 10,000 test images, and SVHN (Netzer et al., 2011), which contains 73,257 training images of ten classes and 26,032 test images. (A loader sketch illustrating these official train/test splits follows the table.)
Hardware Specification | No | The paper mentions a ResNet-18 model and various training parameters (epochs, PGD steps), but it does not specify any hardware details such as GPU model, CPU type, or memory used for the experiments.
Software Dependencies | No | The paper mentions specific models such as ResNet-18 and methods such as SimCLR, MoCo, SimSiam, and PGD, but it does not specify software versions for libraries (e.g., PyTorch, TensorFlow) or programming languages.
Experiment Setup | Yes | All the perturbations... are generated by PGD (Madry et al., 2018) on ResNet-18 and constrained by ∥δi∥∞ ≤ ϵ with ϵ = 8/255. In the evaluation stage, we use cross-entropy loss in supervised training and linear probing after contrastive pre-training in unsupervised training. The supervised model is trained for 200 epochs. The unsupervised model is pre-trained for 1000 epochs and fine-tuned for 100 epochs. More hyperparameters for generation and evaluation can be found in Appendix B.2. ... Table 7: Hyperparameters of evaluation stage (Epoch, Optimizer, Learning Rate, LR Scheduler, Encoder Momentum, Loss Function). (An illustrative PGD sketch follows the table.)
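The two dataset rows quote the same passage, which reports the official train/test splits. As a quick plausibility check (not taken from the paper), the standard torchvision loaders reproduce the quoted counts; the `root` path below is a hypothetical placeholder.

```python
import torchvision

# CIFAR-10 / CIFAR-100: 50,000 training and 10,000 test images each.
cifar10_train = torchvision.datasets.CIFAR10(root="./data", train=True, download=True)
cifar10_test = torchvision.datasets.CIFAR10(root="./data", train=False, download=True)

# SVHN: 73,257 training and 26,032 test images over ten classes.
svhn_train = torchvision.datasets.SVHN(root="./data", split="train", download=True)
svhn_test = torchvision.datasets.SVHN(root="./data", split="test", download=True)

# Sanity-check against the counts quoted in the paper.
assert len(cifar10_train) == 50_000 and len(cifar10_test) == 10_000
assert len(svhn_train) == 73_257 and len(svhn_test) == 26_032
```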
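The setup row fixes the perturbation budget (ℓ∞, ϵ = 8/255) and the generator (PGD on ResNet-18) but not the paper's objective, which is given by its Eq. 6. The sketch below is a generic ℓ∞-projected PGD loop in PyTorch, not the authors' TUE implementation (which is at https://github.com/renjie3/TUE); `loss_fn`, `alpha`, and `steps` are assumed stand-ins, and the sign-descent direction follows the error-minimizing convention common in unlearnable-example work.

```python
import torch

def craft_perturbation(model, loss_fn, x, y, eps=8 / 255, alpha=2 / 255, steps=20):
    """Generic l_inf-constrained PGD sketch; the paper's actual objective (Eq. 6) differs."""
    model.eval()
    delta = torch.zeros_like(x, requires_grad=True)  # one noise tensor per image
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            # Descend the loss (error-minimizing noise) with a sign step,
            # in the style of PGD (Madry et al., 2018).
            delta -= alpha * grad.sign()
            # Project back into the l_inf ball ||delta||_inf <= eps ...
            delta.clamp_(-eps, eps)
            # ... and keep every pixel of x + delta inside [0, 1].
            delta.copy_(torch.clamp(x + delta, 0.0, 1.0) - x)
    return delta.detach()

# Example call, with cross-entropy as a stand-in for the paper's objective:
# delta = craft_perturbation(resnet18, torch.nn.functional.cross_entropy, x, y)
```

The final projection only ever shrinks each component of `delta`, so the pixel clamp cannot push the noise back outside the ϵ-ball.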