Robust Unlearnable Examples: Protecting Data Privacy Against Adversarial Learning

Authors: Shaopeng Fu, Fengxiang He, Yang Liu, Li Shen, Dacheng Tao

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show that the unlearnability brought by robust error-minimizing noise can effectively protect data from adversarial training in various scenarios.
Researcher Affiliation | Collaboration | Shaopeng Fu1, Fengxiang He1, Yang Liu2, Li Shen1 & Dacheng Tao1; 1JD Explore Academy, JD.com Inc, China; 2Institute for AI Industry Research, Tsinghua University, China
Pseudocode | Yes | Algorithm 1: Training robust error-minimizing noise generator with Eq. (5)
Open Source Code | Yes | The code is available at https://github.com/fshp971/robust-unlearnable-examples.
Open Datasets | Yes | Three benchmark computer vision datasets, CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), and an ImageNet subset (consisting of the first 100 classes) (Russakovsky et al., 2015), are used in our experiments.
Dataset Splits | No | The paper extensively discusses training and testing, including …
Hardware Specification | Yes | The experiments on CIFAR-10 and CIFAR-100 are conducted on 1 GPU (NVIDIA Tesla V100 16GB) and 10 CPU cores (Intel Xeon Processor E5-2650 v4 @ 2.20GHz). The experiments on ImageNet are conducted on 4 GPUs (NVIDIA Tesla V100 16GB) and 40 CPU cores (Intel Xeon Processor E5-2650 v4 @ 2.20GHz).
Software Dependencies | No | The paper mentions the use of …
Experiment Setup | Yes | The detailed training settings are presented in Appendix A.3.1. For CIFAR-10 and CIFAR-100, each source model is trained with SGD for 5,000 iterations, with a batch size of 128, a momentum factor of 0.9, a weight decay factor of 0.0005, an initial learning rate of 0.1, and a learning rate scheduler that decays the learning rate by a factor of 0.1 every 2,000 iterations. In every experiment, the model is trained with SGD for 40,000 iterations, with a batch size of 128, a momentum factor of 0.9, a weight decay factor of 0.0005, an initial learning rate of 0.1, and a learning rate scheduler that decays the learning rate by a factor of 0.1 every 16,000 iterations. For CIFAR-10 and CIFAR-100, the number of PGD steps Ka and the step size αa are set to 10 and ρa/5. For the ImageNet subset, the number of steps Ka and the step size αa are set to 8 and ρa/4. Table 5 provides the settings of PGD parameters for noise generation.
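The quoted schedules follow a simple pattern: the learning rate is stepped down by a fixed factor at regular intervals, and in both reported settings the PGD step size equals ρa/(Ka/2), i.e. ρa/5 for Ka = 10 (CIFAR) and ρa/4 for Ka = 8 (ImageNet). A minimal sketch of these two rules, with hypothetical helper names that are not taken from the paper's repository:

```python
# Illustrative helpers only; the paper's actual implementation lives at
# https://github.com/fshp971/robust-unlearnable-examples.

def step_decay_lr(iteration, base_lr=0.1, factor=0.1, every=16000):
    """Step schedule: multiply base_lr by `factor` once per `every`
    iterations, matching the reported 0.1 -> 0.01 -> 0.001 decay."""
    return base_lr * factor ** (iteration // every)

def pgd_step_size(radius, num_steps):
    """Step size implied by both reported settings: 2 * radius / num_steps,
    which gives rho_a/5 for K_a = 10 and rho_a/4 for K_a = 8."""
    return 2 * radius / num_steps
```

For example, under the 40,000-iteration schedule, `step_decay_lr(20000)` falls in the second stage (learning rate roughly 0.01), and `pgd_step_size(8/255, 10)` reproduces the CIFAR setting αa = ρa/5 for a radius of 8/255.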