Data-Free Hard-Label Robustness Stealing Attack
Authors: Xiaojian Yuan, Kejiang Chen, Wen Huang, Jie Zhang, Weiming Zhang, Nenghai Yu
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments demonstrate the effectiveness of our method. The clone model achieves a clean accuracy of 77.86% and a robust accuracy of 39.51% against AutoAttack, which are only 4.71% and 8.40% lower than the target model on the CIFAR-10 dataset, significantly exceeding the baselines. |
| Researcher Affiliation | Academia | (1) University of Science and Technology of China; (2) Nanyang Technological University |
| Pseudocode | Yes | The framework is illustrated in Fig. 2 and the training algorithm is in the appendix. |
| Open Source Code | Yes | Our code is available at: https://github.com/LetheSec/DFHL-RS-Attack. |
| Open Datasets | Yes | We consider two benchmark datasets commonly used in AT research (Madry et al. 2018; Zhang et al. 2019; Li et al. 2023b), CIFAR-10 and CIFAR-100 (Krizhevsky, Hinton et al. 2009), as target datasets. |
| Dataset Splits | No | The paper describes training on datasets and reports test performance, but it does not explicitly specify validation dataset splits (e.g., percentages or counts for a separate validation set). |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions optimizers like Adam and SGD but does not specify software versions for libraries (e.g., PyTorch version) or other dependencies. |
| Experiment Setup | Yes | For substitute data generation, we use the Adam optimizer with β = (0.5, 0.999) and set the hyperparameter λ = 3 in Eq. (7). For CIFAR-10, we set learning rates ηG = 0.002, ηz = 0.01, the number of iterations NG = 10, and the label smoothing factor to 0.2. For CIFAR-100, we set learning rates ηG = 0.005, ηz = 0.015, the number of iterations NG = 15, and the label smoothing factor to 0.02. For training MC, we use the SGD optimizer with an initial learning rate of 0.1, a momentum of 0.9, and a weight decay of 1e-4. For constructing HEE, the step size α in Eq. (3) is set to 0.03 and the number of iterations is set to 10. We set the iterations of the clone model to NC = 500. The batch sizes for CIFAR-10 and CIFAR-100 are set to B = 256 and B = 512, respectively. We apply a cosine decay learning rate schedule and train for E = 300 epochs. (A hedged configuration sketch follows the table.) |
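
The Experiment Setup row lists concrete optimizer and schedule settings. Below is a minimal PyTorch sketch of how the CIFAR-10 values could be wired up. The generator `G`, latent codes `z`, and clone model `MC` are placeholder modules, not the paper's actual architectures; only the hyperparameters (Adam betas, learning rates, SGD momentum and weight decay, cosine schedule over 300 epochs) are taken from the text.

```python
# Hedged sketch of the stated optimizer/schedule configuration (CIFAR-10 values).
# G, MC, and z below are placeholders; the paper's generator, clone model, and
# training loop (HEE construction, NG/NC iterations) are not reproduced here.
import torch
import torch.nn as nn

# Placeholder modules standing in for the paper's generator and clone model.
G = nn.Sequential(nn.Linear(100, 3 * 32 * 32))                 # substitute-data generator (hypothetical)
MC = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))   # clone model (hypothetical)
z = torch.randn(256, 100, requires_grad=True)                  # latent codes, batch size B = 256

# Substitute data generation: Adam with betas = (0.5, 0.999),
# eta_G = 0.002 for the generator and eta_z = 0.01 for the latent codes.
opt_G = torch.optim.Adam(G.parameters(), lr=0.002, betas=(0.5, 0.999))
opt_z = torch.optim.Adam([z], lr=0.01, betas=(0.5, 0.999))

# Clone model training: SGD with initial lr 0.1, momentum 0.9, weight decay 1e-4,
# plus a cosine decay schedule over E = 300 epochs.
opt_C = torch.optim.SGD(MC.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(opt_C, T_max=300)
```

The remaining settings from the text (NG generator iterations per round, NC = 500 clone iterations, the HEE step size α = 0.03 with 10 steps, and the label smoothing factor) would plug into the training loop given by the algorithm in the paper's appendix.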