Defending against Data-Free Model Extraction by Distributionally Robust Defensive Training
Authors: Zhenyi Wang, Li Shen, Tongliang Liu, Tiehang Duan, Yanjun Zhu, Donglin Zhan, David Doermann, Mingchen Gao
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on defending against both decision-based and score-based DFME demonstrate that MeCo can significantly reduce the effectiveness of existing DFME methods and substantially improve running efficiency. |
| Researcher Affiliation | Collaboration | 1University of Maryland, College Park, USA 2JD Explore Academy, China 3The University of Sydney, Australia 4West Virginia University, USA 5Northeastern University, USA 6Columbia University, USA 7University at Buffalo, USA |
| Pseudocode | Yes | Algorithm 1 MeCo Training. |
| Open Source Code | No | The paper does not provide a specific link or explicit statement about the release of its source code. |
| Open Datasets | Yes | Datasets. We perform experiments on four standard datasets used in DFME literature, including MNIST, CIFAR10, CIFAR100 [25] and MiniImageNet [52] (100 classes). |
| Dataset Splits | Yes | Datasets. We perform experiments on four standard datasets used in DFME literature, including MNIST, CIFAR10, CIFAR100 [25] and MiniImageNet [52] (100 classes). |
| Hardware Specification | Yes | We also gratefully acknowledge the support of NVIDIA Corporation with the donation of the A5000 used for this research. |
| Software Dependencies | No | The paper does not provide specific software names with version numbers (e.g., Python 3.8, PyTorch 1.9). |
| Experiment Setup | Yes | For decision-based DFME methods, following [47], we use a query budget of 10M for CIFAR100 and 8M for CIFAR-10. For score-based DFME methods, following [51], we set the number of queries to be 2M for MNIST, 20M for CIFAR10, and 200M for CIFAR100, respectively. We perform each experiment for 5 runs and report the mean and standard deviation of results. We provide more implementation details in Appendix 7. |