Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks
Authors: Tribhuvanesh Orekondy, Bernt Schiele, Mario Fritz
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments, we find our approach consistently mitigates various attacks and additionally outperforms baselines. |
| Researcher Affiliation | Academia | Tribhuvanesh Orekondy¹, Bernt Schiele¹, Mario Fritz²; ¹ Max Planck Institute for Informatics; ² CISPA Helmholtz Center for Information Security; Saarland Informatics Campus, Germany |
| Pseudocode | Yes | We further elaborate on the solver and present a pseudocode in Appendix C. |
| Open Source Code | No | The paper does not provide an explicit statement or link for the open-source code of their proposed methodology. |
| Open Datasets | Yes | Victim Models and Datasets. We set up six victim models (see column FV in Table 1), each model trained on a popular image classification dataset. |
| Dataset Splits | No | We train and evaluate each victim model on their respective train and test sets. |
| Hardware Specification | Yes | The reported numbers were summarized over 10K unique predictions performed on an Nvidia Tesla V100. |
| Software Dependencies | No | The paper does not specify software dependencies with version numbers. |
| Experiment Setup | Yes | All models are trained using SGD (LR = 0.1) with momentum (0.5) for 30 (LeNet) or 100 epochs (VGG16), with an LR decay of 0.1 performed every 50 epochs. |
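
The experiment-setup row above quotes the victim-model training schedule (SGD with LR 0.1, momentum 0.5, LR decay of 0.1 every 50 epochs, 30 epochs for LeNet or 100 for VGG16). Since the paper's code is not released, the following is only a minimal sketch of that schedule, assuming a PyTorch pipeline; `model`, `train_loader`, and the cross-entropy loss are illustrative placeholders, not details confirmed by the paper.

```python
# Sketch of the quoted victim-model training schedule (not the authors' code).
import torch
import torch.nn as nn
import torch.optim as optim

def train_victim(model, train_loader, epochs=100, device="cuda"):
    """SGD (LR=0.1, momentum=0.5) with LR decay of 0.1 every 50 epochs."""
    model = model.to(device)
    criterion = nn.CrossEntropyLoss()  # assumed loss; not stated in the quote
    optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.5)
    scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.1)

    for _ in range(epochs):
        model.train()
        for inputs, targets in train_loader:
            inputs, targets = inputs.to(device), targets.to(device)
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            loss.backward()
            optimizer.step()
        scheduler.step()  # decay LR by 0.1 every 50 epochs
    return model

# Hypothetical usage: 30 epochs for LeNet victims, 100 for VGG16 victims.
# train_victim(lenet_model, lenet_train_loader, epochs=30)
# train_victim(vgg16_model, vgg16_train_loader, epochs=100)
```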