InverseNet: Augmenting Model Extraction Attacks with Training Data Inversion

Authors: Xueluan Gong, Yanjiao Chen, Wenbin Yang, Guanghao Mei, Qian Wang

IJCAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments conducted on three simulated victim models and Alibaba Cloud's commercially-available API demonstrate that INVERSENET yields a model with significantly greater functional similarity to the victim model than the current state-of-the-art attacks at a substantially lower query budget."
Researcher Affiliation | Academia | Wuhan University, China; Zhejiang University, China
Pseudocode | No | The paper describes its methods in narrative text and uses mathematical equations (e.g., Equation 1, Equation 2, Equation 3), but it does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement about the release of its source code, nor does it link to a code repository.
Open Datasets | Yes | "The simulated victim models are trained on three datasets: MNIST, GTSRB, and CIFAR10. [...] MNIST. We randomly choose 60,000 samples as the training set, and 10,000 samples as the test set." (See the dataset-loading sketch after the table.)
Dataset Splits | No | The paper gives explicit training-set and test-set sizes for MNIST, GTSRB, and CIFAR10, but it does not specify a separate validation set or its split.
Hardware Specification | No | The paper states only that "All experiments were conducted on an Ubuntu 16.04 system with an 8-core Intel CPU and NVIDIA GPU", without naming the exact CPU or GPU models.
Software Dependencies | No | The paper mentions software such as 'Caffe Model Zoo' in the context of related work but does not provide version numbers for any software dependencies (e.g., programming languages, libraries, frameworks) used in its own experimental setup.
Experiment Setup | Yes | "In our experiments, the ratio between K1, K2, and K3 is fixed at 0.45:0.45:0.1. [...] In our experiments, ξ was set to 0.02 [Moosavi-Dezfooli et al., 2016]." (See the configuration sketch after the table.)
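
Since the paper releases no code (see Open Source Code above), the following is a minimal sketch of how the three public datasets could be loaded with the reported MNIST split sizes. The choice of torchvision and the "data" root directory are assumptions for illustration; the paper does not name its framework.

```python
import torchvision
import torchvision.transforms as T

# Minimal sketch (assumption: torchvision; the paper names no framework).
# torchvision's standard MNIST loader matches the reported 60,000/10,000
# train/test split; CIFAR10 and GTSRB use their official splits.
transform = T.ToTensor()
root = "data"  # hypothetical download directory

mnist_train = torchvision.datasets.MNIST(root, train=True, download=True, transform=transform)
mnist_test = torchvision.datasets.MNIST(root, train=False, download=True, transform=transform)
cifar_train = torchvision.datasets.CIFAR10(root, train=True, download=True, transform=transform)
cifar_test = torchvision.datasets.CIFAR10(root, train=False, download=True, transform=transform)
gtsrb_train = torchvision.datasets.GTSRB(root, split="train", download=True, transform=transform)
gtsrb_test = torchvision.datasets.GTSRB(root, split="test", download=True, transform=transform)

print(len(mnist_train), len(mnist_test))  # 60000 10000, matching the paper
```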
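
The reported hyperparameters can likewise be captured in a short sketch. Interpreting K1, K2, and K3 as three partitions of the attacker's sample budget, and ξ as a DeepFool-style perturbation magnitude (per the cited Moosavi-Dezfooli et al., 2016), is an assumption; the function names below are hypothetical.

```python
import numpy as np

RATIOS = (0.45, 0.45, 0.10)  # fixed K1:K2:K3 ratio reported in the paper
XI = 0.02                    # perturbation magnitude, per the DeepFool citation

def split_budget(total: int, ratios=RATIOS):
    """Split a total sample budget into three parts at the fixed ratio.

    Assumption: K1/K2/K3 denote sample-count partitions; the paper does not
    release code confirming this interpretation.
    """
    k1 = int(total * ratios[0])
    k2 = int(total * ratios[1])
    k3 = total - k1 - k2  # remainder absorbs integer rounding
    return k1, k2, k3

def perturb(x: np.ndarray, grad: np.ndarray, xi: float = XI) -> np.ndarray:
    """Take one DeepFool-style step of magnitude xi along the gradient direction."""
    direction = grad / (np.linalg.norm(grad) + 1e-12)
    return np.clip(x + xi * direction, 0.0, 1.0)

print(split_budget(10_000))  # -> (4500, 4500, 1000)
```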