SAME: Sample Reconstruction against Model Extraction Attacks

Authors: Yi Xie, Jie Zhang, Shiqian Zhao, Tianwei Zhang, Xiaofeng Chen

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our extensive experiments corroborate the superior efficacy of SAME over state-of-the-art solutions. Our code is available at https://github.com/xythink/SAME. |
| Researcher Affiliation | Academia | Yi Xie¹*, Jie Zhang², Shiqian Zhao², Tianwei Zhang², Xiaofeng Chen¹ (¹Xidian University, China; ²Nanyang Technological University, Singapore) |
| Pseudocode | No | The paper describes the workflow of SAME and its components (Masked Autoencoder, Auxiliary Model, Anomaly Score Calculation) with equations and descriptive text, but it does not provide any formal pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/xythink/SAME. |
| Open Datasets | Yes | We evaluate our scheme on two groups of datasets: 1) MNIST (LeCun et al. 1998) and EMNIST-digits (Cohen et al. 2017); 2) CIFAR-10 and CIFAR-100 (Krizhevsky, Hinton et al. 2009). |
| Dataset Splits | No | The paper mentions training on the "victim training set" and evaluating on a "test set" for performance metrics, but it does not specify any explicit validation splits (e.g., percentages or counts for a validation set) that would be needed for reproduction. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as GPU models, CPU specifications, or memory. |
| Software Dependencies | No | The paper mentions using an "MAE model based on the ViT-Tiny encoder" and various attack methods such as "Knockoff Nets", "Jacobian-Based Dataset Augmentation", and "Data-Free Model Extraction", but it does not specify any software dependencies with version numbers (e.g., Python version, deep learning framework versions such as PyTorch or TensorFlow, or library versions). |
| Experiment Setup | Yes | In all experiments, we use a MAE model based on the ViT-Tiny encoder (Dosovitskiy et al. 2020), trained for 500 epochs on the victim training set. ... We utilize a seed dataset comprising 200 images, with a perturbation step size λ set to 0.1. ... α is a hyperparameter to balance the two score items. |
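To make the Open Datasets row concrete, below is a minimal sketch of loading the four benchmarks named above with torchvision. The root path, transforms, and download flags are illustrative assumptions and are not taken from the authors' repository.

```python
# Illustrative loading of the benchmarks listed in the Open Datasets row.
# Root path and transform are assumptions, not taken from the SAME repository.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

mnist    = datasets.MNIST("data", train=True, download=True, transform=to_tensor)
emnist   = datasets.EMNIST("data", split="digits", train=True, download=True, transform=to_tensor)
cifar10  = datasets.CIFAR10("data", train=True, download=True, transform=to_tensor)
cifar100 = datasets.CIFAR100("data", train=True, download=True, transform=to_tensor)
```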
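The Experiment Setup row quotes concrete hyperparameters: a ViT-Tiny-based MAE trained for 500 epochs, a 200-image seed dataset, a perturbation step size λ = 0.1, and a balance weight α between the two score items. As a rough illustration of how those two score items might be combined into a single anomaly score, here is a hedged PyTorch sketch; the function names, the use of per-sample MSE as the reconstruction term, and the form of the auxiliary score are assumptions for illustration, not the authors' exact formulation.

```python
# Hypothetical sketch of the anomaly-score combination described in the paper:
# an MAE reconstruction term plus an auxiliary-model term, balanced by alpha.
# The exact score definitions are assumptions; consult the authors' repository
# (https://github.com/xythink/SAME) for the actual implementation.
import torch

def anomaly_score(queries: torch.Tensor, mae, aux_model, alpha: float) -> torch.Tensor:
    """Return one anomaly score per query sample (higher = more suspicious)."""
    with torch.no_grad():
        recon = mae(queries)                                           # MAE reconstruction of the queries
        recon_score = ((recon - queries) ** 2).flatten(1).mean(dim=1)  # per-sample reconstruction error (assumed MSE)
        aux_score = aux_model(queries)                                 # assumed: one scalar score per sample
    return recon_score + alpha * aux_score                             # alpha balances the two score items
```

How the resulting score is thresholded and acted upon by the defender is described in the paper and its released code, not in this sketch.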