MetaFinger: Fingerprinting the Deep Neural Networks with Meta-training
Authors: Kang Yang, Run Wang, Lina Wang
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that our method achieves 99.34% and 97.69% query accuracy on average, surpassing existing methods by over 30% and 25% on CIFAR-10 and Tiny-ImageNet, respectively. |
| Researcher Affiliation | Academia | Kang Yang (1,2), Run Wang (1,2), Lina Wang (1,2,3); (1) School of Cyber Science and Engineering, Wuhan University, China; (2) Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, China; (3) Zhengzhou Xinda Institute of Advanced Technology |
| Pseudocode | Yes | Algorithm 1 Meta-training |
| Open Source Code | Yes | Our code is available at https://github.com/kangyangWHU/MetaFinger/ |
| Open Datasets | Yes | on CIFAR-10 and Tiny-ImageNet benchmark datasets. |
| Dataset Splits | Yes | We first split the meta-data into train data Dtrain and validation data Dval. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU models, CPU specifications, or memory. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions, or other libraries). |
| Experiment Setup | Yes | In Section 5.1, the paper describes input modification attacks including 'Random Resize and Padding (RP)', 'Input Noising' (Gaussian noise and universal noise), and 'Input Smoothing' (mean, median, Gaussian kernel). It also details model modification attacks with 'Fine-tuning' (FTLL, FTAL, RTLL, RTAL modes), 'Weight pruning' (p% from 10% to 70%), and 'Weight Noising' (with α values). Algorithm 1 also lists 'Learning Rate η, Train Epoch Nepoch, Loss Control λ, Number of Sample k' as inputs for meta-training. |
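The model-modification attacks listed in the Experiment Setup row (weight pruning at p% from 10% to 70%, and weight noising with a scale α) are standard robustness checks. A minimal sketch of what such attacks might look like on a flat weight vector is below; the function names, the global magnitude-pruning criterion, and the α·std(w) noise scale are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def prune_weights(w, p):
    """Zero out the p% of weights with the smallest magnitude
    (global magnitude pruning; an assumed criterion)."""
    w = w.copy()
    k = int(len(w) * p / 100)
    if k > 0:
        idx = np.argsort(np.abs(w))[:k]  # indices of the k smallest-magnitude weights
        w[idx] = 0.0
    return w

def noise_weights(w, alpha, seed=None):
    """Perturb weights with Gaussian noise scaled by alpha * std(w)
    (an assumed interpretation of the paper's alpha parameter)."""
    rng = np.random.default_rng(seed)
    return w + alpha * w.std() * rng.standard_normal(w.shape)

w = np.array([0.05, -0.2, 0.8, -0.01, 0.4, -0.6])
print(prune_weights(w, 50))  # the 3 smallest-magnitude entries become 0
print(noise_weights(w, alpha=0.1, seed=0))
```

A fingerprinting scheme is then judged by whether its query set still identifies the model after such perturbations, which is what the robustness experiments in Section 5.1 measure.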