Characteristic Examples: High-Robustness, Low-Transferability Fingerprinting of Neural Networks

Authors: Siyue Wang, Xiao Wang, Pin-Yu Chen, Pu Zhao, Xue Lin

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that the proposed characteristic examples can achieve superior performance when compared with existing fingerprinting methods. In particular, for VGG ImageNet models, using LTRC-examples gives a 4x higher uniqueness score than the baseline method and does not incur any false positives.
Researcher Affiliation | Collaboration | Siyue Wang1, Xiao Wang2, Pin-Yu Chen3, Pu Zhao1 and Xue Lin1; 1Northeastern University, 2Boston University, 3IBM Research; {wang.siy, zhao.pu, xue.lin}@northeastern.edu, kxw@bu.edu, pin-yu.chen@ibm.com
Pseudocode | No | The paper describes the generation process using mathematical equations (e.g., Eq. 2, 3, 4) but does not include a formal pseudocode block or algorithm listing.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code or a link to a code repository for the methodology.
Open Datasets | Yes | We adopt the widely used public image datasets and models in the literature, including CNN model for CIFAR-10 [Krizhevsky and others, 2009] and VGG-16 [Simonyan and Zisserman, 2015] model for ImageNet [Deng et al., 2009] datasets, respectively.
Dataset Splits | No | The paper does not explicitly state the training, validation, and test dataset splits for the models used. It mentions using 'public image datasets' and refers to the 'test set' for evaluating the base models, but no specific split percentages or sample counts are provided for reproducing the model training.
Hardware Specification | Yes | The experiments are conducted on machines with 8 NVIDIA GTX 1080 Ti GPUs.
Software Dependencies | No | The paper mentions general model architectures and algorithms but does not provide specific software dependencies with version numbers (e.g., PyTorch 1.x, TensorFlow 2.x).
Experiment Setup | Yes | To control the trade-off between robustness and transferability, we set the weight perturbation bound δ to 0.001, 0.003, 0.005, 0.007 separately for ImageNet dataset and 0.01, 0.03, 0.05, 0.07 for CIFAR-10 dataset. For each C-examples generation method, 100 C-examples are generated (with randomly picked target labels) with a total of 500 iteration steps (i.e., t = 0, 1, ..., 499 as in Eq. (2)). When computing input gradient, we sample input gradients for q = 10 times and use the mean of gradients in each iteration step of generating RC-examples.
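
To make the quoted setup concrete, below is a minimal sketch of a robust characteristic-example (RC-example) generation loop: at each of 500 steps it samples q = 10 random weight perturbations bounded by δ, averages the resulting input gradients of a targeted loss, and takes a signed-gradient step. This is an illustration under stated assumptions, not the authors' released code: the function name `generate_rc_example`, the `step_size` parameter, the uniform weight-perturbation scheme, and the signed-gradient update are all assumptions standing in for the paper's Eq. (2).

```python
import copy

import torch
import torch.nn.functional as F


def generate_rc_example(model, x, target, delta=0.001, q=10,
                        steps=500, step_size=1.0 / 255):
    """Sketch of RC-example generation for a PyTorch classifier.

    Assumptions (not from the paper's code): `x` is a single CHW image
    in [0, 1], weight perturbations are uniform in [-delta, delta], and
    the input update is a signed-gradient step toward `target`.
    """
    model.eval()
    x_adv = x.clone().detach()
    target = torch.as_tensor([target])

    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad_sum = torch.zeros_like(x_adv)
        for _ in range(q):
            # Sample one random weight perturbation bounded by delta.
            noisy_model = copy.deepcopy(model)
            with torch.no_grad():
                for p in noisy_model.parameters():
                    p.add_(torch.empty_like(p).uniform_(-delta, delta))
            # Targeted loss: push the example toward the target label.
            logits = noisy_model(x_adv.unsqueeze(0))
            loss = F.cross_entropy(logits, target)
            grad_sum += torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # Update with the mean of the q sampled input gradients.
            x_adv = (x_adv - step_size * (grad_sum / q).sign()).clamp(0.0, 1.0)
        x_adv = x_adv.detach()
    return x_adv
```

Under the quoted setup, `delta` would be taken from {0.001, 0.003, 0.005, 0.007} for ImageNet or {0.01, 0.03, 0.05, 0.07} for CIFAR-10, and 100 such examples would be generated with randomly picked target labels.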