Trapdoor Normalization with Irreversible Ownership Verification

Authors: Hanwen Liu, Zhenyu Weng, Yuesheng Zhu, Yadong Mu

ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility assessment: each variable is listed below with its result and the LLM response supporting that judgment.
Research Type: Experimental. "Extensive experiments demonstrate that the proposed method is not only superior to previous state-of-the-art methods in robustness, but also has better efficiency."
Researcher Affiliation: Academia. "1 Wangxuan Institute of Computer Technology, Peking University; 2 School of Electronic and Computer Engineering, Peking University."
Pseudocode: Yes. "Algorithm 1 Embedding Trapdoor Normalization (TdN). Parameter: dataset D, passport p and deep model parameters θ." (An illustrative sketch of a passport-conditioned normalization layer is given below the assessment.)
Open Source Code: No. The paper states "We used PyTorch (Paszke et al., 2019) to implement our proposed TdN." but does not provide a link to the source code or an explicit statement about its public release.
Open Datasets: Yes. "Following common settings (Fan et al., 2019; Zhang et al., 2020), we include empirical results for deep models trained on CIFAR-10, CIFAR-100 (Krizhevsky, 2009), Caltech-101 and Caltech-256 (Fei-Fei et al., 2006) for image classification tasks. For these datasets, we conduct experiments using AlexNet (Krizhevsky et al., 2012), VGG-11 (Simonyan & Zisserman, 2015) and ResNet-18 (He et al., 2016) with BatchNorm (Ioffe & Szegedy, 2015) and GroupNorm (Wu & He, 2018). To demonstrate that our proposed TdN can also be applied in deep nets other than vision models, we also use GIN (Xu et al., 2019) with BatchNorm on social network datasets (including IMDB-Binary, IMDB-Multi, and COLLAB) and bioinformatics datasets (including MUTAG) (Yanardag & Vishwanathan, 2015) for graph classification tasks." (A data-loading sketch is given below the assessment.)
Dataset Splits: No. The paper specifies training and test sets ("The choices of batch size are set as 64 and 128 for the training set and the test set, respectively.") but does not mention a separate validation split or explain how validation data was used for hyperparameter tuning or early stopping.
Hardware Specification: Yes. "Most experiments were conducted using NVIDIA GeForce RTX 2080 Ti (11GB)."
Software Dependencies: No. The paper states "We used PyTorch (Paszke et al., 2019) to implement our proposed TdN." but gives no PyTorch version number and lists no other software dependencies with versions.
Experiment Setup: Yes. "All models are trained for 200 epochs by default, with the multi-step learning rate scheduled from 0.01 to 0.0001. The choices of batch size are set as 64 and 128 for the training set and the test set, respectively. Every model is initialized from the pre-trained weights on a different dataset, and they are trained for 100 epochs with a smaller learning rate of 0.001. For the hyper-parameters of knowledge distillation, we set the coefficient of distillation term as 0.95, the learning rate as 0.01, and the number of epochs as 200." (A training-configuration sketch is given below the assessment.)
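
The Pseudocode entry quotes Algorithm 1, which embeds TdN given a dataset D, a passport p, and deep model parameters θ. The paper's actual TdN computation is not reproduced here; as a rough, assumption-laden sketch only, the PyTorch module below shows a generic passport-conditioned normalization layer in which the affine scale and shift are derived from a fixed passport tensor instead of being free parameters. The class and attribute names (PassportNorm2d, to_gamma, to_beta) are hypothetical and do not come from the paper.

```python
import torch
import torch.nn as nn


class PassportNorm2d(nn.Module):
    """Illustrative passport-conditioned normalization (NOT the paper's TdN).

    The affine scale/shift of a BatchNorm-style layer are computed from a
    fixed passport tensor rather than learned freely, so ownership can later
    be checked by probing whether the passport-derived affine behavior is
    still present.
    """

    def __init__(self, num_features: int, passport: torch.Tensor):
        super().__init__()
        # Normalization without its own learnable affine parameters.
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        # The passport is fixed: registered as a buffer, not a parameter.
        self.register_buffer("passport", passport)
        # Hypothetical projections mapping the passport to scale and shift.
        self.to_gamma = nn.Linear(passport.numel(), num_features)
        self.to_beta = nn.Linear(passport.numel(), num_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        flat = self.passport.reshape(1, -1)
        gamma = self.to_gamma(flat).reshape(1, -1, 1, 1)
        beta = self.to_beta(flat).reshape(1, -1, 1, 1)
        return self.bn(x) * gamma + beta
```

In passport-based ownership schemes of this general kind, the owner keeps the passport secret, and verification checks whether the network still behaves correctly when the passport-derived affine parameters are in place; how TdN makes this verification irreversible is specified only in the paper itself.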
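The Open Datasets entry lists CIFAR-10/100, Caltech-101/256, and several graph benchmarks. The paper excerpt does not describe its data pipeline; assuming a standard torchvision setup, a minimal CIFAR-10 loading sketch that also uses the batch sizes of 64 (train) and 128 (test) from the Experiment Setup entry could look as follows. The root path "./data" and the normalization statistics are assumptions, not values from the paper.

```python
import torch
from torchvision import datasets, transforms

# Standard CIFAR-10 preprocessing; the channel means/stds are the commonly
# used CIFAR-10 statistics, not values reported in the paper.
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])

train_set = datasets.CIFAR10("./data", train=True, download=True, transform=transform)
test_set = datasets.CIFAR10("./data", train=False, download=True, transform=transform)

# Batch sizes 64 (training set) and 128 (test set) follow the Experiment Setup entry.
train_loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=128, shuffle=False)
```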
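The Experiment Setup entry reports 200 training epochs and a multi-step learning rate scheduled from 0.01 down to 0.0001. The excerpt does not give the decay milestones, the optimizer, or the loss function; the sketch below assumes SGD with momentum, cross-entropy loss, and 10x decays at epochs 100 and 150 purely for illustration.

```python
import torch
from torch import nn, optim
from torch.optim.lr_scheduler import MultiStepLR


def train(model: nn.Module, train_loader, epochs: int = 200, device: str = "cuda"):
    """Illustrative training loop using the reported hyper-parameters.

    Assumptions not stated in the paper excerpt: SGD with momentum 0.9,
    cross-entropy loss, and decay milestones at epochs 100 and 150.
    """
    model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    # Two 10x decays take the learning rate from 0.01 down to 0.0001.
    scheduler = MultiStepLR(optimizer, milestones=[100, 150], gamma=0.1)

    for _ in range(epochs):
        model.train()
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
        scheduler.step()
```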