Watermarking Deep Neural Networks with Greedy Residuals
Authors: Hanwen Liu, Zhenyu Weng, Yuesheng Zhu
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The extensive experiments show that our method outperforms previous state-of-the-art methods in five tasks. |
| Researcher Affiliation | Academia | Hanwen Liu¹, Zhenyu Weng¹, Yuesheng Zhu¹. ¹School of Electronic and Computer Engineering, Peking University. Correspondence to: Yuesheng Zhu <zhuys@pku.edu.cn>. |
| Pseudocode | Yes | Algorithm 1 (Embedding of Greedy Residuals). Parameters: public-key cryptography key set κ = {κ_s, κ_p}, message to be signed msg, dataset D, and predefined model F_θ to be watermarked, with all the hyperparameters. (See the embedding sketch below the table.) |
| Open Source Code | Yes | The source codes of greedy residuals and the corresponding datasets are publicly available at https://github.com/eil/greedy-residuals. |
| Open Datasets | Yes | We run experiments of AlexNet (Krizhevsky et al., 2012) and ResNet-18 (He et al., 2016) on Caltech-101, Caltech-256 (Li et al., 2006), CIFAR-10 and CIFAR-100 (Krizhevsky, 2012) for image classification tasks. Also, we run TextCNN (Kim, 2014) and LSTM (Hochreiter & Schmidhuber, 1997) on IMDB-2 (Maas et al., 2011) and TREC-6 (Li & Roth, 2002) for sentiment and question classification respectively. We also evaluate our proposed method on an ImageNet (Russakovsky et al., 2015) subset for image classification. |
| Dataset Splits | No | The paper only specifies batch sizes ("The batch size of the training set is set as 64, and the batch size of the test set for image classification tasks is 128, and for text classification tasks is 64.") without describing explicit train/validation/test split ratios. |
| Hardware Specification | No | The paper mentions the need for 'computing resources' but does not provide specific details about the hardware used for experiments (e.g., GPU/CPU models, memory). |
| Software Dependencies | No | The paper mentions using the RSA algorithm but does not specify any software libraries or frameworks with version numbers (e.g., PyTorch 1.9, TensorFlow 2.x) used for implementation or experiments. (An illustrative RSA signing sketch appears below the table.) |
| Experiment Setup | Yes | We train the networks for 200 epochs with a multi-step learning rate schedule of 0.01, 0.001 and 0.0001 for epochs 1-100, 101-150 and 151-200 respectively. We also use a weight decay of 5×10⁻⁴ and a momentum of 0.9. The batch size of the training set is 64; the test-set batch size is 128 for image classification tasks and 64 for text classification tasks. (See the PyTorch configuration sketched below.) |
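The Pseudocode row above reproduces only the parameter list of Algorithm 1. The PyTorch sketch below illustrates the general idea of sign-based watermark embedding on greedily selected weights; the function names `greedy_residual` and `embedding_loss`, the selection fraction `frac`, and the hinge `margin` are hypothetical simplifications for illustration, not the paper's exact selection rule or loss.

```python
import torch

def greedy_residual(weight: torch.Tensor, frac: float = 0.1) -> torch.Tensor:
    # Flatten to one row per output filter/unit.
    w = weight.reshape(weight.shape[0], -1)
    k = max(1, int(frac * w.shape[1]))
    # Greedy step: keep only the k largest-magnitude entries of each row.
    idx = torch.topk(w.abs(), k, dim=1).indices
    # One residual value per row: the mean of the selected entries.
    return torch.gather(w, 1, idx).mean(dim=1)

def embedding_loss(weight: torch.Tensor, bits: torch.Tensor,
                   margin: float = 0.1) -> torch.Tensor:
    # Hinge penalty pushing sign(residual_i) toward bit_i in {-1, +1};
    # added to the task loss so training embeds the signature bits.
    r = greedy_residual(weight)
    return torch.clamp(margin - bits * r, min=0).sum()

# Usage: bits would come from an RSA signature of msg (see the next sketch).
w = torch.randn(64, 3 * 3 * 3, requires_grad=True)  # stand-in conv-layer weights
bits = torch.randint(0, 2, (64,)).float() * 2 - 1   # stand-in +/-1 signature bits
loss = embedding_loss(w, bits)
loss.backward()
```

During training, this penalty would be weighted and added to the task loss so that the signs of the residuals converge to the signature bits without disturbing accuracy.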
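The Software Dependencies row notes that the paper names the RSA algorithm without citing a library. The following is a minimal signing/verification sketch assuming the third-party Python `cryptography` package; the PSS padding choice and the expansion of signature bytes into ±1 bits are assumptions for illustration, not details taken from the paper.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Key set kappa = {kappa_s (private), kappa_p (public)} as in Algorithm 1.
kappa_s = rsa.generate_private_key(public_exponent=65537, key_size=2048)
kappa_p = kappa_s.public_key()

msg = b"owner identity message"  # hypothetical msg to be signed
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = kappa_s.sign(msg, pss, hashes.SHA256())

# Verification raises cryptography.exceptions.InvalidSignature on mismatch.
kappa_p.verify(signature, msg, pss, hashes.SHA256())

# Expand the signature bytes into a +/-1 bit string that the embedding
# regularizer above could consume (our assumption, not from the paper).
bits = [2 * int(b) - 1 for byte in signature for b in format(byte, "08b")]
```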
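The quoted experiment setup maps directly onto a standard PyTorch optimizer and scheduler. In the sketch below, the learning-rate milestones, weight decay, momentum and epoch count come from the quote; the `nn.Linear` stand-in model and the empty training step are placeholders.

```python
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import MultiStepLR

model = nn.Linear(10, 2)  # stand-in for AlexNet / ResNet-18 / TextCNN / LSTM
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=5e-4)
# lr = 0.01 for epochs 1-100, 0.001 for 101-150, 0.0001 for 151-200.
scheduler = MultiStepLR(optimizer, milestones=[100, 150], gamma=0.1)

for epoch in range(200):
    # ... forward/backward over training batches of size 64 goes here ...
    scheduler.step()
```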