Dimension-independent Certified Neural Network Watermarks via Mollifier Smoothing
Authors: Jiaxiang Ren, Yang Zhou, Jiayin Jin, Lingjuan Lyu, Da Yan
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical evaluation on real datasets demonstrates the superior performance of our mollifier smoothing approach against several state-of-the-art empirical and certified defense methods on image classification. In this section, we have evaluated the empirical and certified defense of our Mollifier Smoothing model and other comparison methods against l2, l3, and l∞-norm watermark removal attacks over three standard image classification datasets: MNIST (Deng, 2012), CIFAR-10 (Krizhevsky, 2009), and CIFAR-100 (Krizhevsky, 2009). |
| Researcher Affiliation | Collaboration | Jiaxiang Ren 1 Yang Zhou 1 Jiayin Jin 1 Lingjuan Lyu 2 Da Yan 3 1Auburn University, USA 2Sony AI, Japan 3University of Alabama at Birmingham, USA. Correspondence to: Yang Zhou <yangzhou@auburn.edu>. |
| Pseudocode | No | The paper presents theoretical derivations and experimental results but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | We promise to release our open-source codes on GitHub and maintain a project website with detailed documentation for long-term access by other researchers and end-users after the paper is accepted. |
| Open Datasets | Yes | We study image classification networks on three standard image datasets: MNIST 1, CIFAR-10 2, and CIFAR-100 3. 1http://yann.lecun.com/exdb/mnist/ 2https://www.cs.toronto.edu/~kriz/cifar.html 3https://www.cs.toronto.edu/~kriz/cifar.html (see the dataset-loading sketch below the table) |
| Dataset Splits | No | The paper states 'We train the base classifiers on the training sets... and test it on the corresponding test sets.' and provides training data ratios (e.g., 'Training data ratio on MNIST 60K/10K'), but does not explicitly specify a validation set split or how it was derived for reproduction. |
| Hardware Specification | Yes | The experiments were conducted on a compute server running on Red Hat Enterprise Linux 7.2 with 2 CPUs of Intel Xeon E5-2650 v4 (at 2.66 GHz) and 8 GPUs of NVIDIA GeForce GTX 2080 Ti (with 11GB of GDDR6 on a 352-bit memory bus and memory bandwidth in the neighborhood of 620GB/s), 256GB of RAM, and 1TB of HDD. |
| Software Dependencies | Yes | The codes were implemented in Python 3.7.3 and PyTorch 1.0.14. We also employ Numpy 1.16.4 and Scipy 1.3.0 in the implementation. |
| Experiment Setup | Yes | The neural networks are trained with Kaiming initialization (He et al., 2015) using SGD for 160 epochs with an initial learning rate of 0.1 and batch size 100. The learning rate is decayed by a factor of 0.1 at 1/2 and 3/4 of the total number of epochs. Table 33: Hyperparameter Settings. (See the training-setup sketch below the table.) |
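
The Open Datasets row lists MNIST, CIFAR-10, and CIFAR-100. Since the paper's code has not yet been released, the sketch below only illustrates one plausible way to obtain the standard train/test partitions via torchvision; the `./data` root, the download flags, and the bare `ToTensor()` preprocessing are assumptions, not details taken from the paper.

```python
# Hypothetical loading of the three benchmark datasets via torchvision.
# The root path and preprocessing are placeholder assumptions.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

mnist_train = datasets.MNIST("./data", train=True, download=True, transform=to_tensor)
mnist_test = datasets.MNIST("./data", train=False, download=True, transform=to_tensor)

cifar10_train = datasets.CIFAR10("./data", train=True, download=True, transform=to_tensor)
cifar10_test = datasets.CIFAR10("./data", train=False, download=True, transform=to_tensor)

cifar100_train = datasets.CIFAR100("./data", train=True, download=True, transform=to_tensor)
cifar100_test = datasets.CIFAR100("./data", train=False, download=True, transform=to_tensor)

# Sanity check: the standard partitions match the quoted 60K/10K split for MNIST.
print(len(mnist_train), len(mnist_test))  # 60000 10000
```

As the Dataset Splits row notes, the paper trains on these standard training sets and evaluates on the corresponding test sets without specifying a validation split.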
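
The Experiment Setup row quotes Kaiming initialization, SGD for 160 epochs, an initial learning rate of 0.1 decayed by a factor of 0.1 at 1/2 and 3/4 of the epochs, and batch size 100. A minimal PyTorch sketch of that schedule follows; the toy architecture, the momentum value, and the `train_one_epoch` helper are assumptions not stated in the quoted text.

```python
# Sketch of the quoted training setup: Kaiming init, SGD, 160 epochs,
# lr 0.1 with x0.1 decay at epochs 80 and 120 (1/2 and 3/4 of 160), batch size 100.
import torch
import torch.nn as nn

def kaiming_init(module):
    # Kaiming (He et al., 2015) initialization for conv/linear layers.
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        nn.init.kaiming_normal_(module.weight, nonlinearity="relu")
        if module.bias is not None:
            nn.init.zeros_(module.bias)

# Placeholder base classifier; the paper's actual architecture is not quoted here.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 10),
)
model.apply(kaiming_init)

epochs, batch_size = 160, 100
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)  # momentum assumed
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[epochs // 2, 3 * epochs // 4], gamma=0.1
)

# for epoch in range(epochs):
#     train_one_epoch(model, optimizer, batch_size=batch_size)  # hypothetical helper
#     scheduler.step()
```

The MultiStepLR milestones at epochs 80 and 120 reproduce the quoted decay points at one half and three quarters of training.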