Watermarking for Out-of-distribution Detection

Authors: Qizhou Wang, Feng Liu, Yonggang Zhang, Jing Zhang, Chen Gong, Tongliang Liu, Bo Han

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments verify the effectiveness of watermarking, demonstrating the significance of the reprogramming property of deep models in OOD detection. In this section, we conduct extensive experiments for watermarking in OOD detection. Specifically, we demonstrate the effectiveness of our method on a wide range of OOD evaluation benchmarks; we conduct experiments for the important hyper-parameters in our learning framework; and we provide further experiments for an improved interpretation of our proposal.
Researcher Affiliation | Academia | 1 Department of Computer Science, Hong Kong Baptist University; 2 School of Mathematics and Statistics, The University of Melbourne; 3 School of Computer Science, The University of Sydney; 4 PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of MoE; 5 Jiangsu Key Lab of Image and Video Understanding for Social Security, School of Computer Science and Engineering, Nanjing University of Science and Technology; 6 TML Lab, The University of Sydney
Pseudocode | No | The paper describes the 'Overall Algorithm' in textual form with three stages ('Negative sampling', 'Risk calculating', 'Watermark updating'), but does not present it as structured pseudocode or a clearly labeled algorithm block. (A hedged sketch of such a three-stage loop is given after this table.)
Open Source Code | Yes | The code is publicly available at: github.com/qizhouwang/watermarking.
Open Datasets | Yes | We use CIFAR-10, CIFAR-100 [26], and ImageNet [42] datasets as three ID datasets, with data pre-processing including horizontal flip and normalization. (See the pre-processing sketch after this table.)
Dataset Splits | No | The paper discusses training and testing, but does not explicitly specify a validation split (e.g., percentages or counts) or describe its use for hyperparameter tuning.
Hardware Specification | Yes | All the methods are realized by Pytorch 1.81 with CUDA 11.1, where we use several machines equipped with GeForce RTX 3090 GPUs and AMD Ryzen Threadripper 3960X Processors.
Software Dependencies | Yes | All the methods are realized by Pytorch 1.81 with CUDA 11.1, where we use several machines equipped with GeForce RTX 3090 GPUs and AMD Ryzen Threadripper 3960X Processors.
Experiment Setup | Yes | For the CIFAR benchmarks, the models are trained for 200 epochs via stochastic gradient descent, with batch size 64, momentum 0.9, and an initial learning rate of 0.1. The learning rate is divided by 10 after 100 and 150 epochs. (A PyTorch sketch of this schedule is given below.)
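Although the paper gives no pseudocode, the three stages named under Pseudocode above map naturally onto a simple optimization loop. The sketch below is a minimal reading of that loop, not the authors' implementation: the negative-sampling rule, the risk terms, and names such as learn_watermark are illustrative assumptions.

```python
# Minimal sketch of a three-stage watermark-learning loop (negative sampling,
# risk calculating, watermark updating). Illustrative only: the negative
# sampler and risk terms below are placeholders, not the paper's definitions.
import torch
import torch.nn.functional as F

def learn_watermark(model, id_loader, epochs=10, lr=0.1):
    model.eval()            # the pretrained classifier stays fixed
    watermark = None        # static, input-sized perturbation to be learned

    for _ in range(epochs):
        for x_id, y_id in id_loader:
            if watermark is None:
                watermark = torch.zeros_like(x_id[0], requires_grad=True)

            # Stage 1: negative sampling (placeholder: pure-noise images)
            x_neg = torch.randn_like(x_id)

            # Stage 2: risk calculating on watermarked ID and negative data
            logits_id = model(x_id + watermark)
            logits_neg = model(x_neg + watermark)
            risk = (F.cross_entropy(logits_id, y_id)
                    + logits_neg.softmax(dim=1).max(dim=1).values.mean())

            # Stage 3: watermark updating by a gradient step on the risk
            grad, = torch.autograd.grad(risk, watermark)
            watermark = (watermark - lr * grad).detach().requires_grad_(True)

    return watermark.detach()
```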
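For the ID pre-processing quoted under Open Datasets (horizontal flip and normalization), a standard torchvision pipeline looks like the following; the normalization statistics are the commonly used CIFAR-10 values and are an assumption, not taken from the paper.

```python
# Hedged sketch of the stated pre-processing: random horizontal flip plus
# per-channel normalization (CIFAR-10 statistics assumed, not from the paper).
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.4914, 0.4822, 0.4465),
                         std=(0.2470, 0.2435, 0.2616)),
])
```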
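The quoted Experiment Setup translates directly into a PyTorch optimizer and step schedule; the sketch below assumes a `model` and a `train_loader` (built with batch size 64) and shows only the parts the quote specifies.

```python
# SGD with momentum 0.9, initial lr 0.1, divided by 10 after epochs 100 and
# 150, trained for 200 epochs; `model` and `train_loader` are assumed.
import torch
import torch.nn.functional as F

optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[100, 150], gamma=0.1)

for epoch in range(200):
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()
```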