Resource Efficient Deep Learning Hardware Watermarks with Signature Alignment

Authors: Joseph Clements, Yingjie Lao

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experimental results demonstrate that by considering the target key samples when generating the hardware modifications, we can significantly increase the embedding success rate while targeting fewer functional blocks, decreasing the required hardware overhead needed to defend it.
Researcher Affiliation | Collaboration | Joseph Clements (1,2), Yingjie Lao (1,3): (1) Clemson University, Clemson, South Carolina, 29634; (2) Applied Research Associates, Albuquerque, New Mexico, 87110; (3) Tufts University, Medford, Massachusetts, 02155.
Pseudocode | Yes | The paper presents Algorithm 1: Signature Alignment Algorithm.
Open Source Code | No | The paper mentions using the 'Pytorch framework (Paszke et al. 2019)' and the 'TIMM model library (Wightman 2019)' and refers to their sources, but does not provide open-source code for the methodology developed in this paper.
Open Datasets | Yes | We perform this experimental evaluation using the ResNet18 (He et al. 2016) architecture trained for Cifar10, Cifar100 (Krizhevsky, Hinton et al. 2009), and ImageNet (Deng et al. 2009) as the key DNN.
Dataset Splits | No | The paper mentions using Cifar10, Cifar100, and ImageNet but does not provide specific training/validation/test split percentages or sample counts, nor does it refer to predefined splits with citations, beyond the dataset citations themselves.
Hardware Specification | No | The paper describes the target hardware *designs* for watermarking (e.g., a 'grid of 128 x 128 ALUs', 'Tiny TPU', and a 'custom MMU architecture') but does not specify the computing hardware (e.g., specific GPU or CPU models) used to run the experiments or simulations themselves.
Software Dependencies | No | The paper mentions the 'Pytorch framework (Paszke et al. 2019)', the 'TIMM model library (Wightman 2019)', and the 'Synopsys Design Compiler with 32nm Technology', but does not provide version numbers for these software components, which are necessary for a reproducible description.
Experiment Setup | Yes | We start with a large upper bound of ϵη and then decrease this constraint every five iterations of the algorithm by about 50% until a valid solution cannot be found. This allows us to produce more effective embeddings while sacrificing some ESR. ... We introduce the additional ϵp constraint to ensure that the key sample retains some similarity to xk. ... In the supplementary materials, we provide an ablation study verifying the stability of the methodology with respect to these parameters.
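The annealing schedule quoted above (start with a large upper bound ϵη, halve it every five iterations, stop when no valid solution exists) can be sketched as a small loop. This is a minimal illustration of the described schedule only, not the paper's implementation: `find_embedding` is a hypothetical stand-in for the signature-alignment solve at a given ϵη bound, and the default values are assumptions.

```python
def anneal_constraint(find_embedding, eps_eta_init=1.0, decay=0.5, period=5):
    """Shrink the eps_eta upper bound on a fixed schedule, as described
    in the paper's setup: decrease by ~50% every five iterations until
    no valid embedding can be found, then return the last valid one.

    `find_embedding(eps_eta)` is a hypothetical solver stub that returns
    an embedding, or None when no valid solution exists under the bound.
    """
    eps_eta = eps_eta_init
    best = None
    iteration = 0
    while True:
        solution = find_embedding(eps_eta)
        if solution is None:
            # The constraint has become too tight; keep the last success.
            return best, eps_eta
        best = solution
        iteration += 1
        if iteration % period == 0:
            eps_eta *= decay  # tighten the bound every `period` iterations
```

A tighter final ϵη trades embedding success rate (ESR) for a more effective embedding, which matches the trade-off the quoted passage describes.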