Responsible Disclosure of Generative Models Using Scalable Fingerprinting

Authors: Ning Yu, Vladislav Skripniuk, Dingfan Chen, Larry S. Davis, Mario Fritz

ICLR 2022

Reproducibility variables, results, and supporting LLM responses:
Research Type: Experimental. LLM Response: Experiments show that our method fulfills key properties of a fingerprinting mechanism and achieves effectiveness in deep fake detection and attribution.
Researcher Affiliation: Collaboration. LLM Response: Ning Yu (1,2,3), Vladislav Skripniuk (4), Dingfan Chen (4), Larry Davis (2), Mario Fritz (4); affiliations: 1 Salesforce Research, 2 University of Maryland, 3 Max Planck Institute for Informatics, 4 CISPA Helmholtz Center for Information Security.
Pseudocode: No. LLM Response: The paper includes diagrams and mathematical formulations for its pipeline and modulated convolutional layer, but it does not provide structured pseudocode or algorithm blocks.
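
For context on what that formulation covers: the modulated convolutional layer follows the StyleGAN2 weight-modulation scheme the authors build on. Below is a minimal PyTorch sketch of that scheme, not the authors' TensorFlow implementation; `modulated_conv2d` and its argument names are illustrative, and in the paper's setting the per-sample `style` scales would come from the fingerprint embedding rather than a style latent.

```python
import torch
import torch.nn.functional as F

def modulated_conv2d(x, weight, style, demodulate=True, eps=1e-8):
    """StyleGAN2-style modulated convolution (illustrative sketch).

    x:      (N, C_in, H, W) input activations
    weight: (C_out, C_in, k, k) shared base kernel
    style:  (N, C_in) per-sample modulation scales (in this paper's
            setting, derived from the fingerprint embedding)
    """
    N, C_in, H, W = x.shape
    C_out, _, k, _ = weight.shape

    # Modulate: scale the kernel's input channels per sample.
    w = weight[None] * style[:, None, :, None, None]       # (N, C_out, C_in, k, k)

    if demodulate:
        # Demodulate: renormalize so each output channel keeps unit expected scale.
        d = torch.rsqrt(w.pow(2).sum(dim=[2, 3, 4]) + eps)  # (N, C_out)
        w = w * d[:, :, None, None, None]

    # Grouped-convolution trick: fold the batch into groups so every
    # sample is convolved with its own modulated kernel in one call.
    x = x.reshape(1, N * C_in, H, W)
    w = w.reshape(N * C_out, C_in, k, k)
    out = F.conv2d(x, w, padding=k // 2, groups=N)
    return out.reshape(N, C_out, H, W)
```
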
Open Source Code: Yes. LLM Response: Code and models are available at GitHub.
Open Datasets: Yes. LLM Response: Datasets. We conduct experiments on CelebA face dataset (Liu et al., 2015), LSUN Bedroom and Cat datasets (Yu et al., 2015).
Dataset Splits: No. LLM Response: The paper states: “We train/evaluate on 30k/30k CelebA, 30k/30k LSUN Bedroom at the size of 128 × 128, and 50k/50k LSUN Cat at the size of 256 × 256.” While it specifies training and evaluation (test) splits, it does not describe a separate validation split.
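
Because only the split sizes are reported, a re-implementation has to pick the indices itself. A minimal sketch, assuming a random disjoint train/eval partition (the paper does not say how the splits were drawn; `split_indices` and the `n_total` value are placeholders):

```python
import numpy as np

def split_indices(n_total, n_train, n_eval, seed=0):
    """Draw disjoint train/eval index sets matching the reported sizes."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_total)
    return idx[:n_train], idx[n_train:n_train + n_eval]

# e.g., the 50k/50k LSUN Cat split; n_total is a placeholder for the
# full dataset size, which the paper does not state.
train_idx, eval_idx = split_indices(n_total=200_000, n_train=50_000, n_eval=50_000)
```
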
Hardware Specification: Yes. LLM Response: We train on 2 NVIDIA V100 GPUs with 16GB of memory each.
Software Dependencies: No. LLM Response: The paper mentions using the “Adam optimizer” and that its code is “modified from the GitHub repository of StyleGAN2 (...) official TensorFlow implementation”, but it does not specify version numbers for these or other software dependencies.
Experiment Setup: Yes. LLM Response: We set the length of latent code dz = 512. ... λ1 = 1.0, λ2 = 1.0, λ3 = 2.0, and λ4 = 2.0 are hyper-parameters to balance the magnitude of each loss term ... The learning rate η = 0.002 ... we set the batch size at 32.
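
The reported values translate directly into a training configuration. A minimal PyTorch sketch of how the weighted loss and optimizer might be wired up; `total_loss` and the stand-in `model` are hypothetical, and the four loss terms are placeholders for the paper's actual components:

```python
import torch

# Values quoted under Experiment Setup.
D_Z = 512                        # latent-code length dz
LAMBDAS = (1.0, 1.0, 2.0, 2.0)   # λ1..λ4, loss-term weights
LR = 0.002                       # Adam learning rate η
BATCH_SIZE = 32

def total_loss(terms):
    """Weighted sum of four scalar loss tensors (placeholders for the
    paper's loss components)."""
    assert len(terms) == len(LAMBDAS)
    return sum(lam * t for lam, t in zip(LAMBDAS, terms))

# Stand-in module; the real trainable parameters are those of the
# fingerprinted generator pipeline.
model = torch.nn.Linear(D_Z, D_Z)
optimizer = torch.optim.Adam(model.parameters(), lr=LR)
```
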