Single-Model Attribution of Generative Models Through Final-Layer Inversion
Authors: Mike Laszkiewicz, Jonas Ricker, Johannes Lederer, Asja Fischer
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The theoretical findings are accompanied by an experimental study demonstrating the effectiveness of our approach and its flexibility to various domains. |
| Researcher Affiliation | Academia | (1) Faculty of Computer Science, Ruhr University Bochum, Germany; (2) Department of Mathematics, Computer Science, and Natural Sciences, University of Hamburg, Germany. |
| Pseudocode | No | The paper describes its methods using mathematical equations and prose but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | All experiments can be reproduced using our public code repository: https://github.com/MikeLasz/flipad |
| Open Datasets | Yes | CelebA (Liu et al., 2015) or the LSUN bedroom (Yu et al., 2015) dataset. (...) Stable Diffusion (Rombach et al., 2022). (...) COCO2014 annotations (Lin et al., 2014) (...) FFHQ (Karras et al., 2019). (...) BCDR (Lopez et al., 2012), (...) Redwine dataset (Cortez et al., 2009). |
| Dataset Splits | Yes | We select the threshold τ in (7) by fixing a false negative rate fnr and set τ as the (1 − fnr)-quantile of {s(x_i) : x_i ∈ X_val}, where s(x) is the anomaly score function and X_val is a validation set consisting of n_val inlier samples (i.e., generated by G). We specify n_val for each experiment in the following sections. In all of our experiments, we set fnr = 0.005, or fnr = 0.05 in the case of the Stable Diffusion experiments. (...) All experiments on CelebA and LSUN utilize n_tr = 10 000 and n_val = n_test = 1 000 samples. (...) We set n_tr = 2 000, n_val = 100, n_test = 200. (See the threshold-selection sketch after the table.) |
| Hardware Specification | Yes | All computations were conducted on an NVIDIA A40 GPU. |
| Software Dependencies | No | The paper mentions software like 'diffusers library', 'medigan library', 'SDV library', and 'Adam' optimizer, but does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | We solved the optimization problem using FISTA (Beck & Teboulle, 2009) with the regularization parameter λ set to 0.0005 for the experiments involving the models from Section G.2, to 0.00001 for the Stable Diffusion experiments, to 0.001 for the medical image experiments, and to 0.0005 for the Redwine and Whitewine experiments. (...) We trained ϕ using the Adam optimizer for 50 epochs with a learning rate of 0.0005, which we reduce to 5e-5 after 25 epochs, and a weight decay of 0.5e-6. In the Stable Diffusion, StyleGAN, and tabular experiments, we increase the number of epochs to 100 and reduce the learning rate after 25 and 50 epochs to 0.5e-6 and 0.5e-7, respectively. (See the FISTA and learning-rate schedule sketches after the table.) |
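The threshold selection quoted under Dataset Splits reduces to an empirical quantile over validation anomaly scores. Below is a minimal sketch, assuming NumPy; the `select_threshold` helper and the stand-in validation scores are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

def select_threshold(val_scores: np.ndarray, fnr: float = 0.005) -> float:
    """Set tau to the (1 - fnr)-quantile of anomaly scores computed on
    inlier validation samples (i.e., samples generated by G), so that
    roughly a fraction `fnr` of G's own samples would be rejected."""
    return float(np.quantile(val_scores, 1.0 - fnr))

# Hypothetical usage with n_val = 1000 stand-in scores.
rng = np.random.default_rng(0)
val_scores = rng.standard_normal(1000)
tau = select_threshold(val_scores, fnr=0.005)
# A test sample x would then be attributed to a different model if s(x) > tau.
```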
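The Experiment Setup row cites FISTA (Beck & Teboulle, 2009) with an ℓ1 regularization parameter λ. The following is a minimal sketch of FISTA for a generic lasso objective 0.5·||Aw − b||² + λ·||w||₁, assuming the final layer can be treated as a linear map; the matrix `A` and the toy recovery example are assumptions for illustration, not the paper's inversion code.

```python
import numpy as np

def soft_threshold(v: np.ndarray, t: float) -> np.ndarray:
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_lasso(A: np.ndarray, b: np.ndarray, lam: float, n_iter: int = 500) -> np.ndarray:
    """Minimize 0.5 * ||A w - b||^2 + lam * ||w||_1 with FISTA."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth part's gradient
    w = np.zeros(A.shape[1])
    y, t = w.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        w_next = soft_threshold(y - grad / L, lam / L)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0
        y = w_next + ((t - 1.0) / t_next) * (w_next - w)  # momentum step
        w, t = w_next, t_next
    return w

# Hypothetical usage: recover a sparse w from observations b = A @ w_true.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 100))
w_true = np.zeros(100)
w_true[:5] = 1.0
w_hat = fista_lasso(A, A @ w_true, lam=0.0005)
```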
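The quoted training schedule (Adam, 50 epochs, learning rate 0.0005 dropped to 5e-5 after epoch 25, weight decay 0.5e-6) maps onto a standard PyTorch milestone schedule. Below is a minimal sketch, assuming PyTorch; the toy `phi` network is a placeholder for the paper's feature extractor, whose actual architecture is defined in the public repository.

```python
import torch
from torch import nn, optim

# Placeholder feature extractor phi; the real architecture is in
# https://github.com/MikeLasz/flipad.
phi = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))

optimizer = optim.Adam(phi.parameters(), lr=5e-4, weight_decay=0.5e-6)
# Reduce 5e-4 -> 5e-5 after epoch 25 (the 50-epoch configuration).
# The 100-epoch runs instead drop to 0.5e-6 and 0.5e-7 at epochs 25 and 50,
# which would need per-milestone factors rather than a single gamma.
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[25], gamma=0.1)

for epoch in range(50):
    # ... one training pass over mini-batches would go here ...
    scheduler.step()
```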