Toward effective protection against diffusion-based mimicry through score distillation

Authors: Haotian Xue, Chumeng Liang, Xiaoyu Wu, Yongxin Chen

ICLR 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct extensive experiments to support our arguments above. Following (Salman et al., 2023; Liang et al., 2023), we conduct experiments on i) global image-to-image edit, ii) image-to-image inpainting, and iii) textual inversion."
Researcher Affiliation | Academia | Haotian Xue (Georgia Institute of Technology), Chumeng Liang (Shanghai Jiao Tong University; University of Southern California), Xiaoyu Wu (Shanghai Jiao Tong University), Yongxin Chen (Georgia Institute of Technology)
Pseudocode | Yes | Appendix A (Details about Algorithms): "We provide a PyTorch-styled pseudo code to show how to attack the latent diffusion model (LDM) with SDS acceleration, and how the gradient descent works." (A hedged sketch of such an attack gradient follows the table.)
Open Source Code | Yes | "Codes for this paper are available in https://github.com/xavihart/Diff-Protect."
Open Datasets | Yes | "We collect the anime and portrait data from the internet, the landscape data from (Arnaud, 2020), and the artworks subset from WikiArt (Nichol, 2016)."
Dataset Splits | No | The paper describes the datasets used and mentions training for textual inversion, but it does not give explicit train/validation/test splits (e.g., percentages or sample counts).
Hardware Specification | Yes | "All the threat model experiments in this paper can be run on one single A6000 GPU without parallelization."
Software Dependencies | No | The paper mentions using PyTorch and the Diffusers library, but it does not provide version numbers for these or any other software dependencies.
Experiment Setup | Yes | "For all the methods, we use δ = 16/255 as the ℓ∞ budget, α = 1/255 as the step size and run 100 iterations in the format of PGD attacks." (A minimal PGD sketch under these settings also follows the table.)
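
To make the pseudocode row concrete, below is a minimal sketch, assuming a PyTorch-style setup, of an SDS-accelerated gradient for attacking an LDM: the U-Net Jacobian is skipped and the noise residual is chained back through the encoder only. The names `encoder`, `unet`, and `alphas_cumprod` are placeholders rather than the authors' exact interfaces, and text conditioning is omitted for brevity.

```python
import torch

def sds_grad(x, encoder, unet, alphas_cumprod):
    """Approximate gradient of the diffusion denoising loss w.r.t. the input
    image x, skipping the U-Net Jacobian as in score distillation sampling (SDS).
    `encoder`, `unet`, and `alphas_cumprod` are assumed interfaces, not the
    authors' exact code."""
    x = x.clone().requires_grad_(True)
    z = encoder(x)                                     # image -> latent, kept on the graph

    alphas_cumprod = alphas_cumprod.to(x.device)
    t = torch.randint(0, len(alphas_cumprod), (1,), device=x.device)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    eps = torch.randn_like(z)
    z_t = a_bar.sqrt() * z + (1 - a_bar).sqrt() * eps  # forward-diffuse the latent to step t

    with torch.no_grad():                              # SDS: no backprop through the U-Net
        eps_pred = unet(z_t, t)

    # Chain the residual (eps_pred - eps) through the encoder only (a vector-Jacobian product).
    grad_x, = torch.autograd.grad(z, x, grad_outputs=eps_pred - eps)
    return grad_x
```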
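
And a minimal sketch of the PGD loop under the stated settings (ℓ∞ budget 16/255, step size 1/255, 100 iterations). `grad_fn` is a placeholder for any per-step gradient, e.g. the SDS-style gradient above; `minimize=True` corresponds to the gradient-descent variant mentioned in the appendix quote.

```python
import torch

def pgd_protect(x_clean, grad_fn, eps=16 / 255, alpha=1 / 255, steps=100, minimize=False):
    """Return a protected image within an l_inf ball of radius eps around x_clean.
    `grad_fn` is a hypothetical callable mapping an image to a gradient of the
    chosen objective (e.g. sds_grad above)."""
    delta = torch.zeros_like(x_clean)
    for _ in range(steps):
        g = grad_fn(x_clean + delta)                      # gradient of the chosen objective
        step = -alpha * g.sign() if minimize else alpha * g.sign()
        delta = (delta + step).clamp(-eps, eps)           # project back into the l_inf ball
        delta = (x_clean + delta).clamp(0, 1) - x_clean   # keep the image in [0, 1]
    return x_clean + delta
```

Switching between maximizing and minimizing the objective only flips the sign of the step; the ℓ∞ projection and the pixel-range clamp stay the same.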