Are Diffusion Models Vulnerable to Membership Inference Attacks?

Authors: Jinhao Duan, Fei Kong, Shiqi Wang, Xiaoshuang Shi, Kaidi Xu

ICML 2023

Reproducibility Variable Result LLM Response
Research Type Experimental Experimental results demonstrate that our methods precisely infer the membership with high confidence in both scenarios across multiple datasets.
Researcher Affiliation Collaboration Drexel University; University of Electronic Science and Technology of China; AWS AI Lab.
Pseudocode No The paper contains detailed mathematical formulations and definitions of the method (e.g., equations 3-13) but no explicitly labeled "Pseudocode" or "Algorithm" block.
Open Source Code Yes Code is available at https://github.com/jinhaoduan/SecMI.
Open Datasets Yes Multiple popular datasets, including CIFAR-10/100 (Krizhevsky et al., 2009), STL10-Unlabeled (STL10-U) (Coates et al., 2011), and Tiny-ImageNet (Tiny-IN) for DDPM; Pokemon (Pinkney, 2022) and COCO2017val (Lin et al., 2014) for LDMs; and LAION (Schuhmann et al., 2022) for Stable Diffusion.
Dataset Splits Yes For all the datasets, we randomly select 50% of the training samples as D_M and use the rest of the training samples as D_H. For example, CIFAR-10 contains 50,000 images in the training set, so we have 25,000 images for D_M and another 25,000 images for D_H.
Hardware Specification No The paper does not explicitly describe the hardware used for running the experiments. It does not mention specific GPU or CPU models, or detailed cloud instance specifications.
Software Dependencies No The paper mentions using ResNet-18 as a backbone for the attack model but does not provide specific version numbers for any software, libraries, or frameworks used (e.g., Python, PyTorch, TensorFlow, CUDA versions).
Experiment Setup Yes We set t_SEC to 100 for all the experiments. For the attack model f_A, we choose ResNet-18 as the backbone and adopt 20% of D_M and D_H as its training data. f_A is trained for 15 epochs with a learning rate of 0.001 and a batch size of 128.
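The 50/50 member/hold-out split described in the Dataset Splits row can be sketched as follows. This is a minimal illustration, not the authors' code; the function name, seed, and default sizes are assumptions, with only the 50,000-image CIFAR-10 training set and the equal split taken from the text.

```python
import random

def membership_split(num_train=50000, seed=0):
    """Randomly split training-set indices into a member set D_M
    (used to train the diffusion model) and a hold-out set D_H
    (never seen during training), as in the reported 50/50 split."""
    rng = random.Random(seed)  # fixed seed for a reproducible split (assumed)
    indices = list(range(num_train))
    rng.shuffle(indices)
    half = num_train // 2
    d_m = indices[:half]   # member set D_M, e.g. 25,000 CIFAR-10 images
    d_h = indices[half:]   # hold-out set D_H, the remaining 25,000 images
    return d_m, d_h

d_m, d_h = membership_split()
```

Because the split is disjoint by construction, every training image lands in exactly one of D_M or D_H, which is what a membership inference evaluation requires.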
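The hyperparameters reported in the Experiment Setup row can be collected into a single configuration sketch. The dict layout and key names below are my own; only the values (t_SEC = 100, ResNet-18 backbone, 20% training fraction, 15 epochs, learning rate 0.001, batch size 128) come from the paper's text.

```python
# Hedged sketch of the attack-model setup; key names are assumptions,
# values are taken verbatim from the reported experiment setup.
attack_config = {
    "t_sec": 100,            # diffusion timestep t_SEC used in all experiments
    "backbone": "ResNet-18", # backbone of the attack model f_A
    "train_fraction": 0.20,  # fraction of D_M and D_H used to train f_A
    "epochs": 15,            # training epochs for f_A
    "learning_rate": 0.001,  # learning rate for f_A
    "batch_size": 128,       # batch size for f_A
}
```

Keeping these values in one place makes it easy to check a reimplementation against the paper's reported setup.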