Dropout Is NOT All You Need to Prevent Gradient Leakage

Authors: Daniel Scheliga, Patrick Maeder, Marco Seeland

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct an extensive systematic evaluation of our attack on four seminal model architectures and three image classification datasets of increasing complexity. We find that our proposed attack bypasses the protection seemingly induced by dropout and reconstructs client data with high fidelity.
Researcher Affiliation | Academia | 1 Technische Universität Ilmenau, Germany; 2 Friedrich Schiller Universität Jena, Germany
Pseudocode | Yes | Algorithm 1: Dropout Inversion Attack
Open Source Code | Yes | We provide a PyTorch implementation of DIA: https://github.com/dAI-SY-Group/DropoutInversionAttack
Open Datasets | Yes | We use MNIST (Deng 2012) and CIFAR-10 (Krizhevsky, Hinton et al. 2009) datasets... For experiments on ImageNet (Russakovsky et al. 2015)...
Dataset Splits | No | The paper mentions 'train and test splits' and refers to a 'victim client dataset of 128 images from the training data', but does not explicitly provide details about a validation split (e.g., percentages, sample counts, or methodology for creation).
Hardware Specification | No | The paper does not explicitly mention any specific hardware (e.g., GPU models, CPU types, or memory specifications) used for running the experiments.
Software Dependencies | No | The paper mentions using a 'PyTorch implementation' but does not provide specific version numbers for PyTorch or any other software dependencies.
Experiment Setup | Yes | More details on the experimental setup can be found in Section 5. ... Dropout rates were selected as p ∈ {0, 0.25, 0.50, 0.75}. ... We observe that the joint optimization of dummy data and dropout masks in DIA finds a suitable approximation F_ΨA ≈ F_ΨC that allows to reconstruct the client data. ... we tune the impact of Ω(Ψ_A) by weighting with λ_mask ∈ {10⁻⁴, 10⁻³, 10⁻²}.
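
The Pseudocode and Experiment Setup rows describe the heart of the method: Algorithm 1 jointly optimizes dummy data and dropout masks so that the gradients they induce match the gradients observed from the client. Below is a minimal PyTorch sketch of that idea, not the authors' implementation (which is in the linked repository). The toy MaskedMLP architecture, the sigmoid relaxation of the mask, the L2 gradient-matching objective, and the exact form of the mask regularizer Ω(Ψ_A) are all illustrative assumptions; labels are assumed known to the attacker, as is common in the gradient inversion literature.

import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedMLP(nn.Module):
    """Toy victim model with dropout expressed as an explicit mask input."""

    def __init__(self, in_dim=784, hidden=256, classes=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, classes)

    def forward(self, x, mask):
        # `mask` plays the role of the dropout mask Psi applied after fc1.
        return self.fc2(F.relu(self.fc1(x)) * mask)


def client_gradient(model, x, y, p):
    """Client side (FedSGD): gradient under a random binary dropout mask Psi_C."""
    mask = (torch.rand(model.fc1.out_features) > p).float() / (1.0 - p)
    loss = F.cross_entropy(model(x, mask), y)
    return torch.autograd.grad(loss, list(model.parameters()))


def dia_attack(model, true_grads, y, p, steps=2000, lam_mask=1e-3, lr=0.1):
    """Server side: jointly optimize dummy data x_hat and a relaxed mask Psi_A
    so that the induced gradient matches the observed client gradient."""
    x_hat = torch.randn(y.shape[0], model.fc1.in_features, requires_grad=True)
    mask_logits = torch.zeros(model.fc1.out_features, requires_grad=True)
    opt = torch.optim.Adam([x_hat, mask_logits], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        # Continuous relaxation of the binary dropout mask, rescaled like
        # inverted dropout.
        mask = torch.sigmoid(mask_logits) / (1.0 - p)
        loss = F.cross_entropy(model(x_hat, mask), y)
        grads = torch.autograd.grad(loss, list(model.parameters()),
                                    create_graph=True)
        # Gradient-matching objective (plain L2 here; related attacks also
        # use cosine distance).
        match = sum(((g - t) ** 2).sum() for g, t in zip(grads, true_grads))
        # Assumed form of the mask regularizer Omega(Psi_A): keep the mean
        # mask activity near the keep probability 1 - p.
        omega = (torch.sigmoid(mask_logits).mean() - (1.0 - p)) ** 2
        (match + lam_mask * omega).backward()
        opt.step()
    return x_hat.detach()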
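
Given such an attack function, the hyperparameter grids quoted in the Experiment Setup row (p ∈ {0, 0.25, 0.50, 0.75} and λ_mask ∈ {10⁻⁴, 10⁻³, 10⁻²}) could be swept as follows. This continues the sketch above and uses a placeholder MSE metric, since the excerpt does not state which fidelity measures the authors report.

# Continues the sketch above (MaskedMLP, client_gradient, dia_attack).
torch.manual_seed(0)
model = MaskedMLP()
x = torch.randn(4, 784)              # stand-in for a victim client batch
y = torch.randint(0, 10, (4,))

for p in (0.0, 0.25, 0.50, 0.75):
    observed = client_gradient(model, x, y, p)
    for lam_mask in (1e-4, 1e-3, 1e-2):
        x_rec = dia_attack(model, observed, y, p, steps=500, lam_mask=lam_mask)
        mse = ((x_rec - x) ** 2).mean().item()   # placeholder fidelity metric
        print(f"p={p:.2f}  lam_mask={lam_mask:g}  reconstruction MSE={mse:.4f}")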
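
All three datasets named in the Open Datasets row are public. MNIST and CIFAR-10 can be fetched directly through torchvision (ImageNet requires a manual download from its maintainers). The snippet below is a generic loading sketch, since the excerpt does not describe the authors' preprocessing; the 128-image victim subset mirrors the wording quoted in the Dataset Splits row, and its selection strategy here is an assumption.

from torch.utils.data import Subset
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()
mnist = datasets.MNIST("data", train=True, download=True, transform=to_tensor)
cifar = datasets.CIFAR10("data", train=True, download=True, transform=to_tensor)

# A 'victim client dataset of 128 images from the training data' (Dataset
# Splits row) modeled as a fixed subset; taking the first 128 images is an
# illustrative choice, not the authors' documented procedure.
victim_client_data = Subset(mnist, range(128))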