Privacy Attacks in Decentralized Learning

Authors: Abdellah El Mrini, Edwige Cyffers, Aurélien Bellet

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically evaluate the performance of our attacks on synthetic and real graphs and datasets. Our attacks prove to be effective in practice across various graph structures.
Researcher Affiliation | Academia | (1) School of Computer and Communication Sciences, EPFL, Switzerland; (2) Université de Lille, Inria, CNRS, Centrale Lille, UMR 9189 CRIStAL, F-59000 Lille, France; (3) Inria, Univ Montpellier, Montpellier, France.
Pseudocode | Yes | Algorithm 1: Knowledge matrix construction; Algorithm 2: Building the knowledge matrix for D-GD; Algorithm 3: Removing the attackers' contributions; Algorithm 4: Building the covariance matrix. (A hedged sketch of the knowledge-matrix idea appears below the table.)
Open Source Code | Yes | Our code is available at https://github.com/AbdellahElmrini/decAttack.
Open Datasets | Yes | We first focus on the CIFAR10 dataset (Krizhevsky, 2009); we use a small convolutional neural network (see details in Appendix G) on the MNIST dataset. (A dataset-loading sketch appears below the table.)
Dataset Splits | No | The paper mentions using CIFAR10 and MNIST datasets but does not provide specific details about the training, validation, or test splits (e.g., percentages, sample counts, or explicit references to standard splits).
Hardware Specification | No | Experiments presented in this paper were carried out using the Grid'5000 testbed, supported by a scientific interest group hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations (see https://www.grid5000.fr).
Software Dependencies | No | The paper mentions using a "fully connected layer with a softmax activation" and a "small convolutional neural network" but does not specify software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | We start running our attack when the model is close to convergence so that gradients are more stable. [...] The learning rate plays an important role: it should be small enough to ensure that gradients do not vary wildly across iterations. We illustrate this behavior in Appendix H. [...] logistic regression, learning rate 10^-4; [...] convnet, learning rate 10^-6. (A hyperparameter sketch appears below the table.)
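
The pseudocode listed in the table centers on a "knowledge matrix" that records which linear combinations of the honest nodes' private values the attackers observe during gossip-based training. The following is a minimal, hypothetical sketch for plain gossip averaging (x^{t+1} = W x^t); the function name, observation model, and reconstruction step are illustrative assumptions, not the authors' implementation (which also covers the D-GD case).

```python
import numpy as np

def knowledge_matrix(W, attackers, T):
    """Stack the linear forms observed by the attackers over T gossip rounds.

    W         : (n, n) gossip matrix; W[i, j] != 0 iff node j sends to node i
    attackers : indices of the colluding nodes
    T         : number of observed rounds
    Returns K such that the stacked vector of observed messages equals K @ x0,
    where x0 holds the nodes' initial private values.
    """
    n = W.shape[0]
    # Nodes whose messages the attackers can read (their in-neighbors, themselves included).
    observed = sorted({j for a in attackers for j in np.nonzero(W[a])[0]})
    rows = []
    Wt = np.eye(n)                     # W^0
    for _ in range(T + 1):
        for j in observed:
            rows.append(Wt[j].copy())  # message x_j^t = (W^t x0)_j = e_j^T W^t x0
        Wt = W @ Wt                    # advance to W^{t+1}
    return np.vstack(rows)

# Reconstruction attempt (least squares over the observed linear system):
# K = knowledge_matrix(W, attackers, T)
# x0_hat = np.linalg.lstsq(K, observations, rcond=None)[0]
```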
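Both datasets named in the "Open Datasets" row are standard public benchmarks. Since the paper does not state its framework or versions (see the "Software Dependencies" row), the torchvision calls below are an assumption used only to show how the data can be obtained.

```python
import torchvision
from torchvision import transforms

to_tensor = transforms.ToTensor()
# Downloads the public training splits to ./data (framework choice is assumed, not stated in the paper).
cifar10 = torchvision.datasets.CIFAR10(root="./data", train=True, download=True, transform=to_tensor)
mnist = torchvision.datasets.MNIST(root="./data", train=True, download=True, transform=to_tensor)
```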
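The "Experiment Setup" row quotes two learning rates. Pairing logistic regression with CIFAR10 and the small convnet with MNIST follows the dataset and architecture quotes above; the dictionary below is purely illustrative bookkeeping, not the authors' configuration format.

```python
# Only the learning rates and the "attack near convergence" rule come from the paper;
# the keys and structure here are assumed for illustration.
ATTACK_SETTINGS = {
    "cifar10_logistic_regression": {"learning_rate": 1e-4},
    "mnist_small_convnet": {"learning_rate": 1e-6},
}
# The attack is launched only once training is close to convergence, so that gradients
# vary little across iterations; the small learning rates serve the same purpose.
```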