Differentially Private Decentralized Learning with Random Walks

Authors: Edwige Cyffers, Aurélien Bellet, Jalaj Upadhyay

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We supplement our theoretical results with empirical evaluation of synthetic and real-world graphs and datasets." and "In this section, we illustrate our results numerically on synthetic and real graphs and datasets and show that our random walk approach achieves superior privacy-utility trade-offs compared to gossip as long as the mixing time of the graph is good enough."
Researcher Affiliation | Academia | (1) Université de Lille, Inria, CNRS, Centrale Lille, UMR 9189 CRIStAL, F-59000 Lille, France; (2) Inria, Univ Montpellier, Montpellier, France; (3) Rutgers University.
Pseudocode | Yes | Algorithm 1: Private Random Walk Gradient Descent (RW DP-SGD)
Open Source Code | Yes | The code is available at https://github.com/totilas/DPrandomwalk
Open Datasets | Yes | "We train a logistic regression model on a binarized version of the UCI Housing dataset." (footnote: https://www.openml.org/d/823/)
Dataset Splits | Yes | "We standardize the features, normalize each data point, and split the dataset uniformly at random into a training set (80%) and a test set (20%)."
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor speeds, or memory amounts) used to run its experiments.
Software Dependencies | No | The paper does not list ancillary software with version numbers (e.g., Python 3.8, CPLEX 12.4) needed to replicate the experiments.
Experiment Setup | Yes | "We set ε = 1 and δ = 10⁻⁶." and "Following Cyffers et al. (2022), we use the mean privacy loss over all pairs of nodes (computed by applying Theorem 3) to set the noise level needed for our random walk-based DP-SGD."
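To make the evaluated setup concrete, here is a minimal sketch of the random-walk DP-SGD idea named in Algorithm 1: a token carrying the model walks over the communication graph, and each visited node performs one clipped, Gaussian-noised gradient step on its local data before forwarding the token to a random neighbor. All function names, hyperparameters, and the logistic-regression loss below are illustrative assumptions, not the authors' actual implementation (which is available at the repository above).

```python
# Hypothetical sketch of random-walk DP-SGD; names and hyperparameters
# are illustrative, not taken from the authors' code.
import numpy as np

rng = np.random.default_rng(0)

def rw_dp_sgd(adjacency, local_data, steps=100, lr=0.1, clip=1.0, sigma=1.0):
    """A token carrying the model parameters walks on the graph; each
    visited node takes one noisy, clipped gradient step on its local data.

    adjacency  -- list of neighbor lists, one per node
    local_data -- list of (X, y) pairs, one per node
    """
    n = len(adjacency)
    dim = local_data[0][0].shape[1]
    theta = np.zeros(dim)
    node = int(rng.integers(n))  # start node chosen uniformly at random
    for _ in range(steps):
        X, y = local_data[node]
        # logistic-regression gradient on this node's local samples
        preds = 1.0 / (1.0 + np.exp(-X @ theta))
        grad = X.T @ (preds - y) / len(y)
        # clip the gradient, then add Gaussian noise (Gaussian mechanism)
        grad *= min(1.0, clip / (np.linalg.norm(grad) + 1e-12))
        grad += rng.normal(0.0, sigma * clip, size=dim)
        theta -= lr * grad
        # forward the token to a uniformly chosen neighbor
        node = int(rng.choice(adjacency[node]))
    return theta
```

In the paper, the noise scale sigma is calibrated from the target (ε, δ) = (1, 10⁻⁶) via the mean pairwise privacy loss of Theorem 3; here it is left as a free parameter.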
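The preprocessing quoted under "Dataset Splits" (standardize features, normalize each data point, 80/20 uniform random split) could be sketched as follows; the function name and epsilon guards are illustrative assumptions, not the authors' code.

```python
# Illustrative preprocessing: standardize features, give each data
# point unit norm, then split 80/20 uniformly at random.
import numpy as np

def preprocess_split(X, y, train_frac=0.8, seed=0):
    rng = np.random.default_rng(seed)
    # standardize each feature to zero mean, unit variance
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    # normalize each data point (row) to unit Euclidean norm
    X = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    # uniform random 80/20 train/test split
    perm = rng.permutation(len(X))
    cut = int(train_frac * len(X))
    train, test = perm[:cut], perm[cut:]
    return X[train], y[train], X[test], y[test]
```

Row normalization bounds each point's contribution, which pairs naturally with the gradient clipping used for differential privacy.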