Renyi Differential Privacy of the Subsampled Shuffle Model in Distributed Learning

Authors: Antonious M. Girgis, Deepesh Data, Suhas Diggavi

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We numerically demonstrate that, for important regimes, with composition our bound yields significant improvement in privacy guarantee over the state-of-the-art approximate Differential Privacy (DP) guarantee (with strong composition) for sub-sampled shuffled models. We also demonstrate numerically significant improvement in privacy-learning performance operating point using real data sets." (The standard RDP composition and conversion facts behind this comparison are recalled below the table.)
Researcher Affiliation | Academia | Antonious M. Girgis (UCLA, amgirgis@g.ucla.edu); Deepesh Data (UCLA, deepesh.data@gmail.com); Suhas Diggavi (UCLA, suhasdiggavi@ucla.edu)
Pseudocode | Yes | "Algorithm 1 (A_cldp): CLDP-SGD" (a hedged Python sketch of one round is given below the table)
Open Source Code | No | The paper does not provide a link to open-source code for the methodology, nor does it explicitly state that the code is released or available in supplementary materials.
Open Datasets | Yes | "We consider the standard MNIST handwritten digit dataset that has 60,000 training images and 10,000 test images."
Dataset Splits | No | The paper mentions "60,000 training images and 10,000 test images" for the MNIST dataset but does not specify a separate validation split (see the data-loading sketch below the table).
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., GPU/CPU models, memory).
Software Dependencies | No | The paper does not list specific software dependencies with version numbers.
Experiment Setup | Yes | "At each step of Algorithm 1, we choose uniformly at random 10,000 clients, where each client clips the ℓ₁-norm of the gradient with clipping parameter C = 1/100 and applies the R^{ℓ₁}_{ε₀}-LDP mechanism proposed in [27] with ε₀ = 1.5. We run Algorithm 1 with δ = 10⁻⁵ for 200 epochs, with learning rate η = 0.3 for the first 70 epochs, and then decrease it to 0.18 in the remaining epochs." (These hyperparameters are collected in a config sketch below the table.)
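
For context on the Research Type row: the comparison against strong composition of approximate DP rests on standard Rényi DP (RDP) facts from Mironov (2017), recalled below as a generic recap. The paper's specific RDP bound for the subsampled shuffle mechanism is not restated here.

```latex
% A mechanism M is (\alpha, \epsilon(\alpha))-RDP if, for all neighboring
% datasets D and D', the Renyi divergence of order \alpha is bounded:
\[
  D_\alpha\big(M(D) \,\|\, M(D')\big) \le \epsilon(\alpha).
\]
% Adaptive composition over T rounds is additive in the RDP parameter:
\[
  \epsilon_{\mathrm{total}}(\alpha) = \sum_{t=1}^{T} \epsilon_t(\alpha).
\]
% An (\alpha, \epsilon(\alpha))-RDP guarantee converts to approximate
% (\epsilon, \delta)-DP via
\[
  \epsilon = \min_{\alpha > 1} \left( \epsilon(\alpha)
             + \frac{\log(1/\delta)}{\alpha - 1} \right).
\]
```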
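
The Pseudocode row points to Algorithm 1 (A_cldp: CLDP-SGD), which the paper gives only as pseudocode. Below is a minimal Python sketch of one round under stated assumptions: the names clip_l1, laplace_ldp, and cldp_sgd_round are ours, and the Laplace randomizer (calibrated to the ℓ₁ sensitivity 2C of clipped gradients) is a stand-in for the R^{ℓ₁}_{ε₀} mechanism of [27], not the paper's exact mechanism.

```python
import numpy as np

def clip_l1(grad, C):
    """Clip the l1-norm of a gradient vector to at most C."""
    norm = np.sum(np.abs(grad))
    return grad if norm <= C else grad * (C / norm)

def laplace_ldp(grad, C, eps0, rng):
    """Stand-in eps0-LDP randomizer: Laplace noise with scale 2C/eps0,
    matching the l1 sensitivity (2C) of l1-clipped gradients. The paper
    uses the R^{l1}_{eps0} mechanism of [27] instead."""
    return grad + rng.laplace(scale=2 * C / eps0, size=grad.shape)

def cldp_sgd_round(model, client_grads, C, eps0, lr, rng):
    """One round in the style of CLDP-SGD: clip and locally randomize each
    sampled client's gradient, shuffle the reports, average, and step."""
    reports = [laplace_ldp(clip_l1(g, C), C, eps0, rng) for g in client_grads]
    # Shuffling hides which report came from which client; it matters for
    # the privacy analysis but does not change the average below.
    order = rng.permutation(len(reports))
    avg = np.mean([reports[i] for i in order], axis=0)
    return model - lr * avg

# Toy usage with the paper's reported hyperparameters; the model and
# gradients are stand-ins, not the paper's MNIST setup.
rng = np.random.default_rng(0)
model = np.zeros(10)
grads = [rng.normal(size=10) for _ in range(100)]
model = cldp_sgd_round(model, grads, C=1 / 100, eps0=1.5, lr=0.3, rng=rng)
```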
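
On the Open Datasets and Dataset Splits rows, a short torchvision sketch reproduces the quoted 60,000/10,000 MNIST split; the validation carve-out at the end is purely illustrative, since the paper specifies no validation set.

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

# The standard MNIST splits quoted in the paper: 60,000 train / 10,000 test.
tfm = transforms.ToTensor()
train_set = datasets.MNIST(root="./data", train=True, download=True, transform=tfm)
test_set = datasets.MNIST(root="./data", train=False, download=True, transform=tfm)
assert len(train_set) == 60_000 and len(test_set) == 10_000

# The paper reports no validation split; if one were wanted, it could be
# carved from the training set like this (the 90/10 split is our choice):
train_subset, val_subset = random_split(
    train_set, [54_000, 6_000], generator=torch.Generator().manual_seed(0)
)
```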
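
Finally, the Experiment Setup row's reported values, gathered into a config sketch; the dictionary keys and the helper learning_rate are our own labels, not the paper's.

```python
# Hyperparameters as quoted in the Experiment Setup row.
config = {
    "clients_per_round": 10_000,  # clients sampled uniformly at random per step
    "clip_norm_C": 1 / 100,       # l1 clipping parameter
    "eps0": 1.5,                  # local randomizer's LDP parameter
    "delta": 1e-5,
    "epochs": 200,
}

def learning_rate(epoch: int) -> float:
    """Step schedule from the paper: 0.3 for the first 70 epochs, then 0.18."""
    return 0.3 if epoch < 70 else 0.18
```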