Differential Privacy Dynamics of Langevin Diffusion and Noisy Gradient Descent

Authors: Rishav Chourasia, Jiayuan Ye, Reza Shokri

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | Our analysis traces a provably tight bound on the Rényi divergence between the pair of probability distributions over the parameters of models trained on neighboring datasets. We prove that the privacy loss converges exponentially fast for smooth and strongly convex loss functions, which is a significant improvement over composition theorems (which overestimate the privacy loss by upper-bounding its total value over all intermediate gradient computations). We present a novel analysis of the privacy dynamics of noisy gradient descent with smooth and strongly convex loss functions, constructing a pair of continuous-time Langevin diffusion [20] processes that trace the probability distributions over the model parameters of noisy GD. A schematic form of the convergence bound is sketched after the table.
Researcher Affiliation | Academia | Rishav Chourasia*, Jiayuan Ye*, Reza Shokri; Department of Computer Science, National University of Singapore; {rishav1, jiayuan, reza}@comp.nus.edu.sg
Pseudocode | Yes | Algorithm 1 (A^Noisy-GD): Noisy Gradient Descent. A runnable sketch of this update is given after the table.
Open Source Code | No | The paper does not provide any statement or link regarding the release of source code for the described methodology.
Open Datasets | No | The paper is theoretical and focuses on modeling the dynamics of Rényi differential privacy. It refers to "neighboring datasets D and D'" in a theoretical context but does not use specific, publicly available datasets for experimental training.
Dataset Splits | No | The paper is theoretical and does not conduct empirical experiments; therefore it does not specify training, validation, or test dataset splits.
Hardware Specification | No | The paper is theoretical and does not describe any experimental setup requiring hardware specifications.
Software Dependencies | No | The paper is theoretical and does not mention any specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not describe a practical experimental setup with specific hyperparameters or training settings.
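
The exponential-convergence claim in the Research Type row can be written schematically. The display below is a hedged sketch rather than the paper's exact theorem: constants and the dependence on the dataset size n are omitted, and the symbols are our labels for the quantities in the paper's bound (α the Rényi order, S the per-step gradient sensitivity between neighboring datasets, λ the strong-convexity constant, η the step size, σ² the noise variance, K the number of steps).

```latex
% Schematic sketch only (requires amsmath): constants and the dataset-size
% dependence are omitted; see the paper for the exact theorem statement.
\[
  \varepsilon_\alpha(K) \;\lesssim\;
  \frac{\alpha S^2}{\lambda \sigma^2}\left(1 - e^{-\lambda \eta K}\right)
  \;\xrightarrow{\;K \to \infty\;}\; \frac{\alpha S^2}{\lambda \sigma^2}.
\]
% The bound saturates at a finite value as K grows, whereas a
% composition-based analysis would give a bound growing linearly in K.
```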
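
As a companion to the Pseudocode row, here is a minimal runnable sketch of the full-batch noisy gradient descent update that Algorithm 1 describes. Everything beyond the generic update rule is an assumption for illustration: the function name noisy_gd, the toy quadratic loss, and the noise scale sqrt(2*eta*sigma2) (the Euler-Maruyama discretization of a Langevin diffusion with noise variance σ²) are ours; the paper's exact parameterization may differ.

```python
import numpy as np

def noisy_gd(grad_fn, theta0, eta, sigma2, num_steps, rng=None):
    """Minimal sketch of full-batch noisy gradient descent (cf. Algorithm 1).

    One step: theta <- theta - eta * grad_fn(theta) + N(0, 2*eta*sigma2*I),
    i.e. the Euler-Maruyama discretization of the Langevin diffusion
    d theta_t = -grad L(theta_t) dt + sqrt(2 * sigma2) dW_t.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    theta = np.asarray(theta0, dtype=float).copy()
    for _ in range(num_steps):
        noise = rng.normal(scale=np.sqrt(2.0 * eta * sigma2), size=theta.shape)
        theta = theta - eta * grad_fn(theta) + noise
    return theta

if __name__ == "__main__":
    # Toy loss L(theta) = 0.5 * ||theta||^2 is 1-strongly convex and 1-smooth,
    # matching the smoothness/strong-convexity setting of the analysis.
    grad = lambda theta: theta
    print(noisy_gd(grad, theta0=np.ones(5), eta=0.1, sigma2=0.25, num_steps=200))
```

Because the loss is strongly convex, the iterates mix toward a stationary distribution, which is the intuition behind the privacy loss saturating rather than accumulating across steps.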