Privacy Amplification via Random Check-Ins
Authors: Borja Balle, Peter Kairouz, Brendan McMahan, Om Thakkar, Abhradeep Guha Thakurta
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | Our analytical arguments (Theorems 3.5 and 4.2) formally demonstrate order-optimal utility/privacy trade-offs for convex models. While we agree that this line of research should eventually demonstrate empirical evidence for efficacy, theoretical conclusions do act as guiding principles, and the novel privacy amplification results are interesting and relevant on their own. |
| Researcher Affiliation | Industry | DeepMind: borja.balle@gmail.com; Google: {kairouz, mcmahan, omthkkr, athakurta}@google.com |
| Pseudocode | Yes | Algorithm 1 (A_fix): Distributed DP-SGD with random check-ins (fixed window). Algorithm 2 (A_avg): Distributed DP-SGD with random check-ins (averaged updates). An illustrative code sketch of the fixed-window variant follows the table. |
| Open Source Code | No | The paper does not include an unambiguous statement about releasing code or a link to a source code repository for the described methodology. |
| Open Datasets | No | The paper defines a theoretical data setup with 'n clients' and 'data record d_j ∈ D', but does not mention the use of any specific publicly available or open datasets by name, link, or formal citation. |
| Dataset Splits | No | The paper is theoretical and does not describe experimental data splits for training, validation, or testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate an experiment. |
| Experiment Setup | No | The paper does not contain specific experimental setup details such as concrete hyperparameter values or training configurations. |
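
For readers who want a concrete picture of what Algorithm 1 (A_fix) describes, the following is a minimal, hypothetical sketch of distributed DP-SGD with random check-ins over a fixed window. The function name, the toy squared-loss model, and all hyperparameters (`checkin_prob`, `lr`, `clip_norm`, `noise_mult`) are our own illustrative assumptions; this is not the paper's exact algorithm or its privacy accounting.

```python
import numpy as np

def random_checkins_dp_sgd(client_data, num_steps, checkin_prob,
                           lr=0.1, clip_norm=1.0, noise_mult=1.0, seed=0):
    """Server-side DP-SGD where each client independently checks in to one
    uniformly random step of the fixed window [0, num_steps)."""
    rng = np.random.default_rng(seed)
    dim = client_data[0][0].shape[0]
    theta = np.zeros(dim)

    # Each client decides independently whether to participate and, if so,
    # picks a uniformly random step of the window as its check-in time.
    checkins = {t: [] for t in range(num_steps)}
    for cid in range(len(client_data)):
        if rng.random() < checkin_prob:
            checkins[int(rng.integers(num_steps))].append(cid)

    for t in range(num_steps):
        if checkins[t]:
            # If several clients checked in to this step, the server uses one.
            cid = int(rng.choice(checkins[t]))
            x, y = client_data[cid]
            grad = (theta @ x - y) * x  # toy squared-loss gradient
            grad *= min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))
        else:
            grad = np.zeros(dim)  # empty slot: the server takes a noise-only step
        noise = rng.normal(0.0, noise_mult * clip_norm, size=dim)
        theta -= lr * (grad + noise)
    return theta

# Toy usage: 100 clients, each holding a single (feature, label) record.
data_rng = np.random.default_rng(1)
true_theta = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(100):
    x = data_rng.normal(size=3)
    clients.append((x, float(x @ true_theta)))
print(random_checkins_dp_sgd(clients, num_steps=50, checkin_prob=0.8))
```

The noise-only update on empty slots mirrors the intuition behind the amplification argument: the server's view at each step does not reveal whether any particular client chose that slot, since the anonymized, randomized check-in times hide individual participation.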