Differentially Private Learning with Adaptive Clipping
Authors: Galen Andrew, Om Thakkar, Brendan McMahan, Swaroop Ramaswamy
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate that adaptive clipping to the median update norm works well across a range of realistic federated learning tasks, sometimes outperforming even the best fixed clip chosen in hindsight, and without the need to tune any clipping hyperparameter. To empirically validate the approach, we examine the behavior of our algorithm on six of the public benchmark federated learning tasks defined by Reddi et al. [24]. |
| Researcher Affiliation | Industry | Galen Andrew (galenandrew@google.com), Om Thakkar (omthkkr@google.com), H. Brendan McMahan (mcmahan@google.com), Swaroop Ramaswamy (swaroopram@google.com) |
| Pseudocode | Yes | Algorithm 1 DP-FedAvg-M with adaptive clipping (see the sketch after this table) |
| Open Source Code | Yes | The code used for all of our experiments is publicly available at https://github.com/google-research/federated/blob/master/differential_privacy/run_federated.py. |
| Open Datasets | Yes | on six of the public benchmark federated learning tasks defined by Reddi et al. [24] |
| Dataset Splits | Yes | For all configurations, we report the best performing model whose server learning rate was chosen from this small grid on the validation set. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models used for the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | Hyperparameters are as discussed in the text and used in the experiments of Section 3: η_C = 0.2, C0 = 0.1, m = 100, σ_b = m/20 (see the illustrative driver after this table). The optimal baseline client and server learning rates for our experimental setup are shown in Table 1 (right). |
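
As context for the pseudocode row above, here is a minimal Python sketch of the per-round clip update that Algorithm 1 (DP-FedAvg-M with adaptive clipping) performs: each client reports a bit indicating whether its update fit under the current clip, and the server updates the clip norm geometrically toward the target quantile of update norms. Function and variable names here are illustrative, not taken from the paper's released code:

```python
import numpy as np

def adaptive_clip_update(clip_norm, update_norms, gamma, eta_c, sigma_b, rng):
    """One round of quantile-based clip adaptation (sketch of Algorithm 1).

    clip_norm:    current clipping threshold C
    update_norms: L2 norms of the m client updates this round
    gamma:        target quantile (0.5 targets the median norm)
    eta_c:        learning rate for the clip norm
    sigma_b:      stddev of the noise added to the sum of indicator bits
    """
    m = len(update_norms)
    # Each client reports a bit: 1 if its update was NOT clipped (norm <= C).
    bits = np.array([1.0 if n <= clip_norm else 0.0 for n in update_norms])
    # The server only sees a noised average of the bits, for privacy.
    b_noised = (bits.sum() + rng.normal(0.0, sigma_b)) / m
    # Geometric update: if more than a gamma-fraction of updates fit under C,
    # shrink C; otherwise grow it, so C tracks the gamma-quantile of norms.
    return clip_norm * np.exp(-eta_c * (b_noised - gamma))
```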
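
And a hypothetical driver loop wiring in the hyperparameters quoted in the experiment-setup row (η_C = 0.2, C0 = 0.1, m = 100, σ_b = m/20, with γ = 0.5 targeting the median). It reuses the function sketched above; the simulated update-norm distribution is invented purely for illustration and is not one of the paper's benchmark tasks:

```python
rng = np.random.default_rng(0)
m, eta_c, gamma = 100, 0.2, 0.5      # clients/round, clip learning rate, target quantile
clip_norm, sigma_b = 0.1, m / 20     # initial clip C0 = 0.1, bit-sum noise stddev

for round_num in range(200):
    # Stand-in for the real client update norms (median of this distribution is 1).
    update_norms = rng.lognormal(mean=0.0, sigma=0.5, size=m)
    clip_norm = adaptive_clip_update(clip_norm, update_norms, gamma, eta_c, sigma_b, rng)

print(f"final clip norm ~ {clip_norm:.3f}")  # converges near the median update norm
```

Because the exponent is negative whenever the noised unclipped fraction exceeds γ, the clip norm shrinks when it is too generous and grows when too many updates are clipped, which is why no clipping hyperparameter needs to be tuned per task.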