Personalization Improves Privacy-Accuracy Tradeoffs in Federated Learning
Authors: Alberto Bietti, Chen-Yu Wei, Miroslav Dudik, John Langford, Steven Wu
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We illustrate our theoretical results with experiments on synthetic and real-world datasets. In this section, we present numerical experiments that illustrate our theoretical guarantees on both synthetic and real-world federated learning datasets. |
| Researcher Affiliation | Collaboration | 1Center for Data Science, New York University; 2University of Southern California; 3Microsoft Research, New York; 4Carnegie Mellon University. |
| Pseudocode | Yes | Algorithm 1: Personalized-Private-SGD (PPSGD); Algorithm 2: PPSGD with client sampling; Algorithm 3: PPSGD with client sampling, average user performance. A hedged sketch of a PPSGD-style update appears after this table. |
| Open Source Code | Yes | Our code is available at https://github.com/albietz/ppsgd. |
| Open Datasets | Yes | Stackoverflow dataset (https://www.tensorflow.org/federated/api_docs/python/tff/simulation/datasets/stackoverflow/load_data) and federated EMNIST digit classification dataset (https://www.tensorflow.org/federated/api_docs/python/tff/simulation/datasets/emnist/load_data); a loading sketch via the TFF API appears after this table. |
| Dataset Splits | No | The paper mentions "training and test documents" or "training and test samples" for the datasets used, but does not explicitly describe a separate validation split or cross-validation setup. |
| Hardware Specification | No | The paper mentions running experiments but does not specify any hardware details like GPU/CPU models, memory, or specific computing platforms. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers. |
| Experiment Setup | Yes | Experiment setup. For each run, we consider a fixed step-size η, personalization parameter α, clipping parameter C, and noise multiplier σ, chosen from a grid (see Appendix A). In order to assess the best achievable performance, we optimize the learning rate separately at each number of iterations reported in our figures. Appendix A. Experiment details: We provide the hyperparameter grids for each dataset below. Our experiments always optimize the step-size at any fixed iteration. |
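
As a companion to the pseudocode row above, here is a minimal, hedged sketch of a personalized private-SGD-style client update. The additive split between a shared model and a per-client local model, the personalization weight α applied to the local part, and the choice to clip and noise only the shared update are illustrative assumptions guided by the hyperparameters η, α, C, σ named in the experiment setup row, not a verbatim transcription of Algorithm 1 in the paper.

```python
import numpy as np

def ppsgd_step(w_shared, w_local, grad_fn, eta, alpha, C, sigma, rng):
    """One illustrative personalized-private update (hypothetical sketch).

    Assumptions for illustration only: the client predicts with an additive
    model w_shared + alpha * w_local; only the shared update is clipped to
    norm C and perturbed with Gaussian noise of std sigma * C (the clipping
    parameter and noise multiplier from the experiment setup); the local
    part stays on-device and is updated without noise.
    """
    g = grad_fn(w_shared + alpha * w_local)

    # Clip to bound the sensitivity of the shared (privatized) update.
    g_clipped = g * min(1.0, C / (np.linalg.norm(g) + 1e-12))
    noise = rng.normal(0.0, sigma * C, size=g.shape)
    w_shared = w_shared - eta * (g_clipped + noise)

    # Personalized part: plain SGD on the local component, no noise added.
    w_local = w_local - eta * alpha * g
    return w_shared, w_local

# Toy usage on a quadratic loss 0.5 * ||w - target||^2.
rng = np.random.default_rng(0)
target = np.array([1.0, -2.0])
w_shared, w_local = np.zeros(2), np.zeros(2)
for _ in range(200):
    w_shared, w_local = ppsgd_step(
        w_shared, w_local,
        grad_fn=lambda w: w - target,
        eta=0.1, alpha=0.5, C=1.0, sigma=0.1, rng=rng,
    )
```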
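The two federated datasets in the open-datasets row are distributed through TensorFlow Federated's simulation API (the linked load_data pages). A minimal loading sketch, assuming a standard TFF installation:

```python
# Requires `pip install tensorflow-federated`; splits follow the linked docs.
import tensorflow_federated as tff

# Federated EMNIST digit classification: (train, test) ClientData objects.
emnist_train, emnist_test = tff.simulation.datasets.emnist.load_data(only_digits=True)

# StackOverflow: (train, held-out, test) ClientData objects.
so_train, so_heldout, so_test = tff.simulation.datasets.stackoverflow.load_data()

# Each client's local examples are exposed as a tf.data.Dataset.
first_client = emnist_train.client_ids[0]
client_dataset = emnist_train.create_tf_dataset_for_client(first_client)
```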
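Finally, the experiment-setup row describes choosing (η, α, C, σ) from a grid and optimizing the step-size separately at each reported iteration count. A hypothetical sketch of that selection loop follows; the grid values and the run_ppsgd stub are placeholders, not the actual grids from Appendix A.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Placeholder grids; the paper's actual values are listed in its Appendix A.
etas = [0.01, 0.1, 1.0]            # step-size eta
alphas = [0.0, 0.5, 1.0]           # personalization parameter alpha
Cs = [0.1, 1.0]                    # clipping parameter C
sigmas = [0.5, 1.0]                # noise multiplier sigma
report_iters = [100, 1000, 10000]  # iteration counts reported in the figures

def run_ppsgd(eta, alpha, C, sigma, report_iters):
    """Placeholder standing in for a full training run; returns one loss per
    reported iteration count. Replace with an actual PPSGD run."""
    return [rng.uniform(0.5, 1.0) / (1 + np.log10(t)) for t in report_iters]

# "Optimize the learning rate separately at each number of iterations":
# for every (alpha, C, sigma) and every reported t, keep the best loss over eta.
best = {}
for alpha, C, sigma in itertools.product(alphas, Cs, sigmas):
    losses = {eta: run_ppsgd(eta, alpha, C, sigma, report_iters) for eta in etas}
    for i, t in enumerate(report_iters):
        best[(alpha, C, sigma, t)] = min(losses[eta][i] for eta in etas)
```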