On the Privacy-Robustness-Utility Trilemma in Distributed Learning
Authors: Youssef Allouah, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In Section E.1, we present our experimental setup. In Section E.2, we report our empirical results. |
| Researcher Affiliation | Academia | École Polytechnique Fédérale de Lausanne (EPFL), Switzerland. |
| Pseudocode | Yes | Algorithm 1 SAFE-DSHB |
| Open Source Code | No | The code we use to launch the different experiments will be made available. |
| Open Datasets | Yes | We train a logistic regression model of d = 69 parameters on the academic Phishing dataset. (footnote 5: https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/) |
| Dataset Splits | No | The paper mentions using the Phishing dataset and shows 'Test accuracy' and 'Training Loss' but does not specify the dataset splits (e.g., train/validation/test percentages or counts) or the methodology used for splitting. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running its experiments. |
| Software Dependencies | No | We use Opacus (Yousefpour et al., 2021), a DP library for deep learning in PyTorch (Paszke et al., 2019). |
| Experiment Setup | Yes | We train the model using a fixed learning rate γ = 1 over a total of T = 400 learning steps. We set the clipping threshold C = 1 and the batch size b = 25. We run all algorithms, except DSGD, with momentum β = 0.99. |
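To make the reported recipe concrete, below is a minimal pure-Python sketch of that training loop: per-sample gradient clipping at threshold C followed by a heavy-ball momentum update, using the paper's stated hyperparameters (γ = 1, C = 1, b = 25, β = 0.99). The synthetic data, model dimension, and shortened step count are illustrative stand-ins, not the authors' Phishing setup or their SAFE-DSHB code.

```python
import math
import random

random.seed(0)
d = 5                                   # illustrative dimension (the paper uses d = 69)
C, gamma, beta, b, T = 1.0, 1.0, 0.99, 25, 50   # T shortened from 400 for the demo

# Toy linearly separable data: label is the sign of the first coordinate.
xs = [[random.gauss(0, 1) for _ in range(d)] for _ in range(200)]
data = [(x, 1.0 if x[0] > 0 else 0.0) for x in xs]

def clip(g, threshold):
    """Rescale gradient g so its L2 norm is at most `threshold`."""
    norm = math.sqrt(sum(v * v for v in g))
    scale = min(1.0, threshold / norm) if norm > 0 else 1.0
    return [v * scale for v in g]

w = [0.0] * d                           # model parameters
m = [0.0] * d                           # momentum buffer

for _ in range(T):
    batch = random.sample(data, b)
    avg = [0.0] * d
    for x, y in batch:
        z = sum(wi * xi for wi, xi in zip(w, x))
        p = 1.0 / (1.0 + math.exp(-z))  # logistic prediction
        g = clip([(p - y) * xi for xi in x], C)   # per-sample clipping
        avg = [a + gi / b for a, gi in zip(avg, g)]
    # Heavy-ball momentum update with beta = 0.99, step size gamma = 1.
    m = [beta * mi + (1 - beta) * gi for mi, gi in zip(m, avg)]
    w = [wi - gamma * mi for wi, mi in zip(w, m)]

# Training accuracy on the toy data (predict the sign of w . x).
acc = sum(
    (sum(wi * xi for wi, xi in zip(w, x)) > 0) == (y > 0.5) for x, y in data
) / len(data)
```

The clipping step bounds each sample's influence on the update, which is the mechanism that DP libraries such as Opacus automate for PyTorch models.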