Distributed Learning without Distress: Privacy-Preserving Empirical Risk Minimization
Authors: Bargav Jayaraman, Lingxiao Wang, David Evans, Quanquan Gu
NeurIPS 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on real world data sets demonstrate that our methods provide substantial utility gains for typical privacy requirements. |
| Researcher Affiliation | Academia | Bargav Jayaraman, Department of Computer Science, University of Virginia, Charlottesville, VA 22903, bj4nq@virginia.edu; Lingxiao Wang, Department of Computer Science, University of California, Los Angeles, Los Angeles, CA 90095, lingxw@cs.ucla.edu; David Evans, Department of Computer Science, University of Virginia, Charlottesville, VA 22903, evans@virginia.edu; Quanquan Gu, Department of Computer Science, University of California, Los Angeles, Los Angeles, CA 90095, qgu@cs.ucla.edu |
| Pseudocode | No | The paper describes methods textually but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code: https://github.com/bargavj/distributedMachineLearning.git |
| Open Datasets | Yes | For classification, we use a regularized logistic regression model over the KDDCup99 [25] data set (additional experiments on the Adult [2] data set yield similar results, described in Appendix B.3). ... For regression, we train a ridge regression model over the KDDCup98 [40] data set... |
| Dataset Splits | No | We randomly sample 70,000 records and divide it into training set of 50,000 records and test set of 20,000 records. (Only training and test sets are explicitly mentioned; no separate validation split is described. A data-loading and split sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., 'Python 3.8', 'PyTorch 1.9'). |
| Experiment Setup | Yes | For all the experiments, we set Lipschitz constant G = 1, learning rate η = 1, regularization coefficient λ = 0.001, privacy budget ϵ = 0.5, failure probability δ = 0.001 and total number of iterations T = 1,500 for gradient descent. (These settings are plugged into the second sketch below the table.) |
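
To make the dataset and split rows concrete, the following is a minimal sketch of the quoted 70,000-record sampling and 50,000/20,000 train/test split. The use of scikit-learn's KDDCup99 loader, the handling of the symbolic columns, and the random seed are assumptions for illustration, not the authors' released preprocessing (see the repository linked above for that).

```python
# Sketch of the record sampling and train/test split quoted above
# (70,000 records -> 50,000 train / 20,000 test). Uses scikit-learn's
# KDDCup99 loader; the paper's own preprocessing may differ.
import numpy as np
from sklearn.datasets import fetch_kddcup99
from sklearn.model_selection import train_test_split

# Load the KDDCup99 data (10% subset is the scikit-learn default).
X, y = fetch_kddcup99(return_X_y=True, percent10=True)

# Drop the three symbolic columns (protocol, service, flag) to keep the
# example purely numeric; binarize labels as normal vs. attack.
X_num = np.delete(X, [1, 2, 3], axis=1).astype(np.float64)
y_bin = (y != b'normal.').astype(np.int64)

# Randomly sample 70,000 records, then split into 50,000 / 20,000.
rng = np.random.default_rng(0)
idx = rng.choice(len(X_num), size=70_000, replace=False)
X_train, X_test, y_train, y_test = train_test_split(
    X_num[idx], y_bin[idx], train_size=50_000, test_size=20_000, random_state=0)
```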
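The experiment-setup row can likewise be illustrated with a small centralized baseline. The sketch below only plugs the quoted hyperparameters (G = 1, η = 1, λ = 0.001, ε = 0.5, δ = 0.001, T = 1,500) into plain gradient descent on ℓ2-regularized logistic loss followed by Gaussian output perturbation; the sensitivity bound 2G/(nλ) is the standard regularized-ERM bound, and none of this is the paper's distributed MPC-based protocol.

```python
# Hedged sketch: centralized ERM with Gaussian output perturbation using the
# hyperparameters listed in the table. The paper's actual method is a
# distributed, MPC-based protocol; this only illustrates the stated settings.
import numpy as np

G, eta, lam = 1.0, 1.0, 0.001      # Lipschitz constant, learning rate, regularization
eps, delta, T = 0.5, 0.001, 1500   # privacy budget, failure probability, iterations

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def private_logreg(X, y, rng):
    """Gradient descent on l2-regularized logistic loss, then add output noise."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(T):
        grad = X.T @ (sigmoid(X @ w) - y) / n + lam * w
        w -= eta * grad
    # Gaussian mechanism on the final model: the l2-sensitivity of the
    # regularized-ERM minimizer is at most 2*G / (n * lam) for G-Lipschitz losses.
    sensitivity = 2.0 * G / (n * lam)
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return w + rng.normal(scale=sigma, size=d)

# Example usage with the split from the previous sketch:
# w_priv = private_logreg(X_train, y_train, np.random.default_rng(0))
```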