From Robustness to Privacy and Back

Authors: Hilal Asi, Jonathan Ullman, Lydia Zakynthinou

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | Our work gives the first black-box transformation that converts any adversarially robust algorithm into one that satisfies pure differential privacy. Moreover, we show that for any low-dimensional estimation task, applying our transformation to an optimal robust estimator results in an optimal private estimator. Thus, we conclude that for any low-dimensional task, the optimal error rate for ε-differentially private estimators is essentially the same as the optimal error rate for estimators that are robust to adversarially corrupting 1/ε training samples. We apply our transformation to obtain new optimal private estimators for several high-dimensional tasks, including Gaussian (sparse) linear regression and PCA. (A worked one-dimensional illustration of this robustness-to-privacy correspondence follows the table.)
Researcher Affiliation | Collaboration | Hilal Asi (Apple); Jonathan Ullman and Lydia Zakynthinou (Khoury College of Computer Sciences, Northeastern University, Boston, Massachusetts, USA). Authors are ordered alphabetically.
Pseudocode | Yes | Algorithm 1, Robust-to-Private ((ε, δ)-DP), page 5; Algorithm 2, Randomized-to-Deterministic Robust, Appendix A.1, page 9; and Algorithm 3, Robust-to-Private ((ε, δ)-DP), a restatement of Algorithm 1, Appendix B.2, page 12. (An illustrative code sketch of the general robust-to-private recipe follows the table.)
Open Source Code | No | The paper does not provide an unambiguous statement about, or a direct link to, open-source code for the methodology it describes.
Open Datasets | Yes | Let S = (S1, . . . , Sn) where Si ∼ N(0, Σ) i.i.d. (Corollary 6.1, page 5); let S = (S1, . . . , Sn) where Si ∼ N(µ, I) i.i.d. (Corollary C.1, page 13); let S = (S1, . . . , Sn) where Si ∼ N(µ, Σ) i.i.d. (Corollary C.3, page 14). These refer to standard statistical distributions, which are implicitly available for theoretical analysis. (A sketch for generating such synthetic Gaussian data follows the table.)
Dataset Splits | No | The paper is theoretical and focuses on deriving error bounds and optimality. It does not describe empirical experiments or provide specific dataset split information (such as train/validation/test percentages or counts) needed to reproduce data partitioning.
Hardware Specification | No | The paper is theoretical and does not describe any empirical experiments that would require specific hardware; accordingly, it provides no hardware details (such as GPU/CPU models, processor types, or memory amounts).
Software Dependencies | No | The paper is theoretical and does not involve implementing algorithms or conducting experiments; accordingly, it provides no ancillary software details with version numbers.
Experiment Setup | No | The paper is theoretical and focuses on deriving error bounds. It does not contain experimental setup details, hyperparameter values, or training configurations.
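
For the Research Type entry above, a concrete low-dimensional instance may help. The following worked example, one-dimensional Gaussian mean estimation, is our own illustration rather than one quoted from the paper, and the rates are the standard minimax rates up to constants and logarithmic factors. Corrupting 1/ε of the n training samples corresponds to a corruption fraction α = 1/(εn), and substituting this into the robust rate recovers the known optimal ε-DP rate:

```latex
% Standard robust rate for estimating the mean of a 1-d Gaussian
% when an \alpha-fraction of the n samples is adversarially corrupted:
%   \mathrm{err}_{\mathrm{rob}}(n, \alpha) \asymp \frac{1}{\sqrt{n}} + \alpha.
% Corrupting 1/\varepsilon samples means \alpha = 1/(\varepsilon n), and
\[
  \mathrm{err}_{\mathrm{rob}}\!\left(n, \tfrac{1}{\varepsilon n}\right)
  \;\asymp\; \frac{1}{\sqrt{n}} + \frac{1}{\varepsilon n},
\]
% which matches the known optimal error rate for \varepsilon-DP mean
% estimation of a 1-d Gaussian, up to constants and logarithmic factors.
```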
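For the Pseudocode entry, the paper's algorithms are not reproduced here, but the sketch below illustrates the general robust-to-private recipe that this line of work builds on: score each candidate output by how many samples an adversary would need to change before a robust estimator (here, the one-dimensional median) lands near that candidate, then select a candidate with the exponential mechanism. Everything in the sketch, including the function names `changes_needed` and `robust_to_private`, the median instantiation, the candidate grid, and the tolerance `tol`, is an illustrative assumption and not the paper's Algorithm 1.

```python
import numpy as np

def changes_needed(S, y, tol):
    """Illustrative inverse-sensitivity-style score: the fewest sample
    replacements needed before the median of S lies in [y - tol, y + tol]."""
    S = np.sort(np.asarray(S))
    n = len(S)
    below = np.searchsorted(S, y - tol, side="left")       # points < y - tol
    above = n - np.searchsorted(S, y + tol, side="right")  # points > y + tol
    k = (n - 1) // 2  # index of the (lower) median in sorted order
    return max(0, below - k) + max(0, above - (n - 1 - k))

def robust_to_private(S, candidates, eps, tol=0.05, rng=None):
    """Exponential mechanism over a candidate grid: sampling with weights
    exp(-eps * score / 2) for a sensitivity-1 score is pure eps-DP."""
    if rng is None:
        rng = np.random.default_rng()
    scores = np.array([changes_needed(S, y, tol) for y in candidates])
    logw = -0.5 * eps * scores
    logw -= logw.max()          # stabilize before exponentiating
    p = np.exp(logw)
    p /= p.sum()
    return rng.choice(candidates, p=p)

# Usage: a private mean estimate for S_i ~ N(1, 1) via the robust median.
rng = np.random.default_rng(0)
S = rng.normal(loc=1.0, scale=1.0, size=500)
print(robust_to_private(S, np.linspace(-5.0, 5.0, 201), eps=1.0, rng=rng))
```

Because the score is a replacement distance between datasets, it changes by at most one when a single sample changes, which is exactly the sensitivity-1 condition the exponential mechanism needs for pure ε-DP.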
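Finally, for the Open Datasets entry: since the "datasets" are distributional assumptions rather than downloadable files, reproducing the settings of the cited corollaries amounts to sampling synthetic Gaussian data. A minimal sketch, with illustrative choices of n, d, µ, and Σ:

```python
import numpy as np

# Minimal sketch of the synthetic data the corollaries assume; the values
# of n, d, mu, and Sigma here are illustrative choices, not the paper's.
rng = np.random.default_rng(0)
n, d = 1000, 5
mu = np.zeros(d)     # Corollaries C.1 and C.3 posit a mean vector mu
Sigma = np.eye(d)    # any positive-definite covariance; Corollary 6.1 uses mean 0

# S = (S_1, ..., S_n) with S_i drawn i.i.d. from N(mu, Sigma)
S = rng.multivariate_normal(mean=mu, cov=Sigma, size=n)
assert S.shape == (n, d)
```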