User-level Private Stochastic Convex Optimization with Optimal Rates

Authors: Raef Bassily, Ziteng Sun

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We study the problem of differentially private (DP) stochastic convex optimization (SCO) under the notion of user-level differential privacy... Under smoothness conditions of the loss, we establish the optimal rates for user-level DP-SCO in both the central and local models of DP... Our algorithms combine new user-level DP mean estimation techniques with carefully designed first-order stochastic optimization methods. The paper consists of theorems, lemmas, and proofs, presents its algorithms as pseudocode (Algorithms 1-5), and reports no empirical performance on real datasets.
Researcher Affiliation | Collaboration | (1) Department of Computer Science and Engineering and the Translational Data Analytics Institute (TDAI), The Ohio State University; (2) Google Research, New York.
Pseudocode | Yes | Algorithm 1 (Truncated mean estimation), Algorithm 2 (LDP Range, scalar), Algorithm 3 (LDP Range, high-dimensional), Algorithm 4 (User-level private noisy SGD), Algorithm 5 (User-level LDP SCO). Generic, illustrative sketches of two of these building blocks appear after this table.
Open Source Code | No | The paper makes no mention of releasing source code for the described methods, nor does it provide a link to a code repository.
Open Datasets | No | The paper is theoretical and focuses on mathematical proofs and algorithms. It refers only to samples drawn from a general distribution P and does not use or provide access to any specific, publicly available dataset.
Dataset Splits | No | The paper is theoretical and does not describe any empirical experiments, so no training, validation, or test splits are provided.
Hardware Specification | No | The paper is theoretical and does not describe any empirical experiments, so no hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical and does not describe any empirical implementation, so no software dependencies with specific version numbers are mentioned.
Experiment Setup | No | The paper is theoretical and focuses on mathematical proofs and algorithms, so it does not provide experimental setup details such as hyperparameter values or training configurations.
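To make the "Pseudocode, no code release" finding concrete, the following is a minimal sketch of user-level DP mean estimation in the central model: each user contributes a within-user average that is L2-clipped before Gaussian noise calibrated to the user-level sensitivity is added. This is not the paper's Algorithm 1 (which uses a more refined truncation and range-estimation scheme); the function name user_level_dp_mean, the clipping-based truncation, and the noise calibration are illustrative assumptions.

```python
import numpy as np

def user_level_dp_mean(user_data, clip_radius, epsilon, delta, rng=None):
    """(epsilon, delta)-DP estimate of the mean of per-user averages.

    user_data   : list with one array of shape (m_i, d) per user.
    clip_radius : L2 bound applied to each user's within-user average,
                  so replacing one user's entire dataset changes the
                  sum of contributions by at most 2 * clip_radius.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(user_data)
    d = user_data[0].shape[1]

    clipped = []
    for samples in user_data:
        avg = samples.mean(axis=0)                 # within-user average
        norm = np.linalg.norm(avg)
        scale = min(1.0, clip_radius / norm) if norm > 0 else 1.0
        clipped.append(avg * scale)                # L2 clipping (truncation)

    # Gaussian mechanism: user-level sensitivity of the average over n
    # users is 2 * clip_radius / n (one user's data replaced).
    sensitivity = 2.0 * clip_radius / n
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return np.mean(clipped, axis=0) + rng.normal(scale=sigma, size=d)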
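Similarly, the following sketch shows the general shape of user-level private noisy SGD: at each step, per-user average gradients at the current iterate are privatized with the mean estimator above and used as the update direction. It is not the paper's Algorithm 4; the signature, the per-step privacy budget, and the omission of projection and privacy composition across steps are all simplifying assumptions made here for illustration.

```python
import numpy as np

def user_level_noisy_sgd(user_data, grad_fn, dim, steps, lr,
                         clip_radius, epsilon, delta, rng=None):
    """Noisy SGD driven by user-level DP gradient estimates.

    grad_fn(w, x) must return the per-sample gradient (shape (dim,)) at
    parameter vector w. The (epsilon, delta) budget here is spent per
    step; accounting for composition over `steps` iterations is left
    out of this sketch. Reuses user_level_dp_mean from the sketch above.
    """
    rng = np.random.default_rng() if rng is None else rng
    w = np.zeros(dim)
    for _ in range(steps):
        # Stack each user's per-sample gradients at the current iterate.
        per_user_grads = [np.stack([grad_fn(w, x) for x in samples])
                          for samples in user_data]
        # Privatized estimate of the population gradient.
        g = user_level_dp_mean(per_user_grads, clip_radius,
                               epsilon, delta, rng=rng)
        w = w - lr * g
    return w
```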