Shuffle Private Stochastic Convex Optimization

Authors: Albert Cheu, Matthew Joseph, Jieming Mao, Binghui Peng

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We present interactive shuffle protocols for stochastic convex optimization. Our protocols rely on a new noninteractive protocol for summing vectors of bounded ℓ2 norm. By combining this sum subroutine with mini-batch stochastic gradient descent, accelerated gradient descent, and Nesterov's smoothing method, we obtain loss guarantees for a variety of convex loss functions that significantly improve on those of the local model and sometimes match those of the central model. (A schematic sketch of this sum-then-SGD loop appears after the table.)
Researcher Affiliation | Collaboration | Albert Cheu, Georgetown University, ac2305@georgetown.edu; Matthew Joseph, Google Research, mtjoseph@google.com; Jieming Mao, Google Research, maojm@google.com; Binghui Peng, Columbia University, bp2601@columbia.edu
Pseudocode | Yes | Algorithm 1: P1D, a shuffle protocol for summing scalars; Algorithm 2: PVEC, a shuffle protocol for vector summation; Algorithm 3: PSGD, sequentially interactive shuffle private SGD; Algorithm 4: PAGD, sequentially interactive shuffle private AC-SA; Algorithm 5: PGD, fully interactive shuffle private gradient descent; Algorithm 6: Pan-private AC-SA
Open Source Code | No | The paper does not contain an explicit statement about releasing source code for the methodology described, nor does it provide any links to a code repository.
Open Datasets | No | The paper is theoretical and focuses on algorithm design and theoretical guarantees; it does not describe empirical studies or the use of any datasets for training or evaluation.
Dataset Splits | No | The paper is theoretical and does not describe empirical studies; therefore, it does not specify training/validation/test dataset splits.
Hardware Specification | No | The paper is theoretical and does not describe an experimental setup or hardware used for running experiments.
Software Dependencies | No | The paper is theoretical and describes algorithms and proofs, without mentioning specific software dependencies or version numbers needed for implementation.
Experiment Setup | No | The paper is theoretical and focuses on algorithm design and theoretical guarantees; therefore, it does not provide specific experimental setup details such as hyperparameters or training configurations.
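
To make the gradient-aggregation structure concrete, here is a minimal Python sketch of how a private vector-sum subroutine can be combined with mini-batch SGD, in the spirit of the paper's PSGD (Algorithm 3). This is an illustrative assumption, not the paper's implementation: the function and parameter names (noisy_vector_sum, grad_fn, the unit-ball projection, the batch size, step count, and learning rate) are made up for this sketch, and the actual shuffle protocol PVEC is replaced by a Gaussian-noise stand-in that only mimics the central-model accuracy such protocols aim to approach.

```python
import numpy as np


def clip_l2(v, bound):
    """Project a vector onto the l2 ball of radius `bound`."""
    norm = np.linalg.norm(v)
    return v if norm <= bound else v * (bound / norm)


def noisy_vector_sum(vectors, epsilon, delta, l2_bound, rng):
    """Stand-in for a private vector-sum subroutine (the paper's PVEC).

    Illustrative assumption: return the true sum plus Gaussian noise
    calibrated to the l2 sensitivity. The real shuffle protocol instead
    encodes, randomizes, and shuffles per-user messages.
    """
    sigma = l2_bound * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    total = np.sum(vectors, axis=0)
    return total + rng.normal(0.0, sigma, size=total.shape)


def private_minibatch_sgd(grad_fn, data, dim, epsilon, delta,
                          l2_bound=1.0, batch_size=50, steps=100,
                          lr=0.1, seed=0):
    """Sketch of the sum-then-SGD loop: clip per-example gradients to a
    bounded l2 norm, aggregate them with the private sum subroutine, and
    take a projected gradient step (projection onto the unit ball is an
    assumed constraint set). Privacy accounting across the `steps`
    iterations is omitted in this sketch.
    """
    rng = np.random.default_rng(seed)
    theta = np.zeros(dim)
    for _ in range(steps):
        idx = rng.choice(len(data), size=batch_size, replace=False)
        grads = [clip_l2(grad_fn(theta, x), l2_bound) for x in data[idx]]
        avg_grad = noisy_vector_sum(grads, epsilon, delta, l2_bound, rng) / batch_size
        theta = clip_l2(theta - lr * avg_grad, 1.0)
    return theta


# Example usage: private mean estimation, where the per-example loss is
# ||theta - x||^2 / 2 and the per-example gradient is theta - x.
rng = np.random.default_rng(1)
data = rng.normal(0.3, 0.1, size=(2000, 5))
theta_hat = private_minibatch_sgd(lambda t, x: t - x, data, dim=5,
                                  epsilon=1.0, delta=1e-6)
```

In the paper, the aggregation step would be realized by the shuffle protocol PVEC (built from the scalar-sum protocol P1D), and the same subroutine is also plugged into accelerated gradient descent (PAGD) and full-batch gradient descent (PGD); the sketch above only shows the plain mini-batch SGD variant.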