Private Online Learning via Lazy Algorithms

Authors: Hilal Asi, Tomer Koren, Daogao Liu, Kunal Talwar

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We study the problem of private online learning, focusing on online prediction from experts (OPE) and online convex optimization (OCO). We propose a new transformation that translates lazy, low-switching online learning algorithms into private algorithms. We apply our transformation to differentially private OPE and OCO using existing lazy algorithms for these problems. The resulting algorithms attain regret bounds that significantly improve over prior art in the high privacy regime, where ε ≤ 1, obtaining O(√(T log d) + T^{1/3} log(d)/ε^{2/3}) regret for DP-OPE and O(√T + T^{1/3} √d/ε^{2/3}) regret for DP-OCO. We complement our results with a lower bound for DP-OPE, showing that these rates are optimal for a natural family of low-switching private algorithms. The paper does not include experiments.
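For readability, the two regret bounds quoted above are set in display math below. This is a reconstruction of the garbled PDF extraction, so the exact placement of logarithmic factors should be checked against the paper itself:

```latex
% Regret bounds stated in the abstract, for privacy parameter eps <= 1
\mathrm{Regret}_{\text{DP-OPE}} = O\!\left(\sqrt{T \log d} + \frac{T^{1/3} \log d}{\varepsilon^{2/3}}\right),
\qquad
\mathrm{Regret}_{\text{DP-OCO}} = O\!\left(\sqrt{T} + \frac{T^{1/3} \sqrt{d}}{\varepsilon^{2/3}}\right).
```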
Researcher Affiliation | Collaboration | Hilal Asi (Apple) hilal.asi94@gmail.com; Tomer Koren (Tel Aviv University) tkoren@tauex.tau.ac.il; Daogao Liu (University of Washington) liudaogao@gmail.com; Kunal Talwar (Apple) kunal@kunaltalwar.org
Pseudocode | Yes | Algorithm 1 (L2P):
Input: parameter η, measures {ν_t}_{t∈[T]}, batch size B, fake switching parameter p
Sample x_1, y_1 ~ ν_1
Observe ℓ_1, ..., ℓ_B and suffer loss Σ_{i=1}^{B} ℓ_i(x_1)
for s = 2, ..., T/B do
    Sample S_s ~ Ber(min{1, [ν_s(x_{s−1}) / (e^{2Bη} ν_{s−1}(x_{s−1}))] · [ν_{s−1}(y_{s−1}) / ν_s(y_{s−1})]}) and S'_s ~ Ber(1 − p)
    if S_s = 0 or S'_s = 0 then sample x_s ~ ν_s, else set x_s = x_{s−1}
    Sample A_s ~ Ber(1 − p)
    if A_s = 0 then sample y_s ~ ν_s, else set y_s = y_{s−1}
    Play x_s
    Observe ℓ_{(s−1)B+1}, ..., ℓ_{sB} and suffer loss Σ_{i=(s−1)B+1}^{sB} ℓ_i(x_s)
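To make the control flow concrete, here is a minimal Python sketch of one L2P iteration in the experts setting, assuming each measure ν_s is a discrete distribution over d experts given as a probability vector. The helper names (sample, l2p_round) and the multiplicative-weights construction of ν_1, ν_2 in the toy usage are illustrative assumptions, not the authors' implementation (no code is released with the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(nu):
    """Draw an expert index from the discrete distribution nu."""
    return rng.choice(len(nu), p=nu)

def l2p_round(nu_prev, nu_cur, x_prev, y_prev, eta, B, p):
    """One iteration of Algorithm 1 (L2P), experts setting (hypothetical helper).

    Decides whether to keep playing x_prev lazily or to resample from
    nu_cur, using the coupled shadow sample y to keep switches rare.
    """
    # Probability of keeping x_{s-1}: the acceptance ratio from the
    # Ber(...) step of Algorithm 1. The e^{2*B*eta} factor upper-bounds
    # the drift of nu between consecutive batches of B rounds.
    keep_prob = min(
        1.0,
        (nu_cur[x_prev] / (np.exp(2 * B * eta) * nu_prev[x_prev]))
        * (nu_prev[y_prev] / nu_cur[y_prev]),
    )
    S = rng.random() < keep_prob        # S_s
    S_fake = rng.random() < (1.0 - p)   # S'_s: forces a "fake" switch w.p. p
    x = x_prev if (S and S_fake) else sample(nu_cur)

    A = rng.random() < (1.0 - p)        # A_s: shadow chain also switches w.p. p
    y = y_prev if A else sample(nu_cur)
    return x, y

# Toy usage: nu_1, nu_2 from multiplicative weights over d experts.
d, eta, B, p = 5, 0.1, 4, 0.2
losses = rng.random((2 * B, d))         # per-round loss of each expert
w1 = np.exp(-eta * losses[:B].sum(axis=0))
nu1 = w1 / w1.sum()
w2 = np.exp(-eta * losses.sum(axis=0))
nu2 = w2 / w2.sum()

x1 = y1 = sample(nu1)                   # initialization: x_1, y_1 ~ nu_1
x2, y2 = l2p_round(nu1, nu2, x1, y1, eta, B, p)
print("expert played in batch 2:", x2)
```

Intuitively, per the paper's construction, the shadow sample y_s and the forced switches S'_s and A_s exist so that the switching pattern itself reveals little about the losses, while the acceptance ratio keeps actual switches rare enough to exploit laziness.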
Open Source Code | No | The paper does not include any explicit statements about releasing source code for the described methodology, nor does it provide a link to a code repository. The NeurIPS checklist for question 5 states "The paper does not include experiments requiring code".
Open Datasets | No | The paper is theoretical and does not describe any experiments involving training on datasets. The NeurIPS checklist for questions regarding reproducibility states "The paper does not include experiments."
Dataset Splits | No | The paper is theoretical and does not describe any experiments involving dataset splits for training, validation, or testing. The NeurIPS checklist for questions regarding reproducibility states "The paper does not include experiments."
Hardware Specification | No | The paper is theoretical and does not describe any experiments that would require hardware specifications. The NeurIPS checklist for questions regarding reproducibility states "The paper does not include experiments."
Software Dependencies | No | The paper is theoretical and does not describe any experiments that would require software dependencies with version numbers. The NeurIPS checklist for questions regarding reproducibility states "The paper does not include experiments."
Experiment Setup | No | The paper is theoretical and does not describe any experiments requiring specific experimental setup details like hyperparameters or training settings. The NeurIPS checklist for questions regarding reproducibility states "The paper does not include experiments."