The Price of Differential Privacy for Online Learning
Authors: Naman Agarwal, Karan Singh
ICML 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We design differentially private algorithms for the problem of online linear optimization in the full information and bandit settings with optimal Õ(√T) regret bounds. In this paper, we design differentially private algorithms for online linear optimization with near-optimal regret, both in the full information and partial information (bandit) settings. |
| Researcher Affiliation | Academia | 1Computer Science, Princeton University, Princeton, NJ, USA. |
| Pseudocode | Yes | Algorithm 1 FTRL Template for OLO, Algorithm 2 Tree Based Agg(lt, B, t, D, T), Algorithm 3 FTPL Template for OLO, Algorithm 4 A0(A, D) Reduction to the Non-private Setting for Bandit Feedback, Algorithm 5 EXP2 with exploration µ |
| Open Source Code | No | The paper does not provide any statement or link regarding the public availability of its source code. |
| Open Datasets | No | The paper defines abstract "decision sets" (X) and "loss sets" (Y) for its theoretical models, but does not use or provide concrete, publicly available datasets for training or evaluation. |
| Dataset Splits | No | The paper focuses on theoretical analysis of algorithms and does not involve empirical evaluation on datasets with explicit training, validation, or test splits. |
| Hardware Specification | No | The paper focuses on theoretical algorithm design and analysis, and therefore does not specify any hardware used for experiments. |
| Software Dependencies | No | The paper presents theoretical algorithms and their analysis, and does not mention any specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not include details about an experimental setup, such as hyperparameters or training configurations. |
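The paper's Algorithm 2 (Tree Based Agg) builds on the classic binary-tree aggregation mechanism for privately releasing running sums of losses. As a minimal illustrative sketch of that idea (not the paper's exact algorithm): each stream element falls in at most ⌈log₂ T⌉ + 1 dyadic intervals, so perturbing each interval sum once with Laplace noise of scale (levels/ε) yields ε-differential privacy while each prefix sum aggregates only O(log T) noisy terms. Function and variable names below are illustrative, not taken from the paper.

```python
import math
import random


def laplace(scale):
    """Sample Laplace(0, scale) noise via the inverse-CDF method."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)


def private_prefix_sums(values, epsilon, seed=0):
    """Noisy prefix sums of a bounded scalar stream via tree-based aggregation.

    Each noisy dyadic-interval sum is computed once and reused, so every
    element influences at most `levels` released quantities; scaling the
    per-node noise by levels/epsilon gives epsilon-DP for the whole output.
    """
    random.seed(seed)
    T = len(values)
    levels = math.ceil(math.log2(T)) + 1 if T > 1 else 1
    noisy = {}  # (start, length) dyadic interval -> cached noisy partial sum

    def interval_sum(start, length):
        if (start, length) not in noisy:
            exact = sum(values[start:start + length])
            noisy[(start, length)] = exact + laplace(levels / epsilon)
        return noisy[(start, length)]

    out = []
    for t in range(1, T + 1):
        # Decompose [0, t) greedily into aligned dyadic intervals.
        total, start, rem = 0.0, 0, t
        length = 1 << (levels - 1)
        while rem > 0:
            while length > rem:
                length >>= 1
            total += interval_sum(start, length)
            start += length
            rem -= length
        out.append(total)
    return out
```

With a huge ε the noise vanishes and the outputs approach the exact prefix sums, which is a convenient sanity check; in an FTRL-style private online learner, the noisy running loss sum would feed the regularized leader step each round.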