User-Specified Local Differential Privacy in Unconstrained Adaptive Online Learning
Authors: Dirk van der Hoeven
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We derive the first algorithms that have adaptive regret bounds in this setting; that is, our algorithms adapt to the unknown competitor norm, unknown noise, and unknown sum of the norms of the subgradients, matching state-of-the-art bounds in all cases. The algorithms in this paper are built using the recently developed wealth-regret duality approach (McMahan and Streeter, 2012). We provide two algorithms: the first achieves the bound in (2); the second satisfies (2) for each dimension separately. |
| Researcher Affiliation | Academia | Dirk van der Hoeven, Mathematical Institute, Leiden University, Leiden, 2333 CA, dirkvderhoeven@gmail.com |
| Pseudocode | Yes | Algorithm 1: Local Differentially Private Adaptive Potential Function; Algorithm 2: Black-Box Reduction; Algorithm 3: Private Unconstrained Adaptive Sparse Gradient Descent |
| Open Source Code | No | The paper does not provide any explicit statement or link regarding the availability of its source code. |
| Open Datasets | No | The paper is theoretical and does not mention the use of any datasets for training or evaluation, nor does it provide any information on public availability of datasets. |
| Dataset Splits | No | The paper is theoretical and does not describe any experiments or dataset splits for training, validation, or testing. |
| Hardware Specification | No | The paper is theoretical and does not describe any experimental setup, thus no hardware specifications are mentioned. |
| Software Dependencies | No | The paper is theoretical and does not describe any software implementation details with specific version numbers for dependencies. |
| Experiment Setup | No | The paper is theoretical and focuses on algorithm design and proofs; therefore, it does not include details on experimental setup or hyperparameters. |
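The setting summarized above — online learning where the learner only ever observes subgradients perturbed on the user's side — can be illustrated with a minimal sketch. This is not a reconstruction of the paper's algorithms (which use a wealth-regret duality potential, not plain gradient descent); it only shows the local-privacy interface the table refers to, where clipping and noise are applied before the gradient reaches the learner. All function names, the clipping threshold, and the noise scale `sigma` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize(grad, clip=1.0, sigma=2.0):
    """Local privatization sketch (illustrative, not the paper's method):
    clip the subgradient to norm `clip`, then add Gaussian noise on the
    user's side, so the learner never sees the raw subgradient."""
    norm = np.linalg.norm(grad)
    if norm > clip:
        grad = grad * (clip / norm)
    return grad + rng.normal(0.0, sigma, size=grad.shape)

def online_gradient_descent(grads, lr=0.1):
    """Plain OGD run on a stream of privatized subgradients.
    Returns the sequence of iterates played by the learner."""
    w = np.zeros_like(grads[0])
    iterates = []
    for g in grads:
        iterates.append(w.copy())
        w = w - lr * privatize(g)
    return iterates

# Toy stream of 50 identical subgradients in 3 dimensions.
stream = [np.ones(3) for _ in range(50)]
iterates = online_gradient_descent(stream)
```

The point of the sketch is the division of labor: `privatize` runs locally per user, and the learner's update only touches its noisy output. The paper's contribution is replacing the fixed learning rate here with updates that adapt to the unknown competitor norm, noise level, and gradient sum.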