Smoothed Analysis of Online and Differentially Private Learning
Authors: Nika Haghtalab, Tim Roughgarden, Abhishek Shetty
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper, we apply the framework of smoothed analysis [Spielman and Teng, 2004], in which adversarially chosen inputs are perturbed slightly by nature. We show that fundamentally stronger regret and error guarantees are possible with smoothed adversaries than with worst-case adversaries. In particular, we obtain regret and privacy error bounds that depend only on the VC dimension and the bracketing number of a hypothesis class, and on the magnitudes of the perturbations. |
| Researcher Affiliation | Academia | Nika Haghtalab, Department of Computer Science, Cornell University, nika@cs.cornell.edu; Tim Roughgarden, Department of Computer Science, Columbia University, tr@cs.columbia.edu; Abhishek Shetty, Department of Computer Science, Cornell University, shetty@cs.cornell.edu |
| Pseudocode | No | The paper describes algorithms like MWEM and Smooth MWEM conceptually but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link regarding the release of open-source code for the described methodology. |
| Open Datasets | No | The paper is theoretical and does not use any datasets, so it provides no access information for publicly available or open datasets. |
| Dataset Splits | No | The paper is theoretical and does not describe experimental setups with dataset splits. |
| Hardware Specification | No | The paper is theoretical and does not describe any specific hardware used for experiments. |
| Software Dependencies | No | The paper is theoretical and does not describe any specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not describe specific experimental setup details such as hyperparameters or training configurations. |
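
To make the smoothed-adversary setup quoted in the table above more concrete, the sketch below is a minimal, hypothetical illustration (not the paper's construction): at each round an adversarially chosen base input is perturbed by nature with width-sigma uniform noise, and a standard multiplicative-weights (Hedge) learner over a small finite class of threshold classifiers competes with the best fixed threshold in hindsight. The hypothesis class, the noise model, and all parameter values here are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
T, sigma, eta = 1000, 0.1, 0.05

# Finite class of threshold classifiers on [0, 1]; illustrative stand-in
# for a hypothesis class of small VC dimension.
thresholds = np.linspace(0.0, 1.0, 21)
weights = np.ones_like(thresholds)
cum_expert_loss = np.zeros_like(thresholds)
learner_loss = 0.0

for t in range(T):
    base_x = 0.5  # adversarially chosen "worst-case" input (fixed here for simplicity)
    # Nature perturbs the input with width-sigma uniform noise (a toy smoothed adversary).
    x = min(max(base_x + sigma * (rng.random() - 0.5), 0.0), 1.0)
    y = 1 if x >= 0.5 else 0  # adversarial labeling rule
    losses = ((x >= thresholds).astype(int) != y).astype(float)
    p = weights / weights.sum()
    learner_loss += p @ losses          # expected loss of the randomized learner
    cum_expert_loss += losses
    weights *= np.exp(-eta * losses)    # multiplicative-weights (Hedge) update

regret = learner_loss - cum_expert_loss.min()
print(f"regret after {T} rounds: {regret:.1f}")
```

Shrinking sigma concentrates the perturbed inputs ever closer to the adversary's base point, which is the regime the paper's smoothed-analysis bounds are meant to interpolate toward the worst case.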