A Computational Separation between Private Learning and Online Learning

Authors: Mark Bun

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | A recent line of work has shown a qualitative equivalence between differentially private PAC learning and online learning: A concept class is privately learnable if and only if it is online learnable with a finite mistake bound. However, both directions of this equivalence incur significant losses in both sample and computational efficiency. Studying a special case of this connection, Gonen, Hazan, and Moran (NeurIPS 2019) showed that uniform or highly sample-efficient pure-private learners can be time-efficiently compiled into online learners. We show that, assuming the existence of one-way functions, such an efficient conversion is impossible even for general pure-private learners with polynomial sample complexity. This resolves a question of Neel, Roth, and Wu (FOCS 2019). (The mistake-bound notion referenced here is illustrated in the sketch after this table.)
Researcher Affiliation | Academia | Mark Bun, Department of Computer Science, Boston University, Boston, MA 02215, mbun@bu.edu
Pseudocode | Yes | Algorithm 1: Pure Private Learner for OWSd
Open Source Code | No | The paper does not provide a link to source code for the methodology described. It links to the full version of the paper on arXiv.
Open Datasets | No | The paper is theoretical and does not use a publicly available dataset for empirical training or evaluation. It defines the PAC model and sample complexity theoretically.
Dataset Splits | No | The paper is theoretical and does not perform empirical experiments requiring training/validation/test dataset splits.
Hardware Specification | No | The paper is theoretical and does not report on empirical experiments; therefore, no hardware specifications are provided.
Software Dependencies | No | The paper is theoretical and does not describe software dependencies with version numbers for empirical experiments.
Experiment Setup | No | The paper is theoretical and does not provide specific experimental setup details such as hyperparameters or training configurations for empirical experiments.
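
The abstract quoted in the Research Type row turns on the notion of online learning with a finite mistake bound. As a hedged illustration of that notion only, the sketch below implements the textbook Halving algorithm, which learns any finite concept class with at most log2(|C|) mistakes; it is not the paper's Algorithm 1, and the concept class, target, and example stream are hypothetical placeholders.

# Illustrative sketch only (not the paper's construction): the textbook Halving
# algorithm, which online-learns any finite concept class C with at most
# log2(|C|) mistakes. The concept class, target, and stream below are hypothetical.
import math

def halving_learner(concepts, stream):
    # concepts: list of hypotheses h(x) -> 0/1; stream: iterable of (x, true_label)
    version_space = list(concepts)
    mistakes = 0
    for x, y in stream:
        # Predict by majority vote over hypotheses still consistent with the history.
        votes = sum(h(x) for h in version_space)
        prediction = 1 if 2 * votes >= len(version_space) else 0
        if prediction != y:
            mistakes += 1
        # Every mistake eliminates at least half of the version space,
        # which is where the log2(|C|) mistake bound comes from.
        version_space = [h for h in version_space if h(x) == y]
    return mistakes

# Hypothetical usage: 8 threshold concepts over {0, ..., 7}, so at most 3 mistakes.
concepts = [lambda x, t=t: int(x >= t) for t in range(8)]
target = concepts[5]  # the unknown concept labeling the stream
stream = [(x, target(x)) for x in [0, 7, 3, 5, 4, 6, 2, 1]]
print(halving_learner(concepts, stream), "<=", int(math.log2(len(concepts))))

The paper's separation concerns the reverse direction of the quoted equivalence: whether a pure-private learner with polynomial sample complexity can be compiled into such a mistake-bounded online learner in polynomial time, which it shows is impossible under one-way functions.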