Query Complexity of Bayesian Private Learning

Authors: Kuang Xu

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | Our main result is a query complexity lower bound that is tight up to the first order. We show that if the learner wants to estimate the target within an error of ε, while ensuring that no adversary estimator can achieve a constant additive error with probability greater than 1/L, then the query complexity is on the order of L log(1/ε) as ε → 0. Our result demonstrates that increased privacy, as captured by L, comes at the expense of a multiplicative increase in query complexity. The proof builds on Fano's inequality and properties of certain proportional-sampling estimators. (A hedged restatement of this scaling is given after the table.)
Researcher Affiliation | Academia | Kuang Xu, Stanford Graduate School of Business, Stanford, CA 94305, USA (kuangxu@stanford.edu).
Pseudocode | Yes | Figure 1 in Section A of the Supplemental Note contains the pseudocode for Phase 2. (An illustrative, non-authoritative bisection sketch is given after the table.)
Open Source Code | No | The paper does not contain any explicit statement about making the source code for the described methodology publicly available, nor does it provide a link to a code repository.
Open Datasets | No | The paper is theoretical and defines a model for Bayesian Private Learning with a target X distributed uniformly on [0,1). It does not use or refer to any publicly available datasets for empirical training or evaluation.
Dataset Splits | No | The paper is theoretical and does not report empirical experiments with dataset splits for training, validation, or testing.
Hardware Specification | No | The paper is theoretical and does not report on experiments that would require specific hardware for execution.
Software Dependencies | No | The paper is theoretical and does not describe any computational experiments that would involve specific software dependencies with version numbers for reproducibility.
Experiment Setup | No | The paper focuses on mathematical proofs and definitions within a theoretical model, and therefore does not include details of an experimental setup or hyperparameters.
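
To make the scaling quoted in the Research Type row concrete, the following is a hedged restatement in display math. The symbol N^*(ε, L) for the minimal number of queries is an assumption of this note, chosen for illustration rather than taken from the paper's notation.

```latex
% Hedged restatement of the scaling quoted in the Research Type row.
% N^*(\epsilon, L): minimal query count; \epsilon: learner's accuracy level;
% L: privacy level. These symbol names are assumptions of this sketch.
\[
  N^*(\epsilon, L) \;=\; \Theta\!\bigl( L \log(1/\epsilon) \bigr)
  \qquad \text{as } \epsilon \to 0 .
\]
```

In words, the privacy level L enters as a multiplicative factor on the roughly log(1/ε) queries that a plain, non-private bisection search would need, which matches the "multiplicative increase in query complexity" noted in the response above.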
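
The Pseudocode row points to the supplement for the paper's Phase 2 procedure, which is not reproduced here. As an illustration only, below is a minimal, non-private bisection learner for a target X uniform on [0,1), assuming binary threshold queries of the form "is X < q?". The query model, function names, and parameters are assumptions of this sketch; it is not the paper's algorithm.

```python
import random


def bisection_learner(respond, eps):
    """Locate a target in [0, 1) to within eps using binary threshold
    queries; respond(q) answers the (assumed) query "is the target < q?"."""
    lo, hi = 0.0, 1.0
    queries = 0
    while hi - lo > eps:
        mid = (lo + hi) / 2.0
        queries += 1
        if respond(mid):        # True  <=>  target < mid
            hi = mid            # target lies in [lo, mid)
        else:
            lo = mid            # target lies in [mid, hi)
    return (lo + hi) / 2.0, queries


# Usage: a uniformly distributed target on [0, 1), as in the model described above.
target = random.random()
estimate, n_queries = bisection_learner(lambda q: target < q, eps=1e-3)
print(f"target={target:.6f}  estimate={estimate:.6f}  queries={n_queries}")
```

With eps = 1e-3 this takes about 10 queries (roughly log2(1/eps)); the quoted result says that imposing the privacy constraint multiplies this kind of non-private query count by a factor on the order of L.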