Differential Privacy without Sensitivity

Authors: Kentaro Minami, Hiromi Arai, Issei Sato, Hiroshi Nakagawa

NeurIPS 2016

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | Our result extends the classical exponential mechanism, allowing the loss functions to have unbounded sensitivity. In this paper, we focus on (ε, δ)-differential privacy of Gibbs posteriors with convex and Lipschitz loss functions. Our analysis widens the application range of the exponential mechanism in the following aspects (see also Table 1). (Removal of boundedness assumption) If the loss function is unbounded, which is usually the case when the parameter space is unbounded, the Gibbs posterior does not satisfy (ε, 0)-differential privacy in general. Still, in some cases, we can build an (ε, δ)-differentially private estimator. (Tighter evaluation of β) Even when the difference of the loss function is bounded, our analysis can yield a better scheme for determining the appropriate value of β for a given privacy level. (Shrinkage and contraction effect) Intuitively speaking, the Gibbs posterior becomes robust against a small change of the dataset if the prior π has a strong shrinkage effect (e.g. a Gaussian prior with a small variance), or if the size of the dataset n tends to infinity. In our analysis, the upper bound of β depends on π and n, which explains such shrinkage and contraction effects. In this section, we give a formal proof of Theorem 7 and a proof sketch of Theorem 10. (See the LaTeX sketch of the Gibbs posterior below the table.)
Researcher Affiliation | Academia | Kentaro Minami, The University of Tokyo, kentaro_minami@mist.i.u-tokyo.ac.jp; Hiromi Arai, The University of Tokyo, arai@dl.itc.u-tokyo.ac.jp; Issei Sato, The University of Tokyo, sato@k.u-tokyo.ac.jp; Hiroshi Nakagawa, The University of Tokyo, nakagawa@dl.itc.u-tokyo.ac.jp
Pseudocode | No | The paper describes the mathematical iterations of Langevin Monte Carlo, but they are not presented as a formally labeled pseudocode or algorithm block. (A hedged Python sketch of such an iteration is given below the table.)
Open Source Code | No | The paper does not include any statements or links indicating that source code for the described methodology is publicly available.
Open Datasets | No | The paper is theoretical and does not report on empirical training or provide access information for any dataset used for such purposes. It gives theoretical examples such as 'Bernoulli mean' and 'Gaussian mean'.
Dataset Splits | No | The paper is theoretical and does not conduct experiments that would require training, validation, or test dataset splits.
Hardware Specification | No | The paper is theoretical and does not mention any specific hardware used for experiments.
Software Dependencies | No | The paper is theoretical and does not provide any specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not describe any experimental setup details such as hyperparameters or system-level training settings.
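
For reference, the estimator assessed above is the Gibbs posterior. A minimal LaTeX sketch of its standard form, assuming a prior π, a convex and Lipschitz loss ℓ, and an inverse temperature β > 0 (the precise regularity conditions are those of the paper's Theorems 7 and 10):

G_\beta(\theta \mid x_1, \dots, x_n) \;\propto\; \pi(\theta)\, \exp\!\Big( -\beta \sum_{i=1}^{n} \ell(\theta, x_i) \Big)

The paper's analysis gives an upper bound on the admissible β for a target (ε, δ) privacy level; qualitatively, a stronger shrinkage prior (e.g. a Gaussian prior with small variance) or a larger sample size n permits a larger β.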
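
The Langevin Monte Carlo iteration noted in the Pseudocode row is not given as an algorithm block in the paper, so the following is a hedged Python sketch of the standard unadjusted Langevin update theta <- theta - eta * grad U(theta) + sqrt(2 * eta) * xi, targeting a Gibbs posterior. It is not the authors' code: the Huber loss, the Gaussian shrinkage prior, and all parameter values (beta, eta, prior_var, the data) are illustrative assumptions, chosen so that the loss is convex and Lipschitz as in the paper's setting.

import numpy as np

def grad_potential(theta, data, beta, prior_var, delta=1.0):
    """Gradient of U(theta) = beta * sum_i huber(theta - x_i) - log pi(theta),
    where huber is convex and delta-Lipschitz (fitting the paper's setting)
    and pi is a N(0, prior_var) shrinkage prior."""
    loss_grad = np.sum(np.clip(theta - data, -delta, delta))  # Huber derivative
    prior_grad = theta / prior_var  # -d/dtheta log N(theta; 0, prior_var)
    return beta * loss_grad + prior_grad

def langevin_sample(data, beta=0.1, prior_var=1.0, eta=1e-3, n_steps=5000, seed=0):
    """Unadjusted Langevin iteration approximately sampling the Gibbs posterior."""
    rng = np.random.default_rng(seed)
    theta = 0.0
    for _ in range(n_steps):
        theta -= eta * grad_potential(theta, data, beta, prior_var)
        theta += np.sqrt(2.0 * eta) * rng.standard_normal()
    return theta

# Illustrative usage: one approximate draw for a one-dimensional
# 'Gaussian mean'-style example.
data = np.array([0.8, 1.1, 0.9, 1.3])
print(langevin_sample(data))

In an actual private release, beta would have to be calibrated to the target (ε, δ) via the paper's bounds rather than fixed as above.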