Black-Box Differential Privacy for Interactive ML

Authors: Haim Kaplan, Yishay Mansour, Shay Moran, Kobbi Nissim, Uri Stemmer

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this work we revisit an interactive variant of joint differential privacy, recently introduced by Naor et al. [2023], and generalize it towards handling online processes in which existing privacy definitions seem too restrictive. We study basic properties of this definition and demonstrate that it satisfies (suitable variants of) group privacy, composition, and post-processing. To demonstrate the advantages of this privacy definition over traditional forms of differential privacy, we consider the basic setting of online classification. We show that any (possibly non-private) learning rule can be effectively transformed into a private learning rule with only a polynomial overhead in the mistake bound.
Researcher Affiliation | Collaboration | Haim Kaplan (Tel Aviv University and Google Research, haimk@tau.ac.il); Yishay Mansour (Tel Aviv University and Google Research, mansour.yishay@gmail.com); Shay Moran (Technion and Google Research, smoran@technion.ac.il); Kobbi Nissim (Georgetown University, kobbi.nissim@georgetown.edu); Uri Stemmer (Tel Aviv University and Google Research, u@uri.co.il)
Pseudocode | Yes | Algorithm 1: Above Threshold; Algorithm 2: Adaptive Query(M, B, Q); Algorithm 3: Adaptive Input(M, B); Algorithm 4: Online Game(M, B, T, g); Algorithm 5: Composition Game(B, m, ε, δ); Algorithm 6: Challenge AT; Algorithm 7: POP (Private Online Procedure); Algorithm 8: General Game(M, B, T); Algorithm 9: Challenge AT-Game(B); Algorithm 10: CATG-Above Threshold; Algorithm 11: B; Algorithm 12: Coin Game(B, k, m)
Open Source Code | No | The paper does not provide any explicit statements about making its source code available, nor does it include links to code repositories.
Open Datasets | No | The paper is theoretical and defines abstract concepts such as 'let X be the domain, Y be the label space, and Z = X × Y be the set of labeled examples'. It does not refer to any specific, publicly available datasets.
Dataset Splits | No | As this is a theoretical paper focusing on mathematical proofs and algorithm design, it does not describe experimental setups with dataset splits for training, validation, or testing.
Hardware Specification | No | The paper is theoretical and does not describe any experimental hardware specifications such as GPU or CPU models.
Software Dependencies | No | The paper is theoretical and focuses on algorithm design and proofs. It does not list any specific software dependencies with version numbers required for reproducibility.
Experiment Setup | No | The paper is theoretical and does not describe any experimental setup details such as hyperparameters or system-level training settings.
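The abstract above works in the standard online classification model, where a learner predicts labels one example at a time and is judged by its mistake bound. For readers unfamiliar with that setting, a classic non-private learner with a finite mistake bound is the halving algorithm; the sketch below is a textbook illustration of the setting only (the function and variable names are ours), not the paper's private transformation.

```python
def halving_learner(hypotheses, stream):
    """Halving algorithm: online classification over a finite class.

    Predicts the majority vote of the surviving hypotheses, then discards
    every hypothesis that disagrees with the revealed label. If some
    hypothesis labels the whole stream correctly, the number of mistakes
    is at most log2(len(hypotheses)).
    """
    version_space = list(hypotheses)
    mistakes = 0
    for x, y in stream:
        votes = sum(h(x) for h in version_space)  # hypotheses predicting 1
        prediction = 1 if 2 * votes >= len(version_space) else 0
        if prediction != y:
            mistakes += 1
        # Keep only hypotheses consistent with the revealed label.
        version_space = [h for h in version_space if h(x) == y]
    return mistakes
```

Each mistake at least halves the version space, which gives the logarithmic mistake bound; the paper's contribution is a transformation that preserves a polynomial relationship between such a bound and that of a private learner.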
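Algorithm 1 in the paper's pseudocode list is named Above Threshold, after the standard sparse-vector mechanism from the differential privacy literature. As a point of reference, the textbook version of that mechanism can be sketched as follows; this is an illustrative implementation of the classical algorithm with our own function and parameter names, not the paper's interactive variant.

```python
import random

def laplace(scale):
    """Sample Laplace(0, scale) as a difference of two Exp(1) variables."""
    return scale * (random.expovariate(1) - random.expovariate(1))

def above_threshold(queries, data, threshold, epsilon, sensitivity=1.0):
    """Textbook Above Threshold (sparse vector) sketch.

    Answers a stream of low-sensitivity queries on `data`, reporting only
    whether each noisy answer exceeds a noisy threshold, and halts at the
    first "above" report. Satisfies epsilon-differential privacy.
    """
    noisy_threshold = threshold + laplace(2 * sensitivity / epsilon)
    for i, q in enumerate(queries):
        noisy_answer = q(data) + laplace(4 * sensitivity / epsilon)
        if noisy_answer >= noisy_threshold:
            return i  # index of the first query reported "above"
    return None  # no query exceeded the threshold
```

The key design point, which the paper's interactive variants build on, is that only the single "above" event is released, so the privacy cost does not grow with the number of "below" answers.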