Differentially Private Multi-Armed Bandits in the Shuffle Model
Authors: Jay Tenenbaum, Haim Kaplan, Yishay Mansour, Uri Stemmer
NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We give an (ε, δ)-differentially private algorithm for the multi-armed bandit (MAB) problem in the shuffle model with a distribution-dependent regret of... Our algorithms Shuffle Differentially Private Arm Elimination (SDP-AE) and Variable Batch Shuffle Differentially Private Arm Elimination (VB-SDP-AE) are both based on the well-known arm elimination (AE) algorithm, using consecutive batches of users together with an SDP private binary summation mechanism. We show that the simpler but weaker SDP-AE achieves a distribution-dependent regret of $O\left(\sum_{a \in [k]:\, \Delta_a > 0} \frac{\log T}{\Delta_a} + \frac{k\sqrt{\log\frac{1}{\delta}}}{\varepsilon^{2}}\right)$ and a distribution-independent regret of $O\left(\sqrt{kT\log T} + \frac{k\sqrt{\log\frac{1}{\delta}}}{\varepsilon^{2}}\right)$. We then describe VB-SDP-AE, a generalization of SDP-AE to exponentially growing batch sizes, and prove it has a distribution-dependent regret of $O\left(\sum_{a \in [k]:\, \Delta_a > 0} \frac{\log T}{\Delta_a} + \frac{k\sqrt{\log\frac{T}{\delta}}}{\varepsilon}\right)$ and a distribution-independent regret of $O\left(\sqrt{kT\log T} + \frac{k\sqrt{\log\frac{T}{\delta}}}{\varepsilon}\right)$. |
| Researcher Affiliation | Collaboration | Jay Tenenbaum, Google Research (jayten@google.com); Haim Kaplan, Blavatnik School of Computer Science, Tel Aviv University and Google Research (haimk@tau.ac.il); Yishay Mansour, Blavatnik School of Computer Science, Tel Aviv University and Google Research (mansour.yishay@gmail.com); Uri Stemmer, Blavatnik School of Computer Science, Tel Aviv University and Google Research (u@uri.co.il). |
| Pseudocode | Yes | Algorithm 1: SDP-AE (Shuffle Differentially Private Arm Elimination); Algorithm 2: VB-SDP-AE (Variable Batch Shuffle Differentially Private Arm Elimination). An illustrative sketch of this batched arm-elimination recipe is given below the table. |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code for the methodology. In the ethics checklist, under '3. If you ran experiments...', point (a) 'Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?' is marked as '[N/A]'. |
| Open Datasets | No | The paper is theoretical and does not describe experiments involving datasets for training. In the ethics checklist, under '3. If you ran experiments...', point (a) 'Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?' is marked as '[N/A]'. |
| Dataset Splits | No | The paper is theoretical and does not describe experiments involving dataset splits (training, validation, test). In the ethics checklist, under '3. If you ran experiments...', point (b) 'Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?' is marked as '[N/A]'. |
| Hardware Specification | No | The paper is theoretical and does not describe computational experiments, and therefore does not specify the hardware used. In the ethics checklist, under '3. If you ran experiments...', point (d) 'Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?' is marked as '[N/A]'. |
| Software Dependencies | No | The paper is theoretical and does not describe computational experiments, and therefore does not specify software dependencies with version numbers. In the ethics checklist, under '3. If you ran experiments...', point (a) 'Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?' is marked as '[N/A]'. |
| Experiment Setup | No | The paper is theoretical and does not describe computational experiments with specific setup details like hyperparameters or training settings. In the ethics checklist, under '3. If you ran experiments...', point (b) 'Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?' is marked as '[N/A]'. |
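
The Research Type row above quotes the recipe behind both algorithms: arm elimination run over consecutive batches of users, with each batch's binary rewards aggregated by a shuffle-model private summation mechanism. The Python sketch below is a minimal illustration of that recipe under stated assumptions, not the paper's Algorithm 1 or 2: the names `private_binary_sum` and `batched_arm_elimination` are hypothetical, the randomized-response aggregator merely stands in for the paper's SDP binary summation protocol (the shuffler itself is not modelled), and the batch size and confidence radius are placeholder choices.

```python
import math
import numpy as np

rng = np.random.default_rng(0)


def private_binary_sum(bits, epsilon):
    """Stand-in for a shuffle-model private binary summation.

    Each user randomizes their own bit (randomized response); a shuffler
    (not modelled here) would permute the messages before the analyzer sees
    them, and the analyzer debiases the noisy count.  This only mimics the
    interface of the paper's SDP summation mechanism, not the mechanism itself.
    """
    bits = np.asarray(bits)
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)  # keep-truth probability
    reports = np.where(rng.random(bits.size) < p, bits, 1 - bits)
    # E[report] = (2p - 1) * bit + (1 - p), so invert the affine map on the sum.
    return float((reports.sum() - bits.size * (1.0 - p)) / (2.0 * p - 1.0))


def batched_arm_elimination(arm_means, T, epsilon, batch_size=200):
    """Hedged sketch of the SDP-AE recipe: arm elimination over consecutive
    batches of fresh users, with each batch's binary rewards aggregated by the
    private summation above.  Batch size, confidence radius and constants are
    illustrative choices, not the paper's."""
    k = len(arm_means)
    best = max(arm_means)
    active = list(range(k))
    pulls = np.zeros(k)
    sums = np.zeros(k)
    regret, t = 0.0, 0
    while t < T:
        for a in active:
            if t >= T:
                break
            n = min(batch_size, T - t)
            rewards = (rng.random(n) < arm_means[a]).astype(int)  # Bernoulli users
            sums[a] += private_binary_sum(rewards, epsilon)
            pulls[a] += n
            regret += n * (best - arm_means[a])
            t += n
        if t >= T:
            break
        # Eliminate arms whose upper confidence bound falls below the best
        # lower confidence bound (loose Hoeffding-style radius).
        means = {a: sums[a] / pulls[a] for a in active}
        rad = {a: math.sqrt(2.0 * math.log(T) / pulls[a]) for a in active}
        best_lcb = max(means[a] - rad[a] for a in active)
        active = [a for a in active if means[a] + rad[a] >= best_lcb]
    return regret


if __name__ == "__main__":
    print(batched_arm_elimination([0.9, 0.8, 0.5], T=20_000, epsilon=1.0))
```

Growing the batch size geometrically across elimination rounds, in the spirit of VB-SDP-AE's variable batches, would change only how `n` is chosen inside the loop; each user still contributes a single bit to a single batch, which is what lets the per-batch privacy parameter serve as the user-level guarantee.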