Generalized Linear Bandits with Local Differential Privacy
Authors: Yuxuan Han, Zhipeng Liang, Yang Wang, Jiheng Zhang
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we conduct experiments with both simulation and real-world datasets to demonstrate the consistently superb performance of our algorithms under LDP constraints with reasonably small parameters (ε, δ) to ensure strong privacy protection. |
| Researcher Affiliation | Academia | The Hong Kong University of Science and Technology |
| Pseudocode | Yes | Algorithm 1: LDP Single-parameter Contextual Bandit and Algorithm 2: LDP Multi-parameter Contextual Bandit |
| Open Source Code | Yes | The source code to reproduce all the results is available at the GitHub repo liangzp/LDP-Bandit. |
| Open Datasets | Yes | On-Line Auto Lending dataset CRPM-12-001, provided by Columbia University (https://www8.gsb.columbia.edu/cprm/research/datasets); it has been used in the study of contextual bandits by [26, 11]. |
| Dataset Splits | No | The paper does not explicitly provide specific percentages or counts for training, validation, or test splits. It mentions 'synthetic datasets' and 'real-world datasets' without specifying how they were partitioned. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory, or cloud instance types used for running its experiments. It only mentions 'simulation' and 'real-world datasets' without hardware context. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. It refers to 'scikit-learn' through citation [25], but no version is specified. It also mentions 'LDP-UCB' and 'LDP-GLOC' as methods being compared, but these are not software dependencies in the sense of libraries with versions. |
| Experiment Setup | Yes | We evaluate all the four methods on two different privacy levels ε = 0.5 and 1 in synthetic datasets, which are industry standards. For the sake of comparison, the learning step parameter for LDP-GLOC and LDP-SGD are tuned in the same way. |
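For context on the LDP constraint the experiments impose: under ε-local differential privacy, each user randomizes their own data before it reaches the learner. A minimal sketch of this idea is the Laplace mechanism applied to a bounded reward. This is an illustrative assumption, not the paper's actual algorithm (the paper privatizes gradients/statistics inside its bandit algorithms); the function name `ldp_laplace` and the clipping bound are hypothetical.

```python
import numpy as np

def ldp_laplace(reward, epsilon, bound=1.0):
    """Privatize a reward assumed to lie in [-bound, bound] under epsilon-LDP.

    The value is clipped to the bound, then Laplace noise scaled to the
    range of the clipped value (sensitivity = 2 * bound) is added on the
    user side, so the server never sees the raw reward.
    """
    clipped = float(np.clip(reward, -bound, bound))
    sensitivity = 2.0 * bound
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return clipped + noise

# Toy usage at the paper's tighter privacy level, epsilon = 0.5.
np.random.seed(0)
rewards = np.random.uniform(-1, 1, size=10_000)
private = np.array([ldp_laplace(r, epsilon=0.5) for r in rewards])
# The noise is zero-mean, so aggregate statistics remain estimable
# even though each individual report is heavily randomized.
print(abs(private.mean() - rewards.mean()))
```

Smaller ε means a larger noise scale, which is why the paper's regret bounds (and its empirical results) degrade as the privacy requirement tightens.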