Locally Differentially Private (Contextual) Bandits Learning
Authors: Kai Zheng, Tianle Cai, Weiran Huang, Zhenguo Li, Liwei Wang
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | The work is primarily theoretical, deriving algorithms and regret bounds rather than reporting empirical results. |
| Researcher Affiliation | Collaboration | Kai Zheng (zhengk92@gmail.com), Kwai Inc.; Tianle Cai (caitianle1998@pku.edu.cn), School of Mathematical Sciences, Peking University, and Haihua Institute for Frontier Information Technology; Weiran Huang (weiran.huang@outlook.com), Huawei Noah's Ark Lab; Zhenguo Li (li.zhenguo@huawei.com), Huawei Noah's Ark Lab; Liwei Wang (wanglw@cis.pku.edu.cn), Key Laboratory of Machine Perception, MOE, School of EECS, Peking University, and Center for Data Science, Peking University |
| Pseudocode | Yes | Algorithm 1: One-Point Bandits Learning-LDP; Algorithm 2: Two-Point Feedback Private Bandit Convex Optimization via Black-box Reduction; Algorithm 3: Generalized Linear Bandits with LDP (see the illustrative sketch after the table) |
| Open Source Code | No | The paper does not provide a code repository; it only notes that the appendix can be found in the full version [43]. |
| Open Datasets | No | The paper focuses on theoretical analysis of algorithms and regret bounds, without using any publicly available datasets for empirical evaluation. |
| Dataset Splits | No | The paper is theoretical and does not report on empirical experiments that would involve dataset splits for training, validation, or testing. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU specifications, or cloud resources used for running experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with specific hyperparameters, training configurations, or system-level settings. |
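
For readers who want a concrete feel for the interaction pattern that the pseudocode names suggest, below is a minimal, hypothetical sketch. It is not the paper's algorithm (the paper treats bandit convex optimization and generalized linear bandits under LDP); it only illustrates the standard local-model loop in which each user perturbs their own reward with Laplace noise calibrated to a privacy budget epsilon before the server's bandit learner ever sees it. The function names, the UCB baseline, and the widened confidence radius are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)


def privatize_reward(reward, epsilon, reward_bound=1.0):
    """Laplace mechanism applied locally by each user (hypothetical helper).

    A reward in [0, reward_bound] has sensitivity reward_bound, so adding
    Laplace(scale = reward_bound / epsilon) noise before reporting gives
    epsilon-LDP for that single report.
    """
    return reward + rng.laplace(scale=reward_bound / epsilon)


def ldp_ucb(true_means, horizon, epsilon):
    """UCB run on privatized rewards only (illustrative, not the paper's method)."""
    k = len(true_means)
    counts = np.zeros(k)
    sums = np.zeros(k)  # sums of *privatized* rewards; the server never sees raw ones
    regret = 0.0
    for t in range(horizon):
        if t < k:
            arm = t  # pull each arm once to initialize
        else:
            means = sums / counts
            # Heuristically widen the radius by (1 + 1/epsilon) to account
            # for the extra variance of the Laplace noise.
            radius = (1 + 1 / epsilon) * np.sqrt(2 * np.log(t + 1) / counts)
            arm = int(np.argmax(means + radius))
        reward = float(rng.random() < true_means[arm])  # Bernoulli reward in {0, 1}
        counts[arm] += 1
        sums[arm] += privatize_reward(reward, epsilon)  # only this leaves the user
        regret += max(true_means) - true_means[arm]
    return regret


print(ldp_ucb([0.9, 0.8, 0.5], horizon=20_000, epsilon=1.0))
```

The (1 + 1/epsilon) widening is one common heuristic for noisy confidence bounds; it mirrors the 1/epsilon blow-up typical of regret guarantees in the local model, though the exact constants and algorithms in the paper differ.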