On Differentially Private Federated Linear Contextual Bandits

Authors: Xingyu Zhou, Sayak Ray Chowdhury

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we support our theoretical results with numerical evaluations over contextual bandit instances generated from both synthetic and real-life data." and, from Section 6 (Simulation Results and Conclusions), "We evaluate regret performance of Algorithm 1 under silo-level LDP and SDP, which we abbreviate as LDP-FedLinUCB and SDP-FedLinUCB, respectively."
Researcher Affiliation | Collaboration | Xingyu Zhou (Wayne State University, USA; xingyu.zhou@wayne.edu) and Sayak Ray Chowdhury (Microsoft Research, India; t-sayakr@microsoft.com)
Pseudocode | Yes | Algorithm 1, Private-FedLinUCB (an illustrative, non-authoritative sketch of a privately perturbed federated LinUCB silo is given after this table)
Open Source Code | No | The paper does not explicitly state that the code for the described methodology is open source, nor does it provide a link to a code repository.
Open Datasets | Yes | "We generate bandit instances from the Microsoft Learning to Rank dataset (Qin & Liu, 2013)."
Dataset Splits | No | The paper discusses the use of synthetic and real-life data for evaluation but does not specify explicit training/validation/test dataset splits (e.g., percentages, sample counts, or predefined splits).
Hardware Specification | No | The paper does not specify any particular hardware (e.g., GPU/CPU models, memory, or cloud instances) used for running the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., programming languages, libraries, or frameworks) used for implementation or experimentation.
Experiment Setup | Yes | "We fix confidence level α = 0.01, batch size B = 25, and study comparative performance under varying privacy budgets ε, δ." A hypothetical driver loop reflecting these parameters follows this table.
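
The pseudocode row above only records that Algorithm 1 (Private-FedLinUCB) appears in the paper; the algorithm itself is not reproduced in this report. The sketch below is a minimal, non-authoritative illustration of the general idea: a federated LinUCB silo that perturbs its shared sufficient statistics with Gaussian-mechanism noise before communication. The class name `PrivateLinUCBSilo`, the simplified confidence width, the unit-sensitivity assumption, and the noise calibration are assumptions made here for illustration; they do not reproduce the paper's Algorithm 1, its communication schedule, or its privacy analysis.

```python
import numpy as np


def gaussian_noise_scale(eps: float, delta: float, sensitivity: float = 1.0) -> float:
    """Standard Gaussian-mechanism calibration: sigma = sqrt(2 ln(1.25/delta)) * sensitivity / eps.
    Assumes unit-norm contexts and bounded rewards so that sensitivity ~ 1 (an assumption here)."""
    return np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / eps


class PrivateLinUCBSilo:
    """One silo (agent) of a federated LinUCB learner (illustrative sketch only).

    Local sufficient statistics (Gram matrix V and reward vector u) are perturbed
    with Gaussian noise before being released, mimicking silo-level privacy.
    Batched communication every B rounds stands in for a communication schedule.
    """

    def __init__(self, dim: int, eps: float, delta: float, alpha: float = 0.01,
                 reg: float = 1.0, rng=None):
        self.dim = dim
        self.alpha = alpha                       # confidence level (alpha = 0.01 in the reported setup)
        self.reg = reg                           # ridge regularization
        self.sigma = gaussian_noise_scale(eps, delta)
        self.rng = rng or np.random.default_rng(0)
        self.V_local = np.zeros((dim, dim))      # unshared local Gram matrix since last communication
        self.u_local = np.zeros(dim)             # unshared local reward vector since last communication
        self.V_shared = reg * np.eye(dim)        # aggregate statistics received so far
        self.u_shared = np.zeros(dim)

    def select_arm(self, contexts: np.ndarray) -> int:
        """UCB arm selection on the (privately aggregated) statistics."""
        V_inv = np.linalg.inv(self.V_shared)
        theta_hat = V_inv @ self.u_shared
        beta = 1.0 + np.sqrt(2.0 * np.log(1.0 / self.alpha))   # simplified confidence width (assumption)
        bonus = np.sqrt(np.einsum('ad,dk,ak->a', contexts, V_inv, contexts))
        return int(np.argmax(contexts @ theta_hat + beta * bonus))

    def observe(self, context: np.ndarray, reward: float) -> None:
        """Accumulate local statistics from one round of interaction."""
        self.V_local += np.outer(context, context)
        self.u_local += reward * context

    def communicate(self):
        """Release noisy local statistics and reset the local buffer."""
        noise_V = self.rng.normal(0.0, self.sigma, (self.dim, self.dim))
        noise_V = (noise_V + noise_V.T) / 2.0    # keep the perturbed Gram matrix symmetric
        noise_u = self.rng.normal(0.0, self.sigma, self.dim)
        msg = (self.V_local + noise_V, self.u_local + noise_u)
        self.V_local = np.zeros((self.dim, self.dim))
        self.u_local = np.zeros(self.dim)
        return msg

    def receive(self, V_agg: np.ndarray, u_agg: np.ndarray) -> None:
        """Incorporate the server's aggregate of all silos' noisy statistics."""
        self.V_shared += V_agg
        self.u_shared += u_agg
```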
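
The experiment-setup row quotes α = 0.01 and batch size B = 25 with varying privacy budgets ε, δ. The hypothetical driver loop below plugs those two reported parameters into the `PrivateLinUCBSilo` sketch above; the horizon T, dimension d, number of arms and silos, the synthetic reward model, and the specific (ε, δ) pairs are placeholders chosen for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical end-to-end run using the PrivateLinUCBSilo sketch above.
# alpha = 0.01 and batch size B = 25 match the reported setup; everything else
# (horizon, dimension, arms, silos, noise level, privacy budgets) is a placeholder.
T, B, d, num_arms, num_silos = 5000, 25, 10, 5, 3
rng = np.random.default_rng(1)
theta_star = rng.normal(size=d) / np.sqrt(d)           # synthetic linear bandit instance

for eps, delta in [(1.0, 1e-6), (0.2, 1e-6)]:          # placeholder privacy budgets
    silos = [PrivateLinUCBSilo(d, eps, delta, alpha=0.01, rng=rng) for _ in range(num_silos)]
    regret = 0.0
    for t in range(1, T + 1):
        for silo in silos:
            contexts = rng.normal(size=(num_arms, d))
            a = silo.select_arm(contexts)
            reward = float(contexts[a] @ theta_star) + 0.1 * rng.normal()
            silo.observe(contexts[a], reward)
            regret += float((contexts @ theta_star).max() - contexts[a] @ theta_star)
        if t % B == 0:                                  # batched private communication
            msgs = [silo.communicate() for silo in silos]
            V_agg = sum(m[0] for m in msgs)
            u_agg = sum(m[1] for m in msgs)
            for silo in silos:
                silo.receive(V_agg, u_agg)
    print(f"eps={eps}, delta={delta}: cumulative pseudo-regret ~ {regret:.1f}")
```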