Data Poisoning Attacks on Stochastic Bandits

Authors: Fang Liu, Ness Shroff

ICML 2019

Reproducibility Variable | Result | LLM Response

Research Type | Experimental | "We evaluate our attack strategies by numerical results. Our attack strategies are efficient in forcing the bandit algorithms to pull a target arm at a relatively small cost. Our results expose a significant security threat, as bandit algorithms are widely employed in real-world applications."

Researcher Affiliation | Academia | Department of Electrical and Computer Engineering and Department of Computer Science and Engineering, The Ohio State University, Columbus, Ohio, USA.

Pseudocode | No | The paper describes its algorithms and strategies in prose and mathematical formulas but includes no explicit pseudocode or algorithm blocks.

Open Source Code | Yes | "All the simulations are run in MATLAB and the codes can be found in the supplemental materials."

Open Datasets | No | The paper uses simulated data generated from specified distributions (Gaussian noise, uniformly distributed expected rewards) rather than a pre-existing, publicly available dataset with concrete access information.

Dataset Splits | No | The paper describes simulations with a time horizon T (e.g., T = 1000 or T = 10^5 rounds) but specifies no distinct training, validation, or test splits, since data are generated online in the bandit setting.

Hardware Specification | No | The paper states "All the simulations are run in MATLAB" but gives no hardware details, such as CPU/GPU models or memory.

Software Dependencies | No | The paper mentions that all simulations are run in MATLAB but provides no version number for MATLAB or any other software dependency.

Experiment Setup | Yes | "The bandit has K = 5 arms and the reward noise is a Gaussian distribution N(0, σ²) with σ = 0.1. We set T = 1000 and the error tolerance to δ = 0.05. Then we set the margin parameter as ξ = 0.001..."
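The reported setup above can be sketched as a small simulation. This is a hypothetical Python rendering (the paper's own simulations were run in MATLAB, and its code lives in the supplemental materials); the UCB1 learner here is an illustrative stand-in for the victim bandit algorithm, and the poisoning attack itself is not reproduced.

```python
import numpy as np

# Hypothetical sketch of the reported environment: K = 5 arms, Gaussian
# reward noise N(0, sigma^2) with sigma = 0.1, horizon T = 1000, error
# tolerance delta = 0.05, margin parameter xi = 0.001. The expected
# rewards are drawn uniformly, as described in the review.

rng = np.random.default_rng(0)

K, sigma, T = 5, 0.1, 1000
delta, xi = 0.05, 0.001             # attack-analysis parameters (unused by plain UCB1)
mu = rng.uniform(0.0, 1.0, size=K)  # expected rewards, drawn uniformly

def pull(arm):
    """Noisy reward for the chosen arm: mu[arm] + N(0, sigma^2)."""
    return mu[arm] + sigma * rng.standard_normal()

counts = np.zeros(K)  # pulls per arm
means = np.zeros(K)   # running empirical means
for t in range(T):
    if t < K:
        arm = t  # pull each arm once to initialize
    else:
        ucb = means + np.sqrt(2.0 * np.log(t + 1) / counts)
        arm = int(np.argmax(ucb))
    r = pull(arm)
    counts[arm] += 1
    means[arm] += (r - means[arm]) / counts[arm]

print(int(counts.sum()))  # prints 1000 (total pulls over the horizon)
```

Because the bandit data are generated online like this, there is no fixed dataset to split, which is consistent with the "Dataset Splits: No" finding above.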