Achieving Privacy in the Adversarial Multi-Armed Bandit

Authors: Aristide Tossou, Christos Dimitrakakis

AAAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we run experiments that clearly demonstrate the validity of our theoretical analysis."
Researcher Affiliation | Academia | Aristide C. Y. Tossou (Chalmers University of Technology, Gothenburg, Sweden; aristide@chalmers.se); Christos Dimitrakakis (University of Lille, France; Chalmers University of Technology, Sweden; Harvard University, USA; christos.dimitrakakis@gmail.com)
Pseudocode | Yes | Algorithm 1: DP-EXP3-Lap
Open Source Code | No | The paper does not provide access to source code for the described methodology.
Open Datasets | No | The paper generates gains from adversary models (e.g., Bern(0.55), Bern(0.5)) rather than using a pre-existing publicly available dataset, and provides no access information for the generated data.
Dataset Splits | No | The paper does not provide training/validation/test splits; in its online adversarial multi-armed bandit setting, gains are generated dynamically rather than drawn from a fixed dataset.
Hardware Specification | No | The paper does not report the hardware (e.g., CPU/GPU models, processor types, or memory) used to run its experiments.
Software Dependencies | No | The paper does not list ancillary software with version numbers (e.g., libraries or solvers) needed to replicate the experiments.
Experiment Setup | Yes | For all experiments, the horizon is T = 2^18 and the number of arms is K = 4. We performed 720 independent trials and reported the median-of-means estimator... We set the number of groups to a0 = 24...
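Since no source code is available, re-implementation starts from the pseudocode. The following is a minimal, illustrative sketch of an EXP3-style learner with Laplace-perturbed gains in the spirit of DP-EXP3-Lap; the noise scale 1/epsilon, the [0, 1] acceptance test for noisy gains, and the parameter names are assumptions here, not the paper's exact rule — Algorithm 1 in the paper is authoritative.

```python
import math
import random

def dp_exp3_lap_sketch(gain_fn, K=4, T=1 << 18, epsilon=0.1, gamma=0.01):
    """Illustrative EXP3 variant with Laplace-noised gains.

    gain_fn(t, arm) -> gain in [0, 1]. The Laplace scale (1/epsilon) and
    the in-range acceptance test are assumptions, not the paper's rule.
    Returns the total (true, un-noised) gain collected over T rounds.
    """
    w = [1.0] * K
    total = 0.0
    for t in range(T):
        sw = sum(w)
        w = [wi / sw for wi in w]  # normalize to avoid overflow; ratios unchanged
        # EXP3 mixture: exploit weights, explore uniformly with prob. gamma
        p = [(1 - gamma) * wi + gamma / K for wi in w]
        arm = random.choices(range(K), weights=p)[0]
        g = gain_fn(t, arm)
        total += g
        # Draw Laplace(0, 1/epsilon) noise via the inverse-CDF method
        u = random.random() - 0.5
        noisy = g - math.copysign(math.log(1 - 2 * abs(u)), u) / epsilon
        if 0.0 <= noisy <= 1.0:  # accept only in-range noisy gains (assumed)
            # Importance-weighted exponential update on the played arm
            w[arm] *= math.exp(gamma * noisy / (p[arm] * K))
    return total
```

A gain function matching the paper's stochastic adversaries (e.g., one arm Bern(0.55), the rest Bern(0.5)) can be plugged in as `gain_fn` to reproduce the experimental setting at a smaller horizon.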
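The median-of-means estimator used for reporting is straightforward to reproduce: partition the 720 trial outcomes into a0 = 24 groups of 30, average within each group, and take the median of the group means. A self-contained sketch (function name and even-division assumption are ours):

```python
import statistics

def median_of_means(samples, num_groups):
    """Median-of-means: partition samples into equal groups, average each
    group, and return the median of the group means. More robust to
    heavy-tailed outliers than the plain sample mean."""
    n = len(samples)
    assert n % num_groups == 0, "sketch assumes groups divide evenly"
    size = n // num_groups
    means = [sum(samples[i * size:(i + 1) * size]) / size
             for i in range(num_groups)]
    return statistics.median(means)
```

With 720 samples and `num_groups=24`, each group holds 30 trials, matching the paper's reported configuration; a single extreme trial can corrupt at most one group mean and therefore leaves the median essentially unchanged.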