Federated Bayesian Optimization via Thompson Sampling

Authors: Zhongxiang Dai, Bryan Kian Hsiang Low, Patrick Jaillet

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate the empirical effectiveness of FTS in terms of communication efficiency, computational efficiency, and practical performance using a landmine detection experiment and two activity recognition experiments using Google glasses and mobile phone sensors (Section 5).
Researcher Affiliation | Academia | Dept. of Computer Science, National University of Singapore, Republic of Singapore; Dept. of Electrical Engineering and Computer Science, MIT, USA
Pseudocode | Yes | Algorithm 1: Federated Thompson Sampling (FTS)
Open Source Code | No | The paper does not provide any links to source code or explicit statements about its public release (e.g., on GitHub, supplementary material, or a specific repository).
Open Datasets | Yes | For real-world experiments, we use 3 datasets generated in federated settings that naturally contain heterogeneous agents [51]. Firstly, we use a landmine detection dataset in which the landmine fields are located at two different terrains [58]. Next, we use two activity recognition datasets collected using Google glasses [44] and mobile phone sensors [1]...
Dataset Splits | No | The paper states "We use validation error as the performance metric for the two activity recognition experiments", but it does not provide specific details on the train/validation/test dataset splits (e.g., exact percentages, sample counts, or splitting methodology).
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory specifications) used to run the experiments.
Software Dependencies | No | The paper does not specify version numbers for any software dependencies or libraries used in the experiments.
Experiment Setup | Yes | For all experiments, we set P_N to be uniform: P_N[n] = 1/N, ∀ n = 1, ..., N, and p_t = 1 - 1/t^2 for all t ∈ Z^+ \ {1}, with p_1 = p_2. We use validation error as the performance metric for the two activity recognition experiments, and use area under the receiver operating characteristic curve (AUC) to measure the performance of the landmine detection experiment... each of whom has completed a BO task of t_n = 50 iterations. Since it has been repeatedly observed that the theoretical choice of β_t that is used to establish the confidence interval is overly conservative [2, 52], we set it to a constant: β_t = 1.0.
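To make the quoted setup concrete, below is a minimal sketch of one FTS-style query selection on a discrete candidate grid, using the reported choices of a uniform P_N and p_t = 1 - 1/t^2 (with p_1 = p_2). This is an illustration, not the authors' released implementation: the paper's Algorithm 1 exchanges random-Fourier-feature samples between agents, whereas here each agent "message" is assumed to be a vector of sampled surrogate values on a shared grid, and the toy GP, kernel hyperparameters, and the fts_choose_next helper are hypothetical.

```python
import numpy as np


def rbf_kernel(a, b, lengthscale=0.2):
    """Squared-exponential kernel on 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)


def gp_posterior(x_obs, y_obs, x_cand, noise=1e-2, lengthscale=0.2):
    """Closed-form GP posterior mean and covariance on a candidate grid."""
    k_oo = rbf_kernel(x_obs, x_obs, lengthscale) + noise * np.eye(len(x_obs))
    k_oc = rbf_kernel(x_obs, x_cand, lengthscale)
    k_cc = rbf_kernel(x_cand, x_cand, lengthscale)
    k_inv = np.linalg.inv(k_oo)
    mean = k_oc.T @ k_inv @ y_obs
    cov = k_cc - k_oc.T @ k_inv @ k_oc
    return mean, cov


def fts_choose_next(t, x_obs, y_obs, x_cand, agent_messages, rng, p_dist=None):
    """One FTS-style query selection (simplified, hypothetical helper).

    With probability p_t = 1 - 1/t^2 (and p_1 = p_2, as in the quoted setup)
    the agent runs standard Thompson sampling from its own GP posterior;
    otherwise it draws another agent n ~ P_N and maximizes that agent's
    sampled surrogate. Here each message is simplified to a vector of
    surrogate values on the shared grid rather than an RFF sample.
    """
    n_agents = len(agent_messages)
    if p_dist is None:
        p_dist = np.full(n_agents, 1.0 / n_agents)   # uniform P_N, as in the experiments
    p_t = 1.0 - 1.0 / max(t, 2) ** 2                 # clamp implements p_1 = p_2

    if rng.random() < p_t:
        # Standard Thompson sampling: maximize one draw from the own GP posterior.
        mean, cov = gp_posterior(x_obs, y_obs, x_cand)
        jitter = 1e-6 * np.eye(len(x_cand))          # numerical stabilisation
        sample = mean + np.linalg.cholesky(cov + jitter) @ rng.standard_normal(len(x_cand))
    else:
        # Federated step: reuse a sampled surrogate received from agent n ~ P_N.
        n = rng.choice(n_agents, p=p_dist)
        sample = agent_messages[n]
    return x_cand[np.argmax(sample)]


# Toy usage on a 1-D objective with 3 (fake) collaborating agents.
rng = np.random.default_rng(0)
x_cand = np.linspace(0.0, 1.0, 200)


def objective(x):
    return np.sin(6.0 * x)


x_obs = np.array([0.1, 0.8])
y_obs = objective(x_obs)
messages = [rng.standard_normal(len(x_cand)) for _ in range(3)]  # stand-ins for RFF samples
x_next = fts_choose_next(t=3, x_obs=x_obs, y_obs=y_obs, x_cand=x_cand,
                         agent_messages=messages, rng=rng)
print("next query:", x_next)
```

The max(t, 2) clamp encodes the p_1 = p_2 convention from the quoted setup; everything else follows the usual Thompson-sampling pattern of maximizing a single posterior sample, with the federated branch substituting a surrogate received from another agent. The constant β_t = 1.0 mentioned in the setup relates to the confidence-interval construction and is not needed in this sampling sketch.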