Differentially Private Federated Bayesian Optimization with Distributed Exploration

Authors: Zhongxiang Dai, Bryan Kian Hsiang Low, Patrick Jaillet

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We also use real-world experiments to show that DP-FTS-DE achieves high utility (competitive performance) with a strong privacy guarantee (small privacy loss) and induces a trade-off between privacy and utility. [...] Next, we empirically demonstrate that DP-FTS-DE delivers an effective performance with a strong privacy guarantee and induces a favorable trade-off between privacy and utility in real-world applications (Sec. 5). [...] 5 Experiments
Researcher Affiliation | Academia | Dept. of Computer Science, National University of Singapore, Republic of Singapore; Dept. of Electrical Engineering and Computer Science, MIT, USA
Pseudocode | Yes | Algorithm 1 DP-FTS-DE (central server) [...] Algorithm 2 BO-Agent-A_n(t, ω^joint_{t-1} = (ω^(i)_{t-1})_{i ∈ [P]}). (A minimal sketch of the server-side aggregation step appears after the table.)
Open Source Code | Yes | Our code is here: https://github.com/daizhongxiang/Differentially-Private-Federated-Bayesian-Optimization
Open Datasets | Yes | We adopt 3 commonly used datasets in FL and FBO [12, 60]. We firstly use a landmine detection dataset with N = 29 landmine fields [66] and tune 2 hyperparameters of SVM for landmine detection. Next, we use data collected using mobile phone sensors when N = 30 subjects are performing 6 activities [2] and tune 3 hyperparameters of logistic regression for activity classification. Lastly, we use the images of handwritten characters by N = 50 persons from EMNIST (a commonly used benchmark in FL) [8] and tune 3 hyperparameters of a convolutional neural network used for image classification. (A loading sketch for these datasets appears after the table.)
Dataset Splits | No | The paper discusses the use of datasets but does not explicitly provide details about train/validation/test splits, percentages, or methodologies for creating these splits. It mentions 'validation error' in figures but not how the validation set was formed.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments. It only mentions 'growing computational capability of edge devices' in the introduction, which is not an experimental setup detail.
Software Dependencies | No | The paper does not explicitly list software dependencies with version numbers.
Experiment Setup | Yes | In all 3 experiments, we choose P = 4, S = 22.0, M = 100, and 1 − p_t = 1/t. [...] For example, in the synthetic experiments, we set N = 200, N_init = 10, M = 50, 1 − p_t = 1/t. In the landmine detection experiment, N = 29, N_init = 10, P = 4, S = 22, M = 100, 1 − p_t = 1/t. For the human activity recognition, N = 30, N_init = 10, P = 4, S = 22, M = 100, 1 − p_t = 1/t. For the EMNIST dataset, N = 50, N_init = 10, P = 4, S = 22, M = 100, 1 − p_t = 1/t. (These settings are collected into a configuration sketch after the table.)
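
The pseudocode row names Algorithm 1 (the central server) and Algorithm 2 (each BO agent). Below is a minimal Python sketch of just the server-side step under the Gaussian mechanism: clip each agent's message to L2 norm S, add calibrated noise, and average. The function name dp_aggregate, the flat-vector message format, and the noise scale sigma * S are illustrative assumptions; the paper's full mechanism additionally uses agent subsampling and P sub-groups for distributed exploration.

    import numpy as np

    def dp_aggregate(omegas, S, sigma, rng):
        # omegas: list of per-agent parameter vectors (1-D arrays).
        # S:      L2 clipping threshold (the paper's S, e.g. 22.0).
        # sigma:  noise multiplier; the noise std is sigma * S.
        # Clip each agent's vector so its L2 norm is at most S.
        clipped = [w * min(1.0, S / (np.linalg.norm(w) + 1e-12)) for w in omegas]
        # Perturb the sum with Gaussian noise calibrated to the clipped sensitivity.
        noisy_sum = np.sum(clipped, axis=0) + rng.normal(0.0, sigma * S, size=clipped[0].shape)
        # The server averages and broadcasts this vector back to the agents.
        return noisy_sum / len(omegas)

    # Toy usage: 20 agents, 128-dimensional messages.
    rng = np.random.default_rng(0)
    omega_joint = dp_aggregate([rng.normal(size=128) for _ in range(20)],
                               S=22.0, sigma=1.0, rng=rng)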
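For the datasets row, EMNIST is directly downloadable through torchvision, while the landmine and activity-recognition datasets are usually distributed as standalone files. The sketch below shows one plausible way to load two of them; the file path data/LandmineData.mat and the EMNIST split "balanced" are assumptions for illustration, not confirmed choices from the paper.

    from scipy.io import loadmat
    from torchvision.datasets import EMNIST

    # EMNIST handwritten characters; "balanced" is one of the standard
    # splits and is an assumption here, not necessarily the paper's choice.
    emnist_train = EMNIST(root="data", split="balanced", train=True, download=True)

    # Landmine detection data are commonly shipped as a .mat file holding a
    # feature matrix and label vector per landmine field (path is assumed).
    landmine = loadmat("data/LandmineData.mat")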
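Finally, the experiment-setup row quotes one set of hyperparameters per experiment, collected below into a single configuration dictionary. The key names and the p_t helper are illustrative; reading M as the number of random Fourier features follows the FTS line of work and is an assumption, as is the convention that 1 − p_t is the probability of exploiting the aggregated information at iteration t.

    # Reported settings: N agents, N_init initial points, P sub-groups for
    # distributed exploration, S clipping threshold, M random features.
    EXPERIMENT_CONFIGS = {
        "synthetic": {"N": 200, "N_init": 10, "M": 50},
        "landmine":  {"N": 29,  "N_init": 10, "P": 4, "S": 22.0, "M": 100},
        "activity":  {"N": 30,  "N_init": 10, "P": 4, "S": 22.0, "M": 100},
        "emnist":    {"N": 50,  "N_init": 10, "P": 4, "S": 22.0, "M": 100},
    }

    def p_t(t: int) -> float:
        # Reported schedule 1 - p_t = 1/t: reliance on the aggregated
        # (federated) information decays as the BO iterations progress.
        return 1.0 - 1.0 / t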