Multi-Agent Best Arm Identification with Private Communications

Authors: Alexandre Rio, Merwan Barlier, Igor Colin, Marta Soare

ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on various settings support our theoretical findings.
Researcher Affiliation | Collaboration | (1) Huawei Noah's Ark Lab, Paris, France; (2) Université d'Orléans, Université Grenoble Alpes, CNRS, LIG, France.
Pseudocode | Yes | Algorithm 1 Multi-Agent Successive Elimination (MASE) (...) Algorithm 2 DP-MASE (...) Algorithm 3 CORRUPTED ELIMINATION (...) Algorithm 4 DP-MASE Local Elimination (...) (A generic successive-elimination sketch appears below the table.)
Open Source Code | No | The paper does not provide an explicit statement about open-sourcing code or a link to a code repository for the methodology described.
Open Datasets | Yes | We assess the performance of our methods on different stochastic environments suggested in Féraud et al. (2019). In these problems, there are K = 10 arms with means µ1 = 0.7, µ2 = 0.5, µ3 = 0.3, and µk = 0.1 for k = 4, ..., 10. (...) Problem 2: K = 10 arms with Gaussian distributions and means µ1 = 11, µ2 = 10.8, µk = 10.4 for k = 3, ..., 10. (A sketch of these environments appears below the table.)
Dataset Splits | No | The paper does not explicitly mention train, validation, or test dataset splits. It describes simulated stochastic environments ("problems") with specific arm configurations and distributions (Bernoulli, Gaussian) taken from prior work, rather than traditional dataset splits for model training and evaluation.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments. It only mentions general concepts like "multiple computing nodes".
Software Dependencies | No | The paper does not list any specific software dependencies with version numbers (e.g., programming languages, libraries, frameworks, or solvers).
Experiment Setup | Yes | We run experiments with N ranging from 64 to 1024 agents to evaluate how performance is affected by the scale of the problem. For both methods, we fix the global failure probability δ to 5%. Each data point is an average value over 10 runs, and 95% confidence intervals are shown for every plot. (...) We evaluate the performance of our algorithms for different values of the privacy parameters (ϵ for DP-MASE, η and ξ for CORRUPTED ELIMINATION). (...) The apparent privacy level η is set to 0.9. In accordance with Proposition 5.1, for each value of ξ, we compute η_ξ = 1 − (1 − η)/(1 − ξ)^(K−1), and then run local BAI subroutines with confidence 1 − η_ξ, which guarantees the same apparent privacy level and hence a fair comparison. (A sketch of this calibration appears below the table.)
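The pseudocode itself is not reproduced in this report. For orientation only, the following is a minimal single-agent successive-elimination loop for best-arm identification, assuming rewards bounded in [0, 1]; it is not the paper's MASE, DP-MASE, or CORRUPTED ELIMINATION, which add multi-agent coordination and private communication on top of this elimination principle. The `pull` callback and the confidence-radius constant are illustrative choices.

```python
import numpy as np

def successive_elimination(pull, n_arms, delta, max_rounds=100_000):
    """Textbook single-agent successive elimination for best-arm identification.

    `pull(k)` returns one stochastic reward in [0, 1] for arm k (assumption).
    Generic sketch only; not the paper's multi-agent or private variants.
    """
    active = list(range(n_arms))
    means = np.zeros(n_arms)
    for t in range(1, max_rounds + 1):
        # Sample every surviving arm once per round and update its running mean
        # (each active arm has been pulled exactly t times at this point).
        for k in active:
            means[k] += (pull(k) - means[k]) / t
        # Hoeffding-style radius with a union bound over arms and rounds.
        radius = np.sqrt(np.log(4 * n_arms * t * t / delta) / (2 * t))
        best_lcb = max(means[k] - radius for k in active)
        # Drop arms whose upper confidence bound falls below the best lower bound.
        active = [k for k in active if means[k] + radius >= best_lcb]
        if len(active) == 1:
            return active[0]
    return max(active, key=lambda k: means[k])
```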
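The two benchmark environments quoted in the Open Datasets row are fully specified by their arm means and reward distributions. A minimal sketch, assuming Bernoulli rewards for Problem 1 and unit-variance Gaussian rewards for Problem 2 (the variance is not stated in the quoted excerpt):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Problem 1 (Féraud et al., 2019): K = 10 Bernoulli arms.
means_p1 = np.array([0.7, 0.5, 0.3] + [0.1] * 7)
def pull_p1(k):
    return rng.binomial(1, means_p1[k])

# Problem 2: K = 10 Gaussian arms; unit variance is an assumption, the quoted
# excerpt only specifies the means.
means_p2 = np.array([11.0, 10.8] + [10.4] * 8)
def pull_p2(k):
    return rng.normal(means_p2[k], 1.0)
```

Combined with the elimination sketch above, `successive_elimination(pull_p1, n_arms=10, delta=0.05)` should return arm 0 (the arm with mean 0.7) with probability at least 1 − δ.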
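The Experiment Setup row describes a calibration step for CORRUPTED ELIMINATION: for a fixed apparent privacy level η and each corruption level ξ, the local BAI subroutines are run with confidence 1 − η_ξ so that every setting provides the same apparent privacy. A minimal sketch of that bookkeeping is below; the closed form of η_ξ follows the Proposition 5.1 expression quoted above and should be checked against the paper, and the ξ values in the loop are illustrative rather than the ones used in the experiments.

```python
K = 10                             # number of arms
DELTA = 0.05                       # global failure probability δ
ETA = 0.9                          # apparent privacy level η
N_AGENTS = [64, 128, 256, 512, 1024]  # problem scales reported in the paper
N_RUNS = 10                        # each reported point averages 10 runs

def eta_xi(eta, xi, n_arms=K):
    """Local failure level η_ξ for a given corruption level ξ.

    Form quoted in the setup row (Proposition 5.1):
    η_ξ = 1 − (1 − η) / (1 − ξ)**(K − 1).
    """
    return 1.0 - (1.0 - eta) / (1.0 - xi) ** (n_arms - 1)

for xi in [0.01, 0.05, 0.1]:       # illustrative corruption levels (assumption)
    local_confidence = 1.0 - eta_xi(ETA, xi)
    print(f"xi={xi}: run local BAI subroutines with confidence {local_confidence:.3f}")
```

Note that the resulting local confidence is deliberately low: each of the N agents runs an unreliable (and hence more private) local subroutine, and reliability is recovered by aggregating across agents.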