Communication-Efficient Federated Non-Linear Bandit Optimization
Authors: Chuanhao Li, Chong Liu, Yu-Xiang Wang
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical evaluations also demonstrate the effectiveness of the proposed algorithm. ... Our empirical evaluations show Fed-GO-UCB outperforms existing federated bandit algorithms, which demonstrates the effectiveness of generic non-linear function optimization, ... 6 EXPERIMENTS In order to evaluate Fed-GO-UCB's empirical performance and validate our theoretical results in Theorem 5, we conducted experiments on both synthetic and real-world datasets. |
| Researcher Affiliation | Academia | Chuanhao Li (Yale University), Chong Liu (University of Chicago), Yu-Xiang Wang (University of California, Santa Barbara) |
| Pseudocode | Yes | Algorithm 1 Fed-GO-UCB ... Algorithm 2 Distributed-GLD-Update |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository. |
| Open Datasets | Yes | For synthetic dataset, we consider two test functions, f₁(x) = Σᵢ₌₁⁴ αᵢ exp(−Σⱼ₌₁⁶ Aᵢⱼ(xⱼ − Pᵢⱼ)²) ... and f₂(x) = 0.1 Σᵢ₌₁⁸ cos(5πxᵢ) − Σᵢ₌₁⁸ xᵢ². ... Magic Telescope and Shuttle from the UCI Machine Learning Repository (Dua & Graff, 2017). |
| Dataset Splits | No | The paper does not specify explicit training, validation, and test splits for the datasets used in the experiments. It describes the total time steps (T) and number of clients (N) for the bandit simulation. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models or types of machines used for running the experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers, such as programming languages, libraries, or frameworks. |
| Experiment Setup | Yes | For synthetic dataset, we consider two test functions, f₁(x) = Σᵢ₌₁⁴ αᵢ exp(−Σⱼ₌₁⁶ Aᵢⱼ(xⱼ − Pᵢⱼ)²) ... and f₂(x) = 0.1 Σᵢ₌₁⁸ cos(5πxᵢ) − Σᵢ₌₁⁸ xᵢ². The decision set X is finite (with \|X\| = 50), and is generated by uniformly sampling from [0, 1]⁶ and [−1, 1]⁸, respectively. We choose F to be a neural network with two linear layers, i.e., the model f̂(x) = W₂σ(W₁x + c₁) + c₂, where the parameters W₁ ∈ ℝ^(25×d_x), c₁ ∈ ℝ²⁵, W₂ ∈ ℝ²⁵, c₂ ∈ ℝ, and σ(z) = 1/(1 + exp(−z)). ... T = 100 and N = 20. |
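The experiment setup quoted above can be sketched in code. The following NumPy snippet is a minimal rendition of the two synthetic test functions and the two-layer sigmoid model; it is an illustration, not the authors' implementation. The excerpt does not list the constants α, A, and P for f₁ (a Hartmann-style function), so random placeholders are used for them here.

```python
import numpy as np

rng = np.random.default_rng(0)

# f1 is a Hartmann-style function; the paper excerpt does not give the
# constants alpha, A, P, so random placeholder values are used below.
alpha = rng.uniform(0.5, 1.5, size=4)        # placeholder (assumption)
A = rng.uniform(0.1, 10.0, size=(4, 6))      # placeholder (assumption)
P = rng.uniform(0.0, 1.0, size=(4, 6))       # placeholder (assumption)

def f1(x):
    """f1(x) = sum_i alpha_i exp(-sum_j A_ij (x_j - P_ij)^2), x in [0, 1]^6."""
    return float(np.sum(alpha * np.exp(-np.sum(A * (x - P) ** 2, axis=1))))

def f2(x):
    """f2(x) = 0.1 sum_i cos(5*pi*x_i) - sum_i x_i^2, x in [-1, 1]^8."""
    return float(0.1 * np.sum(np.cos(5 * np.pi * x)) - np.sum(x ** 2))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def model(x, W1, c1, W2, c2):
    """Two-layer surrogate f_hat(x) = W2 . sigmoid(W1 x + c1) + c2."""
    return float(W2 @ sigmoid(W1 @ x + c1) + c2)

# Finite decision set for f2: |X| = 50 points uniform on [-1, 1]^8.
X = rng.uniform(-1.0, 1.0, size=(50, 8))
best_arm = max(X, key=f2)
```

With d_x = 8 for f₂, the model parameters would have shapes W1: (25, 8), c1: (25,), W2: (25,), c2: scalar, matching the ℝ^(25×d_x), ℝ²⁵, ℝ²⁵, ℝ dimensions stated in the table.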