Federated Online and Bandit Convex Optimization

Authors: Kumar Kshitij Patel, Lingxiao Wang, Aadirupa Saha, Nathan Srebro

ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | Our work is the first attempt towards a systematic understanding of federated online optimization with limited feedback, and it attains tight regret bounds in the intermittent communication setting for both first- and zeroth-order feedback. Our results thus bridge the gap between stochastic and adaptive settings in federated online optimization.
Researcher Affiliation | Collaboration | Kumar Kshitij Patel (1), Lingxiao Wang (1), Aadirupa Saha (2), Nati Srebro (1) [...] 1: TTIC; 2: Apple.
Pseudocode | Yes | Algorithm 1: Non-collaborative OGD (η) [...] Algorithm 2: FEDPOSGD (η, δ) with one-point bandit feedback [...] Algorithm 3: FEDOSGD (η, δ) with two-point bandit feedback. (An illustrative sketch of this style of update follows the table.)
Open Source Code | No | The paper does not contain any statement about releasing open-source code for the described methods, nor does it provide a link to a code repository.
Open Datasets | No | The paper is theoretical and does not conduct experiments on datasets. It discusses 'function classes' such as F_{G,B} and F_{H,B}, which are mathematical definitions, not actual datasets.
Dataset Splits | No | The paper is theoretical and does not involve empirical experiments with training, validation, or test dataset splits.
Hardware Specification | No | The paper is theoretical and does not describe any computational experiments or hardware specifications used for running them.
Software Dependencies | No | The paper is theoretical and does not mention any software implementations or dependencies with version numbers.
Experiment Setup | No | The paper describes theoretical choices for algorithm parameters (e.g., choosing the step size η to scale as B/(GKd^{1/4}) and the perturbation radius δ as Bd^{1/4}), but these are part of the mathematical analysis and proofs, not specific hyperparameter values or training settings for an empirical experimental setup. (A sketch of how such settings plug into the updates follows the table.)
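
The paper specifies its methods only as pseudocode. As a rough illustration of the style of update that Algorithm 3 (FEDOSGD with two-point bandit feedback) describes, below is a minimal Python sketch combining a standard two-point zeroth-order gradient estimate with intermittent communication. The function names, the random-direction sampler, and the fixed per-client losses are our assumptions for illustration, not the paper's exact construction (in the online setting the losses change at every step).

```python
import numpy as np

def two_point_grad_estimate(f, x, delta, rng):
    """Standard two-point zeroth-order gradient estimate: query f at
    x + delta*u and x - delta*u along a random unit direction u, then
    rescale the difference by d / (2*delta). Illustrative only; not
    necessarily the paper's exact estimator."""
    d = x.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                     # random unit direction
    return (d / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u

def fed_osgd_sketch(client_losses, x0, eta, delta, K, R, rng):
    """Skeleton of FedOSGD-style intermittent communication: each client
    takes K local zeroth-order SGD steps, then all clients average their
    iterates; this repeats for R communication rounds."""
    x = np.asarray(x0, dtype=float)
    for _ in range(R):                         # communication rounds
        local_iterates = []
        for f in client_losses:                # one pass per client
            x_m = x.copy()
            for _ in range(K):                 # local steps between syncs
                x_m = x_m - eta * two_point_grad_estimate(f, x_m, delta, rng)
            local_iterates.append(x_m)
        x = np.mean(local_iterates, axis=0)    # averaging at communication
    return x
```

A one-point variant in the spirit of Algorithm 2 (FEDPOSGD) would query each loss only once per step, at x + delta*u, and project the iterate back onto the feasible set; Algorithm 1's non-collaborative baseline corresponds to skipping the averaging step.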
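
To make the flavor of those theoretical parameter choices concrete, here is a tiny helper in the same sketch, computing a step size and perturbation radius from the problem constants B (domain diameter), G (Lipschitz constant), K (local steps per round), and d (dimension). The scalings below follow our reading of the excerpt quoted in the table and may omit factors; the paper's theorem statements give the exact expressions.

```python
def tuned_parameters(B, G, K, d):
    """Illustrative parameter choice in the style quoted above (our
    reading; consult the paper's theorems for the exact scalings):
    eta ~ B / (G * K * d**(1/4)) and delta ~ B * d**(1/4)."""
    eta = B / (G * K * d ** 0.25)
    delta = B * d ** 0.25
    return eta, delta

# Hypothetical usage with the sketch above: four identical quadratic clients.
rng = np.random.default_rng(0)
eta, delta = tuned_parameters(B=1.0, G=1.0, K=5, d=10)
x = fed_osgd_sketch([lambda z: float(np.sum(z ** 2))] * 4,
                    np.zeros(10), eta, delta, K=5, R=20, rng=rng)
```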