SimFBO: Towards Simple, Flexible and Communication-efficient Federated Bilevel Learning

Authors: Yifan Yang, Peiyao Xiao, Kaiyi Ji

NeurIPS 2023

Reproducibility Assessment (Variable: Result, followed by the assessing LLM's response)
Research Type: Experimental. "Experiments demonstrate the effectiveness of the proposed methods over existing FBO algorithms. In this section, we perform two hyper-representation experiments to compare the performance of our proposed SimFBO algorithm with FBO-AggITD [69], FedNest [69], and LFedNest [65], and validate the better performance of ShroFBO in the presence of heterogeneous local computation. We test the performance on MNIST and CIFAR datasets with MLP and CNN backbones."
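For readers unfamiliar with the setting, hyper-representation learning in federated bilevel optimization (FBO) is usually posed as a two-level problem of roughly the following shape; the notation below is a generic sketch, not the paper's exact formulation:

```latex
\min_{x}\ \Phi(x) \;=\; \frac{1}{m}\sum_{i=1}^{m} f_i\bigl(x,\, y^*(x)\bigr)
\qquad \text{s.t.}\qquad
y^*(x) \;=\; \arg\min_{y}\ \frac{1}{m}\sum_{i=1}^{m} g_i(x, y),
```

where $m$ is the number of clients, $x$ parameterizes the shared representation (the MLP/CNN backbone), $y$ parameterizes the task-specific head, and $f_i$, $g_i$ are client $i$'s upper- and lower-level losses.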
Researcher Affiliation: Academia. "Yifan Yang, Peiyao Xiao, and Kaiyi Ji; Department of Computer Science and Engineering, University at Buffalo, Buffalo, NY 14260; {yyang99, peiyaoxi, kaiyiji}@buffalo.edu"
Pseudocode: Yes. "Algorithm 1: SimFBO and ShroFBO"
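Algorithm 1 itself is not reproduced in this assessment. As a rough illustration of what a single-loop federated bilevel round involves, the sketch below has each client take simultaneous local steps on the outer variable x, the inner variable y, and an auxiliary variable v (used to approximate the hypergradient's linear-system solve), after which the server averages the results. All names, step sizes, and the plain-averaging rule are illustrative assumptions, not the paper's update rules.

```python
def local_update(x, y, v, grads, lr=(0.1, 0.1, 0.1)):
    """One simultaneous local step on (x, y, v).

    `grads` is a caller-supplied placeholder returning the three
    update directions for this client.
    """
    dx, dy, dv = grads(x, y, v)
    ax, ay, av = lr
    return x - ax * dx, y - ay * dy, v - av * dv

def server_round(state, clients, tau=3):
    """One communication round: broadcast, tau local steps, average.

    Plain averaging of the clients' final iterates is used here;
    the actual aggregation weighting in SimFBO/ShroFBO differs.
    """
    x, y, v = state
    results = []
    for grads in clients:
        cx, cy, cv = x, y, v
        for _ in range(tau):  # tau local steps between communications
            cx, cy, cv = local_update(cx, cy, cv, grads)
        results.append((cx, cy, cv))
    n = len(results)
    return tuple(sum(r[j] for r in results) / n for j in range(3))

# Toy demo: 4 identical quadratic clients whose update direction
# is simply the variable itself, so each local step shrinks it by 0.9.
clients = [lambda x, y, v: (x, y, v)] * 4
state = (1.0, 2.0, 3.0)
for _ in range(5):
    state = server_round(state, clients)
```

On this toy problem the coupled variables contract geometrically toward zero, mimicking the joint convergence that single-loop FBO methods aim for.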
Open Source Code: No. "The paper does not provide any explicit statements or links indicating the availability of open-source code for the described methodology."
Open Datasets: Yes. "We test the performance on MNIST and CIFAR datasets with MLP and CNN backbones. The left and middle plots show training accuracy vs. the number of communication rounds on i.i.d. MNIST with MLP networks, and the right plot shows training accuracy vs. the number of rounds on i.i.d. CIFAR-10 with a 7-layer CNN."
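The experiments use i.i.d. client data. An i.i.d. partition is typically produced by shuffling example indices and dealing them out evenly across clients; the helper below is a generic sketch of that step, not the paper's actual data pipeline, and the 60,000/10 figures are just the standard MNIST training-set size with an assumed client count.

```python
import random

def iid_partition(num_examples, num_clients, seed=0):
    """Shuffle indices and deal them round-robin across clients (i.i.d.)."""
    rng = random.Random(seed)
    idx = list(range(num_examples))
    rng.shuffle(idx)
    return [idx[c::num_clients] for c in range(num_clients)]

# Example: 60,000 MNIST training examples split across 10 clients.
shards = iid_partition(60000, 10)
```

Because the shuffled indices are dealt round-robin, the shards are disjoint, cover every example, and are balanced whenever the client count divides the dataset size.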
Dataset Splits: No. "The paper uses the MNIST and CIFAR datasets and follows the experimental setup of [65, 69], but it does not explicitly report training/validation/test split percentages or sample counts in the provided text."
Hardware Specification: No. "The paper does not provide specific hardware details such as GPU/CPU models, memory, or cloud instance types used for running the experiments."
Software Dependencies: No. "The paper mentions using MLP and CNN networks but does not list specific software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions)."
Experiment Setup: No. "The paper states, 'The details of all experimental specifications can be found in Appendix A.1,' but these details, including specific hyperparameters and training settings, do not appear in the provided main text."