Lower Bounds and Optimal Algorithms for Personalized Federated Learning

Authors: Filip Hanzely, Slavomír Hanzely, Samuel Horváth, Peter Richtárik

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate the practical superiority of our methods through extensive numerical experiments."
Researcher Affiliation | Academia | All four authors are affiliated with KAUST, Thuwal, Saudi Arabia: Filip Hanzely (filip.hanzely@kaust.edu.sa), Slavomír Hanzely (slavomir.hanzely@kaust.edu.sa), Samuel Horváth (samuel.horvath@kaust.edu.sa), and Peter Richtárik (peter.richtarik@kaust.edu.sa).
Pseudocode | Yes | Algorithm 1 (IAPGD + A)
Open Source Code | No | The paper contains no explicit statement about releasing source code and provides no link to a code repository for the described methodology.
Open Datasets | Yes | The datasets madelon, a1a, mushrooms, and duke appear in Figure 1, with citation [7] (Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):1-27, 2011) identifying the source of these publicly available datasets.
Dataset Splits | No | The paper states that "each client owns a random, mutually disjoint subset of the full dataset" but does not specify train/validation/test splits (e.g., percentages or exact counts) needed for reproducibility.
Hardware Specification | No | The paper does not explicitly describe the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies (e.g., library or solver names with version numbers) needed to replicate the experiments.
Experiment Setup | No | The paper states, "The remaining plots, as well as the details on the experimental setup, can be found in Section B of the Appendix," indicating that these details are not provided in the main text.