FedPop: A Bayesian Approach for Personalised Federated Learning

Authors: Nikita Kotelevskii, Maxime Vono, Alain Durmus, Eric Moulines

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We provide nonasymptotic convergence guarantees for the proposed algorithms and illustrate their performances on various personalised federated learning tasks." ... "In this section, we illustrate the benefits of our methodology on several FL benchmarks associated to both synthetic and real data."
Researcher Affiliation | Collaboration | Nikita Kotelevskii, Skolkovo Institute of Science and Technology, Moscow, Russia (Nikita.Kotelevskii@skoltech.ru); Maxime Vono, Criteo AI Lab, Paris, France (m.vono@criteo.com); Alain Durmus, ENS Paris-Saclay (alain.durmus@ens-paris-saclay.fr); Eric Moulines, Ecole Polytechnique (eric.moulines@polytechnique.edu)
Pseudocode | Yes | "Algorithm 1: FL via Stochastic Optimisation using Unadjusted Kernel (FedSOUK)"
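
The row above only names Algorithm 1, so, for orientation, here is a minimal, heavily simplified sketch of what one FedSOUK-style round could look like. It assumes Python/NumPy and a hypothetical client interface (a field z for the personalised parameters, and callbacks grad_z and grad_theta for gradients of the client's log-density); it is one plausible reading of "stochastic optimisation using an unadjusted (Langevin) kernel", not the paper's exact Algorithm 1.

    import numpy as np

    def fed_souk_round(theta, clients, M=10, gamma=1e-3, eta=1e-2, rng=None):
        """One simplified FedSOUK-style round: each client refreshes its
        personalised parameters z with M unadjusted Langevin steps, then
        returns a stochastic gradient in the shared parameters theta."""
        rng = rng or np.random.default_rng()
        grads = []
        for client in clients:
            z = client.z
            for _ in range(M):
                # Unadjusted Langevin kernel on the local (personalised) parameters.
                noise = rng.standard_normal(z.shape)
                z = z + gamma * client.grad_z(theta, z) + np.sqrt(2.0 * gamma) * noise
            client.z = z
            grads.append(client.grad_theta(theta, z))
        # Server step: average the client gradients and ascend the shared objective.
        return theta + eta * np.mean(grads, axis=0)
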
Open Source Code | Yes | From the checklist: "Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See the supplement."
Open Datasets | Yes | "Real Data. We consider now real image data sets, namely CIFAR-10 and CIFAR-100 (Krizhevsky, 2009)."
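
As a concrete (assumed) way to obtain these public datasets, e.g. via torchvision, which the paper does not name as its toolchain:

    import torchvision
    import torchvision.transforms as T

    # Download the CIFAR training sets referenced in the paper's experiments.
    cifar10 = torchvision.datasets.CIFAR10(
        root="./data", train=True, download=True, transform=T.ToTensor())
    cifar100 = torchvision.datasets.CIFAR100(
        root="./data", train=True, download=True, transform=T.ToTensor())
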
Dataset Splits | No | The paper describes how data are partitioned across clients (e.g., "90% of the b = 100 clients have small data sets of size 5 and the remaining 10% have data sets of size 10", and "assigning to each client N_i images belonging to only S different classes") and refers to "all training images", but the main text does not explicitly specify standard train/validation/test splits for the overall dataset.
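
The class-restricted partitioning quoted above can be sketched as follows. This is a toy NumPy reconstruction; the defaults for b, S, and the per-client size n_i are placeholders, and the paper's exact bookkeeping (unequal client sizes, disjoint shards) is deferred to its supplement.

    import numpy as np

    def split_non_iid(labels, b=100, S=5, n_i=500, seed=0):
        """Give each of b clients n_i sample indices drawn from only S
        distinct classes, mimicking the partitioning quoted above.
        Clients may share examples in this toy version."""
        rng = np.random.default_rng(seed)
        labels = np.asarray(labels)
        classes = np.unique(labels)
        class_idx = {c: np.where(labels == c)[0] for c in classes}
        clients = []
        for _ in range(b):
            own = rng.choice(classes, size=S, replace=False)
            idx = np.concatenate(
                [rng.choice(class_idx[c], size=n_i // S, replace=False) for c in own])
            clients.append(idx)
        return clients
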
Hardware Specification | No | The paper defers hardware details to the supplement ("This is postponed to the supplement.") and gives no specific GPU or CPU models, or other detailed hardware specifications, in the main text.
Software Dependencies | No | The paper gives no version numbers for software dependencies. Although deep learning frameworks are implicitly used, no explicit versions for libraries such as PyTorch or TensorFlow, or for Python itself, are mentioned.
Experiment Setup | No | The paper mentions general settings such as "5-layer convolutional neural networks" with the "last layer" personalised, b = 100 clients, and some data-partitioning strategies. It also states that "using a small value of M ∈ [1, 10] was sufficient" and, in one case, M = 50. However, detailed hyperparameters (e.g., learning rate, batch size, optimizer settings, number of epochs) are not explicitly given in the main text; the paper notes that "Additional experiments and details about experimental design are provided in the supplement."
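
To make the quoted setup concrete, here is a hypothetical PyTorch sketch of a 5-layer CNN whose final linear layer is the personalised part (kept local to each client, never averaged). The layer widths are assumptions, since the paper defers the exact architecture to the supplement.

    import torch.nn as nn

    class SmallCNN(nn.Module):
        """Hypothetical 5-layer CNN: four shared convolutional layers plus a
        final linear head that would be personalised per client."""
        def __init__(self, num_classes=10):
            super().__init__()
            self.shared = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.head = nn.Linear(128, num_classes)  # personalised layer

        def forward(self, x):
            return self.head(self.shared(x))
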