FedL2P: Federated Learning to Personalize

Authors: Royson Lee, Minyoung Kim, Da Li, Xinchi Qiu, Timothy Hospedales, Ferenc Huszár, Nicholas Lane

NeurIPS 2023

Reproducibility assessment: each variable below is listed with its result and the supporting LLM response.
Research Type: Experimental. "Empirical results show that this framework improves on a range of standard hand-crafted personalization baselines in both label and feature shift situations."
Researcher Affiliation: Collaboration. University of Cambridge, UK; Samsung AI Center, Cambridge, UK; University of Edinburgh, UK; Flower Labs.
Pseudocode: Yes. Algorithm 1 (FedL2P: FL of meta-nets for personalization hyperparameters) and Algorithm 2 (Hypergradient); a rough sketch of the outer loop of Algorithm 1 is given below.
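The algorithms themselves are not reproduced in this report. As a rough illustration only, here is a minimal Python sketch of the federated outer loop that Algorithm 1 describes, assuming a FedAvg-style average of client hypergradients; the names (`Client`, `personalize_and_evaluate`, the dict-of-tensors representation of λ) are hypothetical placeholders, not the repository's API.

```python
# Hypothetical sketch of Algorithm 1 (FedL2P): federated learning of the
# meta-net parameters (lambda) that produce personalization hyperparameters.
import copy
import random
import torch

class Client:
    """Stub client: personalizes a base model using the hyperparameters
    produced by the meta-nets, then reports (validation loss, hypergradient)."""
    def personalize_and_evaluate(self, lam):
        # In FedL2P this would run local fine-tuning and compute the
        # hypergradient w.r.t. lambda (Algorithm 2); randoms stand in here.
        val_loss = torch.rand(1).item()
        hypergrad = {k: torch.randn_like(v) for k, v in lam.items()}
        return val_loss, hypergrad

def fedl2p_outer_loop(clients, lam, rounds=500, fraction=0.1, zeta=1e-3):
    best_lam, best_loss = copy.deepcopy(lam), float("inf")
    for _ in range(rounds):
        sampled = random.sample(clients, max(1, int(fraction * len(clients))))
        losses, grads = zip(*(c.personalize_and_evaluate(lam) for c in sampled))
        # FedAvg-style aggregation of the client hypergradients.
        avg = {k: torch.stack([g[k] for g in grads]).mean(0) for k in grads[0]}
        lam = {k: v - zeta * avg[k] for k, v in lam.items()}  # SGD step on lambda
        # Keep the lambda with the lowest mean validation loss, as reported.
        mean_loss = sum(losses) / len(losses)
        if mean_loss < best_loss:
            best_loss, best_lam = mean_loss, copy.deepcopy(lam)
    return best_lam
```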
Open Source Code: Yes. "Code is available at https://github.com/royson/fedl2p".
Open Datasets: Yes. "CIFAR10 [31]. A widely-used image classification dataset, also popular as an FL benchmark."
Dataset Splits: Yes. "The number of clients C is set to 1000 and 20% of the training data is used for validation." A minimal sketch of such a split follows.
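For concreteness, here is a minimal sketch of that split, assuming torchvision's CIFAR10 loader and a uniform random partition into 1000 clients; the paper's actual (possibly non-IID) partitioning scheme is not specified in the quote, so the uniform partition is purely illustrative.

```python
# Sketch: partition CIFAR10 across 1000 clients, holding out 20% of each
# client's local training data for validation. Uniform partition assumed.
import torch
from torch.utils.data import Subset
from torchvision import transforms
from torchvision.datasets import CIFAR10

NUM_CLIENTS = 1000
VAL_FRACTION = 0.2

train_set = CIFAR10(root="./data", train=True, download=True,
                    transform=transforms.ToTensor())

perm = torch.randperm(len(train_set))        # shuffle all 50k train indices
shards = perm.chunk(NUM_CLIENTS)             # 50 examples per client

client_splits = []
for shard in shards:
    n_val = int(VAL_FRACTION * len(shard))   # 10 validation examples per client
    val_idx, train_idx = shard[:n_val].tolist(), shard[n_val:].tolist()
    client_splits.append((Subset(train_set, train_idx),
                          Subset(train_set, val_idx)))
```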
Hardware Specification: Yes. "We use the Flower federated learning framework [6] and 8 NVIDIA GeForce RTX 2080 Ti GPUs for all experiments."
Software Dependencies: No. The paper mentions the Flower federated learning framework and torchvision [41] but does not provide version numbers for these or any other software dependencies.
Experiment Setup: Yes. "The learning rate (ζ) for λ = {w_bn, w_lr, η} is set to {10⁻³, 10⁻³, 10⁻⁴}, respectively. The hypergradient is clipped by value to [−1, 1], Q = 3, and ψ = 0.1 in Alg. 2. The maximum number of communication rounds is set to 500, and over the rounds we save the λ value that leads to the lowest validation loss, averaged over the participating clients, as the final learned λ. The fraction ratio r = 0.1 unless stated otherwise, sampling 10% of the total number of clients per FL round."
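The quoted Q and ψ are the kind of settings used by truncated Neumann-series approximations of the inverse-Hessian-vector product in implicit hypergradient methods. Assuming Algorithm 2 follows that standard scheme, the sketch below shows how Q = 3, ψ = 0.1, and value clipping to [−1, 1] would fit together; it illustrates the generic technique, not the paper's exact algorithm.

```python
# Hedged sketch: truncated Neumann-series hypergradient (Q terms, scale psi)
# with value clipping to [-1, 1]. Assumes train_loss depends differentiably
# on both the model weights w and the hyperparameters lam; this is the
# generic scheme, assumed (not confirmed) to correspond to FedL2P's Alg. 2.
import torch

def neumann_hypergradient(val_loss, train_loss, w, lam, Q=3, psi=0.1):
    # v0 = dL_val / dw
    v = torch.autograd.grad(val_loss, w, retain_graph=True)
    p = [vi.clone() for vi in v]
    # First-order grads of the training loss, kept differentiable for HVPs.
    gw = torch.autograd.grad(train_loss, w, create_graph=True)
    for _ in range(Q):
        # Hessian-vector product: H v = d/dw (gw . v)
        hv = torch.autograd.grad(gw, w, grad_outputs=v, retain_graph=True)
        v = [vi - psi * hvi for vi, hvi in zip(v, hv)]   # v <- (I - psi*H) v
        p = [pi + vi for pi, vi in zip(p, v)]            # p <- p + v
    # Mixed partial: d/dlam (gw . p); hypergradient ~= -psi * p^T d2L/(dw dlam)
    mixed = torch.autograd.grad(gw, lam, grad_outputs=p)
    return [(-psi * m).clamp_(-1.0, 1.0) for m in mixed]  # clip to [-1, 1]
```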