FedLPA: One-shot Federated Learning with Layer-Wise Posterior Aggregation
Authors: Xiang Liu, Liangxi Liu, Feiyang Ye, Yunheng Shen, Xia Li, Linshan Jiang, Jialin Li
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results demonstrate that FedLPA significantly improves learning performance over state-of-the-art methods across several metrics. |
| Researcher Affiliation | Academia | Xiang Liu (1), Liangxi Liu (2), Feiyang Ye (3), Yunheng Shen (4), Xia Li (5), Linshan Jiang (1), Jialin Li (1); 1: National University of Singapore, 2: Northeastern University, 3: University of Technology Sydney, 4: Tsinghua University, 5: ETH Zurich |
| Pseudocode | Yes | Algorithm 1: FedLPA Global Aggregation |
| Open Source Code | Yes | Our FedLPA is available at https://github.com/lebronlambert/FedLPA_NeurIPS2024. |
| Open Datasets | Yes | We conduct experiments on MNIST [63], Fashion-MNIST [64], CIFAR-10 [65], and SVHN [66] datasets. |
| Dataset Splits | Yes | We use the data partitioning methods for non-IID settings of the benchmark [1] to simulate different label skews. Specifically, we try two kinds of partition: 1) #C = k: each client only has data from k classes. ... 2) p_k ~ Dir(β): for each class k, we sample p_k from a Dirichlet distribution and distribute a p_{k,j} portion of the class-k samples to client j. (A partition sketch follows the table.) |
| Hardware Specification | Yes | We conduct experiments on CIFAR-10 on a single 2080Ti GPU to estimate the overall communication and computation overhead. |
| Software Dependencies | No | The paper mentions PyTorch in the context of floating-point precision ('The default floating point precision is 32 bits in PyTorch.'), but does not specify a version number or list other software dependencies with version numbers. |
| Experiment Setup | Yes | We set the batch size to 64, the learning rate to 0.001, and λ = 0.001 for FedLPA. By default, we set 10 clients and run 200 local epochs for each client. (These values are collected in the configuration sketch below.) |
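
The Dirichlet label-skew partition quoted in the Dataset Splits row is a standard non-IID construction: for each class k, a proportion vector p_k ~ Dir(β) is drawn over clients, and client j receives a p_{k,j} fraction of that class's samples. Below is a minimal NumPy sketch of that procedure; the function name, defaults, and seed handling are this sketch's own assumptions, not taken from the FedLPA benchmark code.

```python
import numpy as np

def dirichlet_partition(labels, n_clients=10, beta=0.5, seed=0):
    """Label-skew split: for each class k, draw p_k ~ Dir(beta) over
    clients and give client j a p_{k,j} fraction of class k's samples."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(n_clients)]
    for k in np.unique(labels):
        idx_k = np.flatnonzero(labels == k)
        rng.shuffle(idx_k)
        # Per-class client proportions p_k ~ Dir(beta, ..., beta).
        proportions = rng.dirichlet(np.full(n_clients, beta))
        # Cumulative split points over the shuffled class-k indices.
        cuts = (np.cumsum(proportions)[:-1] * len(idx_k)).astype(int)
        for j, shard in enumerate(np.split(idx_k, cuts)):
            client_indices[j].extend(shard.tolist())
    return client_indices
```

For example, `dirichlet_partition(np.array(train_labels), n_clients=10, beta=0.5)` returns one index list per client; smaller β produces a more extreme skew, while large β approaches an IID split.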
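
For reference, the hyperparameters reported in the Experiment Setup row condense into a small configuration block. This is a hypothetical summary; the field names are chosen here for readability, not read from the authors' repository.

```python
# Hypothetical summary of the reported FedLPA setup; field names are
# this sketch's own, not identifiers from the released code.
config = {
    "batch_size": 64,
    "learning_rate": 1e-3,
    "lambda_": 1e-3,      # the λ hyperparameter of FedLPA
    "n_clients": 10,
    "local_epochs": 200,  # one-shot FL: a single communication round
}
```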