Fair Federated Learning via the Proportional Veto Core
Authors: Bhaskar Ray Chaudhury, Aniket Murhekar, Zhuowen Yuan, Bo Li, Ruta Mehta, Ariel D. Procaccia
ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we demonstrate that Rank-Core-Fed outperforms baselines in terms of fairness on different datasets. |
| Researcher Affiliation | Academia | University of Illinois Urbana-Champaign; University of Chicago; Harvard University. |
| Pseudocode | Yes | Algorithm 1 computes a representative set of P; Algorithm 2 (Rank-Core-Fed) finds a θ that belongs to the proportional veto core of P. (An illustrative veto-procedure sketch appears below the table.) |
| Open Source Code | No | The paper does not provide any explicit statement or link to open-source code for the described methodology. |
| Open Datasets | Yes | We evaluate our algorithm Rank-Core-Fed on rotated MNIST (LeCun et al., 2010) and CIFAR-10 (Krizhevsky et al., 2009) datasets. |
| Dataset Splits | No | The paper does not explicitly provide training/validation/test dataset splits, specific percentages, or sample counts. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments. |
| Software Dependencies | No | The paper mentions the use of CNN and VGG11 models but does not provide specific version numbers for software dependencies or libraries. |
| Experiment Setup | Yes | For MNIST, we use a CNN, which has two 5×5 convolution layers followed by two fully connected layers with ReLU activation. For CIFAR-10, we evaluate with a more complex network, VGG11 (Simonyan & Zisserman, 2014). In all our experiments, we define agent utility as M − L_ce, where L_ce refers to the average cross-entropy loss on the agent's local test data. We set M to be 1.0 in our experiments. For all baselines and our algorithm, we set the number of iterations of the global model update to 50. (A hedged PyTorch sketch of this setup follows the table.) |
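
The paper's Algorithm 2 selects a global model θ in the proportional veto core of the agents' preference profile: informally, a coalition comprising a fraction t of the agents can veto roughly a t-fraction of the alternatives, and an outcome in the core survives every such veto. As a rough illustration of that concept (not a reproduction of the paper's Algorithm 1 or 2), the Python sketch below runs Moulin-style sequential vetoes over a finite set of candidate models ranked by per-agent utilities; the function name `sequential_veto` and the even split of veto budgets are assumptions for illustration.

```python
def sequential_veto(utilities):
    """Pick a candidate that survives proportional sequential vetoes.

    utilities: list of n lists; utilities[i][j] is agent i's utility
    for candidate model j. Exactly m - 1 vetoes are cast in total,
    split across agents as evenly as possible, so one candidate
    survives; under Moulin-style voting by veto, the survivor lies
    in the proportional veto core of the induced rankings.
    """
    n = len(utilities)        # number of agents
    m = len(utilities[0])     # number of candidate models
    surviving = set(range(m))
    budgets = [(m - 1) // n + (1 if i < (m - 1) % n else 0) for i in range(n)]
    for agent, budget in enumerate(budgets):
        for _ in range(budget):
            # Veto the agent's least-preferred surviving candidate.
            worst = min(surviving, key=lambda j: utilities[agent][j])
            surviving.remove(worst)
    return surviving.pop()

# Example: 3 agents scoring 4 candidate models; candidate 2 survives.
utils = [[0.9, 0.2, 0.5, 0.7],
         [0.1, 0.8, 0.6, 0.3],
         [0.4, 0.5, 0.9, 0.2]]
print(sequential_veto(utils))  # -> 2
```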
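
The setup row pins down the MNIST model only at a high level (two 5×5 convolutions, two fully connected layers, ReLU) and defines agent utility as M − L_ce with M = 1.0. A minimal PyTorch sketch consistent with that description is below; the channel counts, max-pooling, and hidden width are assumptions, since the paper does not specify them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MNISTCNN(nn.Module):
    """Two 5x5 convolution layers followed by two fully connected
    layers with ReLU activations, as described in the setup. Channel
    widths and pooling are illustrative choices, not from the paper."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=5)   # 1x28x28 -> 32x24x24
        self.conv2 = nn.Conv2d(32, 64, kernel_size=5)  # 32x12x12 -> 64x8x8
        self.fc1 = nn.Linear(64 * 4 * 4, 128)
        self.fc2 = nn.Linear(128, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)  # -> 32x12x12
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # -> 64x4x4
        x = F.relu(self.fc1(x.flatten(1)))
        return self.fc2(x)

def agent_utility(model, loader, M=1.0, device="cpu"):
    """Utility = M - L_ce, where L_ce is the average cross-entropy
    loss on the agent's local test data (M = 1.0 per the paper)."""
    model.eval()
    total_loss, count = 0.0, 0
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            total_loss += F.cross_entropy(model(x), y, reduction="sum").item()
            count += y.size(0)
    return M - total_loss / count
```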