Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Proportional Fairness in Federated Learning
Authors: Guojun Zhang, Saber Malekmohammadi, Xi Chen, Yaoliang Yu
TMLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments on vision and language datasets, we demonstrate that PropFair can approximately find PF solutions, and it achieves a good balance between the average performances of all clients and of the worst 10% clients. Our code is available at https://github.com/huawei-noah/Federated-Learning/tree/main/FairFL. ... 5 Experiments: In this section, we verify properties of PropFair by answering the following questions: (1) can PropFair achieve proportional fairness as in eq. 3.3? (2) what balance does PropFair achieve between the average and worst-case performances? We report them separately in Section 5.2 and Section 5.3. |
| Researcher Affiliation | Collaboration | Guojun Zhang, Saber Malekmohammadi, Xi Chen (Huawei Noah's Ark Lab); Yaoliang Yu (University of Waterloo) |
| Pseudocode | Yes | Algorithm 1: PropFair. Input: global epoch T, client number n, loss function f_i for client i, number of samples n_i for client i, initial global model θ_0, local step number K_i, baseline M, threshold ε, p_i = n_i/N, batch size m, learning rate η |
| Open Source Code | Yes | Our code is available at https://github.com/huawei-noah/Federated-Learning/tree/main/FairFL. |
| Open Datasets | Yes | Datasets. We follow standard benchmark datasets as in the existing literature, including CIFAR-{10, 100} (Krizhevsky et al., 2009), Tiny ImageNet (Le & Yang, 2015) and Shakespeare (McMahan et al., 2017). |
| Dataset Splits | Yes | For each client dataset, we split it further into 80% training data and 20% test data. This reflects the real scenario, where each client evaluates the performance by itself. ... Also, each client's dataset is split into 50% for training and 50% for test. |
| Hardware Specification | No | The paper does not contain specific details about the hardware used for running its experiments. |
| Software Dependencies | No | The paper mentions software like Flower (Beutel et al., 2020) and TensorFlow Federated, but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | We fix the batch size to be 64. ... For PropFair we fix ε = 0.2 and tune M (Algorithm 1) from M = 2, 3, 4, 5. ... The best values of hyperparameters used for different datasets, chosen based on grid search. ... The best learning rates used for different datasets and algorithms, based on grid search. |
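The extracted setup above names PropFair's two key hyperparameters: a baseline M (tuned over {2, 3, 4, 5}) and a threshold ε = 0.2. The following is a minimal, hypothetical sketch of what a per-client PropFair surrogate loss could look like given those inputs: it maximizes log(M − f_i) while M − f_i stays above ε, and falls back to a linear extension below the threshold so the objective stays finite. The function name and the exact form of the fallback are assumptions for illustration, not the paper's verified implementation.

```python
import math

def propfair_surrogate(client_loss: float, M: float = 5.0, eps: float = 0.2) -> float:
    """Hypothetical per-client PropFair surrogate (sketch, not the authors' code).

    Minimizing -log(M - f_i) is equivalent to maximizing log(M - f_i),
    the proportional-fairness term for client i with loss f_i.
    The threshold eps guards against taking log of a non-positive gap.
    """
    gap = M - client_loss
    if gap > eps:
        # Usual case: client loss is comfortably below the baseline M.
        return -math.log(gap)
    # Assumed fallback: linear extension of -log at gap = eps,
    # keeping the surrogate finite and continuous when the gap shrinks.
    return -math.log(eps) + (eps - gap) / eps

# Illustration: a well-performing client (low loss) incurs a smaller
# surrogate value than a struggling client (loss near the baseline M).
good = propfair_surrogate(1.0, M=5.0, eps=0.2)
bad = propfair_surrogate(4.9, M=5.0, eps=0.2)
```

With M = 5 and ε = 0.2, a client at loss 1.0 sits in the log branch (gap 4.0), while a client at loss 4.9 (gap 0.1 < ε) triggers the linear fallback, so `bad > good`, penalizing the worst-off client more strongly, which is the balance between average and worst-case performance the paper's experiments measure.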