Fairness in Federated Learning via Core-Stability

Authors: Bhaskar Ray Chaudhury, Linyi Li, Mintong Kang, Bo Li, Ruta Mehta

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, we empirically validate our analysis on two real-world datasets, and we show that CoreFed achieves higher core-stable fairness than FedAvg while maintaining similar accuracy. We evaluate our fair ML method CoreFed and baseline FedAvg [21] on three datasets (Adult, MNIST and CIFAR-10) on a linear model and deep neural networks.
Researcher Affiliation | Academia | Bhaskar Ray Chaudhury, Linyi Li, Mintong Kang, Bo Li, Ruta Mehta (University of Illinois at Urbana-Champaign)
Pseudocode | Yes | We call our algorithm CoreFed (fully outlined in Algorithm 1 in the appendix). See the aggregation sketch below the table.
Open Source Code | Yes | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes]
Open Datasets | Yes | We evaluate our algorithm CoreFed on the Adult [2], MNIST [17] and CIFAR-10 [16] datasets.
Dataset Splits | No | The paper mentions data construction for non-IID settings but does not explicitly provide specific percentages, sample counts, or methodology for training, validation, or test dataset splits.
Hardware Specification | Yes | All experiments are conducted on a 1080 Ti GPU.
Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers like Python 3.8, CPLEX 12.4) needed to replicate the experiment.
Experiment Setup | Yes | For the Adult dataset, the utility is selected as U = M − ℓ_log, where ℓ_log is the logistic loss. For CIFAR-10 and MNIST, we use the cross-entropy loss ℓ_ce as the training loss, so the utility becomes U = M − ℓ_ce. M is set to 3.0, 1.0 and 3.0 for Adult, MNIST, and CIFAR-10, respectively, based on statistical analysis during training. For MNIST and CIFAR-10 we use a CNN with two 5x5 convolution layers followed by 2x2 max pooling and two fully connected layers with ReLU activation. (A model sketch follows the table.)
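
For concreteness, here is a minimal PyTorch sketch matching the Experiment Setup row: a CNN with two 5x5 convolution layers, 2x2 max pooling, two fully connected layers with ReLU activations, and a client utility of the form U = M − ℓ_ce. The channel widths, hidden-layer size, and the MNIST input shape are assumptions not stated in the row above.

```python
# Hypothetical sketch of the CNN described in the Experiment Setup row.
# Channel counts, hidden size, and the 1x28x28 MNIST input shape are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    def __init__(self, in_channels=1, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, 32, kernel_size=5)  # first 5x5 conv (width assumed)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=5)           # second 5x5 conv (width assumed)
        self.pool = nn.MaxPool2d(2)                              # 2x2 max pooling
        self.fc1 = nn.Linear(64 * 4 * 4, 512)                    # hidden size assumed
        self.fc2 = nn.Linear(512, num_classes)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))  # MNIST: 28 -> 24 -> 12
        x = self.pool(F.relu(self.conv2(x)))  # 12 -> 8 -> 4
        x = torch.flatten(x, 1)
        x = F.relu(self.fc1(x))
        return self.fc2(x)

def utility(model, x, y, M=1.0):
    """Client utility U = M - cross-entropy loss (M = 1.0 for MNIST per the setup above)."""
    return M - F.cross_entropy(model(x), y)
```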
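
The Pseudocode row points to Algorithm 1 (CoreFed) in the paper's appendix, which is not reproduced here. As a rough, non-authoritative sketch of the underlying idea: if, as in classical fair-division results, core-stable fairness is pursued by maximizing the product of client utilities (equivalently, the sum of their logarithms), then each client's update is naturally weighted inversely to its current utility u_i = M − loss_i. The actual round structure, client sampling, and weighting in Algorithm 1 may differ.

```python
# Hedged sketch of a utility-weighted federated aggregation step (not the paper's
# Algorithm 1 verbatim): gradient ascent on sum_i log u_i weights each client's
# update by 1 / u_i, pushing the global model toward a higher product of utilities.
import torch

def aggregate(global_params, client_updates, client_utilities):
    """global_params: list of tensors; client_updates: list of per-client lists of
    parameter deltas; client_utilities: list of positive floats u_i = M - loss_i."""
    weights = torch.tensor([1.0 / max(u, 1e-8) for u in client_utilities])
    weights = weights / weights.sum()  # normalize so the step stays comparable to FedAvg
    new_params = []
    for layer_idx, p in enumerate(global_params):
        delta = sum(w * upd[layer_idx] for w, upd in zip(weights, client_updates))
        new_params.append(p + delta)
    return new_params
```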