FACT or Fiction: Can Truthful Mechanisms Eliminate Federated Free Riding?

Authors: Marco Bornstein, Amrit Singh Bedi, Abdirisak Mohamed, Furong Huang

NeurIPS 2024 | Conference PDF | Archive PDF

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirically, FACT avoids free-riding when agents are untruthful, and reduces agent loss by over 4x." and "Within our experiments, 16 agents train a model individually (locally) as well as in a federated manner. Each agent uses 3,125 and 3,750 data samples each for CIFAR10 and MNIST respectively. We analyze FACT under homogeneous and heterogeneous agent data distributions." (A data-partitioning sketch follows the table.)
Researcher Affiliation | Collaboration | Marco Bornstein, University of Maryland (marcob@umd.edu); Amrit Singh Bedi, University of Central Florida (amritbedi@ucf.edu); Abdirisak Mohamed, University of Maryland and SAP Labs, LLC (amoham70@umd.edu); Furong Huang, University of Maryland (furongh@umd.edu)
Pseudocode | Yes | Algorithm 1 (PFL: Penalized Federated Learning); Algorithm 2 (FACT: Federated Agent Cost Truthfulness); Algorithm 3 (Agent Update). (A generic FedAvg-style agent-update sketch follows the table.)
Open Source Code | Yes | "Finally, we include code within our submission." and "We provide code to reproduce our results."
Open Datasets | Yes | "Experimental Setup. Within our experiments, 16 agents train a model individually (locally) as well as in a federated manner. Each agent uses 3,125 and 3,750 data samples each for CIFAR10 [13] and MNIST [5] respectively." and "We train an image classification model on the HAM10000 [31] dataset."
Dataset Splits | No | The paper mentions using training and test data, but does not explicitly state the use or size of a validation set, nor specific train/validation/test split percentages.
Hardware Specification | Yes | "We ran all experiments on a cluster of 2-4 GPUs, with the 16 CPUs (agents) pinned to a GPU. We use GeForce GTX 1080 Ti GPUs (11GB of memory) and the CPUs used are Xeon 4216."
Software Dependencies | No | The paper mentions using Stochastic Gradient Descent and Adam as optimizers and the FedAvg algorithm, but does not provide version numbers for any software libraries or frameworks (e.g., PyTorch, TensorFlow) used for the implementation.
Experiment Setup | Yes | Table 1 (Hyper-parameters for CIFAR-10 Experiments): Model ResNet18, Batch Size 128, Learning Rate 0.05, Training Cost 1.024e-07, Epochs 100, Local FedAvg Steps h 6. Table 2 (Hyper-parameters for MNIST Experiments): Model CNN, Batch Size 128, Learning Rate 1e-3, Training Cost 7.111e-08, Epochs 100, Local FedAvg Steps h 6. Also: "We use the Adam optimizer with a learning rate of 1e-3 and batch size of 128." (A hyper-parameter configuration sketch follows the table.)
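
The 16-agent setup quoted in the Research Type and Open Datasets rows (3,125 CIFAR-10 samples and 3,750 MNIST samples per agent) exactly partitions the 50,000-image CIFAR-10 and 60,000-image MNIST training sets. The paper's released code is not reproduced here; the sketch below only assumes a homogeneous (IID) shuffle-and-shard split built with torchvision, and the seed and transform are illustrative.

```python
# Minimal sketch: homogeneous (IID) partition of CIFAR-10 across 16 agents,
# matching the quoted 3,125 samples per agent (16 x 3,125 = 50,000).
# Assumption: a simple shuffle-and-shard split; not the paper's released code.
import torch
from torch.utils.data import Subset
from torchvision import datasets, transforms

NUM_AGENTS = 16

cifar10 = datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transforms.ToTensor()
)

# Shuffle indices once with a fixed seed, then give each agent an equal shard.
perm = torch.randperm(len(cifar10), generator=torch.Generator().manual_seed(0))
shard_size = len(cifar10) // NUM_AGENTS  # 3,125 for CIFAR-10
agent_datasets = [
    Subset(cifar10, perm[i * shard_size:(i + 1) * shard_size].tolist())
    for i in range(NUM_AGENTS)
]
```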
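
The Experiment Setup row flattens Tables 1 and 2; the sketch below restates the quoted values as a configuration and builds a matching model/optimizer pair. Pairing SGD with the CIFAR-10 ResNet18 row and Adam (lr 1e-3, as quoted) with the MNIST CNN row is an assumption, and the small MNIST CNN is a placeholder rather than the paper's architecture.

```python
# Sketch of the quoted hyper-parameters (Tables 1 and 2) as a config.
# Assumptions: SGD for CIFAR-10/ResNet18, Adam for MNIST/CNN; placeholder CNN.
import torch
import torch.nn as nn
from torchvision.models import resnet18

HPARAMS = {
    "cifar10": {"batch_size": 128, "lr": 0.05, "training_cost": 1.024e-07,
                "epochs": 100, "local_fedavg_steps": 6},
    "mnist":   {"batch_size": 128, "lr": 1e-3, "training_cost": 7.111e-08,
                "epochs": 100, "local_fedavg_steps": 6},
}

def build_model_and_optimizer(dataset: str):
    hp = HPARAMS[dataset]
    if dataset == "cifar10":
        model = resnet18(num_classes=10)                         # Table 1 model
        opt = torch.optim.SGD(model.parameters(), lr=hp["lr"])   # assumed pairing
    else:
        model = nn.Sequential(                                   # placeholder CNN
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(16 * 14 * 14, 10),
        )
        opt = torch.optim.Adam(model.parameters(), lr=hp["lr"])  # "Adam ... 1e-3"
    return model, opt
```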
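
The Pseudocode row names Algorithms 1-3 (PFL, FACT, Agent Update), but the table does not quote their bodies, so they are not reproduced here. As a point of reference for the "Local FedAvg Steps h 6" hyper-parameter, this is a minimal sketch of a plain FedAvg round with h local steps per agent; it is not the paper's PFL/FACT mechanism, and the model, data loaders, and learning rate are placeholders.

```python
# Minimal sketch of a plain FedAvg round with h local steps per agent.
# This is NOT the paper's PFL/FACT mechanism (Algorithms 1-3); it only
# illustrates the "Local FedAvg Steps h = 6" setting from Tables 1 and 2.
import copy
import torch

def local_update(global_model, loader, h, lr):
    """Run h local SGD steps starting from a copy of the global model."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    batches = iter(loader)  # assumes the loader yields at least h batches
    for _ in range(h):
        x, y = next(batches)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model.state_dict()

def fedavg_round(global_model, agent_loaders, h=6, lr=0.05):
    """Average the agents' locally updated weights (standard FedAvg)."""
    states = [local_update(global_model, dl, h, lr) for dl in agent_loaders]
    avg = {
        k: torch.stack([s[k].float() for s in states]).mean(dim=0).to(states[0][k].dtype)
        for k in states[0]
    }
    global_model.load_state_dict(avg)
    return global_model
```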