Incentives in Federated Learning: Equilibria, Dynamics, and Mechanisms for Welfare Maximization
Authors: Aniket Murhekar, Zhuowen Yuan, Bhaskar Ray Chaudhury, Bo Li, Ruta Mehta
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our empirical validation on MNIST and CIFAR-10 substantiates our theoretical analysis. |
| Researcher Affiliation | Academia | Aniket Murhekar, Zhuowen Yuan, Bhaskar Ray Chaudhury, Bo Li, Ruta Mehta (University of Illinois Urbana-Champaign) |
| Pseudocode | Yes | We present the full description of the algorithm for FedBR-BG and FedBR as Algorithm 1 and Algorithm 2 in Appendix C, respectively. |
| Open Source Code | No | The paper does not contain an explicit statement about the release of its source code or a link to a code repository. |
| Open Datasets | Yes | We perform the evaluation on the MNIST (LeCun et al. [2010]) and CIFAR-10 (Krizhevsky [2009]) datasets. |
| Dataset Splits | No | The paper states 'each agent has 100 training images and 10 testing images' but does not specify a validation split or how validation was handled. |
| Hardware Specification | No | The paper does not specify the hardware used for running the experiments. It mentions model architectures (CNN, VGG11) but not the underlying computational resources. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers. |
| Experiment Setup | Yes | We set global learning rate η to 1.0, local learning rate α to 0.01, and momentum to 0.9. We set the number of contribution updating steps to 100 and the sample number interval to 10. |
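The experiment-setup row above lists the paper's reported hyperparameters. A minimal sketch collecting them into a single config dict is shown below; the key names are illustrative choices, not identifiers from the paper or its code.

```python
# Hyperparameters as reported in the paper's experiment setup.
# Key names are hypothetical; values come from the quoted setup description.
EXPERIMENT_CONFIG = {
    "global_learning_rate": 1.0,       # eta: server-side step size
    "local_learning_rate": 0.01,       # alpha: per-agent local step size
    "momentum": 0.9,                   # momentum for local updates
    "contribution_update_steps": 100,  # number of contribution updating steps
    "sample_number_interval": 10,      # sample number interval
}

def describe(config: dict) -> str:
    """Render the config as a one-line summary string."""
    return ", ".join(f"{k}={v}" for k, v in config.items())
```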