Federated Learning with Sparsification-Amplified Privacy and Adaptive Optimization

Authors: Rui Hu, Yanmin Gong, Yuanxiong Guo

Venue: IJCAI 2021

Reproducibility assessment (variable, result, and the LLM response quoting or summarizing evidence from the paper):
Research Type: Experimental. "Extensive experiments on benchmark datasets validate that our approach outperforms previous differentially-private FL approaches in both privacy guarantee and communication efficiency."
Researcher Affiliation: Academia. "Rui Hu, Yanmin Gong and Yuanxiong Guo, The University of Texas at San Antonio, {rui.hu, yanmin.gong, yuanxiong.guo}@utsa.edu"
Pseudocode: Yes. The paper presents its method as "Algorithm 1: The Fed-SPA Algorithm".
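
The authoritative pseudocode is Algorithm 1 in the paper itself. As a rough companion, the following minimal Python sketch illustrates the two ingredients named in the title: random sparsification and Gaussian-perturbed (differentially private) client updates. The function name, parameters, and order of operations are illustrative assumptions, not a transcription of the authors' Algorithm 1:

    import numpy as np

    def sparsified_private_update(update, k, clip_norm, noise_multiplier, rng):
        """Hedged sketch for a 1-D update vector: clip, rand-k sparsify,
        then perturb the kept coordinates. Fed-SPA's exact steps are
        defined in Algorithm 1 of the paper, not here."""
        # 1. Clip to bound the L2 sensitivity of the client's update.
        scale = min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
        clipped = update * scale
        # 2. Rand-k sparsification: keep k uniformly random coordinates.
        kept = rng.choice(update.size, size=k, replace=False)
        sparse = np.zeros_like(clipped)
        sparse[kept] = clipped[kept]
        # 3. Gaussian noise on the kept coordinates only, so the message
        #    sent to the server stays k-sparse (communication-efficient).
        sparse[kept] += rng.normal(0.0, noise_multiplier * clip_norm, size=k)
        return sparse

    rng = np.random.default_rng(0)
    noisy = sparsified_private_update(np.ones(100), k=10, clip_norm=1.0,
                                      noise_multiplier=1.1, rng=rng)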
Open Source Code: No. The paper links to its full version on arXiv (https://arxiv.org/abs/2008.01558) but does not state that source code for the described methodology is publicly available.
Open Datasets: Yes. "We explore two widely-used benchmark datasets in FL: MNIST [LeCun et al., 1998] and CIFAR-10 [Krizhevsky et al., 2009]."
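
Both datasets are public. A minimal way to obtain them, assuming PyTorch/torchvision (the paper does not name a framework, so this choice is an assumption):

    from torchvision import datasets, transforms

    to_tensor = transforms.ToTensor()
    # Both datasets download automatically on first use.
    mnist_train = datasets.MNIST("data", train=True, download=True, transform=to_tensor)
    mnist_test = datasets.MNIST("data", train=False, download=True, transform=to_tensor)
    cifar_train = datasets.CIFAR10("data", train=True, download=True, transform=to_tensor)
    cifar_test = datasets.CIFAR10("data", train=False, download=True, transform=to_tensor)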
Dataset Splits: No. The paper mentions training and testing examples but does not explicitly describe a validation split (e.g., specific percentages or counts for a validation set).
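
Because no validation split is reported, a reproduction has to choose its own. One common, purely hypothetical choice is a 90/10 split of the training set, sketched here with torch.utils.data.random_split:

    import torch
    from torch.utils.data import random_split
    from torchvision import datasets, transforms

    full_train = datasets.MNIST("data", train=True, download=True,
                                transform=transforms.ToTensor())
    # Hypothetical 90/10 train/validation split; the paper specifies none.
    n_val = len(full_train) // 10
    train_set, val_set = random_split(
        full_train, [len(full_train) - n_val, n_val],
        generator=torch.Generator().manual_seed(0))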
Hardware Specification: No. The paper does not specify the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies: No. The paper does not list software dependencies with version numbers.
Experiment Setup: Yes. "We set the number of local iterations τ = 300 for MNIST and τ = 50 for CIFAR-10. The details of other hyperparameter settings are given in the full version."
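
Only the local iteration counts are stated concretely in the excerpt above. A reproduction might pin them down in a small config such as the following sketch; the structure and field names are assumptions, and the remaining hyperparameters are deferred to the arXiv full version:

    from dataclasses import dataclass

    @dataclass
    class FedSPAConfig:
        dataset: str
        local_iterations: int  # tau, as quoted from the paper
        # Learning rate, clip norm, noise scale, sparsity k, and the
        # number of clients/rounds are unspecified here; see the arXiv
        # full version for the authors' settings.

    MNIST_CONFIG = FedSPAConfig(dataset="mnist", local_iterations=300)
    CIFAR10_CONFIG = FedSPAConfig(dataset="cifar10", local_iterations=50)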