Dynamic Personalized Federated Learning with Adaptive Differential Privacy

Authors: Xiyuan Yang, Wenke Huang, Mang Ye

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on CIFAR-10, FEMNIST and SVHN datasets demonstrate the effectiveness of our approach in achieving better performance and robustness against clipping, under personalized federated learning with differential privacy.
Researcher Affiliation | Academia | National Engineering Research Center for Multimedia Software, Institute of Artificial Intelligence, Hubei Key Laboratory of Multimedia and Network Communication Engineering, School of Computer Science, Hubei Luojia Laboratory, Wuhan University, Wuhan, China.
Pseudocode | Yes | Algorithm 1: The Proposed Method: FedDPA
Open Source Code | Yes | https://github.com/xiyuanyang45/DynamicPFL
Open Datasets | Yes | Our method is evaluated on three classification datasets, FEMNIST [6], CIFAR-10 [21] and SVHN [34], embodying real-world non-IID and privacy-constrained scenarios.
Dataset Splits | No | No specific train/validation/test splits (e.g., percentages or absolute counts) are explicitly provided. The paper mentions partitioning CIFAR-10 into 10 subsets via a Dirichlet distribution to create non-IID data across clients (see the partitioning sketch below), and that accuracy is measured on each client's own dataset, but it does not detail how each client's data is divided into training, validation, and test sets.
Hardware Specification | Yes | All experiments were implemented in Python with PyTorch on an NVIDIA 3090 GPU.
Software Dependencies | No | The paper mentions 'Python with PyTorch' and 'Opacus' but does not specify version numbers for these software components; for example, it does not state 'PyTorch 1.9' or 'Opacus X.Y.Z'. A sketch of typical Opacus usage appears below.
Experiment Setup | Yes | For all datasets (FEMNIST, CIFAR-10 and SVHN), we set the learning rate to 1e-3 and optimize hyperparameters τ, λ1, and λ2 through grid search in {0.05, 0.1, 0.3, 0.5}. We use global epochs of 30 and 40, local epochs of 3 and 4, and batch sizes of 16 and 64 for FEMNIST and CIFAR-10, respectively. A configuration sketch follows below.
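The Dataset Splits row notes that CIFAR-10 is partitioned into 10 client subsets via a Dirichlet distribution. As a minimal sketch of what such non-IID partitioning typically looks like (the paper's exact partitioning code is not quoted here; the concentration parameter `alpha`, the seed, and the helper name are assumptions):

```python
import numpy as np

def dirichlet_partition(labels, num_clients=10, alpha=0.5, seed=0):
    """Split sample indices into non-IID client subsets.

    For each class, draw per-client proportions from Dirichlet(alpha) and
    assign that class's samples accordingly; smaller alpha means more skew.
    NOTE: alpha=0.5 is an assumed value, not one stated in the paper.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        cls_idx = rng.permutation(np.where(labels == cls)[0])
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        # Cumulative cut points that split this class across clients.
        cuts = (np.cumsum(proportions)[:-1] * len(cls_idx)).astype(int)
        for client, shard in enumerate(np.split(cls_idx, cuts)):
            client_indices[client].extend(shard.tolist())
    return [np.array(idx) for idx in client_indices]

# Example: partition CIFAR-10 training labels into 10 client subsets.
# labels = [y for _, y in torchvision.datasets.CIFAR10(root=".", train=True)]
# subsets = dirichlet_partition(labels, num_clients=10)
```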
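The Software Dependencies row flags that Opacus is cited without a version. For context, here is a minimal sketch of how Opacus (assumed 1.x, where `PrivacyEngine.make_private` is available) typically attaches DP-SGD gradient clipping and noise to a PyTorch training loop; this is illustrative, with toy data and assumed privacy parameters, and is not the paper's FedDPA training code:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy model and random data stand in for a client's local setup.
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
data = TensorDataset(torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,)))
loader = DataLoader(data, batch_size=16)

# PrivacyEngine clips per-sample gradients and adds Gaussian noise.
# noise_multiplier and max_grad_norm are assumed values, not the paper's.
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
)

criterion = nn.CrossEntropyLoss()
for x, y in loader:  # one DP-SGD local epoch
    optimizer.zero_grad()
    criterion(model(x), y).backward()
    optimizer.step()
```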
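The Experiment Setup row lists the learning rate, the grid-searched hyperparameters, and per-dataset epochs and batch sizes. A sketch of that configuration and the grid search, assuming a hypothetical `train_and_evaluate` callable (SVHN's epoch and batch-size values are not quoted above, so they are omitted):

```python
from itertools import product

# Values quoted from the paper's setup; SVHN's epochs/batch size are not
# given in the excerpt above, so only FEMNIST and CIFAR-10 are listed.
CONFIG = {
    "femnist": {"lr": 1e-3, "global_epochs": 30, "local_epochs": 3, "batch_size": 16},
    "cifar10": {"lr": 1e-3, "global_epochs": 40, "local_epochs": 4, "batch_size": 64},
}
GRID = [0.05, 0.1, 0.3, 0.5]  # shared search space for tau, lambda1, lambda2

def grid_search(dataset, train_and_evaluate):
    """Exhaustive search over (tau, lambda1, lambda2) combinations.

    `train_and_evaluate` is a hypothetical callable returning accuracy;
    the paper does not specify its interface.
    """
    best_params, best_acc = None, -float("inf")
    for tau, lam1, lam2 in product(GRID, repeat=3):
        acc = train_and_evaluate(**CONFIG[dataset], tau=tau, lam1=lam1, lam2=lam2)
        if acc > best_acc:
            best_params, best_acc = (tau, lam1, lam2), acc
    return best_params, best_acc
```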