GuardHFL: Privacy Guardian for Heterogeneous Federated Learning

Authors: Hanxiao Chen, Meng Hao, Hongwei Li, Kangjie Chen, Guowen Xu, Tianwei Zhang, Xilin Zhang

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive evaluations demonstrate that GuardHFL significantly outperforms the alternative instantiations based on existing state-of-the-art techniques in both runtime and communication cost. ... 4. Evaluation. Datasets and models. We evaluate GuardHFL on three image datasets (SVHN, CIFAR10 and Tiny ImageNet). ... Table 1. Extra runtime (sec) of GuardHFL over vanilla HFL systems in the plaintext environment.
Researcher Affiliation | Academia | University of Electronic Science and Technology of China, China; Nanyang Technological University, Singapore (part of this work was done at NTU as a visiting student).
Pseudocode | Yes | Algorithm 1 The GuardHFL framework... Algorithm 2 Secure MSB Protocol Πmsb (a minimal secret-sharing sketch appears after this table).
Open Source Code | No | The paper does not contain any explicit statement about releasing code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | Yes | We evaluate GuardHFL on three image datasets (SVHN, CIFAR10 and Tiny ImageNet). ... CIFAR10 consists of 60,000 32×32 RGB images in 10 classes. There are 50,000 training images and 10,000 test images.
Dataset Splits | No | The paper reports 50,000 training images and 10,000 test images for CIFAR10, but it does not specify a separate validation split (a data-loading sketch appears after this table).
Hardware Specification | Yes | Each of the entities, i.e., PQ, PA, and the server, is run on the Ubuntu 18.04 system with an Intel(R) Xeon(R) CPU E5-2620 v4 (2.10 GHz), 16 GB of RAM, and an NVIDIA 1080Ti GPU.
Software Dependencies | No | The paper mentions an 'Ubuntu 18.04 system' and notes that the scheme 'is friendly with GPUs and can be processed by highly-optimized CUDA kernels'. However, it does not provide version numbers for key software dependencies such as Python, PyTorch, TensorFlow, or other relevant libraries.
Experiment Setup | Yes | Following existing works (Rathee et al., 2020; Tan et al., 2021), we set the secret-sharing protocols over the 64-bit ring Z_{2^64} and encode inputs using a fixed-point representation with 20-bit precision. The security parameter κ is 128 in the instantiation of PRFs. ... For SVHN, CIFAR10, and Tiny ImageNet, the loss function is cross-entropy with learning rates of 0.5, 0.1, and 0.01, respectively; the batch sizes are 256, 64, and 64, respectively. When the clients retrain the local model at the local retraining step, they use the Adam optimizer for 50 epochs with a learning rate of 2e-3 decayed by a factor of 0.1 at epoch 25, where the batch size is 256 on SVHN and 64 on both CIFAR10 and Tiny ImageNet. (Sketches of the arithmetic setting and the retraining schedule follow.)
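The setup row above pins down the arithmetic setting: secret sharing over the 64-bit ring Z_{2^64} with 20-bit fixed-point inputs. Below is a minimal Python sketch of that setting, assuming a plain two-party additive-sharing scheme; it is not the paper's Πmsb protocol or GuardHFL's code, and all function names are illustrative.

```python
import secrets

RING = 1 << 64   # the 64-bit ring Z_{2^64} from the quoted setup
FRAC = 20        # fixed-point fractional bits, per the quoted setup

def encode(x: float) -> int:
    """Fixed-point encode a real number into Z_{2^64}; negatives wrap mod 2^64."""
    return int(round(x * (1 << FRAC))) % RING

def decode(v: int) -> float:
    """Reinterpret the ring element as two's complement and undo the scaling."""
    if v >= RING // 2:
        v -= RING
    return v / (1 << FRAC)

def share(v: int) -> tuple[int, int]:
    """Split a ring element into two uniformly random additive shares."""
    r = secrets.randbelow(RING)
    return r, (v - r) % RING

def reconstruct(s0: int, s1: int) -> int:
    return (s0 + s1) % RING

# Addition of shared values is local: each party just adds its own shares.
a0, a1 = share(encode(1.25))
b0, b1 = share(encode(-0.5))
result = decode(reconstruct((a0 + b0) % RING, (a1 + b1) % RING))
assert abs(result - 0.75) < 2 ** -FRAC
```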
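The dataset rows quote CIFAR10's standard 50,000/10,000 split and note that no validation set is specified. The sketch below reproduces those counts with torchvision's stock loader; the 45,000/5,000 validation carve-out is our assumption, not something the paper states.

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

transform = transforms.ToTensor()
train_full = datasets.CIFAR10("data", train=True, download=True, transform=transform)
test_set = datasets.CIFAR10("data", train=False, download=True, transform=transform)
assert len(train_full) == 50_000 and len(test_set) == 10_000  # the quoted split

# Hypothetical validation carve-out -- the paper does not specify one.
train_set, val_set = random_split(
    train_full, [45_000, 5_000], generator=torch.Generator().manual_seed(0)
)
```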
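Finally, the setup row quotes the local-retraining recipe on CIFAR10: Adam at 2e-3 for 50 epochs, decayed by a factor of 0.1 at epoch 25, with batch size 64. A sketch of that schedule in PyTorch follows; the model is a deliberate placeholder, since the paper's architectures are not reproduced here.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

train_set = datasets.CIFAR10("data", train=True, download=True,
                             transform=transforms.ToTensor())
loader = DataLoader(train_set, batch_size=64, shuffle=True)   # batch size 64 on CIFAR10

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # placeholder model
criterion = nn.CrossEntropyLoss()                  # cross-entropy loss, per the setup
optimizer = torch.optim.Adam(model.parameters(), lr=2e-3)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[25], gamma=0.1)

for epoch in range(50):                            # 50 retraining epochs
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()                               # lr drops 2e-3 -> 2e-4 after epoch 25
```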