AegisFL: Efficient and Flexible Privacy-Preserving Byzantine-Robust Cross-silo Federated Learning

Authors: Dong Chen, Hongyuan Qu, Guangwu Xu

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we conduct extensive experiments on different datasets and adversary settings, which also confirm the effectiveness and efficiency of our scheme."
Researcher Affiliation | Academia | School of Cyber Science and Technology, Shandong University, Qingdao, China; Key Laboratory of Cryptologic Technology and Information Security of Ministry of Education, Shandong University, Qingdao, China; Shandong Institute of Blockchain, Jinan, China; Quan Cheng Laboratory, Jinan, China.
Pseudocode | Yes | "Algorithm 1 Local training"
Open Source Code | No | The paper does not contain an explicit statement about the release of source code or a link to a code repository.
Open Datasets | Yes | "We adopt two widely used datasets HAR and MNIST."
Dataset Splits | Yes | "In our experiment, each user is considered as a distinct client, with 75% of their data utilized for training purposes and the remaining 25% serving as test cases. ... The MNIST dataset ... consists of 60,000 training images and 10,000 test images..." (a per-client split sketch follows the table)
Hardware Specification | Yes | "Our experiments run on a Windows PC with Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz and 16 GB memory."
Software Dependencies | No | The paper mentions software such as PyTorch, the SEAL library, and the SEAL-Python bindings, but does not provide version numbers for these dependencies.
Experiment Setup | Yes | "We set N = 8192, Δ = 2^40, and Q_L as a 200-bit number, achieving 128-bit security. ... For HAR, we set the number of global iterations T = 100, the number of local iterations ψ = 50, and the batch size |B| = 100. ... For MNIST, we set the number of global iterations T = 100, the number of local iterations ψ = 50, and the batch size |B| = 100." (a CKKS parameter sketch consistent with these values follows the table)
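
Since the paper does not release code, the following is a minimal sketch of the per-client 75/25 split described in the Dataset Splits row. The helper name split_client_data, the fixed seed, and the dummy HAR-shaped data (561 features, 6 activity classes) are illustrative assumptions, not from the paper.

```python
import numpy as np

def split_client_data(features, labels, train_frac=0.75, seed=0):
    """Split one client's local data 75/25 into train and test sets.

    Hypothetical helper (not from the paper): mirrors the stated
    protocol in which each HAR user is a distinct client whose data
    is split 75% for training and 25% for testing.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(labels))
    cut = int(train_frac * len(labels))
    train_idx, test_idx = idx[:cut], idx[cut:]
    return (features[train_idx], labels[train_idx],
            features[test_idx], labels[test_idx])

# Example with dummy data shaped like HAR samples (561 attributes, 6 classes):
X = np.random.randn(300, 561)
y = np.random.randint(0, 6, size=300)
X_tr, y_tr, X_te, y_te = split_client_data(X, y)
assert len(y_tr) == 225 and len(y_te) == 75
```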
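
The homomorphic-encryption parameters in the Experiment Setup row can be instantiated with the SEAL-Python bindings the paper mentions. Below is a minimal sketch assuming the Huelse SEAL-Python wrapper; the [60, 40, 40, 60] coefficient-modulus decomposition is an assumption chosen so the prime bit sizes sum to the quoted 200-bit Q_L, with Δ = 2^40 as the encoding scale.

```python
from seal import (EncryptionParameters, scheme_type, CoeffModulus,
                  SEALContext, CKKSEncoder)

# CKKS parameters matching the quoted setup: N = 8192 and a 200-bit
# coefficient modulus Q_L. The [60, 40, 40, 60] decomposition
# (60 + 40 + 40 + 60 = 200 bits) is an assumption; the paper does not
# state how Q_L is split into primes.
parms = EncryptionParameters(scheme_type.ckks)
parms.set_poly_modulus_degree(8192)
parms.set_coeff_modulus(CoeffModulus.Create(8192, [60, 40, 40, 60]))

context = SEALContext(parms)
encoder = CKKSEncoder(context)

scale = 2.0 ** 40                      # encoding scale Δ = 2^40
print("slots:", encoder.slot_count())  # 4096 complex slots for N = 8192
```

For context, the homomorphic encryption security standard used by Microsoft SEAL allows a coefficient modulus of up to 218 bits at N = 8192 for 128-bit security, so the quoted 200-bit Q_L is consistent with the claimed security level.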