HYDRA-FL: Hybrid Knowledge Distillation for Robust and Accurate Federated Learning

Authors: Momin Ahmad Khan, Yasra Chandio, Fatima Anwar

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our evaluation shows that HYDRA-FL significantly boosts accuracy over FedNTD and MOON in attack settings while maintaining performance in benign settings.
Researcher Affiliation | Academia | Momin Ahmad Khan, University of Massachusetts, Amherst (makhan@umass.edu); Yasra Chandio, University of Massachusetts, Amherst (ychandio@umass.edu); Fatima Muhammad Anwar, University of Massachusetts, Amherst (fanwar@umass.edu)
Pseudocode | No | The paper describes the algorithms and their modifications but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at https://github.com/momin-ahmad-khan/HYDRA-FL.
Open Datasets | Yes | Datasets and Models: We conduct our experiments over three popular datasets: MNIST, CIFAR10, and CIFAR100. ... [20] [18]
Dataset Splits | Yes | CIFAR10 [18]: CIFAR10 is a 10-class classification task with 60,000 total RGB images, each of size 32x32. Each class has 5,000 training images and 1,000 testing images.
Hardware Specification | Yes | We used PyTorch [37] for our implementation on an 8GB NVIDIA RTX 3060 Ti GPU.
Software Dependencies | No | The paper mentions 'PyTorch [37]' but does not specify a version number for it or any other software dependency.
Experiment Setup | Yes | For FedNTD, we use 100 clients with a sampling ratio of 0.1, i.e., 10 clients are selected every round. We use momentum SGD with an initial learning rate of 0.1, weight decay of 1e-5, batch size of 50, and momentum of 0.9. Each run consists of 200 rounds with 5 local epochs.
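
To make the quoted experiment setup concrete, the following is a minimal PyTorch sketch of the federated training loop it describes. It uses plain FedAvg weight averaging and deliberately omits HYDRA-FL's shallow-distillation loss and any learning-rate schedule; `local_update`, `run_rounds`, and `client_loaders` are hypothetical names introduced here for illustration, not the authors' API (their actual implementation is in the linked repository).

```python
import copy
import random
import torch

# Hyperparameters quoted from the paper's FedNTD setup.
NUM_CLIENTS = 100
SAMPLING_RATIO = 0.1          # 10 of 100 clients selected every round
ROUNDS = 200
LOCAL_EPOCHS = 5
LR, MOMENTUM, WEIGHT_DECAY = 0.1, 0.9, 1e-5
BATCH_SIZE = 50               # applies when building per-client DataLoaders

def local_update(model, loader, device="cpu"):
    """One client's local training: momentum SGD for LOCAL_EPOCHS epochs."""
    opt = torch.optim.SGD(model.parameters(), lr=LR,
                          momentum=MOMENTUM, weight_decay=WEIGHT_DECAY)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(LOCAL_EPOCHS):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def run_rounds(global_model, client_loaders):
    """Server loop: sample clients, train locally, average weights (FedAvg)."""
    for _ in range(ROUNDS):
        selected = random.sample(range(NUM_CLIENTS),
                                 int(NUM_CLIENTS * SAMPLING_RATIO))
        states = [local_update(copy.deepcopy(global_model),
                               client_loaders[c]) for c in selected]
        # Uniform average of the selected clients' parameters and buffers.
        avg = {k: torch.stack([s[k].float() for s in states]).mean(0)
               for k in states[0]}
        global_model.load_state_dict(avg)
    return global_model
```

Here `client_loaders` would be a list of 100 `DataLoader`s with `batch_size=50`, one per client, built over a non-IID partition of MNIST, CIFAR10, or CIFAR100; the per-client distillation terms that distinguish FedNTD, MOON, and HYDRA-FL would all live inside the local loss, so this skeleton is shared across the methods being compared.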