TurboSVM-FL: Boosting Federated Learning through SVM Aggregation for Lazy Clients

Authors: Mengdi Wang, Anna Bodonhelyi, Efe Bozkir, Enkelejda Kasneci

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate TurboSVM-FL on multiple datasets including FEMNIST, CelebA, and Shakespeare using user-independent validation with non-iid data distribution. Our results show that TurboSVM-FL can significantly outperform existing popular algorithms on convergence rate and reduce communication rounds while delivering better test metrics including accuracy, F1 score, and MCC.
Researcher Affiliation | Academia | Chair for Human-Centered Technologies for Learning, Technical University of Munich, Munich, Bavaria, Germany {mengdi.wang, anna.bodonhelyi, efe.bozkir, enkelejda.kasneci}@tum.de
Pseudocode | Yes | Pseudocode for TurboSVM-FL is given in Algorithm 3, and a graphical illustration is depicted in Figure 1. Algorithm 1: TurboSVM-FL part 1: selective aggregation. Algorithm 2: TurboSVM-FL part 2: max-margin spread-out regularization. Algorithm 3: The TurboSVM-FL Framework.
Open Source Code | Yes | For more details such as reproducibility and model structures, we redirect readers to the Appendix and our GitHub repository: https://github.com/Kasneci-Lab/TurboSVM-FL.
Open Datasets | Yes | We benchmarked TurboSVM-FL on three different datasets covering data types of both image and natural language, namely FEMNIST (LeCun 1998; Cohen et al. 2017), CelebA (Liu et al. 2015), and Shakespeare (Shakespeare 2014; McMahan et al. 2017) (Table 1). All three datasets can be acquired on LEAF (Caldas et al. 2018).
Dataset Splits | Yes | More specifically, we conducted a 90%-10% train-test split in a user-independent way, which means we had a held-out set of clients for validation rather than a fraction of validation data on each client (Wang et al. 2021a).
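The user-independent split described above holds out whole clients for validation instead of splitting each client's local data. A minimal sketch of such a split (function and variable names are illustrative, not from the paper's code):

```python
import random

def user_independent_split(client_ids, test_fraction=0.1, seed=42):
    """Hold out a fraction of *clients* (not samples) for validation."""
    ids = list(client_ids)
    random.Random(seed).shuffle(ids)
    n_test = max(1, round(len(ids) * test_fraction))
    # Held-out clients never contribute training data.
    return ids[n_test:], ids[:n_test]  # (train_clients, test_clients)

train_clients, test_clients = user_independent_split(range(100))
print(len(train_clients), len(test_clients))  # 90 10
```

This matters for non-iid federated benchmarks: validating on unseen clients measures generalization across users, whereas a per-client data split would leak every client's distribution into training.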
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment.
Experiment Setup | Yes | Algorithm 3 (The TurboSVM-FL Framework) lists specific inputs/hyperparameters: clients n ∈ [N]; client local datasets D1, ..., DN with |DG| = |D1| + ... + |DN|; number of global epochs T; number of client epochs E; number of classes K; server learning rate ηG; client learning rate η; mini-batch size B.
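To show where the hyperparameters listed above plug in, here is a generic FedAvg-style training loop using T global epochs, E client epochs, learning rates η and ηG, and mini-batch size B. This is a hedged sketch of a plain federated loop, not TurboSVM-FL itself (which replaces simple averaging with SVM-based selective aggregation); all names and the toy 1-D model are illustrative.

```python
import random

def client_update(w, data, eta, E, B):
    """E local epochs of mini-batch SGD on a 1-D least-squares model y ~ w*x."""
    for _ in range(E):
        random.shuffle(data)
        for i in range(0, len(data), B):
            batch = data[i:i + B]
            grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
            w -= eta * grad  # client learning rate eta
    return w

def federated_train(client_datasets, T=40, E=2, B=4, eta=0.05, eta_G=1.0):
    w_global = 0.0
    for _ in range(T):  # T global communication rounds
        local_ws = [client_update(w_global, list(d), eta, E, B)
                    for d in client_datasets]
        # Server step: move toward the mean client model at rate eta_G.
        # (TurboSVM-FL would instead aggregate selectively via SVMs here.)
        mean_w = sum(local_ws) / len(local_ws)
        w_global += eta_G * (mean_w - w_global)
    return w_global

# Synthetic clients whose data follows y = 3x, with shifted x ranges to
# mimic non-iid client distributions.
clients = [[(x / 10, 3 * x / 10) for x in range(c, c + 8)] for c in range(5)]
w = federated_train(clients)
```

With these settings the global model converges close to the true slope of 3 despite the clients' differing input ranges; the number of rounds T needed for this is exactly the communication cost that the paper's aggregation scheme aims to reduce.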