A Primal-Dual Algorithm for Hybrid Federated Learning

Authors: Tom Overman, Garrett Blum, Diego Klabjan

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Furthermore, we provide experimental results that demonstrate the performance improvements of the algorithm over a commonly used method in federated learning, FedAvg, and an existing hybrid FL algorithm, HyFEM.
Researcher Affiliation | Academia | ¹Department of Engineering Sciences and Applied Mathematics, Northwestern University; ²Department of Mechanical Engineering, Northwestern University; ³Department of Industrial Engineering and Management Sciences, Northwestern University. tomoverman2025@u.northwestern.edu, garrettblum2024@u.northwestern.edu, d-klabjan@northwestern.edu
Pseudocode | Yes | Algorithm 1: HyFDCA
    Initialize α^0 = 0, w^0 = 0, and ŵ^0 = 0. Set ip_{k,i} = 0 for every client k and i ∈ I_k.
    for t = 1, 2, ..., T do
        Given K_t, the subset of clients available in the given iteration
        Find K̄_t = {k : k ∈ K_t and k ∉ K_{t−1}}
        Send enc(α_0^t) to clients k ∈ K̄_t
        PrimalAggregation(K̄_t)
        SecureInnerProduct(K̄_t)
        for all clients k ∈ K_t do
            Δα_k^t = LocalDualMethod(α_k^{t−1}, w_k^{t−1}, x_i^T w_0^{t−1})
            Send all enc((γ_t/|B_n|) Δα_k^t) to server
        end for
        for n = 1, 2, ..., N do
            enc(α_{0,n}^t) = enc(α_{0,n}^{t−1}) + Σ_{b ∈ B_n} enc((γ_t/|B_n|) Δα_{b,n}^t)
        end for
        Send enc(α_0^t) to clients k ∈ K_t and clients decrypt
        PrimalAggregation(K_t)
        SecureInnerProduct(K_t)
    end for
Open Source Code | No | The paper does not provide a direct link or explicit statement about open-sourcing the code for the described methodology.
Open Datasets | Yes | Three datasets are selected. MNIST is a database of handwritten digits (Deng 2012). News20 binary is a class-balanced two-class variant of the UCI 20 Newsgroups dataset, a text classification dataset (Chang and Lin 2011). Finally, Covtype binary is a binarized dataset for predicting forest cover type from cartographic variables (Chang and Lin 2011).
Dataset Splits | No | The paper mentions "validation accuracy" and that "the value of λ that resulted in the highest validation accuracy is employed" for tuning, implying a validation set was used. However, it does not provide specific percentages, counts, or a detailed methodology for the dataset splits (training, validation, test).
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers for its implementation (e.g., Python 3.8, PyTorch 1.9).
Experiment Setup | Yes | The regularization parameter, λ, is found by tuning via a centralized model where the value of λ that resulted in the highest validation accuracy is employed. The resulting choices of λ are λ_MNIST = 0.001, λ_News20 = 1×10⁻⁵, and λ_covtype = 5×10⁻⁵. ... For HyFDCA, we only need to tune the number of inner iterations.
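The block-wise dual aggregation in the Algorithm 1 pseudocode quoted above can be sketched in plain Python. This is a plaintext illustration only: in the paper the sums are carried out under additively homomorphic encryption, where enc(x) + enc(y) = enc(x + y), which is omitted here, and the names `server_dual_aggregation`, `deltas`, `block_idx`, and `block_clients` are ours, not the authors'.

```python
import numpy as np

def server_dual_aggregation(alpha0, deltas, block_idx, block_clients, gamma_t):
    """One server-side dual update in the style of Algorithm 1 (plaintext sketch).

    alpha0        : global dual vector α_0^{t-1}, shape (m,)
    deltas        : dict client k -> Δα_k^t (full length m, zero outside
                    the coordinates client k actually holds)
    block_idx     : list of index arrays, one per block n = 1..N
    block_clients : list of client-id lists B_n contributing to block n
    gamma_t       : step size γ_t
    """
    alpha = alpha0.copy()
    for idx, B_n in zip(block_idx, block_clients):
        # α_{0,n}^t = α_{0,n}^{t-1} + Σ_{b ∈ B_n} (γ_t / |B_n|) Δα_{b,n}^t
        for b in B_n:
            alpha[idx] += (gamma_t / len(B_n)) * deltas[b][idx]
    return alpha
```

In the actual protocol each client sends enc((γ_t/|B_n|) Δα) and the server adds ciphertexts, so it never sees an individual client's dual update in the clear.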
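The news20.binary and covtype.binary datasets mentioned in the Open Datasets row are binarized variants of multiclass originals. A minimal sketch of that preprocessing step; which class is taken as positive is a choice of the dataset distributor, not stated in this report:

```python
import numpy as np

def binarize_labels(y, positive_class):
    # Map multiclass labels to {+1, -1}: the chosen class becomes +1,
    # every other class becomes -1 (LIBSVM-style binary variant).
    y = np.asarray(y)
    return np.where(y == positive_class, 1, -1)
```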
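The λ-tuning protocol quoted under Experiment Setup (fit a centralized model per candidate λ and keep the λ with the highest validation accuracy) can be sketched as follows. The ridge-regularized least-squares classifier is a stand-in, since the paper's centralized model is not restated in this report; the function name and the candidate grid (which includes the reported values 0.001 and 5×10⁻⁵) are illustrative assumptions.

```python
import numpy as np

def tune_lambda(X_tr, y_tr, X_val, y_val, grid=(1e-1, 1e-2, 1e-3, 5e-5, 1e-5)):
    """Pick λ by validation accuracy of a centralized ridge classifier.

    Labels are assumed to be in {+1, -1}. For each λ, solve the
    regularized least-squares problem in closed form and score sign
    predictions on the validation set; return the best (λ, accuracy).
    """
    n, d = X_tr.shape
    best_lam, best_acc = None, -1.0
    for lam in grid:
        # Closed-form ridge solution: (X^T X + λ n I) w = X^T y
        w = np.linalg.solve(X_tr.T @ X_tr + lam * n * np.eye(d), X_tr.T @ y_tr)
        acc = float(np.mean(np.sign(X_val @ w) == y_val))
        if acc > best_acc:
            best_lam, best_acc = lam, acc
    return best_lam, best_acc
```

The same loop applies to any centralized model; only the inner fit-and-score step changes.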