Calibrated One Round Federated Learning with Bayesian Inference in the Predictive Space

Authors: Mohsin Hasan, Guojun Zhang, Kaiyang Guo, Xi Chen, Pascal Poupart

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our method is evaluated on a variety of regression and classification datasets to demonstrate its superiority in calibration over other baselines, even as data heterogeneity increases.
Researcher Affiliation | Collaboration | Mohsin Hasan (1,2), Guojun Zhang (3), Kaiyang Guo (3), Xi Chen (3), Pascal Poupart (1,2); (1) University of Waterloo, (2) Vector Institute, (3) Huawei Noah's Ark Lab
Pseudocode | Yes | Algorithm 1: Distilled β-Pred Bayes
Open Source Code | Yes | Code available at https://github.com/hasanmohsin/beta Pred Bayes FL.
Open Datasets | Yes | The method was evaluated for classification on the following datasets: MNIST (LeCun et al. 1998), Fashion MNIST (Xiao, Rasul, and Vollgraf 2017), EMNIST (Cohen et al. 2017) (using a split with 62 classes), CIFAR10 and CIFAR100 (Krizhevsky, Hinton et al. 2009). ... The regression datasets used for evaluation include: the wine quality (Cortez et al. 2009), air quality (De Vito et al. 2008), forest fire (Cortez and Morais 2007), real estate (Yeh and Hsu 2018), and bike rental (Fanaee-T and Gama 2013) datasets from the UCI repository (Dua and Graff 2017).
Dataset Splits | Yes | All tests used 5 clients, with a distillation set composed of 20% of the original training set.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., 'Python 3.8, PyTorch 1.9').
Experiment Setup | No | Further experimental details, such as the models and hyperparameters, are included in the appendix.
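The Research Type row notes that the evaluation centers on calibration. The table does not list the paper's exact metrics; the snippet below shows a standard expected calibration error (ECE) computation as one common way such a comparison is made, and is not necessarily the metric (or the only metric) used in the paper.

```python
import numpy as np


def expected_calibration_error(probs: np.ndarray, labels: np.ndarray, n_bins: int = 15) -> float:
    """Standard ECE: bin predictions by confidence, compare per-bin accuracy to mean confidence.

    probs: (num_points, num_classes) predicted class probabilities.
    labels: (num_points,) integer ground-truth labels.
    """
    confidences = probs.max(axis=1)
    predictions = probs.argmax(axis=1)
    accuracies = (predictions == labels).astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            # Weight each bin's |accuracy - confidence| gap by the fraction of points it contains.
            ece += in_bin.mean() * abs(accuracies[in_bin].mean() - confidences[in_bin].mean())
    return ece
```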
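The Pseudocode row cites Algorithm 1 (Distilled β-Pred Bayes) without reproducing it. The sketch below is only a rough illustration of a one-round, distillation-based aggregation in that spirit: client predictive distributions on a shared distillation set are combined, and the combined predictive is distilled into a single global model. The combination rule (a β-weighted interpolation between a mixture and a normalized product of the client predictives), the function names, and all hyperparameters are assumptions for illustration, not the authors' exact Algorithm 1; consult the paper and repository for the real procedure.

```python
# Hypothetical sketch of a distilled predictive-Bayes style aggregation.
# The interpolation rule and all names here are illustrative assumptions.
import torch
import torch.nn.functional as F


def aggregate_predictions(client_probs: torch.Tensor, beta: float) -> torch.Tensor:
    """Combine per-client predictive distributions on the distillation inputs.

    client_probs: (num_clients, num_points, num_classes), each row a distribution.
    beta: assumed weight between a mixture (arithmetic mean) and a
          product (normalized geometric mean) of the client predictives.
    """
    mixture = client_probs.mean(dim=0)                          # arithmetic mean over clients
    log_geo = client_probs.clamp_min(1e-12).log().mean(dim=0)   # geometric mean in log-space
    product = F.softmax(log_geo, dim=-1)                        # renormalized product-of-experts
    agg = beta * mixture + (1.0 - beta) * product
    return agg / agg.sum(dim=-1, keepdim=True)


def distill(student, distill_inputs, teacher_probs, epochs=10, batch_size=128, lr=1e-3):
    """Train one global model to match the aggregated predictive distribution."""
    dataset = torch.utils.data.TensorDataset(distill_inputs, teacher_probs)
    loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size, shuffle=True)
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    student.train()
    for _ in range(epochs):
        for x, target in loader:
            log_q = F.log_softmax(student(x), dim=-1)
            loss = F.kl_div(log_q, target, reduction="batchmean")  # KL(teacher || student)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student
```

In this sketch, β = 1 recovers plain ensemble averaging and β = 0 a product-of-experts combination; the distillation step is what lets the server ship a single model after one round of communication.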
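The Dataset Splits row reports 5 clients and a distillation set of 20% of the original training data. A minimal sketch of such a partition follows; the IID random split and the function name are assumptions for illustration, and the paper additionally varies client heterogeneity, which this sketch does not model.

```python
# Illustrative partition: hold out 20% of the training indices for distillation,
# then split the remainder evenly across 5 clients (IID split assumed here).
import numpy as np


def partition(num_examples: int, num_clients: int = 5, distill_frac: float = 0.2, seed: int = 0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(num_examples)
    n_distill = int(distill_frac * num_examples)
    distill_idx = idx[:n_distill]
    client_idx = np.array_split(idx[n_distill:], num_clients)
    return distill_idx, client_idx


distill_idx, client_idx = partition(num_examples=60_000)  # e.g. an MNIST-sized training set
print(len(distill_idx), [len(c) for c in client_idx])     # 12000, then five shards of 9600
```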