Bayesian Federated Neural Matching That Completes Full Information

Authors: Peng Xiao, Samuel Cheng

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The effectiveness of our approach is demonstrated by experiments on both image classification and semantic segmentation tasks. The experiments below indicate that our algorithm can aggregate multiple neural networks into a more efficient global neural network.
Researcher Affiliation | Academia | 1 Department of Computer Science and Technology, Tongji University, Shanghai, China; 2 School of Electrical and Computer Engineering, University of Oklahoma, Oklahoma City, US
Pseudocode | Yes | Algorithm 1: The process of NAFI
Open Source Code | Yes | https://github.com/XiaoPeng24/NAFI
Open Datasets | Yes | Our algorithm is evaluated on three datasets: MNIST, CIFAR-10 and the Carvana Image Masking Challenge (CIMC). MNIST and CIFAR-10 are image classification datasets with ten classes each, covering handwritten digits and real-life objects respectively.
Dataset Splits | No | The paper describes partition strategies for local data using a Dirichlet distribution, but does not provide specific details on train/validation/test dataset splits (e.g., exact percentages or sample counts for a validation set), nor does it explicitly mention the use of a validation set for hyperparameter tuning or early stopping. (A minimal sketch of such a Dirichlet partition appears below the table.)
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU models, or memory amounts used for running its experiments. It only mentions using PyTorch for implementation.
Software Dependencies | No | The paper mentions using PyTorch, Adam, SGD, and RMSprop, but it cites research papers rather than providing specific version numbers for these software components or libraries, which are needed for reproducible software dependencies.
Experiment Setup | Yes | Training setup: We use PyTorch (Paszke et al. 2019) to implement these networks and train them with Adam (Kingma and Ba 2014), SGD (Bottou 2010) and RMSprop (Hinton, Srivastava, and Swersky 2012), using the hyperparameter settings summarized in Table 1. Table 1 (hyperparameter settings for training neural networks): FCNN (Adam, minibatch size 32, 10 epochs), ConvNet (SGD, minibatch size 32, 10 epochs), U-net (RMSprop, minibatch size 3, 3 epochs); learning rate 0.01. (A hedged PyTorch sketch of this setup appears below the table.)
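
The Dataset Splits row notes that local client data is partitioned with a Dirichlet distribution over classes, a common way to simulate non-IID federated data. The snippet below is a minimal sketch of such a partition, not the paper's exact procedure: the helper name dirichlet_partition, the concentration parameter alpha=0.5, and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha=0.5, seed=0):
    """Assign sample indices to clients, with per-class client proportions
    drawn from a Dirichlet(alpha) distribution (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    num_classes = int(labels.max()) + 1
    client_indices = [[] for _ in range(num_clients)]
    for c in range(num_classes):
        idx_c = np.where(labels == c)[0]
        rng.shuffle(idx_c)
        # Fraction of class c handed to each client.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx_c)).astype(int)
        for client_id, chunk in enumerate(np.split(idx_c, cuts)):
            client_indices[client_id].extend(chunk.tolist())
    return [np.array(ix) for ix in client_indices]

# Example: split 60,000 ten-class labels across 10 clients.
labels = np.random.default_rng(1).integers(0, 10, size=60_000)
parts = dirichlet_partition(labels, num_clients=10, alpha=0.5)
print([len(p) for p in parts])
```

Smaller values of alpha make the per-client class distributions more skewed; the concentration value actually used in the paper is not quoted in this report.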
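The Experiment Setup row reproduces Table 1; the sketch below shows how those settings could be wired up in PyTorch. It is not the authors' code (which lives in the linked repository): the model definitions are simple stand-ins, the optimizer-to-model pairing follows the column order of Table 1, and the single learning rate of 0.01 is assumed to apply to all three models.

```python
from torch import nn, optim

# Stand-in models: the paper's FCNN / ConvNet / U-net live in the authors'
# repository and are not reproduced here; these modules only carry parameters.
models = {
    "FCNN":    nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(),
                             nn.Linear(128, 10)),
    "ConvNet": nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                             nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10)),
    "U-net":   nn.Sequential(nn.Conv2d(3, 1, 3, padding=1)),  # placeholder
}

# Settings quoted from Table 1 (learning rate 0.01 assumed shared by all models).
settings = {
    "FCNN":    dict(opt=optim.Adam,    lr=0.01, batch_size=32, epochs=10),
    "ConvNet": dict(opt=optim.SGD,     lr=0.01, batch_size=32, epochs=10),
    "U-net":   dict(opt=optim.RMSprop, lr=0.01, batch_size=3,  epochs=3),
}

for name, cfg in settings.items():
    optimizer = cfg["opt"](models[name].parameters(), lr=cfg["lr"])
    print(f"{name}: {optimizer.__class__.__name__}, lr={cfg['lr']}, "
          f"minibatch={cfg['batch_size']}, epochs={cfg['epochs']}")
    # A training loop would iterate cfg["epochs"] times over DataLoader
    # batches of size cfg["batch_size"], calling optimizer.step() per batch.
```

Replacing the stand-in modules with the repository's FCNN, ConvNet, and U-net definitions would recover the quoted configuration.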