FedTrans: Client-Transparent Utility Estimation for Robust Federated Learning
Authors: Mingkun Yang, Ran Zhu, Qing Wang, Jie Yang
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our evaluation results demonstrate that leveraging FedTrans to select the clients can improve the accuracy performance (up to 7.8%), ensuring the robustness of FL in noisy scenarios. |
| Researcher Affiliation | Academia | Mingkun Yang, Ran Zhu, Qing Wang, Jie Yang, Department of Software Technology, Delft University of Technology, {m.yang-3,r.zhu-1,qing.wang,j.yang-3}@tudelft.nl |
| Pseudocode | Yes | Algorithm 1 Variational Utility Inference. Require: local updates {W_{i,j}}_{j∈J_i}, global model W_{i-1}, Round-Reputation Matrix R, server auxiliary dataset D_a. (A hedged sketch of the selection-and-aggregation step follows the table.) |
| Open Source Code | Yes | Code is available at https://github.com/Ran-ZHU/FedTrans |
| Open Datasets | Yes | We use two widely-used image datasets: CIFAR10 (Krizhevsky et al., 2009) and Fashion-MNIST (FMNIST) (Xiao et al., 2017). (A loader sketch follows the table.) |
| Dataset Splits | No | The paper mentions non-IID and IID settings and how data is distributed among clients, but does not provide specific train/validation/test dataset splits (e.g., percentages or sample counts) for reproducibility. |
| Hardware Specification | Yes | We implement all the comparison methods in Python and the neural networks with PyTorch, running on an NVIDIA 2080Ti GPU. |
| Software Dependencies | No | The paper mentions using Python and PyTorch for implementation but does not specify version numbers for these or other software dependencies. |
| Experiment Setup | Yes | In local training, local epochs are set to 5 and the learning rate is 1e-2. We use SGD with momentum factor 0.9 as the local optimizer. We adopt f_{W_d} with a Multi-Layer Perceptron (MLP) having 2 hidden layers of 128 and 64 dimensions respectively. In discriminator training, we select the learning rate as 1e-3, and we set the priors A and B by sampling from a uniform distribution [0, 10] and update them in the E-step according to Theorem 2.1 and Theorem 2.2. (A setup sketch follows the table.) |
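Algorithm 1 itself carries out variational inference of per-client utilities; the paper's pseudocode is the authoritative reference for that step. Below is a minimal, self-contained sketch of only the downstream step, selecting clients by inferred utility and aggregating their updates. The function name `weighted_aggregate` and the 0.5 selection threshold are illustrative assumptions, not taken from the paper.

```python
import torch

def weighted_aggregate(global_state, client_states, utilities, threshold=0.5):
    """Aggregate local updates of clients whose inferred utility clears the
    threshold, weighting each kept update by its renormalized utility.

    global_state / client_states are PyTorch state_dicts; utilities is a
    list of floats in [0, 1] produced by the utility-inference step.
    """
    kept = [(s, u) for s, u in zip(client_states, utilities) if u > threshold]
    if not kept:
        return global_state  # no client selected: keep the current global model
    total = sum(u for _, u in kept)
    return {
        key: sum((u / total) * s[key].float() for s, u in kept)
        for key in global_state
    }
```

With uniform utilities across the selected clients, this reduces to plain FedAvg over that subset.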
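Both datasets are available through standard torchvision loaders. The calls below use the stock torchvision API; the `transforms.ToTensor()` preprocessing is a minimal placeholder, since the paper does not specify its pipeline.

```python
from torchvision import datasets, transforms

# Standard torchvision loaders for the two datasets named in the paper.
# ToTensor() is a minimal placeholder; the paper does not state its
# preprocessing steps.
tf = transforms.ToTensor()
cifar10_train = datasets.CIFAR10(root="./data", train=True, download=True, transform=tf)
fmnist_train = datasets.FashionMNIST(root="./data", train=True, download=True, transform=tf)
```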
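The quoted hyperparameters translate directly into PyTorch. In the sketch below, the hidden sizes (128 and 64), both learning rates, the momentum factor, and the 5 local epochs come from the paper; the discriminator's input dimension and the choice of Adam as its optimizer are assumptions, since the paper does not state them.

```python
import torch
import torch.nn as nn

LOCAL_EPOCHS = 5  # local epochs per round (from the paper)

def make_local_optimizer(params):
    # SGD with momentum 0.9 and learning rate 1e-2, as stated in the paper.
    return torch.optim.SGD(params, lr=1e-2, momentum=0.9)

def make_discriminator(in_dim: int) -> nn.Module:
    # MLP discriminator f_{W_d} with two hidden layers of 128 and 64 units.
    # in_dim is an assumption; the paper does not report it.
    return nn.Sequential(
        nn.Linear(in_dim, 128), nn.ReLU(),
        nn.Linear(128, 64), nn.ReLU(),
        nn.Linear(64, 1),
    )

disc = make_discriminator(in_dim=512)  # 512 is illustrative only
# The 1e-3 discriminator learning rate is from the paper; Adam is an assumption.
disc_optimizer = torch.optim.Adam(disc.parameters(), lr=1e-3)
```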