Recovering Labels from Local Updates in Federated Learning
Authors: Huancheng Chen, Haris Vikalo
ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental results on several datasets, architectures, and data heterogeneity scenarios demonstrate that the proposed method consistently outperforms existing baselines, and helps improve the quality of the reconstructed images in GI attacks in terms of both PSNR and LPIPS. |
| Researcher Affiliation | Academia | University of Texas at Austin, Texas, USA. |
| Pseudocode | Yes | Algorithm 1: RLU; Algorithm 2: Posterior Search |
| Open Source Code | No | The paper does not provide any explicit statement about releasing the source code or a link to a code repository for the described methodology. |
| Open Datasets | Yes | We evaluate the performance of RLU on a classification task using a variety of model architectures including LeNet-5 (LeCun et al., 1998), VGG-16 (Simonyan & Zisserman, 2014) and ResNet-50 (He et al., 2016), and four benchmark datasets including SVHN (Netzer et al., 2011), CIFAR10, CIFAR100 and Tiny-ImageNet (Le & Yang, 2015). |
| Dataset Splits | No | The paper does not explicitly provide training/validation/test dataset splits with specific percentages, sample counts, or citations to predefined splits. |
| Hardware Specification | No | The paper does not specify any particular hardware components (e.g., specific GPU or CPU models, memory specifications) used for running the experiments. |
| Software Dependencies | Yes | We used PyTorch (Paszke et al., 2019) to implement all the described experiments. |
| Experiment Setup | Yes | The learning rate η for the SGD optimizer was set to 0.01 in all experiments reported in Table 2. In order to obtain a high level of data heterogeneity, we set the concentration parameter α to 0.5 in the experiments on SVHN and CIFAR10; in the experiments on CIFAR100 and Tiny-ImageNet, this parameter was set to 0.1. There are two groups of experiments in Table 2; in one the number of local epochs m was set to 1, while in the other it was set to 10. For the experiments involving multiple local epochs, the number of iterations T in Alg. 2 was set to 10. The batch size was set to 32 in all experiments on SVHN. The batch size was set to 64 and 256 in the experiments on CIFAR10 and CIFAR100, respectively. In the experiments involving Tiny-ImageNet, the clients learned a standard ResNet-50 (He et al., 2016) where the batch size was set to 256. We used the Adam (Kingma & Ba, 2014) optimizer and set the learning rate to 0.1. We also added the total variation regularization (Yin et al., 2021) to the objective function to create more realistic images with a weight scalar of 0.2. |
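The quoted setup maps onto a fairly standard federated-learning evaluation pipeline. Below is a minimal PyTorch sketch of that configuration, assuming a conventional workflow: the helper names (`partition_by_dirichlet`, `total_variation`, `local_update`, `reconstruct`) are illustrative rather than taken from the paper, and the sketch does not reproduce the RLU label-recovery or Posterior Search algorithms themselves, only the surrounding settings quoted above (Dirichlet data partitioning with α ∈ {0.5, 0.1}, local SGD with learning rate 0.01, and gradient-inversion reconstruction with Adam at learning rate 0.1 plus a total-variation weight of 0.2).

```python
# Hypothetical sketch of the reported experimental configuration.
# Names and structure are assumptions; only the hyperparameter values
# (lr=0.01 SGD, alpha in {0.5, 0.1}, Adam lr=0.1, TV weight 0.2) come from the report.
import numpy as np
import torch
import torch.nn.functional as F


def partition_by_dirichlet(labels, num_clients, alpha, seed=0):
    """Split sample indices across clients with Dirichlet(alpha) label proportions,
    a common way to simulate data heterogeneity (alpha=0.5 for SVHN/CIFAR10,
    alpha=0.1 for CIFAR100/Tiny-ImageNet in the reported experiments).

    `labels` is a 1-D numpy array of integer class labels."""
    rng = np.random.default_rng(seed)
    num_classes = int(labels.max()) + 1
    client_indices = [[] for _ in range(num_clients)]
    for c in range(num_classes):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        split_points = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, split in zip(client_indices, np.split(idx, split_points)):
            client.extend(split.tolist())
    return client_indices


def local_update(model, loader, epochs=1, lr=0.01, device="cpu"):
    """Local client training: SGD with lr=0.01 for m local epochs (1 or 10)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()


def total_variation(x):
    """Isotropic total-variation penalty on an NCHW image batch,
    used to encourage smoother, more realistic reconstructions."""
    dh = (x[:, :, 1:, :] - x[:, :, :-1, :]).abs().mean()
    dw = (x[:, :, :, 1:] - x[:, :, :, :-1]).abs().mean()
    return dh + dw


def reconstruct(match_loss_fn, image_shape, steps=1000, tv_weight=0.2, device="cpu"):
    """Gradient-inversion-style reconstruction: optimize dummy images with Adam
    (lr=0.1) against a caller-supplied gradient-matching loss plus TV * 0.2."""
    dummy = torch.randn(image_shape, device=device, requires_grad=True)
    opt = torch.optim.Adam([dummy], lr=0.1)
    for _ in range(steps):
        opt.zero_grad()
        loss = match_loss_fn(dummy) + tv_weight * total_variation(dummy)
        loss.backward()
        opt.step()
    return dummy.detach()
```

As a usage note, `match_loss_fn` stands in for whichever gradient- or update-matching objective the attack uses; the batch sizes quoted above (32 for SVHN, 64 for CIFAR10, 256 for CIFAR100 and Tiny-ImageNet) would be set on the client data loaders passed to `local_update`.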