Instance-wise Batch Label Restoration via Gradients in Federated Learning
Authors: Kailang Ma, Yu Sun, Jian Cui, Dawei Li, Zhenyu Guan, Jianwei Liu
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental evaluations reach over 99% Label existence Accuracy (LeAcc) and exceed 96% Label number Accuracy (LnAcc) in most cases on three image datasets and four untrained classification models. |
| Researcher Affiliation | Academia | Kailang Ma, Yu Sun, Jian Cui, Dawei Li, Zhenyu Guan, Jianwei Liu, School of Cyber Science and Technology, Beihang University, China {makailang,sunyv,cuijianw,lidawei,guanzhenyu,liujianwei}@buaa.edu |
| Pseudocode | Yes | Algorithm 1 provides a pseudo-code for the complete procedure of our method. |
| Open Source Code | Yes | Our code is available at https://github.com/BUAA-CST/iLRG. |
| Open Datasets | Yes | We evaluate our method for the classification task on three classic image datasets ... MNIST dataset ... CIFAR100 dataset ... ImageNet ILSVRC 2012 dataset (Deng et al., 2009) |
| Dataset Splits | No | The paper does not provide specific training, validation, and test dataset splits (e.g., percentages or sample counts) needed for reproduction. It mentions using the training set but does not detail the splits. |
| Hardware Specification | No | The paper discusses models like FCN-3, LeNet-5, VGG-16, and ResNet-50/152, but does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., 'PyTorch 1.9', 'Python 3.8'). |
| Experiment Setup | Yes | We perform attacks on batches of size 64 and 8 on MNIST and CIFAR100 with the FCN-3 and VGG-16 models, respectively. ... Our attack is more effective on untrained models. Therefore, unless otherwise specified, we focus on the untrained model. ... Our method ... works on large batch sizes of up to 4096. (A minimal illustrative sketch of the gradient-based label-count recovery follows this table.) |
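
To make the experiment setup easier to sanity-check, the snippet below is a minimal, hedged sketch of the underlying idea, not the authors' Algorithm 1. It assumes a hypothetical three-layer fully connected network standing in for FCN-3, mean-reduced cross-entropy loss, and the crude approximation that an untrained model's softmax outputs are roughly uniform (1/C); the paper's iLRG method estimates these probabilities more carefully from the shared gradients.

```python
# Illustrative sketch only (not the paper's Algorithm 1): estimating per-class
# label counts in a client batch from the final-layer bias gradient of an
# UNTRAINED classifier, under the rough assumption that softmax outputs ~ 1/C.
import torch
import torch.nn as nn

torch.manual_seed(0)
B, C, D = 64, 10, 28 * 28            # batch size, classes, MNIST-like input dim

model = nn.Sequential(                # hypothetical stand-in for the paper's FCN-3
    nn.Linear(D, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, C),
)

x = torch.randn(B, D)                 # stand-in for a real client batch
y = torch.randint(0, C, (B,))         # ground-truth labels the attacker wants to restore

loss = nn.CrossEntropyLoss()(model(x), y)              # mean-reduced cross entropy
grads = torch.autograd.grad(loss, tuple(model.parameters()))
grad_bias = grads[-1]                                   # gradient w.r.t. last-layer bias

# With mean reduction, grad_bias[c] = (1/B) * sum_i (p_{i,c} - 1[y_i = c]),
# so n_c = sum_i p_{i,c} - B * grad_bias[c]. Approximating p_{i,c} ~= 1/C:
est_counts = torch.round(B / C - B * grad_bias).clamp(min=0).long()
true_counts = torch.bincount(y, minlength=C)

print("estimated counts:", est_counts.tolist())
print("true counts     :", true_counts.tolist())
print("label existence recovered:",
      torch.equal(est_counts > 0, true_counts > 0))
```

Recovering label existence (LeAcc) only requires deciding whether each estimated count is positive, while the instance-wise counts (LnAcc) also depend on how well the softmax probabilities are approximated, which is consistent with the paper reporting higher LeAcc than LnAcc.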