SPEAR: Exact Gradient Inversion of Batches in Federated Learning
Authors: Dimitar I. Dimitrov, Maximilian Baader, Mark Müller, Martin Vechev
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we empirically evaluate the effectiveness of SPEAR on MNIST [13], CIFAR-10 [14], TinyImageNet [15], and ImageNet [16] across a wide range of settings. In addition to the reconstruction quality metrics PSNR and LPIPS, commonly used to evaluate gradient inversion attacks, we report accuracy as the portion of batches for which we recovered the batch up to numerical errors and the number of sampled submatrices (number of iterations). (A minimal PSNR sketch follows the table.) |
| Researcher Affiliation | Collaboration | Dimitar I. Dimitrov¹, Maximilian Baader², Mark Niklas Müller²,³, Martin Vechev². Affiliations: ¹ INSAIT, Sofia University "St. Kliment Ohridski"; ² ETH Zurich; ³ LogicStar.ai. Contact: {dimitar.iliev.dimitrov}@insait.ai; {mbaader, mark.mueller, martin.vechev}@inf.ethz.ch |
| Pseudocode | Yes | We formalize our gradient inversion attack SPEAR in Alg. 1 and outline it below. (A hedged sketch of the low-rank gradient structure Alg. 1 builds on follows the table.) |
| Open Source Code | Yes | A highly parallelized GPU implementation of SPEAR, which we empirically demonstrate to be effective across a wide range of settings and make publicly available on GitHub. |
| Open Datasets | Yes | In this section, we empirically evaluate the effectiveness of SPEAR on MNIST [13], CIFAR-10 [14], TinyImageNet [15], and ImageNet [16] across a wide range of settings. |
| Dataset Splits | No | The paper does not explicitly provide training/validation/test dataset splits with percentages or absolute counts for its experiments. |
| Hardware Specification | No | The paper states 'We provide an efficient GPU implementation' and 'run all experiments on CIFAR-10 batches', but does not specify the exact GPU models, CPU models, or other detailed hardware specifications used for the experiments. |
| Software Dependencies | No | The paper states 'For all experiments, we use our highly parallelized PyTorch [17] GPU implementation of SPEAR', but does not list version numbers or a full dependency specification. |
| Experiment Setup | Yes | Experimental Setup: For all experiments, we use our highly parallelized PyTorch [17] GPU implementation of SPEAR. Unless stated otherwise, we run all experiments on CIFAR-10 batches of size b = 20 using a 6-layer ReLU-activated FCNN with width m = 200 and set τ to achieve a false rejection rate of p_fr = 10⁻⁵. (A minimal sketch of this setup follows the table.) |
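To make the reported setup concrete, the following is a minimal PyTorch sketch of the evaluation network described in the Experiment Setup row: a 6-layer ReLU-activated FCNN of width m = 200 on CIFAR-10 batches of size b = 20. The input/output dimensions, loss function, and layer arrangement are assumptions inferred from CIFAR-10, not details confirmed by the paper.

```python
import torch
import torch.nn as nn

# Hypothetical reconstruction of the evaluation network: a 6-layer
# ReLU-activated FCNN of width m = 200. Input size 3*32*32 and 10 output
# classes follow CIFAR-10; the loss choice is an assumption.
def make_fcnn(in_dim=3 * 32 * 32, width=200, depth=6, num_classes=10):
    layers, d = [], in_dim
    for _ in range(depth - 1):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    layers.append(nn.Linear(d, num_classes))
    return nn.Sequential(*layers)

model = make_fcnn()
x = torch.randn(20, 3 * 32 * 32)   # batch size b = 20, as in the paper
y = torch.randint(0, 10, (20,))
nn.CrossEntropyLoss()(model(x), y).backward()
# The per-layer weight gradients (model[...].weight.grad) are what a
# federated-learning server observes and what SPEAR inverts.
```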
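The Pseudocode row points to the paper's Alg. 1, which is not reproduced here. The structural fact the attack rests on, however, is that a fully connected layer's weight gradient aggregated over a batch is a sum of b rank-one outer products, (1/b) Σᵢ δᵢ xᵢᵀ, and therefore has rank at most the batch size b. The snippet below only verifies that rank bound on synthetic tensors; it is not the authors' algorithm, which additionally exploits the sparsity that ReLU induces in the δᵢ to disentangle the individual rank-one factors.

```python
import torch

# Rank bound behind SPEAR: a linear layer's batch-aggregated weight gradient
# is (1/b) * sum_i delta_i x_i^T, a sum of b rank-one terms, so rank <= b.
b, d_in, d_out = 20, 3 * 32 * 32, 200
x = torch.randn(b, d_in)                  # layer inputs x_i (synthetic)
delta = torch.randn(b, d_out)             # backprop errors delta_i (synthetic)
grad_W = delta.T @ x / b                  # shape (d_out, d_in)
print(torch.linalg.matrix_rank(grad_W))   # prints 20, far below min(200, 3072)
```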
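Finally, the Research Type row cites PSNR, LPIPS, and a batch-level accuracy as evaluation metrics. PSNR has a standard definition; the sketch below assumes images scaled to [0, 1], and the 'recovered up to numerical errors' check uses an illustrative tolerance that the paper does not specify.

```python
import torch

def psnr(x_rec: torch.Tensor, x_true: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    """Peak signal-to-noise ratio in dB; higher means a closer reconstruction."""
    mse = torch.mean((x_rec - x_true) ** 2)
    return 10 * torch.log10(max_val ** 2 / mse)

def exactly_recovered(x_rec: torch.Tensor, x_true: torch.Tensor, tol: float = 1e-4) -> bool:
    """One plausible reading of 'recovered up to numerical errors' (tol is assumed)."""
    return torch.max(torch.abs(x_rec - x_true)).item() < tol
```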