Securing Secure Aggregation: Mitigating Multi-Round Privacy Leakage in Federated Learning
Authors: Jinhyun So, Ramy E. Ali, Başak Güler, Jiantao Jiao, A. Salman Avestimehr
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments on the MNIST, CIFAR-10, and CIFAR-100 datasets in the IID and non-IID settings demonstrate the performance improvement over the baselines in terms of privacy protection and test accuracy. |
| Researcher Affiliation | Collaboration | Jinhyun So*1, Ramy E. Ali1, Başak Güler2, Jiantao Jiao3, A. Salman Avestimehr1 1 University of Southern California (USC) 2 University of California, Riverside 3 University of California, Berkeley jinhyun.so@samsung.com, ramy.ali@samsung.com, bguler@ece.ucr.edu, jiantao@eecs.berkeley.edu, avestime@usc.edu |
| Pseudocode | Yes | We describe the two components of Multi-RoundSecAgg in detail in Algorithms 1 and 2 in (So et al. 2021b, App. D). |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code for the methodology described in this specific version. |
| Open Datasets | Yes | MNIST (LeCun, Cortes, and Burges 2010), CIFAR-10, and CIFAR-100 (Krizhevsky and Hinton 2009) |
| Dataset Splits | No | The paper describes how training samples are partitioned across users (IID and non-IID settings) but does not explicitly provide train/validation/test splits with percentages or counts (an illustrative partitioning sketch follows this table). |
| Hardware Specification | No | The paper does not specify the hardware (e.g., GPU, CPU models, memory) used for conducting the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., programming languages, libraries, or frameworks). |
| Experiment Setup | Yes | The hyperparameters are provided in (So et al. 2021b, App. F). |
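
Since the paper partitions training samples across users in IID and non-IID settings but does not release code for doing so, below is a minimal sketch of one common way such a partition is reproduced. This is not the authors' implementation: the function names, user count, and shards-per-user value are assumptions made purely for illustration.

```python
# Illustrative sketch (assumed, not from the paper): partition sample indices
# across federated users either IID (random, even split) or non-IID
# (label-sorted shards, so each user sees only a few classes).
import numpy as np

def partition_iid(labels, num_users, seed=0):
    """Shuffle all sample indices and split them evenly across users."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(labels))
    return np.array_split(idx, num_users)

def partition_non_iid(labels, num_users, shards_per_user=2, seed=0):
    """Sort indices by label, cut them into shards, and assign each user a
    few shards so its local data covers only a small subset of classes."""
    rng = np.random.default_rng(seed)
    idx_sorted = np.argsort(labels)
    shards = np.array_split(idx_sorted, num_users * shards_per_user)
    order = rng.permutation(len(shards))
    return [
        np.concatenate(
            [shards[s] for s in order[u * shards_per_user:(u + 1) * shards_per_user]]
        )
        for u in range(num_users)
    ]

if __name__ == "__main__":
    # Synthetic stand-in for MNIST/CIFAR-10 labels (10 classes, 60k samples);
    # the real experiments would use the actual dataset labels instead.
    labels = np.random.default_rng(0).integers(0, 10, size=60_000)
    iid = partition_iid(labels, num_users=100)
    non_iid = partition_non_iid(labels, num_users=100)
    print(len(iid[0]), len(non_iid[0]))       # samples held by user 0
    print(np.unique(labels[non_iid[0]]))      # only a few distinct classes per user
```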