SRATTA: Sample Re-ATTribution Attack of Secure Aggregation in Federated Learning.

Authors: Tanguy Marchand, Regis Loeb, Ulysse Marteau-Ferey, Jean Ogier du Terrail, Arthur Pignet

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We show that SRATTA is both theoretically grounded and can be used in practice on realistic models and datasets. We also provide a Python implementation of SRATTA, and of the proposed defensive schemes."
Researcher Affiliation | Industry | "Tanguy Marchand, Regis Loeb*, Ulysse Marteau-Ferey*, Jean Ogier du Terrail*, Arthur Pignet* (Owkin, Inc.). Correspondence to: Tanguy Marchand <tanguy.marchand@owkin.com>."
Pseudocode | Yes | "Algorithm 1: FedAvg; Algorithm 2: Local Update; Algorithm 3: SRATTA; Algorithm 4: Local Update Defended" (an illustrative FedAvg sketch is given after the table below)
Open Source Code | Yes | "We also provide a Python implementation of SRATTA, and of the proposed defensive schemes. The code for SRATTA is available at https://github.com/owkin/SRATTA."
Open Datasets | Yes | "Dataset used: We perform SRATTA on four different datasets. Two of them are image datasets: CIFAR10 (Krizhevsky et al., 2009) and Fashion MNIST (Xiao et al., 2017). One is a binary dataset, the Primate Splice Junction Gene Sequences dataset (hereafter the DNA dataset), available in the OpenML suite (Vanschoren et al., 2014). The final dataset is a multi-modal and multi-centric version of the TCGA-BRCA dataset (Tomczak et al., 2015; Ogier du Terrail et al., 2022), containing binary, discrete and continuous entries." (a dataset-loading sketch is given after the table below)
Dataset Splits | No | The paper mentions 'training sequences' and 'test' data in various sections, but does not explicitly provide details about a validation split (e.g., percentages, sample counts, or a specific validation methodology).
Hardware Specification | No | The paper does not explicitly describe the hardware used for its experiments; no specific GPU models, CPU types, or detailed computing environments are mentioned.
Software Dependencies | No | The paper mentions a 'Python implementation' but does not specify version numbers for Python itself or for any other software libraries or dependencies used in the experiments.
Experiment Setup | Yes | "G. Hyper-parameters used in numerical experiments: Table 4 lists the hyper-parameters used in the different numerical experiments shown in this paper." (Table 4 is reproduced below; see also the configuration sketch below.)

Table 4 (hyper-parameters for the Figure 1 experiments):
Figure | Dataset | # centers | #Dk | # batch | # hid. neur. | n_updates | t_max | # trainings | lr
Figure 1 | CIFAR10 | 5 | 100 | 8 | 1000 | 5 | 20 | 20 | 0.1
Figure 1 | FMNIST | 5 | 100 | 8 | 1000 | 5 | 20 | 20 | 0.5
Figure 1 | DNA | 5 | 100 | 8 | 1000 | 5 | 20 | 20 | 1.0
Figure 1 | TCGA | 5 | 100 | 8 | 1000 | 5 | 20 | 20 | 0.8
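To connect the Pseudocode row to running code, the following is a minimal FedAvg loop in the spirit of Algorithms 1 and 2 (global rounds of averaged local SGD updates). It is a sketch, not the authors' implementation: the logistic-regression model, the synthetic per-center data, and the variable names are assumptions introduced only for illustration.

```python
# Minimal FedAvg sketch (illustrative only): K centers each run n_updates local
# SGD steps on a logistic-regression model, then the server averages the weights.
# Model, data, and hyper-parameter names are assumptions, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)
n_centers, n_features, n_samples = 5, 20, 100         # loosely mirrors "# centers" and #Dk
n_rounds, n_updates, batch_size, lr = 20, 5, 8, 0.1   # loosely mirrors t_max, n_updates, # batch, lr

# Synthetic per-center datasets D_k.
true_w = rng.normal(size=n_features)
datasets = []
for _ in range(n_centers):
    X = rng.normal(size=(n_samples, n_features))
    y = (X @ true_w + 0.1 * rng.normal(size=n_samples) > 0).astype(float)
    datasets.append((X, y))

def local_update(w, X, y):
    """Algorithm-2-style local update: a few SGD steps on one center's data."""
    w = w.copy()
    for _ in range(n_updates):
        idx = rng.choice(len(X), size=batch_size, replace=False)
        p = 1.0 / (1.0 + np.exp(-X[idx] @ w))        # sigmoid predictions
        grad = X[idx].T @ (p - y[idx]) / batch_size  # logistic-loss gradient
        w -= lr * grad
    return w

# Algorithm-1-style FedAvg loop: broadcast, run local updates, average.
w_global = np.zeros(n_features)
for t in range(n_rounds):
    local_weights = [local_update(w_global, X, y) for X, y in datasets]
    w_global = np.mean(local_weights, axis=0)

print("final global weights (first 5):", w_global[:5])
```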
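The public datasets listed in the Open Datasets row can in principle be fetched with standard tooling. The sketch below uses torchvision and scikit-learn, which are assumptions (the paper does not describe its data pipeline), and the OpenML name/version used for the DNA dataset is likewise an assumed identifier.

```python
# Sketch of pulling the public datasets named above; dataset names/versions and
# the use of torchvision/scikit-learn are assumptions, not the paper's pipeline.
from torchvision import datasets
from sklearn.datasets import fetch_openml

# Image datasets.
cifar10 = datasets.CIFAR10(root="./data", train=True, download=True)
fmnist = datasets.FashionMNIST(root="./data", train=True, download=True)

# Primate Splice Junction Gene Sequences ("dna") from the OpenML suite;
# the name/version passed here is an assumed identifier.
dna = fetch_openml(name="dna", version=1, as_frame=True)

print(len(cifar10), len(fmnist), dna.data.shape)
# The multi-centric TCGA-BRCA variant cited (Ogier du Terrail et al., 2022) is
# distributed through the FLamby benchmark and is not fetched here.
```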
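To make the Table 4 values easier to reuse, the sketch below collects the Figure 1 hyper-parameters into plain Python dictionaries. The key names are assumptions; only the numeric values come from the table above.

```python
# Figure 1 hyper-parameters from Table 4 as plain Python configs.
# Key names are assumptions; only the values come from the table.
FIGURE1_CONFIGS = {
    dataset: {
        "n_centers": 5,             # "# centers"
        "samples_per_center": 100,  # "#Dk"
        "n_batches": 8,             # "# batch"
        "hidden_neurons": 1000,     # "# hid. neur."
        "n_updates": 5,             # local updates per round
        "t_max": 20,                # federated rounds
        "n_trainings": 20,          # repeated trainings
        "lr": lr,
    }
    for dataset, lr in {"CIFAR10": 0.1, "FMNIST": 0.5, "DNA": 1.0, "TCGA": 0.8}.items()
}

print(FIGURE1_CONFIGS["DNA"]["lr"])  # -> 1.0
```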