EMGAN: Early-Mix-GAN on Extracting Server-Side Model in Split Federated Learning
Authors: Jingtao Li, Xing Chen, Li Yang, Adnan Siraj Rakin, Deliang Fan, Chaitali Chakrabarti
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our empirical experiments show the effectiveness of EMGAN. Using the VGG-11 architecture on the CIFAR-10 classification task, with a client-side model consisting of 6 layers, our results demonstrate significant improvements over previous methods. EMGAN achieves excellent results in extracting server-side models. With only 50 training samples, EMGAN successfully extracts a 5-layer server-side model of VGG-11 on CIFAR-10, with only 7% lower accuracy than the target model. With zero training data, the extracted model achieves 81.3% accuracy, significantly better than the 45.5% accuracy of the SoTA method. |
| Researcher Affiliation | Collaboration | Jingtao Li (Sony AI), Xing Chen (Arizona State University), Li Yang (University of North Carolina at Charlotte), Adnan Siraj Rakin (Binghamton University, SUNY), Deliang Fan (Johns Hopkins University), Chaitali Chakrabarti (Arizona State University) |
| Pseudocode | Yes | Algorithm 1: Proper Mix Method; Algorithm 2: EMGAN during SFL Training (see the mixing sketch after this table). |
| Open Source Code | Yes | The code is available at https://github.com/zlijingtao/SFL-MEA. |
| Open Datasets | Yes | We primarily use CIFAR-10, CIFAR-100 (Krizhevsky, Hinton et al. 2009), SVHN (Netzer et al. 2011), and ImageNet-12 (a subset of ImageNet (Deng et al. 2009) used in (Li et al. 2023a)), as they are used extensively in AI research. |
| Dataset Splits | No | The paper mentions the datasets used for training and evaluation but does not specify explicit train/validation/test split details. |
| Hardware Specification | Yes | All experiments are conducted on a single RTX-3090 GPU. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers beyond what can be inferred from the released code repository. |
| Experiment Setup | Yes | For model training, we set the total number of epochs to 200. To perform MEAs, the attacker uses an SGD optimizer with a learning rate of 0.02 with decay (multiply the learning rate by 0.2 at epochs 60, 120, and 160) to train the surrogate model and the generator. If not specified, we use a 5-client setting, where one of the clients is an attacker and the other four clients are benign. For Multi-GAN, we set N_G to 10. For Proper Mix, we empirically set α_min, α_max to 0.4 and 0.6 for EMGAN with-data, and to 0.6 and 0.8 for data-free EMGAN (see the optimizer sketch after this table). |
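The optimizer and learning-rate schedule in the Experiment Setup row map directly onto standard PyTorch primitives. Below is a minimal sketch of that configuration, assuming the surrogate model and generator are trained with a single joint optimizer; the placeholder architectures and the joint parameter group are our assumptions for illustration, not details confirmed by the paper.

```python
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import MultiStepLR

# Placeholder networks standing in for the attacker's surrogate model and
# generator (the architectures here are assumptions for illustration only).
surrogate_model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10),
)
generator = nn.Sequential(nn.Linear(100, 3 * 32 * 32), nn.Tanh())

# SGD at lr=0.02, multiplied by 0.2 at epochs 60, 120, and 160 --
# the schedule reported in the Experiment Setup row.
params = list(surrogate_model.parameters()) + list(generator.parameters())
optimizer = torch.optim.SGD(params, lr=0.02)
scheduler = MultiStepLR(optimizer, milestones=[60, 120, 160], gamma=0.2)

for epoch in range(200):  # 200 total epochs, per the paper
    # ... one epoch of surrogate/generator training goes here ...
    scheduler.step()
```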
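The Proper Mix method (Algorithm 1) is referenced here only by name; the α_min, α_max ranges in the setup row suggest a per-batch mixing coefficient. The sketch below assumes a simple convex combination of generator output and attacker-held data with α drawn uniformly from [α_min, α_max]; the blend form and tensor shapes are our assumptions, not details confirmed by this report.

```python
import torch

def proper_mix(generated: torch.Tensor, real: torch.Tensor,
               alpha_min: float = 0.4, alpha_max: float = 0.6) -> torch.Tensor:
    """Hypothetical sketch of Proper Mix: blend GAN samples with real samples.

    A mixing coefficient alpha is drawn uniformly from [alpha_min, alpha_max]
    per batch; the convex combination below is an assumption based on the
    alpha ranges reported in the Experiment Setup row.
    """
    alpha = torch.empty(1).uniform_(alpha_min, alpha_max).item()
    return alpha * generated + (1.0 - alpha) * real

# Example usage with CIFAR-10-shaped batches (3x32x32 images):
fake = torch.randn(8, 3, 32, 32)   # generator output (placeholder)
data = torch.randn(8, 3, 32, 32)   # attacker-held samples (placeholder)
mixed = proper_mix(fake, data)     # blended batch fed to the split model
```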