Against Membership Inference Attack: Pruning is All You Need
Authors: Yijue Wang, Chenghong Wang, Zigeng Wang, Shanglin Zhou, Hang Liu, Jinbo Bi, Caiwen Ding, Sanguthevar Rajasekaran
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We also verify our theoretical insights with experiments. Our experimental results illustrate that the attack accuracy using model compression is up to 13.6% and 10% lower than that of the baseline and Min-Max game, respectively. |
| Researcher Affiliation | Academia | Yijue Wang¹, Chenghong Wang², Zigeng Wang¹, Shanglin Zhou¹, Hang Liu³, Jinbo Bi¹, Caiwen Ding¹, Sanguthevar Rajasekaran¹; ¹University of Connecticut, ²Duke University, ³Stevens Institute of Technology |
| Pseudocode | Yes | Algorithm 1 The Process of MIA-Pruning |
| Open Source Code | No | The paper does not explicitly state that the source code for the described methodology is publicly available, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We use LeNet-5 on the MNIST dataset, and VGG-16, MobileNetV2, and ResNet-18 to classify the CIFAR-10 and CIFAR-100 datasets. We also use MobileNetV2 and ResNet-18 models on the ImageNet dataset to show the scalability of our proposed method. |
| Dataset Splits | Yes | The empirical gain can be calculated by simply sampling data from the training set and validation set. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., GPU models, CPU types, or memory specifications). |
| Software Dependencies | No | The paper does not specify version numbers for any software dependencies or libraries used in the experiments. |
| Experiment Setup | No | The detailed setting of training can be found in Appendix Section 4. The detail of pruning rate settings is in Appendix Section 6. |
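The paper's Algorithm 1 (MIA-Pruning) builds on network weight pruning as its defense mechanism. As a rough illustration of the general idea, the sketch below implements plain magnitude pruning over a flat weight list; this is a common pruning criterion, not the paper's exact algorithm, and the `magnitude_prune` function name and the sample weights are assumptions for illustration (the paper's actual pruning-rate settings are deferred to its Appendix Section 6).

```python
def magnitude_prune(weights, prune_rate):
    """Zero out roughly the `prune_rate` fraction of weights with the
    smallest absolute value (a minimal sketch of magnitude pruning,
    not the paper's MIA-Pruning algorithm).

    weights:    flat list of floats
    prune_rate: fraction in [0, 1] of weights to remove
    """
    k = int(prune_rate * len(weights))
    if k == 0:
        return list(weights)
    # Threshold = k-th smallest absolute value; ties at the threshold
    # may prune slightly more than k weights.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]


# Hypothetical example: prune half of six weights.
w = [0.9, -0.05, 0.4, -0.7, 0.1, 0.02]
pruned = magnitude_prune(w, 0.5)  # -> [0.9, 0.0, 0.4, -0.7, 0.0, 0.0]
```

In a real setting the pruned model would then be fine-tuned, and the membership inference attack accuracy measured on the sparse model, which is the comparison the paper reports against the baseline and the Min-Max defense.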