RelaxLoss: Defending Membership Inference Attacks without Losing Utility
Authors: Dingfan Chen, Ning Yu, Mario Fritz
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive evaluations on five datasets with diverse modalities (images, medical data, transaction records), our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs as well as model utility. |
| Researcher Affiliation | Collaboration | CISPA Helmholtz Center for Information Security; Salesforce Research; University of Maryland; Max Planck Institute for Informatics |
| Pseudocode | Yes | Algorithm 1: RelaxLoss (a hedged training-step sketch follows the table). |
| Open Source Code | Yes | Source code is available at https://github.com/DingfanChen/RelaxLoss. |
| Open Datasets | Yes | We set up seven target models, trained on five datasets (CIFAR-10, CIFAR-100, CHMNIST, Texas100, Purchase100) with diverse modalities, with datasets cited in the paper (e.g., CIFAR-10, Krizhevsky et al., 2009). The paper also states 'We use the preprocessed data provided by Shokri et al. (2017); Song & Mittal (2020)', and footnote 9 provides https://github.com/inspire-group/membership-inference-evaluation. |
| Dataset Splits | Yes | We evenly split each dataset into five folds and use each fold as the training/testing set for the target/shadow model, and use the last fold for training the surrogate attack model (for Jia et al. (2019); Shokri et al. (2017)). A split sketch follows the table. |
| Hardware Specification | Yes | Our experiments are conducted with Nvidia Tesla V100 and Quadro RTX8000 GPUs. |
| Software Dependencies | No | The paper states 'All our models and methods are implemented in PyTorch' but does not provide version numbers for PyTorch or any other software libraries or dependencies. |
| Experiment Setup | Yes | We apply SGD optimizer with momentum=0.9 and weight-decay=1e-4 by default. We set the initial learning rate τ = 0.1 and drop the learning rate by a factor of 10 at each decay epoch. We list below the decay epochs in square brackets and the total number of training epochs in parentheses: CIFAR-10 and CIFAR-100 [150,225] (300); CH-MNIST [40,60] (80); Texas100 and Purchase100 [50,100] (120). An optimizer/schedule sketch follows the table. |
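
The Pseudocode row points to Algorithm 1 (RelaxLoss). Below is a minimal, hedged sketch of a relaxed-loss training step based on the paper's description: descend on cross-entropy while the mean loss is above a target level `alpha`, otherwise alternate between gradient ascent (even epochs) and posterior flattening with soft labels (odd epochs). The exact branch scheduling, ascent objective, and soft-label construction here are simplifying assumptions, not the authors' implementation; see the official repository linked above for the reference code.

```python
import torch
import torch.nn.functional as F


def relaxloss_step(model, optimizer, inputs, targets, alpha, epoch, num_classes):
    """One RelaxLoss-style training step (simplified sketch)."""
    model.train()
    optimizer.zero_grad()
    logits = model(inputs)
    ce_mean = F.cross_entropy(logits, targets)

    if ce_mean.item() > alpha:
        # Mean loss still above the target level: ordinary gradient descent.
        loss = ce_mean
    elif epoch % 2 == 0:
        # Even epochs: gradient ascent, pushing the loss back up towards alpha.
        loss = -ce_mean
    else:
        # Odd epochs: posterior flattening. Keep the predicted confidence on the
        # true class, spread the remainder evenly over the other classes, and
        # minimise cross-entropy against these soft labels.
        with torch.no_grad():
            probs = F.softmax(logits, dim=1)
            conf_true = probs.gather(1, targets.unsqueeze(1))       # (B, 1)
            conf_rest = (1.0 - conf_true) / (num_classes - 1)       # (B, 1)
            onehot = F.one_hot(targets, num_classes).float()
            soft_targets = onehot * conf_true + (1 - onehot) * conf_rest
        loss = -(soft_targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

    loss.backward()
    optimizer.step()
    return loss.item()
```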
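For the Dataset Splits row, a small sketch of an even five-fold split is shown below. The mapping of folds to roles (target train/test, shadow train/test, attack training) is an illustrative assumption; the paper only states that each fold serves as the training/testing set for the target/shadow model and that the last fold trains the surrogate attack model.

```python
import numpy as np


def five_fold_split(num_samples, seed=0):
    """Evenly split sample indices into five folds (role assignment assumed)."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(num_samples)
    folds = np.array_split(indices, 5)
    return {
        "target_train": folds[0],
        "target_test": folds[1],
        "shadow_train": folds[2],
        "shadow_test": folds[3],
        "attack_train": folds[4],  # last fold: surrogate attack model
    }
```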
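The Experiment Setup row translates directly into a PyTorch optimizer and learning-rate schedule. The sketch below uses the quoted CIFAR-10/CIFAR-100 values (initial lr 0.1, decay epochs [150, 225], 300 epochs total); the model is a placeholder and the training-loop body is omitted.

```python
import torch

model = torch.nn.Linear(3 * 32 * 32, 10)  # placeholder for the target network
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)
# Drop the learning rate by a factor of 10 at each decay epoch.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[150, 225], gamma=0.1)

for epoch in range(300):
    # ... one training epoch over the target model's training fold ...
    scheduler.step()
```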