You Only Query Once: An Efficient Label-Only Membership Inference Attack
Authors: Yutong Wu, Han Qiu, Shangwei Guo, Jiwei Li, Tianwei Zhang
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our online and offline attacks under various settings to prove their effectiveness and efficiency. We compare our attacks with SOTA MIAs to show our superiority. Datasets. YOQO is general to different tasks. Without loss of generality, we follow (Choquette-Choo et al., 2021; Carlini et al., 2022; Li & Zhang, 2021) to test YOQO on several classical visual tasks such as CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), GTSRB, SVHN (Netzer et al., 2011) and Tiny-ImageNet. We also test YOQO on two tabular datasets: Location (Yang et al., 2015) and Texas. |
| Researcher Affiliation | Collaboration | Yutong Wu1, Han Qiu2, Shangwei Guo3, Jiwei Li4,5, Tianwei Zhang1; 1Nanyang Technological University, 2Tsinghua University, 3Chongqing University, 4Zhejiang University, 5Shannon.AI |
| Pseudocode | Yes | Algorithm 1: Online attack |
| Open Source Code | Yes | Our code is available at https://github.com/WU-YU-TONG/YOQO. |
| Open Datasets | Yes | We test YOQO on several classical visual tasks such as CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), GTSRB, SVHN (Netzer et al., 2011) and Tiny-ImageNet. We also test YOQO on two tabular datasets: Location (Yang et al., 2015) and Texas. (A hedged torchvision loading sketch follows the table.) |
| Dataset Splits | Yes | For ADV and PPB, we pre-train the victim model to have up to 64.3% accuracy on the validation set and then fine-tune the model for 50 epochs. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions the 'PyTorch implementation' of the Adam optimizer but does not specify version numbers for PyTorch or any other software libraries. |
| Experiment Setup | Yes | We train all the networks until they achieve more than 98% accuracy on the training set. We use Adam as the optimizer with all hyper-parameters at their PyTorch defaults. For the online attack, we train 16 in-out model pairs and set α = 2; for the offline attack, we use only 16 out-models with γ = 5. Gradient descent is used to optimize x. For the two attacks, we set the stop threshold to 4 and 8 and the maximum number of iterations to 30 and 35, respectively. Table A.1 gives the training recipes for the models tested in Section 4; training settings follow this recipe unless otherwise noted. Recipe (learning rate 0.001, Adam throughout): CIFAR-10/CIFAR-100/GTSRB/SVHN: CNN7 (batch 128, 30 epochs), VGG16 (batch 64, 50 epochs), ResNet18 (batch 64, 20 epochs), DenseNet121 (batch 128, 30 epochs), InceptionV3 (batch 128, 25 epochs), SEResNet18 (batch 128, 20 epochs); Purchase100/Location: ColumnFC (batch 32, 30 epochs). (Hedged training and shadow-model sketches follow the table.) |
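
For the Open Datasets row: all of the quoted image datasets ship with torchvision, which supports the "Yes" verdict. Below is a minimal loading sketch; the transform and the `./data` root path are assumptions, since the paper does not state its preprocessing pipeline.

```python
# Hedged sketch: pulling the public image datasets quoted above via
# torchvision. Transform and root path are assumptions, not the paper's spec.
import torchvision
import torchvision.transforms as T

transform = T.Compose([T.ToTensor()])  # assumed minimal preprocessing

cifar10 = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transform)
cifar100 = torchvision.datasets.CIFAR100(
    root="./data", train=True, download=True, transform=transform)
svhn = torchvision.datasets.SVHN(
    root="./data", split="train", download=True, transform=transform)
gtsrb = torchvision.datasets.GTSRB(
    root="./data", split="train", download=True, transform=transform)
```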
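For the Experiment Setup row: the quoted recipe is Adam with PyTorch-default hyper-parameters (lr = 1e-3), training until the model exceeds 98% accuracy on its own training set. The sketch below instantiates the ResNet18/CIFAR-10 row of Table A.1 (batch 64, up to 20 epochs); the architecture call and stopping logic are our assumptions, not the authors' code.

```python
# Hedged sketch of the quoted training recipe. The ResNet18/CIFAR-10 row of
# Table A.1 supplies batch size and epoch budget; everything else is assumed.
import torch
import torch.nn as nn
import torchvision
from torch.utils.data import DataLoader

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

def train_model(dataset, epochs=20, batch_size=64, acc_target=0.98):
    """Train a ResNet18 until train accuracy passes acc_target (assumed recipe)."""
    model = torchvision.models.resnet18(num_classes=10).to(DEVICE)
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters())  # PyTorch defaults: lr=1e-3
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        correct = total = 0
        for x, y in loader:
            x, y = x.to(DEVICE), y.to(DEVICE)
            optimizer.zero_grad()
            logits = model(x)
            criterion(logits, y).backward()
            optimizer.step()
            correct += (logits.argmax(1) == y).sum().item()
            total += y.size(0)
        if correct / total > acc_target:  # "more than 98% accuracy on the training set"
            break
    return model
```

Usage follows directly from the loading sketch above, e.g. `victim = train_model(cifar10)`.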
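Finally, the online attack is quoted as training 16 in-out model pairs. The sketch below shows one generic way such pairs are built in shadow-model MIAs, reusing the hypothetical `train_model` above; the shadow-set size, uniform sampling, and `make_in_out_pairs` name are all assumptions, and the authors' exact procedure may differ.

```python
# Hedged sketch: building the 16 in/out shadow-model pairs the online attack
# relies on. "In" models see the target sample during training; "out" models
# do not. shadow_size and the sampling scheme are assumptions.
import random
from torch.utils.data import Subset

def make_in_out_pairs(dataset, target_idx, n_pairs=16, shadow_size=10000):
    pool = [i for i in range(len(dataset)) if i != target_idx]
    pairs = []
    for _ in range(n_pairs):
        shadow = random.sample(pool, shadow_size - 1)
        in_model = train_model(Subset(dataset, shadow + [target_idx]))  # target in
        out_model = train_model(Subset(dataset, shadow))                # target out
        pairs.append((in_model, out_model))
    return pairs
```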