M$^4$I: Multi-modal Models Membership Inference
Authors: Pingyi Hu, Zihan Wang, Ruoxi Sun, Hu Wang, Minhui Xue
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results show that both attack methods can achieve strong performances. |
| Researcher Affiliation | Collaboration | Pingyi Hu, University of Adelaide, Australia; Zihan Wang, University of Adelaide, Australia; Ruoxi Sun, CSIRO's Data61, Australia; Hu Wang, University of Adelaide, Australia; Minhui Xue, CSIRO's Data61, Australia |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code of M4I attacks is publicly available at https://github.com/MultimodalMI/Multimodal-membership-inference.git. |
| Open Datasets | Yes | Our experiments are conducted on three different datasets: MSCOCO [70], FLICKR8k [71], and IAPR TC-12 [72], as detailed in Supplementary Materials. |
| Dataset Splits | No | The paper specifies training data sizes (3,000 image-text pairs) and non-member ground truth data (3,000 image-text pairs) for target model evaluation, and similar splits for shadow models. However, it does not explicitly mention a dedicated validation set split for hyperparameter tuning of the main models or attack models. |
| Hardware Specification | No | The paper states, 'We provide the computation resource used in our experiment in Supplementary Materials,' but does not specify any hardware details (like GPU/CPU models) within the main paper content. |
| Software Dependencies | No | The paper mentions using 'PyTorch models' and 'Opacus', but does not provide specific version numbers for these software components. |
| Experiment Setup | No | The paper states that 'Our training method follows the settings in the work [3]', and describes model architectures like 'Resnet-152 as well as VGG-16, with LSTM' and 'stochastic gradient descent' for parameter optimization, but it does not provide specific hyperparameter values such as learning rates, batch sizes, or number of epochs within the main text. |
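The Dataset Splits row describes disjoint member (3,000 training image-text pairs) and non-member (3,000 ground-truth pairs) sets for evaluating the target model. A minimal sketch of such a split is below; the function name, placeholder IDs, and random seed are assumptions, not from the paper.

```python
# Hypothetical sketch of the member/non-member split described above:
# 3,000 image-text pairs train the target model (members) and a disjoint
# 3,000 pairs serve as non-member ground truth for the membership attack.
import random

def make_membership_split(pair_ids, n_member=3000, n_non_member=3000, seed=0):
    """Return disjoint member and non-member ID lists (assumed helper)."""
    rng = random.Random(seed)
    ids = list(pair_ids)
    rng.shuffle(ids)
    members = ids[:n_member]
    non_members = ids[n_member:n_member + n_non_member]
    return members, non_members

# Example with placeholder integer IDs standing in for image-text pairs.
members, non_members = make_membership_split(range(10_000))
assert len(members) == 3000 and len(non_members) == 3000
assert set(members).isdisjoint(non_members)  # no overlap between sets
```

The same construction would be repeated for the shadow models, which the paper describes as using similar splits; a dedicated validation split for hyperparameter tuning is what the row notes as missing.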