Improving Robustness to Model Inversion Attacks via Mutual Information Regularization

Authors: Tianhao Wang, Yuheng Zhang, Ruoxi Jia (pp. 11666-11673)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 'Our experiments demonstrate that MID leads to state-of-the-art performance for a variety of MI attacks, target models and datasets.'
Researcher Affiliation | Academia | 1 Harvard University, 2 University of Illinois Urbana-Champaign, 3 Virginia Tech
Pseudocode | No | The paper describes algorithmic ideas, such as modifying ID3, but does not present them in pseudocode or algorithm blocks.
Open Source Code | No | The paper neither states that source code for the described methodology is released nor provides a link to a code repository.
Open Datasets | Yes | Table 1 (Summary of experimental settings) lists the datasets: IWPC, FiveThirtyEight, FaceScrub, CIFAR10, CelebA.
Dataset Splits | No | The paper states, 'When the target model is linear regression and a decision tree, we generate 100 models with different training and test data split and average the utility and privacy results over different models for each defense strategy and hyperparameter setting.' However, it gives no explicit percentages or counts for training, validation (which is never mentioned), or test sets, and it does not describe how these splits are generated, so they cannot be reproduced.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions algorithms such as 'ID3' and 'DPSGD', but it does not specify any software names with version numbers (e.g., Python, PyTorch, or TensorFlow versions, or specific library versions).
Experiment Setup | No | The paper states, 'We leave the description of datasets, attack hyper-parameters, model architectures as well as the details of the evaluation process and metrics to the Appendix.' Concrete hyperparameter values, architectures, and evaluation details are therefore not provided in the main text, making the setup not fully reproducible from the main paper alone.
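
Since the paper provides neither pseudocode nor released code, the following is a minimal, hypothetical sketch of one common way to implement a mutual-information regularizer: a variational, information-bottleneck-style penalty on the model's internal representation. The model class, layer sizes, and the weight `beta` are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch (not the authors' code): a classifier with a stochastic
# bottleneck whose KL term upper-bounds the mutual information between the
# input and the latent representation. All sizes and the weight `beta` are
# illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MIRegularizedClassifier(nn.Module):
    def __init__(self, in_dim=784, bottleneck_dim=64, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, bottleneck_dim)      # mean of q(z|x)
        self.to_logvar = nn.Linear(256, bottleneck_dim)  # log-variance of q(z|x)
        self.classifier = nn.Linear(bottleneck_dim, num_classes)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample z ~ q(z|x) while keeping gradients.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.classifier(z), mu, logvar


def mi_regularized_loss(logits, labels, mu, logvar, beta=1e-2):
    """Cross-entropy for utility plus a KL penalty that bounds I(X; Z)."""
    ce = F.cross_entropy(logits, labels)
    kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
    return ce + beta * kl
```

During training, the KL term is simply added to the task loss; a larger `beta` suppresses more information about the input in the representation, trading model utility for resistance to inversion-style reconstruction.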