Trap-MID: Trapdoor-based Defense against Model Inversion Attacks

Authors: Zhen-Ting Liu, Shang-Tse Chen

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | empirical experiments demonstrate the state-of-the-art defense performance of Trap-MID against various MI attacks without the requirements for extra data or large computational overhead. Our source code is publicly available at https://github.com/ntuaislab/Trap-MID.
Researcher Affiliation | Academia | Zhen-Ting Liu, National Taiwan University, r11922034@csie.ntu.edu.tw; Shang-Tse Chen, National Taiwan University, stchen@csie.ntu.edu.tw
Pseudocode | Yes | Algorithm 1 outlines the training process.
Open Source Code | Yes | Our source code is publicly available at https://github.com/ntuaislab/Trap-MID.
Open Datasets | Yes | We use the CelebA dataset [30], which contains 202,599 facial images of 10,177 identities, for facial recognition. ... The datasets we used in our experiments are all publicly accessible, including: CelebA. CelebA [30]... FFHQ. FFHQ [38]... (see the loading sketch after this table)
Dataset Splits | No | The paper mentions dividing the private dataset into training and testing sets, but it does not explicitly specify a validation split, its size, or how it relates to the training/test splits.
Hardware Specification | Yes | All experiments were conducted on an Intel Xeon Gold 6226R CPU with an NVIDIA RTX A6000 GPU.
Software Dependencies | No | The paper mentions that source code is available but does not list specific software dependencies with their version numbers (e.g., Python 3.x, PyTorch 1.x) in the text.
Experiment Setup | Yes | All models were trained using the SGD optimizer with a batch size of 64, a learning rate of 0.01, a momentum value of 0.9, and a weight decay value of 0.0001. ... Typically, we used a blend ratio α = 0.02 and a trapdoor loss weight β = 0.2. Trapdoor triggers were randomly initialized using a uniform distribution within [0, 1] and then updated with a step size ε = 0.01. (see the configuration sketch after this table)
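For the Open Datasets row, the following is a minimal loading sketch assuming torchvision's built-in CelebA loader. The center crop, resize resolution, and identity-label target are illustrative assumptions; the paper's own private/public split and preprocessing (which follow prior MI-attack setups) are not reproduced here, and FFHQ has no built-in torchvision loader, so it would need a custom dataset class.

```python
# Sketch only: standard torchvision CelebA loading, not the authors' exact pipeline.
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

transform = transforms.Compose([
    transforms.CenterCrop(178),   # assumed crop; the paper's preprocessing may differ
    transforms.Resize(64),        # assumed resolution for the face-recognition task
    transforms.ToTensor(),
])

# target_type="identity" yields the 10,177 identity labels mentioned in the paper.
celeba = datasets.CelebA(root="data", split="train", target_type="identity",
                         transform=transform, download=True)
loader = DataLoader(celeba, batch_size=64, shuffle=True, num_workers=4)
```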
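For the Experiment Setup row, the sketch below maps the reported hyperparameters (SGD, batch size 64, learning rate 0.01, momentum 0.9, weight decay 0.0001, blend ratio α = 0.02, trapdoor loss weight β = 0.2, trigger step size ε = 0.01) onto a generic PyTorch training step. The toy classifier, the cross-entropy stand-in for the trapdoor objective, and the sign-based trigger update are illustrative assumptions; the actual loss terms and trigger-update rule are those of Algorithm 1 in the paper.

```python
# Hypothetical sketch of a trapdoor-injection training step using the reported
# hyperparameters. The trapdoor loss below is a placeholder, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

num_classes = 10
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, num_classes))  # toy classifier
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)

alpha, beta, epsilon = 0.02, 0.2, 0.01   # blend ratio, trapdoor loss weight, trigger step size
# One trigger per class, initialized uniformly in [0, 1] as reported in the paper.
triggers = torch.rand(num_classes, 3, 64, 64, requires_grad=True)

def blend(x, trigger, alpha):
    """Blend a trapdoor trigger into clean images with blend ratio alpha."""
    return (1 - alpha) * x + alpha * trigger

for step in range(2):  # stand-in for iterating over the private training set
    x = torch.rand(64, 3, 64, 64)             # dummy image batch (batch size 64)
    y = torch.randint(0, num_classes, (64,))  # dummy identity labels

    x_trap = blend(x, triggers[y], alpha)     # trigger-injected counterparts

    cls_loss = F.cross_entropy(model(x), y)
    # Placeholder trapdoor objective: classify triggered inputs as their target class.
    trap_loss = F.cross_entropy(model(x_trap), y)
    loss = cls_loss + beta * trap_loss

    optimizer.zero_grad()
    if triggers.grad is not None:
        triggers.grad.zero_()
    loss.backward()
    optimizer.step()

    # Trigger update with step size epsilon (sign-based step is an assumption here).
    with torch.no_grad():
        triggers -= epsilon * triggers.grad.sign()
        triggers.clamp_(0, 1)
```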