Chameleon: Increasing Label-Only Membership Leakage with Adaptive Poisoning

Authors: Harsh Chaudhari, Giorgio Severi, Alina Oprea, Jonathan Ullman

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimentation across multiple datasets shows that our Chameleon attack consistently outperforms previous label-only MI attacks, with improvements in TPR at 1% FPR of up to 17.5×. We perform experiments on four different datasets: three computer vision datasets (GTSRB, CIFAR-10 and CIFAR-100) and one tabular dataset (Purchase-100).
Researcher Affiliation | Academia | Harsh Chaudhari, Giorgio Severi, Alina Oprea, Jonathan Ullman; Khoury College of Computer Sciences, Northeastern University, Boston, MA 02115, USA; {chaudhari.ha, severi.g, a.oprea, j.ullman}@northeastern.edu
Pseudocode | Yes | Algorithm 1: Adaptive Poisoning Strategy. Algorithm 2: Adaptive poisoning strategy on a set of challenge points.
Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | We perform experiments on four different datasets: three computer vision datasets (GTSRB, CIFAR-10 and CIFAR-100) and one tabular dataset (Purchase-100).
Dataset Splits | No | The paper mentions a 'training set' and 'test' data (or 'challenge points' for attack evaluation) but does not explicitly describe a separate validation split for hyperparameter tuning or early stopping during model training.
Hardware Specification | Yes | We present the average running time for both attacks, on a machine with an AMD Threadripper 5955WX and a single NVIDIA RTX 4090.
Software Dependencies | No | The paper mentions PyTorch's differential privacy library, Opacus (Yousefpour et al., 2021), but does not specify version numbers for PyTorch, Opacus, or any other software dependencies, making it difficult to reproduce the exact software environment.
Experiment Setup | Yes | In the adaptive poisoning stage, we set the poisoning threshold t_p = 0.15, the number of OUT models m = 8, and the maximum number of poisoning iterations k_max = 6. In the membership neighborhood stage, we set the neighborhood threshold t_nb = 0.75 and the neighborhood size |S_nb^(x,y)| = 64 samples.
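The headline metric in the table, TPR at 1% FPR, is the standard low-false-positive evaluation for membership-inference attacks: fix a score threshold so that at most 1% of non-members are flagged as members, then report the fraction of true members flagged at that threshold. A minimal sketch (the function name and the synthetic scores below are illustrative, not from the paper):

```python
import numpy as np

def tpr_at_fpr(member_scores, nonmember_scores, target_fpr=0.01):
    """TPR of a thresholded membership-inference attack at a fixed FPR.

    Scores are assumed higher for predicted members. The threshold is
    the (1 - target_fpr) quantile of the non-member scores, so at most
    target_fpr of non-members exceed it; the TPR is the fraction of
    member scores above that threshold.
    """
    threshold = np.quantile(nonmember_scores, 1.0 - target_fpr)
    return float(np.mean(member_scores > threshold))

# Illustrative use with synthetic scores: members score ~N(1,1),
# non-members ~N(0,1); a stronger attack separates them further.
rng = np.random.default_rng(0)
members = rng.normal(1.0, 1.0, 10_000)
nonmembers = rng.normal(0.0, 1.0, 10_000)
print(tpr_at_fpr(members, nonmembers))  # TPR at 1% FPR
```

An "up to 17.5×" improvement, as reported for Chameleon, means this quantity is up to 17.5 times larger than the best prior label-only attack's TPR at the same 1% FPR.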