Data Poisoning against Differentially-Private Learners: Attacks and Defenses

Authors: Yuzhe Ma, Xiaojin Zhu, Justin Hsu

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We empirically evaluate this protection by designing attack algorithms targeting objective and output perturbation learners, two standard approaches to differentially-private machine learning. Experiments show that our methods are effective when the attacker is allowed to poison sufficiently many training items."
Researcher Affiliation | Academia | "Yuzhe Ma, Xiaojin Zhu and Justin Hsu, University of Wisconsin-Madison, {yzm234, jerryzhu, justhsu}@cs.wisc.edu"
Pseudocode | No | The paper describes algorithms but does not present them in a structured pseudocode or algorithm block format.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | Yes | "The first real data set is vertebral column from the UCI Machine Learning Repository [Dua and Karra Taniskidou, 2017]. The second data set is red wine quality from UCI." (a hedged loading sketch for both datasets follows the table)
Dataset Splits | No | The paper reports training-set and evaluation-set sizes (e.g., "The training set contains n = 21 items...", "evaluation set containing m = 21 evenly-spaced items...") and, for the real datasets, the total sizes ("310 orthopaedic patients", "1598 wine samples") and how evaluation sets are constructed from them, but it does not give explicit ratios or counts for a conventional train/validation/test split of a single dataset.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, or memory specifications used for running experiments.
Software Dependencies | No | The paper mentions machine learning models and optimization methods (e.g., logistic regression, ridge regression, stochastic gradient descent) but does not provide specific software dependencies with version numbers.
Experiment Setup | Yes | "Throughout, we use α = 10^-4 for deep selection and fix a constant step size η = 1 for (stochastic) gradient descent. After each iteration of SGD, we project poisoned items to ensure feature norms are at most 1 and labels are in [-1, 1]. The victim is an objective-perturbed learner for ϵ-differentially private logistic regression, with ϵ = 0.1 and regularizer λ = 10." (a hedged sketch of the victim learner and the projection step follows the table)
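
For anyone reproducing the data setup, below is a minimal loading sketch for the two UCI datasets named in the table. The download URLs, the file name inside the vertebral-column archive, and the +1/-1 label encoding are assumptions about the current UCI repository layout, not details taken from the paper.

```python
# Hedged sketch: load the two UCI datasets mentioned above.
# URLs, archive file names, and label encoding are assumptions about the
# current UCI repository layout, not details from the paper.
import io
import urllib.request
import zipfile

import numpy as np
import pandas as pd

# Red wine quality: a semicolon-separated CSV on the UCI repository.
WINE_URL = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
            "wine-quality/winequality-red.csv")
wine = pd.read_csv(WINE_URL, sep=";")
X_wine = wine.drop(columns="quality").to_numpy(dtype=float)
y_wine = wine["quality"].to_numpy(dtype=float)

# Vertebral column: a zip of space-separated .dat files (file name assumed).
VERT_URL = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
            "00212/vertebral_column_data.zip")
raw = urllib.request.urlopen(VERT_URL).read()
with zipfile.ZipFile(io.BytesIO(raw)) as zf, zf.open("column_2C.dat") as f:
    vert = pd.read_csv(f, sep=r"\s+", header=None)
X_vert = vert.iloc[:, :-1].to_numpy(dtype=float)
# Two-class labels AB/NO mapped to +1/-1 (an assumed binary encoding).
y_vert = np.where(vert.iloc[:, -1].to_numpy() == "AB", 1.0, -1.0)

# Sanity check against the sizes the paper reports (310 patients, ~1598 wines).
print(X_vert.shape, X_wine.shape)
```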
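
The experiment-setup row describes two concrete pieces: the projection applied to poisoned items after each attacker SGD step, and the objective-perturbed ϵ-differentially-private logistic regression victim (ϵ = 0.1, λ = 10). The sketch below illustrates both under stated assumptions: the noise calibration and regularizer scaling are simplified stand-ins for the standard objective-perturbation recipe, not the paper's exact mechanism, and names such as `project_poison` are illustrative.

```python
# Hedged sketch: attacker-side projection and a simplified objective-perturbed
# DP logistic regression victim. Noise calibration and regularizer scaling are
# assumptions, not the paper's exact mechanism or privacy analysis.
import numpy as np
from scipy.optimize import minimize


def project_poison(X_p, y_p):
    """Project poisoned items after each attacker SGD step:
    feature norms at most 1, labels clipped to [-1, 1]."""
    norms = np.maximum(np.linalg.norm(X_p, axis=1, keepdims=True), 1.0)
    return X_p / norms, np.clip(y_p, -1.0, 1.0)


def train_objective_perturbed_logreg(X, y, eps=0.1, lam=10.0, seed=0):
    """Minimize a perturbed regularized logistic loss:
        (1/n) sum_i log(1 + exp(-y_i x_i^T w)) + lam/(2n) ||w||^2 + (1/n) b^T w,
    where b has a random direction and Gamma(d, 2/eps)-distributed norm
    (a simplified objective-perturbation recipe; calibration assumed)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    b = direction * rng.gamma(shape=d, scale=2.0 / eps)

    def perturbed_objective(w):
        margins = y * (X @ w)
        loss = np.logaddexp(0.0, -margins).mean()  # logistic loss
        return loss + lam / (2.0 * n) * (w @ w) + (b @ w) / n

    return minimize(perturbed_objective, np.zeros(d), method="L-BFGS-B").x
```

A full reproduction would wrap these pieces in the attacker's (stochastic) gradient descent loop with step size η = 1, calling project_poison after each iteration, as the quoted setup describes.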