Deep Learning with Label Differential Privacy

Authors: Badih Ghazi, Noah Golowich, Ravi Kumar, Pasin Manurangsi, Chiyuan Zhang

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We complement the empirical results with theoretical analysis showing that label DP is provably easier than protecting both the inputs and labels. We present a novel multi-stage algorithm (LP-MST) for training deep neural networks with label DP that builds on top of RRWithPrior (see Section 3 and Algorithm 3), and we benchmark its empirical performance (Section 5) on multiple datasets, domains, and architectures.
Researcher Affiliation | Collaboration | Badih Ghazi (Google Research, badihghazi@google.com); Noah Golowich (EECS, MIT, nzg@mit.edu); Ravi Kumar (Google Research, ravi.k53@gmail.com); Pasin Manurangsi (Google Research, pasin@google.com); Chiyuan Zhang (Google Research, chiyuan@google.com)
Pseudocode | Yes | Algorithm 1: RRTop-k; Algorithm 2: RRWithPrior; Algorithm 3: Multi-Stage Training (LP-MST)
Open Source Code | No | The paper does not provide concrete access to source code, nor does it explicitly state that the source code for its methodology is being released.
Open Datasets | Yes | We evaluate RRWithPrior on standard benchmark datasets that have been widely used in previous works on private machine learning. Specifically, CIFAR-10 [60] is a 10-class image classification benchmark dataset. In Table 2 we also show results on CIFAR-100, a more challenging variant with 10× more classes. In addition, we also evaluate on MovieLens-1M [49], which contains 1 million anonymous ratings of approximately 3,900 movies, made by 6,040 MovieLens users.
Dataset Splits | Yes | Following [15], we randomly split the data into 80% train and 20% test.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment.
Experiment Setup | No | The paper states, "Please see the Supplementary Material for full details on the datasets and the experimental setup." However, the main text does not contain specific experimental setup details (concrete hyperparameter values, training configurations, or system-level settings).
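Since no source code is released, the RRTop-k and RRWithPrior mechanisms named in the Pseudocode row can only be reconstructed from the paper's description. The sketch below is a minimal illustration under that reading, not the authors' implementation: RRTop-k restricts k-ary randomized response to the k labels with the highest prior mass, and RRWithPrior picks the k that maximizes the expected probability of releasing the true label.

```python
import math
import random

def rr_top_k(y, prior, k, eps):
    """Sketch of RRTop-k (Algorithm 1 in the paper).

    Runs eps-randomized response restricted to the k labels with the
    highest prior probability (ties broken by label index here).
    """
    top_k = sorted(range(len(prior)), key=lambda c: prior[c], reverse=True)[:k]
    p_keep = math.exp(eps) / (math.exp(eps) + k - 1)
    if y in top_k:
        if random.random() < p_keep:
            return y
        # otherwise output one of the other top-k labels uniformly at random
        return random.choice([c for c in top_k if c != y])
    # true label outside the top-k: output a uniformly random top-k label
    return random.choice(top_k)

def rr_with_prior(y, prior, eps):
    """Sketch of RRWithPrior (Algorithm 2): choose the k that maximizes
    the expected probability of releasing the true label, then run RRTop-k.

    For the label-DP guarantee, `prior` must not depend on this example's
    own label; in LP-MST it comes from a model trained on a disjoint stage.
    """
    order = sorted(range(len(prior)), key=lambda c: prior[c], reverse=True)
    best_k, best_w, mass = 1, -1.0, 0.0
    for k in range(1, len(prior) + 1):
        mass += prior[order[k - 1]]
        w = math.exp(eps) / (math.exp(eps) + k - 1) * mass
        if w > best_w:
            best_k, best_w = k, w
    return rr_top_k(y, prior, best_k, eps)
```

With a uniform prior this reduces to classic k-ary randomized response over all classes; a sharply peaked prior drives the optimal k toward 1, so the privacy budget is not wasted on implausible labels. In LP-MST (Algorithm 3), each example's prior would come from the model trained on an earlier, disjoint data stage, so every label is randomized exactly once.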