Differentially Private Image Classification by Learning Priors from Random Processes

Authors: Xinyu Tang, Ashwinee Panda, Vikash Sehwag, Prateek Mittal

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We attain new state-of-the-art accuracy when training from scratch on CIFAR10, CIFAR100, MedMNIST and ImageNet for a range of privacy budgets ε ∈ [1, 8]. In particular, we improve the previous best reported accuracy on CIFAR10 from 60.6% to 72.3% for ε = 1."
Researcher Affiliation | Academia | Xinyu Tang, Ashwinee Panda, Vikash Sehwag, Prateek Mittal (Princeton University)
Pseudocode | No | The paper describes the three phases of its approach (Phases I, II, III) and shows a pipeline in Figure 1, but it does not include a formal pseudocode block or an algorithm labeled as such.
Open Source Code | Yes | "Our code is available at https://github.com/inspire-group/DP-RandP."
Open Datasets | Yes | "We evaluate DP-RandP on CIFAR10/CIFAR100 [41], DermaMNIST in MedMNIST [65, 66] and the private linear probing version of DP-RandP on ImageNet [16]."
Dataset Splits | Yes | "We follow Hölzl et al. [34] and report the validation accuracy of DermaMNIST in Tab. 3. Here we also report the test accuracy in Tab. 16, and we can see DP-RandP outperforms the DP-SGD baseline."
Hardware Specification | Yes | "A single run to privately train a WRN-16-4 for CIFAR10 takes around 5.5 hours for 875 steps with 1 A100 GPU in our evaluation."
Software Dependencies | No | "We use the Opacus library [67] for the DP-SGD implementation." [...] The paper mentions the Opacus library with a citation to an arXiv preprint, but does not provide a specific version number for Opacus or any other software dependency such as PyTorch.
Experiment Setup | Yes | "Hyperparameters. Tab. 13, 14 and 15 summarize the hyperparameters for DP-RandP on CIFAR10, CIFAR100 and DermaMNIST respectively."
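The table above refers to a DP-SGD baseline, which the paper implements with the Opacus library. At its core, DP-SGD clips each per-example gradient to a fixed L2 norm, sums the clipped gradients, and adds Gaussian noise before averaging. The following is a minimal NumPy sketch of that aggregation step for illustration only; it is not the authors' code, and the function name and parameters (`dp_sgd_step`, `clip_norm`, `noise_multiplier`) are our own labels, not identifiers from DP-RandP or Opacus.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One DP-SGD aggregation step: clip each per-example gradient to
    clip_norm in L2 norm, sum, add Gaussian noise with standard deviation
    noise_multiplier * clip_norm, and average over the batch."""
    rng = rng or np.random.default_rng(0)
    batch_size = per_example_grads.shape[0]
    # L2 norm of each example's (flattened) gradient.
    norms = np.linalg.norm(per_example_grads.reshape(batch_size, -1), axis=1)
    # Scale each gradient down so its norm is at most clip_norm.
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale[:, None]
    summed = clipped.sum(axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / batch_size

# Two toy per-example gradients: one with norm 5 (gets clipped), one with
# norm ~0.22 (left unchanged). noise_multiplier=0 isolates the clipping.
grads = np.array([[3.0, 4.0], [0.1, 0.2]])
noisy_mean = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=0.0)
# → array([0.35, 0.5])
```

In a real run the noise multiplier and clipping norm, together with the sampling rate and number of steps, determine the privacy budget ε via a privacy accountant (Opacus provides one); this sketch only shows the per-step mechanics.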