Training Private Models That Know What They Don’t Know
Authors: Stephan Rabanser, Anvith Thudi, Abhradeep Guha Thakurta, Krishnamurthy Dvijotham, Nicolas Papernot
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We now present a detailed experimental evaluation of the interplay between differential privacy and selective classification. All of our experiments are implemented using PyTorch [Paszke et al., 2019] and Opacus [Yousefpour et al., 2021]. |
| Researcher Affiliation | Collaboration | 1University of Toronto, 2Vector Institute, 3Google DeepMind |
| Pseudocode | Yes | Algorithm 1: SCTD [Rabanser et al., 2022] |
| Open Source Code | Yes | We publish our full code-base at the following URL: https://github.com/cleverhans-lab/selective-classification. |
| Open Datasets | Yes | Datasets: Fashion MNIST [Xiao et al., 2017], CIFAR-10 [Krizhevsky et al., 2009], SVHN [Netzer et al., 2011], GTSRB [Houben et al., 2013]. |
| Dataset Splits | No | The paper mentions evaluating on a 'test set' but does not provide specific train/validation/test split percentages, sample counts, or explicit splitting methodology needed for reproduction. |
| Hardware Specification | No | The paper thanks 'Relu Patrascu at the University of Toronto for the provided compute infrastructure needed to perform the experimentation outlined in this work' but does not specify any particular hardware models (e.g., GPU/CPU types, RAM). |
| Software Dependencies | No | The paper states, 'All of our experiments are implemented using PyTorch [Paszke et al., 2019] and Opacus [Yousefpour et al., 2021]', but does not specify version numbers for these software components. |
| Experiment Setup | Yes | All models are trained for 200 epochs using a mini-batch size of 128. For all DP training runs, we set the clipping norm to c = 10 and choose the noise multiplier adaptively to reach the overall privacy budget at the end of training. |
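The reported setup (clipping norm c = 10, noise multiplier chosen adaptively to meet the privacy budget) maps naturally onto Opacus's privacy engine. A minimal sketch of such a configuration is shown below; the model, optimizer, data loader, and the privacy-budget values (`target_epsilon`, `target_delta`) are illustrative placeholders, not values confirmed by the paper beyond those quoted above.

```python
import torch
from opacus import PrivacyEngine

# Placeholder model/optimizer/data loader -- stand-ins for the paper's actual setup.
model = torch.nn.Linear(784, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
train_loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(512, 784), torch.randint(0, 10, (512,))),
    batch_size=128,  # mini-batch size reported in the paper
)

privacy_engine = PrivacyEngine()
# make_private_with_epsilon solves for the noise multiplier so that the
# target (epsilon, delta) budget is reached after the given number of epochs,
# matching the paper's "noise multiplier chosen adaptively" description.
model, optimizer, train_loader = privacy_engine.make_private_with_epsilon(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    epochs=200,           # training length reported in the paper
    target_epsilon=8.0,   # illustrative budget; the paper sweeps several budgets
    target_delta=1e-5,    # illustrative delta
    max_grad_norm=10.0,   # clipping norm c = 10 from the paper
)
```

Note that without pinned PyTorch/Opacus versions (flagged as missing above), the exact noise multiplier Opacus derives for a given budget may differ slightly across releases.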