Differentially Private Learning with Small Public Data

Authors: Jun Wang, Zhi-Hua Zhou (pp. 6219-6226)

AAAI 2020

Reproducibility assessment (variable, result, and supporting LLM response):
Research Type: Experimental. Evidence: "In this section, we empirically evaluate the performance of PPSGD and compare it to the following baselines:"
Researcher Affiliation: Academia. Evidence: "Jun Wang, Zhi-Hua Zhou, National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China, {wangj, zhouzh}@lamda.nju.edu.cn"
Pseudocode: Yes. Evidence: "Algorithm 1 PPSGD"
Open Source Code: Yes. Evidence: "Detailed experimental setups, results, and Matlab codes for PPSGD can be found at http://www.lamda.nju.edu.cn/code_PPSGD.ashx."
Open Datasets: Yes. Evidence: Table 1, characteristics of real-world datasets:

    Classification dataset  # Sample  # Feature  % Positive
    adult-a                    32561        123        24.1
    ipums-br                   38000         52        50.6
    ipums-us                   39928         57        51.3
    magic04                    19020         10        64.8
    mini-boo-ne               130064         50        28.1
    skin                      245057          3        20.8

    Regression dataset      # Sample  # Feature  Variance
    cadata                     20640          8      0.23
    stability                  10000         12      0.15
Dataset Splits: No. The paper states that 80% of samples are randomly selected for training and the rest for testing, but it does not describe a separate validation split.
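The split protocol described above (a random 80/20 train/test partition with no validation set) can be sketched as follows. This is an illustrative Python reimplementation, not the authors' Matlab code; the function name and seed are our own choices.

```python
import numpy as np

def train_test_split_80_20(X, y, seed=0):
    """Randomly assign 80% of samples to training and the rest to testing,
    matching the split protocol described in the paper (no validation set)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    perm = rng.permutation(n)          # random order of sample indices
    cut = int(0.8 * n)                 # 80% boundary
    train_idx, test_idx = perm[:cut], perm[cut:]
    return X[train_idx], y[train_idx], X[test_idx], y[test_idx]

# Synthetic data shaped like the smallest dataset (skin: 3 features)
X = np.random.randn(1000, 3)
y = np.random.choice([-1, 1], size=1000)
X_tr, y_tr, X_te, y_te = train_test_split_80_20(X, y)
```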
Hardware Specification: No. No hardware details (e.g., CPU/GPU models or cloud resources) are given for running the experiments.
Software Dependencies: No. The paper mentions Matlab code for PPSGD but provides no version numbers for Matlab or any other software dependency.
Experiment Setup: Yes. Evidence: "To sum up, we set φ = 10, α = β = 0.3, ϕ = 100 for hinge loss, ϕ = 5 for square loss, and λ ∈ {0.01, 0.1, 1}."
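For context, a single step of generic differentially private SGD (per-example gradient clipping plus Gaussian noise) on a λ-regularized hinge loss can be sketched as below. This is a standard DP-SGD sketch, not the paper's PPSGD, whose use of the small public dataset is not reproduced here; treating λ as the regularization weight and the step size, clipping norm, and noise scale values are all our assumptions.

```python
import numpy as np

def dp_sgd_step(w, X_batch, y_batch, lam=0.1, lr=0.05, clip=1.0, sigma=1.0, rng=None):
    """One generic DP-SGD step for a lam-regularized hinge loss.
    NOT the paper's PPSGD: the public-data component is omitted."""
    rng = np.random.default_rng() if rng is None else rng
    grads = []
    for x, y in zip(X_batch, y_batch):
        margin = y * (x @ w)
        # Subgradient of max(0, 1 - y * w.x) plus L2 regularization term
        g = (-y * x if margin < 1 else np.zeros_like(w)) + lam * w
        norm = np.linalg.norm(g)
        grads.append(g / max(1.0, norm / clip))   # clip each gradient to norm <= clip
    g_bar = np.mean(grads, axis=0)
    # Gaussian noise calibrated to the clipping norm and batch size
    noise = rng.normal(0.0, sigma * clip / len(X_batch), size=w.shape)
    return w - lr * (g_bar + noise)

rng = np.random.default_rng(0)
w = np.zeros(3)
X = rng.standard_normal((32, 3))
y = rng.choice([-1.0, 1.0], size=32)
w = dp_sgd_step(w, X, y, rng=rng)
```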