Efficient Private ERM for Smooth Objectives

Authors: Jiaqi Zhang, Kai Zheng, Wenlong Mou, Liwei Wang

IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments demonstrate that our algorithm consistently outperforms existing methods in both utility and running time. To show the effectiveness of our algorithm on real-world data, we experimentally compare our algorithm with Bassily et al. [Bassily et al., 2014] for convex and strongly convex loss functions.
Researcher Affiliation | Academia | Jiaqi Zhang, Kai Zheng, Wenlong Mou, Liwei Wang; Key Laboratory of Machine Perception, MOE, School of Electronics Engineering and Computer Science, Peking University, Beijing 100871, China
Pseudocode | Yes | Algorithm 1 (Output Perturbation Full Gradient Descent) and Algorithm 2 (Random Round Private Stochastic Gradient Descent) are present in the paper (see the sketch after this table).
Open Source Code | No | The paper does not include an unambiguous statement or link indicating that the source code for the methodology described is publicly available.
Open Datasets | Yes | We consider (regularized) logistic regression on 3 UCI [Lichman, 2013] binary classification datasets and (regularized) Huber regression on 2 UCI regression datasets (see Table 2 for more details). Table 2 lists the specific datasets: BANK, ADULT, Credit Card, WINE, BIKE.
Dataset Splits | No | The paper mentions using datasets but does not provide specific details about training, validation, or test splits (e.g., percentages, sample counts, or references to predefined splits).
Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models, processor types, or memory used for running the experiments.
Software Dependencies | No | The paper does not provide specific software names with version numbers or other ancillary software details needed to replicate the experiment.
Experiment Setup | Yes | All parameters are chosen as stated in the theorems of both papers, except that we use a mini-batch version of SGD in [Bassily et al., 2014] with batch size m = 50, since their algorithm in its original form requires a prohibitive n^2 iterations on real data, which is too slow to run. We evaluate the minimization error E[F(w_priv, S)] − F(ŵ, S) and the running time of these algorithms under different ε ∈ {0.1, 0.5, 1, 2} and δ = 0.001. The experimental results are averaged over 100 independent rounds.
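For reference alongside the Pseudocode row above, the following is a minimal, hypothetical Python sketch of the output-perturbation pattern behind Algorithm 1: run ordinary full gradient descent on the empirical risk, then release the final iterate with Gaussian noise added once. The sensitivity bound, noise calibration, step size, and iteration count below are illustrative assumptions, not the constants derived in the paper.

```python
import numpy as np

def output_perturbation_gd(grad_F, w0, n, L, eps, delta, T=200, eta=0.1):
    """Hypothetical sketch of output-perturbation full gradient descent.

    grad_F : callable returning the full empirical gradient at a point w.
    n      : number of training examples in S.
    L      : assumed Lipschitz constant of the per-example loss.
    """
    w = np.array(w0, dtype=float)
    for _ in range(T):                       # plain (non-private) full gradient descent
        w = w - eta * grad_F(w)
    # Illustrative L2-sensitivity bound and Gaussian-mechanism calibration;
    # the paper derives its own constants, which may differ.
    sensitivity = 2.0 * L / n
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return w + np.random.normal(0.0, sigma, size=w.shape)   # perturb the output once
```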
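The Experiment Setup row also translates naturally into a small evaluation harness. The sketch below, built around hypothetical `train_private` and `F` callables, shows how the reported minimization error E[F(w_priv, S)] − F(ŵ, S) could be estimated by averaging over 100 independent rounds for each ε in {0.1, 0.5, 1, 2} with δ = 0.001; it is an assumed reconstruction of the protocol, not the authors' code.

```python
import numpy as np

def minimization_error(train_private, F, w_hat, S, eps, delta, rounds=100):
    """Average F(w_priv, S) - F(w_hat, S) over independent private runs.

    train_private(S, eps, delta) -> w_priv   (hypothetical private trainer)
    F(w, S)                      -> float    (empirical risk on dataset S)
    w_hat                        : non-private empirical risk minimizer
    """
    errors = [F(train_private(S, eps, delta), S) - F(w_hat, S)
              for _ in range(rounds)]
    return float(np.mean(errors))

# Sweep mirroring the reported setup (all names are placeholders):
# for eps in (0.1, 0.5, 1.0, 2.0):
#     print(eps, minimization_error(train_private, F, w_hat, S, eps, delta=0.001))
```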