Differentially Private Iterative Gradient Hard Thresholding for Sparse Learning

Authors: Lingxiao Wang, Quanquan Gu

IJCAI 2019

Reproducibility Variable | Result | LLM Response
-------------------------------------------------
Research Type | Experimental | "In this section, we present experimental results of our proposed algorithm on both synthetic and real datasets. We compare our algorithm with Two stage [Kifer et al., 2012] and Frank-Wolfe [Talwar et al., 2015] methods."
Researcher Affiliation | Academia | "Lingxiao Wang and Quanquan Gu, Department of Computer Science, University of California, Los Angeles, {lingxw,qgu}@cs.ucla.edu"
Pseudocode | Yes | "Algorithm 1 Differentially Private Iterative Gradient Hard Thresholding (DP-IGHT)" (a hedged sketch of this template follows the table)
Open Source Code | No | The paper does not provide concrete access to source code for the described methodology.
Open Datasets | Yes | "In this experiment, we use two real datasets, E2006-TFIDF dataset [Kogan et al., 2009] and RCV1 dataset [Lewis et al., 2004], for the evaluation of sparse linear regression and sparse logistic regression, respectively."
Dataset Splits | Yes | "E2006-TFIDF dataset, which consists of 16087 training examples, 3308 testing examples..."
Hardware Specification | No | The paper does not specify the hardware used to run its experiments.
Software Dependencies | No | The paper does not list the ancillary software, with version numbers, needed to replicate its experiments.
Experiment Setup | Yes | "For all the experiments, we choose the variance of the random noise of different methods as suggested by their theoretical guarantees, and select other parameters, such as the step size, iteration number, and thresholding parameter by five-fold cross-validation." (a cross-validation sketch follows the table)
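
The Pseudocode row refers to the paper's Algorithm 1 (DP-IGHT). As a rough illustration only, here is a minimal NumPy sketch of the iterative gradient hard thresholding template for sparse linear regression: take a gradient step, perturb it with Gaussian noise, and hard-threshold back to an s-sparse vector. The function name dp_ight_sketch and the default values of the step size eta, iteration count T, and noise scale sigma are illustrative assumptions; the paper calibrates the noise variance from its theoretical (epsilon, delta)-DP guarantees rather than using a fixed sigma.

import numpy as np

def hard_threshold(v, s):
    # Keep the s largest-magnitude entries of v; zero out the rest.
    out = np.zeros_like(v)
    top = np.argsort(np.abs(v))[-s:]
    out[top] = v[top]
    return out

def dp_ight_sketch(X, y, s, eta=0.5, T=100, sigma=1.0, rng=None):
    # Illustrative DP-IGHT loop for sparse linear regression (not the
    # authors' exact procedure): noisy gradient step + hard thresholding.
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(T):
        grad = X.T @ (X @ theta - y) / n        # least-squares gradient
        noise = sigma * rng.standard_normal(d)  # Gaussian mechanism; sigma
                                                # must match the DP budget
        theta = hard_threshold(theta - eta * (grad + noise), s)
    return theta

The hard-thresholding step is what keeps every iterate exactly s-sparse; the privacy guarantee comes entirely from the Gaussian noise added to the gradient.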
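
The Experiment Setup row states that hyperparameters such as the step size, iteration number, and thresholding parameter were chosen by five-fold cross-validation. A generic sketch of such a selection loop, assuming the dp_ight_sketch function above and a hypothetical candidate grid (the paper does not report its grids), might look like:

import numpy as np
from sklearn.model_selection import KFold

def cv_select(X, y, fit, grid, k=5):
    # Return the hyperparameter pair with the lowest mean held-out
    # squared error over k folds; `fit` is e.g. dp_ight_sketch above.
    kf = KFold(n_splits=k, shuffle=True, random_state=0)
    best, best_err = None, np.inf
    for eta, s in grid:
        fold_errs = []
        for tr, va in kf.split(X):
            theta = fit(X[tr], y[tr], s=s, eta=eta)
            fold_errs.append(np.mean((X[va] @ theta - y[va]) ** 2))
        if np.mean(fold_errs) < best_err:
            best, best_err = (eta, s), np.mean(fold_errs)
    return best

# Hypothetical grid; the actual candidate values are not reported.
# best_eta, best_s = cv_select(X, y, dp_ight_sketch,
#                              [(e, s) for e in (0.1, 0.5, 1.0)
#                                      for s in (10, 50, 100)])

Note that in a strictly private pipeline, repeatedly fitting across folds would consume additional privacy budget; this sketch only illustrates the model-selection mechanics.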