Efficient Private Empirical Risk Minimization for High-dimensional Learning

Authors: Shiva Prasad Kasiviswanathan, Hongxia Jin

ICML 2016

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | "In this paper, we theoretically study the problem of differentially private empirical risk minimization in the projected subspace (compressed domain)."
Researcher Affiliation | Industry | Shiva Prasad Kasiviswanathan (KASIVISW@GMAIL.COM), Samsung Research America, Mountain View, CA 94043; Hongxia Jin (HONGXIA.JIN@SAMSUNG.COM), Samsung Research America, Mountain View, CA 94043
Pseudocode | Yes | Mechanism 1 (PROJERM):
  Input: a random subgaussian matrix Φ ∈ R^{m×d}, and a dataset D = {(Φx_1, y_1), ..., (Φx_n, y_n)} of n datapoints from the domain M_Φ = {(Φx, y) : x ∈ R^d, ‖x‖ ≤ 1, y ∈ R, |y| ≤ 1}
  Output: θ_priv, a differentially private estimate of θ̂ ∈ argmin_{θ ∈ C} (1/n) Σ_{i=1}^n ℓ(⟨x_i, θ⟩; y_i)
  1. Let ϑ_priv ← output of an (ε, δ)-differentially private or an ε-differentially private ERM algorithm solving the following problem: argmin_{ϑ ∈ ΦC} (1/n) Σ_{i=1}^n ℓ(⟨Φx_i, ϑ⟩; y_i)
  2. θ_priv ← argmin_{θ ∈ R^d} ‖θ‖_C subject to Φθ = ϑ_priv (can be solved with any convex programming technique)
  3. Return: θ_priv
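The two-step structure of PROJERM (private ERM in the projected space, then a norm-minimizing lift back to R^d) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a least-squares loss, a Gaussian projection, a placeholder Laplace output-perturbation step standing in for the private ERM oracle (its noise scale is not a calibrated sensitivity bound), and an ℓ2-norm lift in place of the general Minkowski-norm minimization over C.

```python
import numpy as np

def projerm(X, y, m, epsilon, seed=None):
    """Hedged sketch of the PROJERM mechanism (assumptions noted above)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape

    # Random subgaussian projection Phi in R^{m x d}; rows of Z are Phi x_i.
    Phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, d))
    Z = X @ Phi.T

    # Step 1: ERM in the projected (m-dimensional) space.
    # Placeholder for the private ERM oracle: least squares plus Laplace
    # output perturbation; the scale 1/(n*epsilon) is illustrative only.
    theta_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)
    vartheta_priv = theta_hat + rng.laplace(0.0, 1.0 / (n * epsilon), size=m)

    # Step 2: lift back to R^d by solving "minimize norm(theta) subject to
    # Phi theta = vartheta_priv"; with the l2 norm this minimum-norm
    # solution is given by the pseudoinverse.
    theta_priv = np.linalg.pinv(Phi) @ vartheta_priv
    return theta_priv, vartheta_priv, Phi
```

Note that only the m-dimensional solve in step 1 touches the data privately; step 2 is post-processing of ϑ_priv, so it consumes no additional privacy budget.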
Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository.
Open Datasets | No | The paper is theoretical and discusses abstract datasets with "n datapoints" without referring to any specific, publicly available or open datasets for training.
Dataset Splits | No | The paper is theoretical and does not describe experimental setups, hence no dataset split information (train/validation/test) is provided.
Hardware Specification | No | The paper is theoretical and does not describe any experimental setup or the hardware used for it.
Software Dependencies | No | The paper is theoretical and does not describe any specific software implementations or their version numbers.
Experiment Setup | No | The paper is theoretical and does not describe any experimental setup details such as hyperparameters or training configurations.