Privately Learning Subspaces

Authors: Vikrant Singhal, Thomas Steinke

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We present differentially private algorithms that take input data sampled from a low-dimensional linear subspace (possibly with a small amount of error) and output that subspace (or an approximation to it). Theorem 1.1 (Main Result, Exact Case): for all n, d, k, ℓ ∈ ℕ and ε, δ > 0 satisfying n ≥ O(ℓ + log(1/δ)/ε), there exists a randomized algorithm M : ℝ^(d×n) → S^d_k (where S^d_k denotes the set of k-dimensional subspaces of ℝ^d) satisfying the following. Algorithm 1: DP Exact Subspace Estimator; Algorithm 2: DP Approximate Subspace Estimator.
Researcher Affiliation | Collaboration | Vikrant Singhal, Cheriton School of Computer Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada (vikrant.singhal@uwaterloo.ca); Thomas Steinke, Google Research, Brain Team, Mountain View, CA, United States of America (subspace@thomas-steinke.net)
Pseudocode | Yes | Algorithm 1: DP Exact Subspace Estimator DPESE_{ε,δ,k,ℓ}(X); Algorithm 2: DP Approximate Subspace Estimator DPASE_{ε,δ,α,γ,k}(X). A hedged sketch in the spirit of Algorithm 1 appears after this table.
Open Source Code | No | The paper contains no explicit statement about open-sourcing code and provides no link to a code repository.
Open Datasets | No | The paper describes theoretical algorithms and their mathematical properties, assuming data is sampled from a distribution (e.g., 'data comes from a Gaussian distribution'). It does not use or mention any specific public or open dataset.
Dataset Splits | No | The paper focuses on theoretical analysis and describes no experiments that would involve training, validation, or test dataset splits.
Hardware Specification | No | The paper is theoretical and describes no experiments, so no hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical and lists no software dependencies or version numbers needed for replication.
Experiment Setup | No | The paper is theoretical and describes no experimental setup details such as hyperparameters or training configurations.
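The report above only names the paper's pseudocode. As a rough illustration of the technique behind the exact case, the following is a minimal Python sketch of a stability-based (ε, δ)-DP histogram over candidate subspaces; it is not the authors' DPESE verbatim. The disjoint-block partitioning, the threshold constant, and the names canonical_span and dp_exact_subspace are assumptions of this sketch, which also assumes exact rational inputs so that equal subspaces receive identical canonical keys.

    # Hedged sketch (not the paper's Algorithm 1 verbatim): a stability-based
    # (eps, delta)-DP histogram over candidate subspaces.
    from collections import Counter
    from math import log
    import random

    from sympy import Matrix, Rational


    def canonical_span(points):
        # Canonical, hashable key for span(points): the nonzero rows of the
        # reduced row echelon form of the matrix whose rows are the points.
        m, _ = Matrix([[Rational(v) for v in p] for p in points]).rref()
        return tuple(tuple(m.row(i)) for i in range(m.rows) if any(m.row(i)))


    def dp_exact_subspace(X, k, eps, delta):
        # Partition X into disjoint blocks of k points (leftovers dropped)
        # and use each block's span as its histogram bucket, so changing one
        # input point touches at most two buckets. This partitioning is an
        # assumption of the sketch, not necessarily the paper's construction.
        blocks = [X[i:i + k] for i in range(0, len(X) - k + 1, k)]
        counts = Counter(canonical_span(b) for b in blocks)
        # Stability threshold; exact constants vary across references.
        threshold = 1 + 2 * log(2 / delta) / eps
        best, best_noisy = None, float("-inf")
        for key, c in counts.items():
            # Laplace(2/eps) noise, sampled as a difference of exponentials.
            noisy = c + random.expovariate(eps / 2) - random.expovariate(eps / 2)
            if noisy > best_noisy:
                best, best_noisy = key, noisy
        # Release the winner only if it clears the stability threshold.
        return [list(r) for r in best] if best_noisy >= threshold else None


    if __name__ == "__main__":
        # 100 integer points in the 2-dimensional subspace of Q^4 spanned by
        # b1 and b2; the sketch should recover that subspace's RREF basis.
        random.seed(0)
        b1, b2 = (1, 0, 2, -1), (0, 1, 1, 3)
        pts = [tuple(a * u + b * v for u, v in zip(b1, b2))
               for a, b in ((random.randint(1, 9), random.randint(1, 9))
                            for _ in range(100))]
        print(dp_exact_subspace(pts, k=2, eps=1.0, delta=1e-6))

Roughly, under the exact-case hypothesis of Theorem 1.1 (all but at most ℓ points lie in a common k-dimensional subspace, in general position), most blocks span that subspace, so its bucket dominates the histogram; the stability threshold is what makes the release (ε, δ)-differentially private over the unbounded domain of subspaces.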