Private Estimation with Public Data

Authors: Alex Bie, Gautam Kamath, Vikrant Singhal

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In Appendix D we present some proof-of-concept numerical simulations demonstrating the effectiveness of public data for private estimation.
Researcher Affiliation | Academia | Alex Bie (University of Waterloo, yabie@uwaterloo.ca); Gautam Kamath (University of Waterloo, g@csail.mit.edu); Vikrant Singhal (University of Waterloo, vikrant.singhal@uwaterloo.ca)
Pseudocode | Yes | Algorithm 1: Public Data Preconditioner, PubPreconditioner_β(X̃) (see the sketch after this table)
Open Source Code | Yes | No license in the repository (https://github.com/twistedcubic/coin-press); however, we received permission from the authors to use their code.
Open Datasets | No | The paper refers to drawing samples from Gaussian distributions and Gaussian mixtures. It does not use or provide access to named public datasets or benchmarks with explicit citations or URLs.
Dataset Splits | No | The paper does not explicitly mention training, validation, or test dataset splits in the context of machine learning experiments. It discusses public and private data samples, but not standard ML splits.
Hardware Specification | No | The paper does not provide specific details about the hardware used for its numerical simulations or experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers.
Experiment Setup | No | The paper states in Appendix D that it presents "proof-of-concept numerical simulations" but does not provide specific experimental setup details such as hyperparameters, learning rates, or batch sizes in the main text.
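
The Pseudocode row above refers to the paper's Algorithm 1, a public-data preconditioner. The minimal Python sketch below is a hedged illustration of the general idea only, not the authors' algorithm: a handful of public samples recenters and rescales the private data before a standard clipped Gaussian-mechanism mean release. The function name public_precondition_then_private_mean, the clipping radius 3*sqrt(d), and the (eps, delta) noise calibration are illustrative assumptions, not taken from the paper.

# Hedged sketch: public-data-assisted private mean estimation.
# Not the paper's Algorithm 1 (PubPreconditioner); parameter choices are illustrative.
import numpy as np

def public_precondition_then_private_mean(X_pub, X_priv, eps, delta, r=None):
    """Use a few public samples to recenter/rescale the private data, then
    release a clipped mean with Gaussian noise ((eps, delta)-DP mechanism)."""
    d = X_priv.shape[1]
    # Step 1 (public data, no privacy cost): rough center and scale.
    mu_pub = X_pub.mean(axis=0)
    sigma_pub = X_pub.std(axis=0).mean() + 1e-12
    # Step 2: shift/scale private data so typical points land in a small ball.
    Z = (X_priv - mu_pub) / sigma_pub
    # Clipping radius; 3*sqrt(d) is a heuristic for roughly isotropic data.
    if r is None:
        r = 3.0 * np.sqrt(d)
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    Z = Z * np.minimum(1.0, r / np.maximum(norms, 1e-12))
    # Step 3: Gaussian mechanism on the clipped mean (L2 sensitivity 2r/n).
    n = Z.shape[0]
    sensitivity = 2.0 * r / n
    sigma_noise = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    noisy_mean = Z.mean(axis=0) + np.random.normal(0.0, sigma_noise, size=d)
    # Undo the public preconditioning before returning the estimate.
    return mu_pub + sigma_pub * noisy_mean

# Example usage with synthetic Gaussian data (10 public, 1000 private samples).
rng = np.random.default_rng(0)
mu_true = np.full(5, 4.0)
X_pub = rng.normal(mu_true, 1.0, size=(10, 5))
X_priv = rng.normal(mu_true, 1.0, size=(1000, 5))
print(public_precondition_then_private_mean(X_pub, X_priv, eps=1.0, delta=1e-5))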