Privately Learning Mixtures of Axis-Aligned Gaussians

Authors: Ishaq Aden-Ali, Hassan Ashtiani, Christopher Liaw

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We prove that Õ(k²d log^{3/2}(1/δ)/α²ε) samples are sufficient to learn a mixture of k axis-aligned Gaussians in ℝ^d to within total variation distance α while satisfying (ε, δ)-differential privacy. To prove our results, we design a new technique for privately learning mixture distributions. The checklist item "If you ran experiments..." is marked [N/A]. (A back-of-the-envelope evaluation of this sample-complexity bound appears after the table.)
Researcher Affiliation | Academia | Ishaq Aden-Ali, Department of Computing and Software, McMaster University, adenali@mcmaster.ca; Hassan Ashtiani, Department of Computing and Software, McMaster University, zokaeiam@mcmaster.ca; Christopher Liaw, Department of Computer Science, University of Toronto, cvliaw@cs.toronto.edu
Pseudocode | Yes | Algorithm 1: Univariate-Mean-Decoder(β, γ, ε, δ, σ̃, D). (A generic sketch of the clip-and-noise pattern underlying such private estimators appears after the table.)
Open Source Code | No | The paper contains only theoretical results ("If you are including theoretical results..."); the checklist question "Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?" is answered [N/A].
Open Datasets | No | This is a theoretical paper focused on mathematical proofs and algorithms, not empirical evaluation on datasets. The ethics statement marks experiments as 'N/A', implying no specific dataset was used or made available.
Dataset Splits | No | The paper is theoretical and involves no empirical experiments with data; therefore, there are no training, validation, or test splits mentioned.
Hardware Specification | No | The paper is theoretical and conducts no experiments; therefore, no hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical and does not describe a software implementation or dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with hyperparameters or training settings.
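
To make the headline bound concrete, the sketch below evaluates the leading term of Õ(k²d log^{3/2}(1/δ)/α²ε) for illustrative parameter values. The Õ hides constants and polylogarithmic factors, so this is only an order-of-magnitude illustration, not a guarantee stated in the paper; the helper name sample_complexity and the example parameters are assumptions made here.

```python
import math

def sample_complexity(k: int, d: int, alpha: float, eps: float, delta: float) -> float:
    """Leading term of the paper's bound O~(k^2 d log^{3/2}(1/delta) / (alpha^2 eps)).

    The O~ hides constants and polylogarithmic factors, so the returned
    value is only an order-of-magnitude illustration.
    """
    return (k**2 * d * math.log(1 / delta) ** 1.5) / (alpha**2 * eps)

# Example: a 3-component mixture in R^10, accuracy alpha = 0.1, (1, 1e-6)-DP.
print(f"{sample_complexity(k=3, d=10, alpha=0.1, eps=1.0, delta=1e-6):.3g}")
```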
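The paper's Algorithm 1 (Univariate-Mean-Decoder) is not reproduced here. As a point of reference only, the sketch below shows the standard clip-then-add-Laplace-noise pattern for ε-differentially private univariate mean estimation, a generic mechanism that private mean estimators commonly build on; the function name private_clipped_mean, the clipping range, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def private_clipped_mean(data: np.ndarray, lo: float, hi: float,
                         eps: float, rng: np.random.Generator) -> float:
    """eps-DP estimate of a univariate mean via clipping + the Laplace mechanism.

    NOT the paper's Univariate-Mean-Decoder; this only illustrates the
    generic clip-then-add-noise pattern.
    """
    clipped = np.clip(data, lo, hi)
    # Replacing one sample moves the clipped mean by at most (hi - lo) / n,
    # so that is the sensitivity calibrating the Laplace noise scale.
    sensitivity = (hi - lo) / len(data)
    return float(clipped.mean() + rng.laplace(scale=sensitivity / eps))

# Usage: estimate the mean of 10,000 draws from N(2, 1) with eps = 1.
rng = np.random.default_rng(0)
samples = rng.normal(loc=2.0, scale=1.0, size=10_000)
print(private_clipped_mean(samples, lo=-10.0, hi=10.0, eps=1.0, rng=rng))
```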