Private estimation algorithms for stochastic block models and mixture models
Authors: Hongjie Chen, Vincent Cohen-Addad, Tommaso d’Orsi, Alessandro Epasto, Jacob Imola, David Steurer, Stefan Tiegel
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We introduce general tools for designing efficient private estimation algorithms in high-dimensional settings whose statistical guarantees almost match those of the best known non-private algorithms. |
| Researcher Affiliation | Collaboration | Hongjie Chen (ETH Zürich), Vincent Cohen-Addad (Google Research), Tommaso d'Orsi (Bocconi), Alessandro Epasto (Google Research), Jacob Imola (UC San Diego), David Steurer (ETH Zürich), Stefan Tiegel (ETH Zürich) |
| Pseudocode | Yes | Algorithm C.4 (private weak recovery for SBM); Algorithm C.12 (private exact recovery for SBM); Algorithm C.13 (private majority voting); Algorithm C.19 (inefficient algorithm for SBM); Algorithm D.5 (private algorithm for learning mixtures of Gaussians) |
| Open Source Code | No | The paper does not contain any statements about making its source code publicly available or providing links to a code repository. |
| Open Datasets | No | The paper describes theoretical models (stochastic block models, Gaussian mixture models) from which data is sampled. It does not use or provide access to any pre-existing public datasets for training or evaluation. |
| Dataset Splits | No | The paper defines theoretical models and algorithms. It does not describe experiments performed on specific datasets with training, validation, or test splits. |
| Hardware Specification | No | The paper does not mention any specific hardware used for computational work, such as GPU or CPU models. |
| Software Dependencies | No | The paper does not list specific software dependencies or their version numbers required to implement or run the described algorithms. |
| Experiment Setup | No | The paper describes theoretical algorithms and their properties (e.g., sample complexity, time complexity, privacy parameters). It does not detail an experimental setup with specific hyperparameters, training configurations, or system-level settings for empirical evaluation. |