On the Sample Complexity of Privately Learning Axis-Aligned Rectangles
Authors: Menachem Sadigurschi, Uri Stemmer
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We present a novel algorithm that reduces the sample complexity to only Õ(d · (log* |X|)^1.5), attaining a dimensionality-optimal dependency without requiring the sample complexity to grow with log |X|. The technique used to attain this improvement involves deleting exposed data points on the fly, in a fashion designed to avoid the cost of the adaptive composition theorems. The core of this technique may be of independent interest, introducing a new method for constructing statistically efficient private algorithms. |
| Researcher Affiliation | Collaboration | Department of Computer Science, Ben-Gurion University of the Negev. sadigurs@post.bgu.ac.il. Blavatnik School of Computer Science, Tel Aviv University and Google Research. u@uri.co.il. |
| Pseudocode | Yes | Algorithm 1: Rand Margins |
| Open Source Code | No | The paper does not include an unambiguous statement that the authors are releasing the code for the work described, nor does it provide a direct link to a source-code repository. |
| Open Datasets | No | The paper is theoretical and does not describe experiments performed on a dataset. It refers to a finite grid X^d as a theoretical domain, not a specific dataset. |
| Dataset Splits | No | The paper is theoretical and does not describe experiments performed on a dataset, thus no dataset split information for training, validation, or testing is provided. |
| Hardware Specification | No | The paper is theoretical and does not describe any experimental hardware specifications or setups. |
| Software Dependencies | No | The paper is theoretical and does not specify any software dependencies with version numbers for experimental replication. |
| Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with concrete hyperparameter values, training configurations, or system-level settings. |
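
Since the paper is theoretical and releases no code, the sketch below is only a point of reference for the learning task it studies: axis-aligned rectangles over a finite grid X^d. It shows the standard non-private baseline learner (per-coordinate min/max of the positive examples), not the paper's private RandMargins algorithm; the function names, grid range, and dimensionality are illustrative assumptions. The paper's contribution is achieving this task under differential privacy with sample complexity Õ(d · (log* |X|)^1.5).

```python
import numpy as np

# Illustrative non-private baseline for learning axis-aligned rectangles
# over a finite grid X^d. This is NOT the paper's private algorithm; it is
# the trivial ERM learner that the private algorithm must approximate.

def fit_rectangle(points: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Return (lower, upper) corners of the tightest axis-aligned rectangle
    containing all positively labeled points (non-private baseline)."""
    return points.min(axis=0), points.max(axis=0)

def predict(lower: np.ndarray, upper: np.ndarray, x: np.ndarray) -> bool:
    """Label a point positive iff it lies inside the learned rectangle."""
    return bool(np.all(x >= lower) and np.all(x <= upper))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical setup: positive examples drawn from the grid X = {0, ..., 99}
    # in d = 3 dimensions.
    positives = rng.integers(20, 80, size=(50, 3))
    lo, hi = fit_rectangle(positives)
    print("rectangle:", lo, hi)
    print("inside?", predict(lo, hi, np.array([50, 50, 50])))
```

A differentially private learner cannot release these per-coordinate extremes directly, since a single data point can shift them arbitrarily; the paper's technique instead removes exposed data points as they are used, so the privacy cost does not accumulate through the adaptive composition theorems.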