On Learning and Refutation in Noninteractive Local Differential Privacy
Authors: Alexander Edmonds, Aleksandar Nikolov, Toniann Pitassi
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | The paper states: "Our main result is a complete characterization of the sample complexity of agnostic PAC learning for non-interactive LDP protocols. We show that the optimal sample complexity for any concept class is captured by the approximate γ2 norm of a natural matrix associated with the class. Combined with previous work, this gives an equivalence between agnostic learning and refutation in the agnostic setting." The self-assessment also answers [Yes] to 2(b), "Did you include complete proofs of all theoretical results?" |
| Researcher Affiliation | Academia | Alexander Edmonds (University of Toronto, alex.edmonds@utoronto.ca); Aleksandar Nikolov (University of Toronto, anikolov@cs.toronto.edu); Toniann Pitassi (Columbia University & University of Toronto, toni@cs.toronto.edu) |
| Pseudocode | No | The paper contains no pseudocode or labeled algorithm blocks; its content is purely theoretical, consisting of proofs and mathematical derivations. The self-assessment answers [N/A] to 3(a), "Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?" |
| Open Source Code | No | The paper is theoretical and describes no software implementation. The self-assessment answers [N/A] to 3(a), "Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?" |
| Open Datasets | No | The paper is theoretical and involves no experiments with datasets. The self-assessment answers [N/A] to 3(a), "Did you include the code, data, and instructions needed to reproduce the main experimental results?", and to 4(a), "If your work uses existing assets, did you cite the creators?" |
| Dataset Splits | No | The paper is theoretical and involves no experiments with dataset splits. The self-assessment answers [N/A] to 3(b), "Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?" |
| Hardware Specification | No | The paper is theoretical and reports no experiments that would require hardware specifications. The self-assessment answers [N/A] to 3(d), "Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?" |
| Software Dependencies | No | The paper is theoretical and describes no computational experiments that would require software dependencies; no software or version details are provided. |
| Experiment Setup | No | The paper is theoretical and specifies no experimental setup such as hyperparameters or training configurations. The self-assessment answers [N/A] to 3(b), "Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?" |
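The γ2 (factorization) norm that drives the paper's sample-complexity characterization has a concrete certificate form: for any factorization A = B·C, the product of the largest ℓ2 row norm of B and the largest ℓ2 column norm of C upper-bounds γ2(A), and γ2(A) is the minimum of this product over all factorizations. The paper applies this to a natural matrix associated with the concept class; the toy sketch below (our illustration, not the paper's construction — the helper names are hypothetical) verifies one such certificate for the 2×2 all-ones matrix:

```python
import math

# gamma2(A) = min over factorizations A = B @ C of
#   (max l2 row norm of B) * (max l2 column norm of C).
# Any single factorization therefore certifies an upper bound.

def matmul(B, C):
    # Plain nested-loop matrix product, enough for tiny examples.
    return [[sum(B[i][k] * C[k][j] for k in range(len(C)))
             for j in range(len(C[0]))] for i in range(len(B))]

def gamma2_upper_bound(B, C):
    # Bound certified by the factorization A = B @ C.
    max_row = max(math.sqrt(sum(x * x for x in row)) for row in B)
    max_col = max(math.sqrt(sum(C[k][j] ** 2 for k in range(len(C))))
                  for j in range(len(C[0])))
    return max_row * max_col

# 2x2 all-ones matrix J factors as the outer product of two 1-vectors,
# certifying gamma2(J) <= 1 (which is tight, since gamma2 >= max |entry|).
J = [[1.0, 1.0], [1.0, 1.0]]
B = [[1.0], [1.0]]   # 2x1
C = [[1.0, 1.0]]     # 1x2
assert matmul(B, C) == J
print(gamma2_upper_bound(B, C))  # 1.0
```

Computing γ2 exactly requires minimizing over all factorizations (it is expressible as a semidefinite program), but certificate bounds like the one above already illustrate why the quantity is tractable to reason about.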