Private Testing of Distributions via Sample Permutations
Authors: Maryam Aliakbarpour, Ilias Diakonikolas, Daniel Kane, Ronitt Rubinfeld
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper, we use the framework of property testing to design algorithms to test the properties of the distribution that the data is drawn from with respect to differential privacy. |
| Researcher Affiliation | Academia | Maryam Aliakbarpour (CSAIL, MIT) maryama@mit.edu; Ilias Diakonikolas (University of Wisconsin, Madison) ilias.diakonikolas@gmail.com; Daniel Kane (University of California, San Diego) dakane@ucsd.edu; Ronitt Rubinfeld (CSAIL, MIT, TAU) ronitt@csail.mit.edu |
| Pseudocode | Yes | Algorithm 1 A private procedure for property testing (an illustrative, non-authoritative sketch of such a privatized test appears after this table) |
| Open Source Code | No | The paper is theoretical and focuses on algorithm design and proofs; it does not mention releasing open-source code for the described methodology or provide any links to a code repository. |
| Open Datasets | No | The paper is theoretical and does not conduct experiments on named datasets. It refers to drawing 'samples' for theoretical analysis but does not mention specific, publicly available datasets or provide any access information (links, citations). |
| Dataset Splits | No | The paper is theoretical and does not involve empirical experiments with dataset splits. No information on training, validation, or test splits is provided. |
| Hardware Specification | No | The paper is purely theoretical and focuses on algorithm design and analysis, without conducting any experiments that would require specific hardware. Therefore, no hardware specifications are mentioned. |
| Software Dependencies | No | The paper is purely theoretical and does not describe any specific software dependencies with version numbers that would be required for implementation or experimentation. |
| Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with specific hyperparameters or system-level training settings. The parameters discussed (n, epsilon, s, k) are theoretical inputs to the algorithm, not experimental configuration details. |
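
The paper's Algorithm 1 itself is not reproduced in this summary. As a rough, hypothetical illustration of what a "private procedure for property testing" can look like (see the Pseudocode row above), the sketch below privatizes a simple uniformity-test statistic with the standard Laplace mechanism. The choice of statistic, the sensitivity bound, the threshold, and all function names are assumptions made for illustration; they are not taken from the paper and do not achieve its sample-complexity guarantees. The inputs `n` (number of samples), `k` (domain size), and `epsilon` (privacy parameter) mirror the theoretical parameters noted in the Experiment Setup row, but only as a toy.

```python
import numpy as np


def laplace_mechanism(value, sensitivity, epsilon, rng):
    """Release `value` under epsilon-DP by adding Laplace(sensitivity/epsilon) noise."""
    return value + rng.laplace(scale=sensitivity / epsilon)


def private_uniformity_test(samples, k, epsilon, threshold, rng=None):
    """Toy private test of whether `samples` over {0, ..., k-1} look uniform.

    Statistic: total variation distance between the empirical distribution and
    the uniform distribution. Replacing a single sample changes this statistic
    by at most 1/n, so 1/n is used as the sensitivity for the Laplace mechanism.
    The threshold is an illustrative choice, not a calibrated one.
    """
    if rng is None:
        rng = np.random.default_rng()
    n = len(samples)
    counts = np.bincount(samples, minlength=k)
    empirical = counts / n
    tv_distance = 0.5 * np.abs(empirical - 1.0 / k).sum()
    noisy_stat = laplace_mechanism(tv_distance, sensitivity=1.0 / n,
                                   epsilon=epsilon, rng=rng)
    return "accept: close to uniform" if noisy_stat <= threshold else "reject: far from uniform"


# Example: n = 5000 samples from the uniform distribution over k = 100 symbols.
rng = np.random.default_rng(0)
samples = rng.integers(0, 100, size=5000)
print(private_uniformity_test(samples, k=100, epsilon=1.0, threshold=0.2, rng=rng))
```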