Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Instance-optimality in differential privacy via approximate inverse sensitivity mechanisms
Authors: Hilal Asi, John C. Duchi
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We study and provide instance-optimal algorithms in differential privacy by extending and approximating the inverse sensitivity mechanism. We provide two approximation frameworks... Finally, we use our approximation framework to develop private mechanisms for unbounded-range mean estimation, principal component analysis, and linear regression. The utility improvements in these examples demonstrate the advantages of our mechanisms over standard frameworks and the importance of these notions of instance-optimality. |
| Researcher Affiliation | Academia | Hilal Asi, Stanford University; John C. Duchi, Stanford University. |
| Pseudocode | Yes | Algorithm 1: Sampling from approximate inverse sensitivity; Algorithm 2: Private PCA using approximate inverse sensitivity; Algorithm 3: Gradient mechanism for linear regression (and Algorithm 4 in Appendix D.1). |
| Open Source Code | No | The paper does not contain any explicit statement about making source code available or provide any links to a code repository. |
| Open Datasets | No | The paper discusses applications to problems like mean estimation ("Given x_i drawn i.i.d. from P with unbounded range") and linear regression ("we have data points (x_i, y_i) ∈ ℝ^d × ℝ"), but it does not specify or provide access information for any public or open datasets. |
| Dataset Splits | No | The paper does not provide specific dataset split information (e.g., exact percentages, sample counts, or a detailed splitting methodology for training, validation, and test sets). |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., exact GPU/CPU models or memory specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names with version numbers, needed to replicate the experiments. |
| Experiment Setup | No | The paper primarily focuses on theoretical analysis and algorithm design rather than empirical implementation details. It does not provide specific hyperparameter values, training configurations, or system-level settings for its applications. |
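For context on the mechanism the paper's pseudocode (Algorithm 1) approximates: the inverse sensitivity mechanism scores each candidate output t by the minimum number of dataset entries that must be changed for t to become the true answer, then samples t with probability proportional to exp(-ε·dist/2). The sketch below is a toy illustration for a discrete-domain median, not the paper's implementation; the function name and example are our own.

```python
import math
import random

def inverse_sensitivity_median(data, domain, epsilon, seed=0):
    """Toy sketch: sample a private median via the inverse sensitivity mechanism.

    For each candidate output t, dist(t) is the minimum number of data
    entries that must change for t to become a median; the mechanism
    samples t with probability proportional to exp(-epsilon * dist(t) / 2).
    """
    n = len(data)
    m = (n + 1) // 2  # rank a median must reach from both sides
    weights = []
    for t in domain:
        le = sum(1 for x in data if x <= t)  # entries at or below t
        ge = sum(1 for x in data if x >= t)  # entries at or above t
        dist = max(0, m - le, m - ge)        # changes needed to make t a median
        weights.append(math.exp(-epsilon * dist / 2.0))
    rng = random.Random(seed)
    return rng.choices(list(domain), weights=weights, k=1)[0]
```

The exact distances are easy here because the median has simple structure; for the problems treated in the paper (unbounded-range mean estimation, PCA, linear regression) they can be intractable, which is why its Algorithm 1 samples from an approximation instead.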