Learning-augmented private algorithms for multiple quantile release
Authors: Mikhail Khodak, Kareem Amin, Travis Dick, Sergei Vassilvitskii
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conclude with experiments on challenging tasks demonstrating that learning predictions across one or more instances can lead to large error reductions while preserving privacy. |
| Researcher Affiliation | Collaboration | ¹Carnegie Mellon University; work done in part as an intern at Google Research New York. ²Google Research New York. |
| Pseudocode | Yes | Algorithm 2: Approximate Quantiles with predictions (a minimal sketch of the underlying mechanism follows the table) |
| Open Source Code | Yes | Code to reproduce our results is available at https://github.com/mkhodak/private-quantiles. |
| Open Datasets | Yes | We evaluate this approach... on Adult (Kohavi, 1996) and Goodreads (Wan & McAuley, 2018)... |
| Dataset Splits | Yes | Adult tests the D ≠ D′ case, with its train set the public dataset and a hundred samples from test as private. (A sketch of this split follows the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as exact GPU or CPU models, memory configurations, or specific cloud computing instance types. |
| Software Dependencies | No | The paper mentions various software components and packages used, such as COCOB (with a GitHub link), textstat (with a GitHub link), NLTK, and an adaptation of DP-FTRL from Google Research. However, it does not provide specific version numbers for any of these dependencies or the general programming environment (e.g., Python version, PyTorch/TensorFlow versions). |
| Experiment Setup | Yes | We use the following reasonable guesses for locations ν, scales σ, and quantile ranges [a, b] for these distributions: age: ν = 40, σ = 5, a = 10, b = 120; hours: ν = 40, σ = 2, a = 0, b = 168; rating: ν = 2.5, σ = 0.5, a = 0, b = 5; page count: ν = 200, σ = 25, a = 0, b = 1000. (These guesses are written out as a config sketch below.) |
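
The paper's Algorithm 2 releases many quantiles recursively; as a rough illustration of its core primitive, the sketch below implements a single ε-DP quantile via the exponential mechanism, with a learned prediction supplied as the mechanism's base measure. This is a minimal sketch, not the authors' exact Algorithm 2: the function name `private_quantile` and the `prior_cdf` interface are assumptions for illustration, not the API of the linked repository.

```python
import numpy as np

def private_quantile(x, q, eps, a, b, prior_cdf=None, rng=None):
    """Release an eps-DP estimate of the q-quantile of x over [a, b] using the
    exponential mechanism, with an optional prediction as its base measure.

    Sketch only: the paper's Algorithm 2 releases multiple quantiles
    recursively; this shows the single-quantile primitive.
    """
    rng = np.random.default_rng() if rng is None else rng
    z = np.sort(np.clip(np.asarray(x, dtype=float), a, b))
    edges = np.concatenate(([a], z, [b]))        # n+1 inter-point intervals
    n = len(z)
    ranks = np.arange(n + 1)                     # data points below interval i
    logits = -(eps / 2) * np.abs(ranks - q * n)  # utility has sensitivity 1
    if prior_cdf is None:
        mass = edges[1:] - edges[:-1]            # uniform base measure on [a, b]
    else:
        mass = prior_cdf(edges[1:]) - prior_cdf(edges[:-1])
    logits += np.log(np.maximum(mass, 1e-12))
    p = np.exp(logits - logits.max())
    i = rng.choice(n + 1, p=p / p.sum())
    # Simplification: sample uniformly within the chosen interval; a faithful
    # base-measure version would sample from the prior restricted to it.
    return rng.uniform(edges[i], edges[i + 1])
```

With `prior_cdf=None` the base measure is uniform on [a, b], recovering the standard prediction-free exponential mechanism that the learning-augmented analysis compares against.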
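
To make the "Dataset Splits" row concrete, here is a hypothetical reconstruction of the Adult public/private split: the UCI train file is treated as public and 100 test rows as the private dataset. File paths and column names follow the standard UCI release; the exact preprocessing in the authors' repository may differ.

```python
import pandas as pd

# Standard UCI Adult column names (assumed; not taken from the paper's code).
cols = ["age", "workclass", "fnlwgt", "education", "education-num",
        "marital-status", "occupation", "relationship", "race", "sex",
        "capital-gain", "capital-loss", "hours-per-week",
        "native-country", "income"]

public = pd.read_csv("adult.data", names=cols, skipinitialspace=True)
# adult.test begins with a "|1x3 Cross validator" comment line, hence skiprows=1.
test = pd.read_csv("adult.test", names=cols, skiprows=1, skipinitialspace=True)
private = test.sample(n=100, random_state=0)   # a hundred test samples as private

# e.g. learn predictions for 'age' quantiles on public data, release on private.
public_ages = public["age"].to_numpy(dtype=float)
private_ages = private["age"].to_numpy(dtype=float)
```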
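
The prior guesses from the "Experiment Setup" row can be collected into a config and turned into CDFs for the mechanism sketched above. Pairing each guess with a Cauchy location-scale family is our assumption for illustration; the paper's exact prior parameterization may differ.

```python
from functools import partial
from scipy.stats import cauchy

# Location nu, scale sigma, and quantile range [a, b] per feature, as quoted
# in the table above.
PRIORS = {
    "age":        dict(nu=40,  sigma=5,   a=10, b=120),
    "hours":      dict(nu=40,  sigma=2,   a=0,  b=168),
    "rating":     dict(nu=2.5, sigma=0.5, a=0,  b=5),
    "page count": dict(nu=200, sigma=25,  a=0,  b=1000),
}

def prior_cdf(feature):
    # Assumed family: a Cauchy CDF centered at the guessed location.
    p = PRIORS[feature]
    return partial(cauchy.cdf, loc=p["nu"], scale=p["sigma"])

# e.g. a private median age with the prediction as base measure:
# private_quantile(private_ages, 0.5, eps=1.0, a=10, b=120,
#                  prior_cdf=prior_cdf("age"))
```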