Smoothly Bounding User Contributions in Differential Privacy
Authors: Alessandro Epasto, Mohammad Mahdian, Jieming Mao, Vahab Mirrokni, Lijie Ren
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conclude with experimental evaluations which validate our theoretical results. |
| Researcher Affiliation | Industry | All five authors (Alessandro Epasto, Mohammad Mahdian, Jieming Mao, Vahab Mirrokni, Lijie Ren) are at Google Research, 111 8th Ave, New York, NY 10011; emails: aepasto@google.com, mahdian@google.com, maojm@google.com, mirrokni@google.com, renlijie@google.com. |
| Pseudocode | Yes | Algorithm 1: Weighted Averaging WA_c (an illustrative sketch of the capped-weighting idea appears below the table). |
| Open Source Code | No | The paper does not provide an explicit statement or link to open-source code for the described methodology. |
| Open Datasets | Yes | We evaluated all methods on two publicly-available datasets containing real-world data as well as synthetic datasets... drugs [GKMZ18] (n = 3107, d = 8, m = 502 users...) and news [MT18] (n = 3452, d = 10, m = 297 users...). |
| Dataset Splits | No | The paper describes its experimental setup and evaluations (e.g., 'Experiments are repeated 10 times'), but does not specify explicit training, validation, or test dataset splits (e.g., percentages or sample counts) for reproducibility. |
| Hardware Specification | No | The paper does not specify any particular hardware (e.g., GPU models, CPU types, memory amounts) used for running the experiments. |
| Software Dependencies | No | The paper mentions sklearn's make_regression for synthetic data generation (an illustrative call is sketched below the table), but does not provide version numbers for any software dependencies. |
| Experiment Setup | Yes | Experiments are repeated 10 times and we report the mean of each metric computed. For quality we use the average squared error of the prediction. We evaluate our general setting algorithm in Section 5 using ε = 1, 2, 3. (A hedged evaluation harness along these lines is sketched below the table.) |
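
For the pseudocode row: the paper's Algorithm 1 is a weighted averaging procedure WA_c that bounds how much any single user contributes. The sketch below is only a minimal illustration of that general idea under stated assumptions (values in [0, value_range], per-user weight capped at cap_c, Laplace noise calibrated for a neighboring relation that changes one user's values while keeping counts fixed); the function name, weighting rule, and noise calibration are this page's assumptions, not the paper's exact WA_c.

```python
import numpy as np

def capped_weighted_average(user_values, cap_c, epsilon, value_range=1.0, rng=None):
    """Illustrative sketch only, not the paper's WA_c.

    Assumptions: every value lies in [0, value_range]; a user with m_u samples
    gets per-sample weight min(1, cap_c / m_u), so no user's total weight
    exceeds cap_c; noise is calibrated for a neighboring relation that changes
    one user's values while keeping sample counts fixed.
    """
    rng = rng or np.random.default_rng()
    weights, values = [], []
    for vals in user_values:          # one list of values per user
        m_u = len(vals)
        w = min(1.0, cap_c / m_u)     # cap this user's total weight at cap_c
        weights.extend([w] * m_u)
        values.extend(vals)
    weights = np.asarray(weights)
    values = np.asarray(values)
    total_w = weights.sum()
    avg = np.dot(weights, values) / total_w
    # Changing one user's values moves the weighted sum by at most
    # cap_c * value_range, so the weighted mean moves by at most
    # cap_c * value_range / total_w (total_w is unchanged under this relation).
    sensitivity = cap_c * value_range / total_w
    return avg + rng.laplace(scale=sensitivity / epsilon)
```

A larger cap_c uses heavy contributors' data more fully but forces more noise; how to tune that trade-off is the question the paper's smooth contribution bounding addresses.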
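
For the software-dependencies row: the synthetic datasets were generated with sklearn's make_regression. A minimal, version-agnostic call is shown below; the parameter values are placeholders, since the paper does not report its exact settings.

```python
from sklearn.datasets import make_regression

# Placeholder parameters; the paper does not state its exact make_regression settings.
X, y = make_regression(n_samples=3000, n_features=10, noise=10.0, random_state=0)
```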
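
For the experiment-setup row: runs are repeated 10 times and the mean average squared prediction error is reported for ε = 1, 2, 3. A hedged harness along those lines is sketched below; `private_fit_predict` is a hypothetical stand-in for whichever private estimator is being evaluated, not an interface from the paper.

```python
import numpy as np

def evaluate(private_fit_predict, X, y, epsilons=(1, 2, 3), repeats=10, seed=0):
    """Repeat each private run and report the mean average squared error,
    mirroring the stated setup. `private_fit_predict` is hypothetical."""
    rng = np.random.default_rng(seed)
    results = {}
    for eps in epsilons:
        errors = []
        for _ in range(repeats):
            y_pred = private_fit_predict(X, y, epsilon=eps, rng=rng)
            errors.append(float(np.mean((y - y_pred) ** 2)))  # average squared error
        results[eps] = float(np.mean(errors))                 # mean over the repeats
    return results
```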