Regression with Label Differential Privacy
Authors: Badih Ghazi, Pritish Kamath, Ravi Kumar, Ethan Leeman, Pasin Manurangsi, Avinash Varadarajan, Chiyuan Zhang
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We carry out a thorough experimental evaluation on several datasets demonstrating the efficacy of our algorithm. |
| Researcher Affiliation | Industry | Email: {badih.ghazi, ravi.k53}@gmail.com, {pritishk, ethanleeman, pasin, avaradar, chiyuan}@google.com |
| Pseudocode | Yes | Algorithm 1: RR-on-Bins_Φ^ε. [...] Algorithm 2: Compute optimal Φ for RR-on-Bins_Φ^ε. [...] Algorithm 3: Labels Party's Randomizer LabelRandomizer_{ε1,ε2}. (A hedged sketch of the RR-on-Bins mechanism appears after this table.) |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described in this paper. |
| Open Datasets | Yes | The Criteo Sponsored Search Conversion Log Dataset (Tallis & Yadav, 2018) is publicly available from https://ailab.criteo.com/criteo-sponsored-search-conversion-log-dataset/. [...] The 1940 US Census dataset has been made publicly available for research since 2012 |
| Dataset Splits | No | The paper mentions "80%/20% train/test splits" but does not specify a validation split. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types) used for running its experiments. |
| Software Dependencies | No | The paper mentions software like "TensorFlow Privacy" and "PyTorch Opacus" but does not specify version numbers for these or other relevant libraries/solvers. |
| Experiment Setup | Yes | We use learning rate 0.001 with cosine decay (Loshchilov & Hutter, 2017), batch size 8192, and train for 50 epochs. [...] We use learning rate 0.001 with cosine decay (Loshchilov & Hutter, 2017), batch size 8192, and train for 200 epochs. (A hedged sketch of this schedule appears after the table.) |
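
The Pseudocode row quotes Algorithm 1, RR-on-Bins_Φ^ε, a randomized-response mechanism over a finite set of bin values Φ. Below is a minimal sketch of such a mechanism, assuming the label is first quantized to its nearest bin and then passed through standard k-ary randomized response; the paper's Algorithm 2 additionally chooses Φ (and the label-to-bin mapping) optimally for a given loss, which this sketch does not attempt. The function name `rr_on_bins` and the example bin values are illustrative, not from the paper.

```python
import math
import random

def rr_on_bins(y, bins, epsilon):
    """Hypothetical sketch of an RR-on-Bins-style label randomizer.

    Quantizes the label y to the nearest value in `bins`, then applies
    k-ary randomized response over the bin values: the quantized bin is
    reported with probability e^eps / (e^eps + k - 1), and each other
    bin with probability 1 / (e^eps + k - 1).
    """
    k = len(bins)
    true_bin = min(range(k), key=lambda i: abs(bins[i] - y))  # nearest-bin map
    p_true = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p_true:
        return bins[true_bin]
    # Otherwise report one of the remaining k - 1 bins uniformly at random.
    other = random.randrange(k - 1)
    return bins[other if other < true_bin else other + 1]

# Example: privatize a conversion-value label with 4 bins at eps = 1.
print(rr_on_bins(37.5, bins=[0.0, 10.0, 50.0, 200.0], epsilon=1.0))
```

Because the output probability ratio between any two labels is at most e^ε, the reported bin is ε-label-DP regardless of how the bins were chosen; optimizing Φ only affects utility, not privacy.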
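
The Experiment Setup row quotes a learning rate of 0.001 with cosine decay (Loshchilov & Hutter, 2017) and batch size 8192. Below is a minimal sketch of that schedule, assuming per-step decay to a floor of zero; the dataset size, the `min_lr` floor, and the step granularity are assumptions not stated in the paper.

```python
import math

def cosine_decay_lr(step, total_steps, base_lr=0.001, min_lr=0.0):
    """Cosine-decay schedule (Loshchilov & Hutter, 2017).

    Decays from base_lr to min_lr over total_steps following half a
    cosine period; `min_lr` and per-step decay are assumptions here.
    """
    progress = min(step / total_steps, 1.0)
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * progress))

# Example: 50 epochs at batch size 8192 on a dataset of N examples.
N = 15_000_000  # illustrative dataset size, not from the paper
steps_per_epoch = N // 8192
total = 50 * steps_per_epoch
for s in [0, total // 2, total]:
    print(s, cosine_decay_lr(s, total))
```

The quoted setups differ only in epoch count (50 vs. 200), so the same schedule applies with a correspondingly longer `total_steps` horizon.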