Task-level Differentially Private Meta Learning
Authors: Xinyu Zhou, Raef Bassily
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we conduct several experiments demonstrating the effectiveness of our proposed algorithms. |
| Researcher Affiliation | Academia | Xinyu Zhou Department of Computer Science & Engineering The Ohio State University zhou.3542@buckeyemail.osu.edu Raef Bassily Department of Computer Science & Engineering and TDAI Institute The Ohio State University bassily.1@osu.edu |
| Pseudocode | Yes | Algorithm 1: A_meta-NSGD: meta learning with mini-batch noisy SGD |
| Open Source Code | Yes | Our code is available online at https://github.com/xyzhou055/MetaNSGD |
| Open Datasets | Yes | We consider a linear regression task with mean square loss. Each task contains 10 datapoints (x_i, y_i)_{i=1}^{10}. In Appendix C, we present additional experiments to evaluate our algorithms on Omniglot [28] few-shot classification tasks |
| Dataset Splits | No | The paper describes the number of data points per task ("Each task contains 10 datapoints") and the overall distribution of tasks, but does not explicitly provide specific train/validation/test splits for the collection of tasks used in the meta-learning experiments. The ethics checklist indicates this information might be present elsewhere, but it is not in the provided main text. |
| Hardware Specification | No | The paper's checklist explicitly answers 'Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?' with '[No]'. No specific hardware details are mentioned in the main text. |
| Software Dependencies | No | The paper mentions using 'TensorFlow Privacy [27]' but does not specify its version number or any other software dependencies with their respective versions. |
| Experiment Setup | Yes | We let d = 30 for all experiments. Each task contains 10 datapoints (x_i, y_i)_{i=1}^{10}. The privacy parameters ϵ are chosen from {1, 3, 10} and δ is set as 10^-5. We set the clipping norm to 2. In the experiment, t, the number of clusters in the underlying distribution, is set to be 3, σ = 0.5, and we choose {h_1, h_2, h_3} to be three orthogonal vectors with different norms. |
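The quoted experiment setup describes a synthetic task distribution: 30-dimensional linear regression, 10 points per task, 3 clusters whose centers are orthogonal vectors of different norms, and noise scale σ = 0.5. A minimal sketch of such a task generator is below; the axis-aligned choice of h_1, h_2, h_3 and the Gaussian feature/noise distributions are assumptions for illustration, not details from the paper.

```python
import numpy as np

# Sketch of the synthetic task distribution quoted above (assumed details
# marked in comments): d = 30, 10 points per task, t = 3 clusters,
# sigma = 0.5, orthogonal centers h_1, h_2, h_3 with different norms.
rng = np.random.default_rng(0)

d, n_per_task, t, sigma = 30, 10, 3, 0.5

# Assumed construction: axis-aligned orthogonal centers with norms 1, 2, 3.
H = np.zeros((t, d))
for k in range(t):
    H[k, k] = float(k + 1)

def sample_task():
    """Draw one task: pick a cluster center, then 10 noisy linear samples."""
    h = H[rng.integers(t)]                      # task's true regression vector
    X = rng.standard_normal((n_per_task, d))    # assumed Gaussian features
    y = X @ h + sigma * rng.standard_normal(n_per_task)
    return X, y

X, y = sample_task()
print(X.shape, y.shape)  # (10, 30) (10,)
```

Each sampled task would then be fed to the paper's meta-learning procedure (e.g., Algorithm 1, A_meta-NSGD) with the quoted privacy parameters ϵ ∈ {1, 3, 10}, δ = 10^-5, and clipping norm 2.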