Private Everlasting Prediction
Authors: Moni Naor, Kobbi Nissim, Uri Stemmer, Chao Yan
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We then present a generic construction of private everlasting predictors in the PAC model. The sample complexity of the initial training sample in our construction is quadratic (up to polylog factors) in the VC dimension of the concept class. Our construction allows prediction for all concept classes with finite VC dimension, and in particular threshold functions over infinite domains, for which (traditional) private learning is known to be impossible. Our generic construction, Algorithm GenericBBL, transforms a (non-private) learner for a concept class C into a private everlasting predictor for C. The theorem below follows from Theorem 6.2 and Claim 6.3, which are proved in Appendix E. (A hedged sketch of this construction appears after the table.) |
| Researcher Affiliation | Collaboration | Department of Computer Science and Applied Math, Weizmann Institute of Science. moni.naor@weizmann.ac.il. Incumbent of the Judith Kleeman Professorial Chair. Research supported in part by grants from the Israel Science Foundation (no. 2686/20), by the Simons Foundation Collaboration on the Theory of Algorithmic Fairness, and by the Israeli Council for Higher Education (CHE) via the Weizmann Data Science Research Center. Department of Computer Science, Georgetown University. kobbi.nissim@georgetown.edu. Research supported in part by NSF grant No. CNS-2001041 and a gift to Georgetown University. Blavatnik School of Computer Science, Tel Aviv University, and Google Research. u@uri.co.il. Partially supported by the Israel Science Foundation (grant 1871/19) and by Len Blavatnik and the Blavatnik Family Foundation. Department of Computer Science, Georgetown University. cy399@georgetown.edu. Research supported in part by a gift to Georgetown University. |
| Pseudocode | Yes | Algorithm GenericBBL |
| Open Source Code | No | The paper does not provide any explicit statements or links indicating the availability of open-source code for the described methodology. |
| Open Datasets | No | The paper describes theoretical models and algorithms; it neither uses nor provides access information for any specific public dataset. |
| Dataset Splits | No | The paper describes theoretical concepts and algorithms, but it does not discuss specific dataset splits (training, validation, or test) as it does not report on empirical experiments. |
| Hardware Specification | No | The paper focuses on theoretical contributions and does not describe any specific hardware used for running experiments. |
| Software Dependencies | No | The paper focuses on theoretical contributions and does not specify any software dependencies with version numbers. |
| Experiment Setup | No | The paper focuses on theoretical contributions and does not provide details about an experimental setup, hyperparameters, or training configurations. |
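
To make the quoted construction concrete, below is a minimal Python sketch of a subsample-and-aggregate style everlasting predictor. It assumes a hypothetical non-private `learner` callable (sample of labeled points in, binary hypothesis out); the class name `EverlastingPredictor`, the Laplace-noised majority vote in `noisy_majority_label`, and the retraining threshold are illustrative assumptions on our part, not the paper's exact GenericBBL or its privacy analysis.

```python
import random

def noisy_majority_label(hypotheses, x, epsilon):
    """Majority label of the ensemble on x, with Laplace noise added to
    each vote count (a standard DP primitive; Laplace(1/eps) is sampled
    here as the difference of two Exp(eps) draws)."""
    ones = sum(h(x) for h in hypotheses)
    zeros = len(hypotheses) - ones
    noisy_ones = ones + random.expovariate(epsilon) - random.expovariate(epsilon)
    noisy_zeros = zeros + random.expovariate(epsilon) - random.expovariate(epsilon)
    return 1 if noisy_ones >= noisy_zeros else 0

class EverlastingPredictor:
    """Illustrative sketch only: train an ensemble of non-private
    hypotheses on disjoint blocks of the initial sample, answer each
    query with a noisy majority vote, and periodically retrain on
    queries relabeled with the predictor's own private answers so that
    prediction can continue indefinitely."""

    def __init__(self, learner, labeled_sample, num_blocks, epsilon):
        self.learner = learner      # non-private learner: sample -> hypothesis
        self.num_blocks = num_blocks
        self.epsilon = epsilon
        self.hypotheses = self._train(list(labeled_sample))
        self.buffer = []            # self-labeled queries for the next round

    def _train(self, sample):
        # Subsample-and-aggregate: one (non-private) hypothesis per block.
        random.shuffle(sample)
        block = max(1, len(sample) // self.num_blocks)
        return [self.learner(sample[i * block:(i + 1) * block])
                for i in range(self.num_blocks)]

    def predict(self, x):
        y = noisy_majority_label(self.hypotheses, x, self.epsilon)
        self.buffer.append((x, y))
        # Illustrative retraining threshold; the paper's round lengths
        # and privacy-budget management are more involved.
        if len(self.buffer) >= 100 * self.num_blocks:
            self.hypotheses = self._train(self.buffer)
            self.buffer = []
        return y
```

The sketch conveys only the black-box structure the quoted excerpt describes (train, vote privately, relabel, retrain); it does not reproduce the quadratic-in-VC-dimension sample-complexity bound or the formal privacy guarantees proved in the paper.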