Distribution Learnability and Robustness
Authors: Shai Ben-David, Alex Bie, Gautam Kamath, Tosca Lechner
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We examine the relationship between learnability and robust (or agnostic) learnability for the problem of distribution learning. We show that learnability of a distribution class implies robust learnability with respect to only additive corruption, but not if there may be subtractive corruption. Thus, contrary to other learning settings (e.g., PAC learning of function classes), realizable learnability does not imply agnostic learnability. We also explore related implications in the context of compression schemes and differentially private learnability. (The two corruption models are sketched after this table.) |
| Researcher Affiliation | Academia | Shai Ben-David (University of Waterloo, Vector Institute) shai@uwaterloo.ca; Alex Bie (University of Waterloo) yabie@uwaterloo.ca; Gautam Kamath (University of Waterloo, Vector Institute) g@csail.mit.edu; Tosca Lechner (University of Waterloo) tlechner@uwaterloo.ca |
| Pseudocode | No | The paper describes algorithms in prose, such as 'Our algorithm enumerates over all subsets of the dataset...', but does not provide structured pseudocode or algorithm blocks. (An illustrative sketch of this enumerate-and-select pattern follows the table.) |
| Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described. |
| Open Datasets | No | The paper is theoretical and does not use or provide access to any specific datasets for training. |
| Dataset Splits | No | The paper is theoretical and does not describe experimental validation with data splits. |
| Hardware Specification | No | The paper is theoretical and does not describe any computational experiments that would require hardware specifications. |
| Software Dependencies | No | The paper is theoretical and does not describe any specific software dependencies with version numbers for experimental replication. |
| Experiment Setup | No | The paper is theoretical and does not describe any experimental setup details such as hyperparameters or training configurations. |
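
For context on the claim quoted in the Research Type row, the two corruption models can be written out explicitly. The sketch below uses the standard Huber-style parametrization; the symbols $\eta$, $q$, and $r$ are notational choices made here and may differ from the paper's exact definitions.

```latex
% Additive corruption: samples are drawn from a mixture that adds an
% \eta-fraction of mass from an arbitrary noise distribution q to some
% p in the target class \mathcal{Q}:
\[
  p' \;=\; (1-\eta)\,p + \eta\, q,
  \qquad p \in \mathcal{Q},\ \eta \in [0,1).
\]
% Subtractive corruption: an adversary instead deletes up to an
% \eta-fraction of p's probability mass and renormalizes; r is any
% sub-measure with r \le p and total mass r(\Omega) \le \eta:
\[
  p'' \;=\; \frac{p - r}{1 - r(\Omega)},
  \qquad 0 \le r \le p,\ \ r(\Omega) \le \eta.
\]
% The paper's claim: learnability of \mathcal{Q} implies robust
% learnability under the first model, but not under models that
% also allow the second.
```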
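The Pseudocode row quotes the paper's prose description of an algorithm that "enumerates over all subsets of the dataset". As an illustration only, here is a minimal Python sketch of that enumerate-and-select pattern; `fit_gaussian` and `log_likelihood` are hypothetical stand-ins chosen for this sketch, not the paper's actual subroutines, and the brute-force loop is exponential in the dataset size.

```python
import math
from itertools import combinations
from statistics import NormalDist

def enumerate_and_select(samples, keep_size, fit_candidate, score):
    """Enumerate every size-`keep_size` subset of `samples`, fit a candidate
    distribution on each, and return the best-scoring candidate.

    This only illustrates the enumerate-over-subsets structure quoted from
    the paper; it is not a faithful or efficient implementation.
    """
    best_candidate, best_score = None, float("-inf")
    for subset in combinations(samples, keep_size):
        candidate = fit_candidate(subset)   # fit using only the retained points
        s = score(candidate, samples)       # judge the fit against all samples
        if s > best_score:
            best_candidate, best_score = candidate, s
    return best_candidate

# Hypothetical usage: fit a unit-variance Gaussian to each subset via its
# empirical mean, and score by log-likelihood over the full sample.
def fit_gaussian(subset):
    return NormalDist(mu=sum(subset) / len(subset), sigma=1.0)

def log_likelihood(dist, samples):
    return sum(math.log(dist.pdf(x)) for x in samples)

best = enumerate_and_select([0.1, 0.3, 5.0, 0.2], keep_size=3,
                            fit_candidate=fit_gaussian, score=log_likelihood)
```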