Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
A Necessary and Sufficient Stability Notion for Adaptive Generalization
Authors: Moshe Shenfeld, Katrina Ligett
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We introduce a new notion of the stability of computations, which holds under postprocessing and adaptive composition. We show that the notion is both necessary and sufficient to ensure generalization in the face of adaptivity, for any computations that respond to bounded-sensitivity linear queries while providing accuracy with respect to the data sample set. This means (up to a small caveat) that our stability definition is equivalent to generalization, assuming sample accuracy, for bounded linear queries. Linear queries form the basis for many learning algorithms, such as those that rely on gradients or on the estimation of the average loss of a hypothesis. |
| Researcher Affiliation | Academia | Katrina Ligett, School of Computer Science & Engineering, Hebrew University of Jerusalem, Jerusalem 91904, Israel; Moshe Shenfeld, School of Computer Science & Engineering, Hebrew University of Jerusalem, Jerusalem 91904, Israel |
| Pseudocode | No | The paper provides a description of an 'Adaptive Mechanism AdpM' with inputs and outputs, but it is presented as a functional description rather than a formal pseudocode block with numbered or structured steps. |
| Open Source Code | No | The paper does not provide any statements about releasing source code or links to a code repository. |
| Open Datasets | No | The paper is theoretical and does not involve empirical studies with datasets. Therefore, it does not mention public datasets for training. |
| Dataset Splits | No | The paper is theoretical and does not involve empirical experiments, thus no training, validation, or test dataset splits are specified. |
| Hardware Specification | No | The paper is purely theoretical and does not describe any computational experiments that would require specific hardware for execution. |
| Software Dependencies | No | The paper focuses on theoretical concepts and does not specify any software dependencies with version numbers for reproducing empirical work. |
| Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with hyperparameters or specific training configurations. |
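The bounded-sensitivity linear queries central to the paper's result can be illustrated concretely. The sketch below is not from the paper: it shows, under our own naming, a linear query that answers with the empirical mean of a [0, 1]-valued function over the sample (so changing one of n points shifts the answer by at most 1/n), plus an illustrative Laplace-noised variant of the kind of stable mechanism such results typically concern.

```python
import random

def linear_query(sample, q):
    """Empirical mean of q over the sample. If q maps points into
    [0, 1], changing one of n points shifts the answer by at most 1/n
    (bounded sensitivity). Illustrative, not the paper's notation."""
    return sum(q(x) for x in sample) / len(sample)

def laplace_mechanism(sample, q, eps, rng=None):
    """Answer a [0, 1]-valued linear query with Laplace noise scaled to
    its sensitivity 1/n; a standard differentially private mechanism,
    shown here only as an example of a stable computation."""
    rng = rng or random.Random(0)
    b = (1.0 / len(sample)) / eps  # noise scale = sensitivity / eps
    # Difference of two iid Exponential(1/b) draws is Laplace(0, b).
    noise = rng.expovariate(1.0 / b) - rng.expovariate(1.0 / b)
    return linear_query(sample, q) + noise
```

For example, with a sample of four bits and the identity query, the exact answer is their average, and any single-point change moves it by at most 1/4.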