Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Fantastic Generalization Measures are Nowhere to be Found

Authors: Michael Gastpar, Ido Nachum, Jonathan Shafer, Thomas Weinberger

ICLR 2024 | Venue PDF | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We prove mathematically that no such bound can be uniformly tight in the overparameterized setting; (2) bounds that may in addition also depend on the learning algorithm (e.g., stability bounds). For these bounds, we show a trade-off between the algorithm's performance and the bound's tightness. |
| Researcher Affiliation | Academia | Michael Gastpar (EPFL, EMAIL); Ido Nachum (University of Haifa, EMAIL); Jonathan Shafer (MIT, EMAIL); Thomas Weinberger (EPFL, EMAIL) |
| Pseudocode | No | The paper contains no pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link regarding the availability of open-source code for the described methodology. |
| Open Datasets | No | The paper mentions 'MNIST' and 'CIFAR' as examples of natural image datasets but does not use them for experiments or provide access information. |
| Dataset Splits | No | The paper does not describe any experiments and therefore does not specify dataset splits for validation. |
| Hardware Specification | No | The paper is a theoretical work and does not describe the hardware used for any experiments. |
| Software Dependencies | No | The paper is a theoretical work and does not list specific software components with version numbers. |
| Experiment Setup | No | The paper is a theoretical work and does not describe any experimental setup, hyperparameters, or training settings. |