Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

The Bayesian Stability Zoo

Authors: Shay Moran, Hilla Schefler, Jonathan Shafer

NeurIPS 2023 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We show that many definitions of stability found in the learning theory literature are equivalent to one another. We distinguish between two families of definitions of stability: distribution-dependent and distribution-independent Bayesian stability. Within each family, we establish equivalences between various definitions, encompassing approximate differential privacy, pure differential privacy, replicability, global stability, perfect generalization, TV stability, mutual information stability, KL-divergence stability, and Rényi-divergence stability. Along the way, we prove boosting results that enable the amplification of the stability of a learning rule.
Researcher Affiliation | Academia | Shay Moran, Department of Mathematics & Department of Computer Science, Technion - Israel Institute of Technology (EMAIL); Hilla Schefler, Department of Mathematics, Technion - Israel Institute of Technology (EMAIL); Jonathan Shafer, Computer Science Division, UC Berkeley (EMAIL)
Pseudocode | Yes | Algorithm 1: The stability-boosted learning rule A′, which uses A as a subroutine.
Open Source Code | No | The paper does not provide any statements about open-sourcing code or links to a code repository.
Open Datasets | No | The paper is theoretical and does not use datasets in an experimental context. It discusses 'training samples' and 'population distributions' in an abstract, learning-theoretic sense, not as specific empirical datasets.
Dataset Splits | No | The paper is theoretical and conducts no experiments with dataset splits, so it provides no information on training/validation splits.
Hardware Specification | No | The paper is theoretical and describes no experiments that would require specific hardware; no hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical and focuses on mathematical concepts and proofs; it specifies no software dependencies or version numbers.
Experiment Setup | No | The paper is theoretical and describes no empirical experiments, so it includes no details about an experimental setup, hyperparameters, or training settings.
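As an illustrative aside on one of the stability notions named in the abstract: a learning rule is TV stable (roughly) when its output distributions on two neighboring training samples are close in total variation distance. The sketch below is not from the paper; the toy learner and all function names are hypothetical, and the TV distance is estimated empirically by Monte Carlo sampling.

```python
import random
from collections import Counter

def randomized_threshold_learner(sample, rng):
    # Hypothetical toy learner: noisy majority vote over a binary sample.
    noise = rng.choice([-1, 0, 1])
    return int(sum(sample) + noise > len(sample) / 2)

def empirical_tv_distance(learner, s1, s2, trials=20000, seed=0):
    # Estimate the total variation distance between the learner's output
    # distributions on two neighboring samples via repeated runs.
    rng = random.Random(seed)
    c1 = Counter(learner(s1, rng) for _ in range(trials))
    c2 = Counter(learner(s2, rng) for _ in range(trials))
    outcomes = set(c1) | set(c2)
    return 0.5 * sum(abs(c1[o] - c2[o]) / trials for o in outcomes)

s = [1, 1, 0, 1, 0, 1]
s_neighbor = s[:-1] + [0]  # neighboring sample: differs in one example
d = empirical_tv_distance(randomized_threshold_learner, s, s_neighbor)
print(round(d, 3))  # roughly 1/3 for this toy learner
```

The boosting results mentioned in the abstract concern amplifying such stability guarantees (i.e., driving this distance down) by using the base rule as a subroutine; this sketch only illustrates the quantity being controlled, not the paper's algorithm.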