On the Computational Landscape of Replicable Learning

Authors: Alkis Kalavasis, Amin Karbasi, Grigoris Velegkas, Felix Zhou

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We study computational aspects of algorithmic replicability... Our first result shows that there is a concept class that is efficiently replicably PAC learnable, but, under standard cryptographic assumptions, no efficient online learner exists for this class. Subsequently, we design an efficient replicable learner for PAC learning parities when the marginal distribution is far from uniform, making progress on a question posed by Impagliazzo et al. [2022]. To obtain this result, we design a replicable lifting framework... Finally, we show that any pure DP learner can be transformed to a replicable one...
Researcher Affiliation | Academia | Alkis Kalavasis (Yale University, alvertos.kalavasis@yale.edu); Amin Karbasi (Yale University, amin.karbasi@yale.edu); Grigoris Velegkas (Yale University, grigoris.velegkas@yale.edu); Felix Zhou (Yale University, felix.zhou@yale.edu)
Pseudocode | Yes | Algorithm B.1: Replicable Rounding (see the illustrative sketch after this table)
Open Source Code | No | The paper is theoretical and focuses on algorithmic design and proofs; it does not mention releasing source code for its methodology.
Open Datasets | No | The paper is theoretical and analyzes algorithms on abstract samples from a distribution (e.g., 'samples from D', 'samples from U') rather than on specific named or publicly available datasets.
Dataset Splits | No | The paper is theoretical and does not involve empirical validation on datasets, so no training, validation, or test splits are defined.
Hardware Specification | No | The paper is theoretical and does not involve experiments, so no hardware specifications are provided.
Software Dependencies | No | The paper is theoretical and does not involve empirical implementations, so no software dependencies with version numbers are listed.
Experiment Setup | No | The paper is theoretical and does not involve an experimental setup or empirical training, so no hyperparameters or training configurations are detailed.
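
For readers unfamiliar with the rounding primitive named in the Pseudocode row, the following is a minimal Python sketch of randomized-shift rounding in the style of Impagliazzo et al. [2022]. The names replicable_round, alpha, and shared_seed are illustrative choices, not taken from the paper, and the paper's Algorithm B.1 may differ in its exact parameters and guarantees.

    import random

    def replicable_round(value: float, alpha: float, shared_seed: int) -> float:
        """Round value to a grid of width alpha shifted by a shared random offset.

        Two executions that use the same shared_seed draw the same offset, so
        estimates that differ by at most beta round to the same grid point
        except with probability at most beta / alpha over the shared offset.
        """
        rng = random.Random(shared_seed)
        offset = rng.uniform(0.0, alpha)  # identical across runs sharing the seed
        return round((value - offset) / alpha) * alpha + offset

    # Illustration: two runs compute slightly different empirical estimates
    # from independent samples, yet (with high probability over the shared
    # randomness) they output the exact same rounded value.
    seed = 12345
    print(replicable_round(0.4981, 0.1, seed))
    print(replicable_round(0.5012, 0.1, seed))

The design idea is that the algorithm's only instability comes from sampling noise of magnitude roughly beta, which the coarse randomly shifted grid absorbs; this shared-randomness rounding is a standard way to turn an approximately stable statistic into a replicable output.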