Statistical Indistinguishability of Learning Algorithms

Authors: Alkis Kalavasis, Amin Karbasi, Shay Moran, Grigoris Velegkas

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | Our main results are information-theoretic equivalences between TV indistinguishability and existing algorithmic stability notions such as replicability and approximate differential privacy. Then, we provide statistical amplification and boosting algorithms for TV indistinguishable learners.
Researcher Affiliation | Collaboration | (1) Department of Computer Science, NTUA, Athens, Greece; (2) Department of Computer Science, Yale University, New Haven, United States; (3) Google Research; (4) Department of Computer Science, Technion, Haifa, Israel.
Pseudocode | Yes | Algorithm 1: Replicable Heavy-Hitters; Algorithm 2: Replicable Agnostic Learner for Finite H; Algorithm 3: From Global Stability to Replicability; Algorithm 4: List-Global Stability = TV Indistinguishability; Algorithm 5: From TV Indistinguishability to Differential Privacy; Algorithm 6: Amplification of Indistinguishability Guarantees; Algorithm 7: Boosting of Accuracy Guarantee
Open Source Code | No | The paper contains no explicit statement about releasing source code and no link to a code repository for the described methodology.
Open Datasets | No | The paper is theoretical, focusing on mathematical proofs and algorithms rather than empirical evaluation, so it does not mention specific training datasets or their public availability.
Dataset Splits | No | The paper is theoretical and conducts no empirical experiments, so it provides no details on training, validation, or test dataset splits.
Hardware Specification | No | The paper is theoretical and describes no empirical experiments, so no hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical and describes no empirical experiments, so it lists no software dependencies or version numbers.
Experiment Setup | No | The paper is theoretical and describes no empirical experiments, so it provides no experimental setup details such as hyperparameters or training settings.
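For reference, the two stability notions equated in the Research Type row above admit short formal definitions. The following is a sketch in standard notation from the replicability literature; the symbols ε, ρ, and n are generic parameters, not the paper's exact parameterization:

```latex
% A learner A is eps-TV indistinguishable if, for every distribution D
% and independent samples S, S' ~ D^n,
d_{\mathrm{TV}}\bigl(A(S),\, A(S')\bigr) \le \varepsilon,
\qquad\text{where } d_{\mathrm{TV}}(P, Q) = \sup_{E}\,\lvert P(E) - Q(E)\rvert .

% A is rho-replicable if, with shared internal randomness r,
\Pr_{S,\, S' \sim D^n,\; r}\bigl[\, A(S; r) = A(S'; r) \,\bigr] \ge 1 - \rho .
```

Note that replicability quantifies over a shared random string r (exact equality of outputs), while TV indistinguishability compares the output distributions themselves; the paper's equivalence results connect these two views.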
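Algorithm 1 in the Pseudocode row above, Replicable Heavy-Hitters, follows the randomized-cutoff pattern from the replicability literature: the frequency threshold is drawn from shared randomness, so two runs on independent samples use the same cutoff. A minimal Python sketch of that idea, with an illustrative function name and parameterization rather than the paper's exact algorithm:

```python
import random
from collections import Counter

def replicable_heavy_hitters(sample, v, eps, shared_seed):
    """Return elements whose empirical frequency clears a randomized cutoff.

    The cutoff is drawn from shared randomness (shared_seed): two runs on
    independent samples from the same distribution see the same cutoff, so
    they can disagree only on elements whose empirical frequencies straddle
    it, which is unlikely when the samples are large.
    """
    rng = random.Random(shared_seed)            # shared randomness
    cutoff = rng.uniform(v - eps, v + eps)      # randomized threshold
    n = len(sample)
    freq = Counter(sample)
    return sorted(x for x, c in freq.items() if c / n >= cutoff)

# Two independent samples from the same distribution, same shared seed:
dist = ["a"] * 90 + ["b"] * 9 + ["c"]           # Pr[a]=0.9, Pr[b]=0.09, Pr[c]=0.01
s1 = random.Random(1).choices(dist, k=2000)
s2 = random.Random(2).choices(dist, k=2000)
out1 = replicable_heavy_hitters(s1, v=0.5, eps=0.1, shared_seed=7)
out2 = replicable_heavy_hitters(s2, v=0.5, eps=0.1, shared_seed=7)
# Both runs agree: only "a" clears a cutoff drawn from [0.4, 0.6].
```

The randomization of the cutoff is what buys replicability: a fixed threshold could sit exactly at an element's true frequency, letting sampling noise flip that element in or out between runs.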