Replicable Uniformity Testing

Authors: Sihan Liu, Christopher Ye

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this work, we revisit uniformity testing under the framework of algorithmic replicability [STOC 22], requiring the algorithm to be replicable under arbitrary distributions. While replicability typically incurs a ρ^{-2} factor overhead in sample complexity, we obtain a replicable uniformity tester using only Õ(√n · ε^{-2} · ρ^{-1}) samples. To our knowledge, this is the first replicable learning algorithm with (nearly) linear dependence on ρ. Lastly, we consider a class of "symmetric" algorithms [FOCS 00] whose outputs are invariant under relabeling of the domain [n], which includes all existing uniformity testers (including ours). For this natural class of algorithms, we prove a nearly matching sample complexity lower bound for replicable uniformity testing.
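The abstract's testing guarantee asks the tester to distinguish p = U_n from distributions that are ε-far from uniform in total variation distance, d_TV(p, U_n) = (1/2) Σ_i |p_i − 1/n|. As a quick plain-Python illustration of that distance (a sketch for intuition, not code from the paper):

```python
def tv_distance_to_uniform(p):
    """Total variation distance between a distribution p, given as a
    list of probabilities over a domain of size n = len(p), and the
    uniform distribution U_n: d_TV(p, U_n) = (1/2) * sum_i |p_i - 1/n|."""
    n = len(p)
    return 0.5 * sum(abs(pi - 1.0 / n) for pi in p)

# The uniform distribution is at distance 0; a point mass on a domain
# of size 4 is at distance (1/2) * (|1 - 1/4| + 3 * |0 - 1/4|) = 3/4.
print(tv_distance_to_uniform([0.25] * 4))             # 0.0
print(tv_distance_to_uniform([1.0, 0.0, 0.0, 0.0]))   # 0.75
```

The tester's job is then to ACCEPT when this distance is 0 and REJECT when it is at least ε, while also being replicable across runs.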
Researcher Affiliation | Academia | Sihan Liu, University of California San Diego, La Jolla, CA, sil046@ucsd.edu; Christopher Ye, University of California San Diego, La Jolla, CA, czye@ucsd.edu
Pseudocode | Yes | Algorithm 1 RUNIFORMITYTESTER(p, ε, ρ). Input: sample access to distribution p on domain [n]. Parameters: ε (tolerance), ρ (replicability). Output: ACCEPT if p = U_n is uniform, REJECT if d_TV(p, U_n) ≥ ε.
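To illustrate the interface of such a tester, here is a hypothetical Python sketch of the generic random-threshold technique often used to make test statistics replicable: compute a statistic that concentrates around its mean (here the classical collision statistic), then compare it against a cutoff drawn from the algorithm's internal (shared) randomness. All names, the sample size m, and the threshold interval below are illustrative assumptions; the paper's actual Algorithm 1 differs in its details and achieves the stronger Õ(√n ε^{-2} ρ^{-1}) sample bound.

```python
import random
from collections import Counter

def collision_statistic(samples):
    """Fraction of sample pairs that collide.  Its expectation is the
    collision probability sum_i p_i^2, which equals 1/n exactly when p
    is uniform and is at least (1 + 4*eps^2)/n when d_TV(p, U_n) >= eps."""
    m = len(samples)
    pairs = m * (m - 1) // 2
    collisions = sum(c * (c - 1) // 2 for c in Counter(samples).values())
    return collisions / pairs

def replicable_uniformity_tester(draw_sample, n, eps, m, shared_rng):
    """Hypothetical sketch (NOT the paper's Algorithm 1): accept iff the
    collision statistic falls below a threshold drawn from `shared_rng`.
    Two executions reusing the same internal randomness pick the same
    threshold, and since the statistic concentrates around its mean,
    they return the same answer with high probability."""
    samples = [draw_sample() for _ in range(m)]
    z = collision_statistic(samples)
    # Randomize the cutoff inside the gap between the uniform value 1/n
    # and the far-from-uniform lower bound (1 + 4*eps^2)/n.
    low, high = 1.0 / n, (1.0 + 4.0 * eps ** 2) / n
    threshold = shared_rng.uniform(low, high)
    return "ACCEPT" if z <= threshold else "REJECT"

n, eps, m = 100, 0.5, 2000
shared = random.Random(0)   # internal randomness, shared across runs
data = random.Random(1)     # fresh sample randomness for this run
print(replicable_uniformity_tester(lambda: data.randrange(n), n, eps, m, shared))
print(replicable_uniformity_tester(lambda: 0, n, eps, m, shared))  # REJECT: point mass
```

The randomized threshold is what buys replicability: a fixed cutoff could sit exactly where the statistic fluctuates, whereas a cutoff drawn uniformly from the gap lands far from the statistic's realized value with high probability over the shared randomness.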
Open Source Code | No | The paper is theoretical and presents algorithms and proofs. It does not mention providing open-source code for its methodology. The NeurIPS checklist also indicates 'NA' for questions related to code and data availability, confirming no code is provided.
Open Datasets | No | The paper is theoretical and does not conduct experiments with datasets, so no information about public datasets or their access is provided.
Dataset Splits | No | The paper is theoretical and does not report experimental validation, so no dataset split information for training, validation, or testing is provided.
Hardware Specification | No | The paper is theoretical and does not describe any experiments that would require hardware specifications; no hardware details are mentioned.
Software Dependencies | No | The paper is theoretical and does not implement or describe any software requiring specific dependencies or version numbers.
Experiment Setup | No | The paper is purely theoretical and does not include any experimental setup details such as hyperparameters or training configurations.