PAPRIKA: Private Online False Discovery Rate Control

Authors: Wanrong Zhang, Gautam Kamath, Rachel Cummings

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We also provide experimental results to demonstrate the efficacy of our algorithms in a variety of data environments. In Section 4, we provide a thorough empirical investigation of PAPRIKA, with additional empirical results in Appendix C.
Researcher Affiliation | Academia | ¹H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA, USA; ²Cheriton School of Computer Science, University of Waterloo, Waterloo, ON, Canada; ³Department of Industrial Engineering and Operations Research, Columbia University, New York, NY, USA.
Pseudocode | Yes | Our algorithm, Private Alpha-investing P-value Rejecting Iterative sparse veKtor Algorithm (PAPRIKA, Algorithm 1), is presented in Section 3. Algorithm 1 PAPRIKA(α, λ, W0, γ, c, ε, δ, s). (A schematic sketch of how such a loop could be organized appears after this table.)
Open Source Code | Yes | Code for PAPRIKA and our experiments is available at https://github.com/wanrongz/PAPRIKA.
Open Datasets | No | The paper describes generating synthetic data based on Bernoulli and truncated exponential distributions for its experiments, rather than using a publicly available or open dataset with access information (link, DOI, or formal citation). For example, 'We assume that the database D contains n individuals with k independent features.' (A hypothetical generator in this spirit is sketched after this table.)
Dataset Splits | No | The paper describes generating data for its experiments (e.g., 'We assume that the database D contains n individuals with k independent features.'), but it does not specify any explicit train/validation/test splits, per-split sample counts, or a cross-validation setup for reproducing the data partitioning.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers, such as programming languages, libraries, or solvers (e.g., 'Python 3.8, PyTorch 1.9, and CUDA 11.1').
Experiment Setup | Yes | In our experiments, we set the target FDR level α + δt = 0.2, and thus our privacy parameter δ is set to be bounded by 0.2/800 = 2.5 × 10⁻⁴. The maximum number of rejections is c = 40. All the results are averaged over 100 runs. (These settings are echoed in the configuration sketch below.)
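
The Pseudocode row lists the signature Algorithm 1 PAPRIKA(α, λ, W0, γ, c, ε, δ, s) but not the body. The sketch below shows one way a SAFFRON-style alpha-investing loop with sparse-vector-style Laplace noise on the p-value comparisons could be organized around those parameters. It is a minimal sketch under stated assumptions, not the paper's Algorithm 1: the gamma sequence, wealth updates, noise scales, and the role of s are guesses for illustration, and the real procedure should be taken from the paper or the released repository.

```python
import numpy as np

def paprika_style_sketch(pvals, alpha=0.2, lam=0.5, W0=0.05, c=40,
                         eps=10.0, delta=2.5e-4, s=1.0, seed=0):
    """Schematic SAFFRON-style alpha-investing loop with sparse-vector-style
    noise on the p-value comparisons. NOT the paper's Algorithm 1; parameter
    roles and update rules here are assumptions for illustration only."""
    rng = np.random.default_rng(seed)
    # delta would calibrate (eps, delta)-DP noise in a real implementation;
    # it is unused in this simplified sketch.
    _ = delta
    gamma = np.array([1.0 / (t ** 1.6) for t in range(1, len(pvals) + 1)])
    gamma /= gamma.sum()                                  # assumed gamma sequence
    wealth, rejections = W0, []
    rho = rng.laplace(scale=2.0 * c / eps)                # threshold noise (SVT pattern)
    for t, p in enumerate(pvals, start=1):
        if len(rejections) >= c:                          # stop after c rejections
            break
        alpha_t = min(lam, max(wealth, 0.0) * gamma[t - 1])  # assumed test level
        nu = rng.laplace(scale=4.0 * c / eps)             # per-comparison noise
        # private comparison in log space (p-values have multiplicative structure)
        if np.log(max(p, 1e-12)) + nu <= np.log(max(alpha_t * s, 1e-12)) + rho:
            rejections.append(t)
            wealth += alpha * (1.0 - lam)                 # assumed reward on rejection
            rho = rng.laplace(scale=2.0 * c / eps)        # refresh noise after rejection
        else:
            wealth -= alpha_t                             # assumed cost otherwise
    return rejections
```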
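
The Open Datasets row notes that the experiments use synthetic data built from Bernoulli and truncated exponential distributions, with no downloadable dataset. A minimal, hypothetical generator in that spirit is sketched below; the interpretation (Bernoulli indicators marking non-null hypotheses, Uniform(0,1) null p-values, truncated-exponential alternative p-values) and all parameter values are assumptions, since the quoted excerpt does not pin them down.

```python
import numpy as np

def synthetic_pvalues(k=800, pi1=0.1, beta=0.2, seed=0):
    """Hypothetical generator loosely following the quoted description.
    A Bernoulli(pi1) draw marks each hypothesis as non-null, null p-values
    are Uniform(0, 1), and non-null p-values come from an exponential
    distribution with scale beta truncated to [0, 1]. Parameter names and
    defaults are assumptions, not the paper's."""
    rng = np.random.default_rng(seed)
    is_alt = rng.binomial(1, pi1, size=k).astype(bool)   # which hypotheses are non-null
    p = rng.uniform(0.0, 1.0, size=k)                    # nulls: uniform p-values
    # inverse-CDF sampling from Exp(scale=beta) truncated to [0, 1]
    u = rng.uniform(0.0, 1.0, size=int(is_alt.sum()))
    trunc_mass = 1.0 - np.exp(-1.0 / beta)
    p[is_alt] = -beta * np.log(1.0 - u * trunc_mass)
    return p, is_alt
```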
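
The Experiment Setup row contains the only concrete arithmetic in the excerpt, the bound δ ≤ 0.2/800 = 2.5 × 10⁻⁴. The short configuration block below reproduces that arithmetic and collects the quoted settings; the constant names are illustrative and are not taken from the released PAPRIKA code.

```python
# Hypothetical configuration collecting the quoted settings.
NUM_HYPOTHESES = 800      # the 800 in the quoted bound (read here as the number
                          # of hypotheses tested online; this reading is an assumption)
TARGET_FDR = 0.2          # quoted target level (alpha + delta*t = 0.2)
MAX_REJECTIONS = 40       # c, the cap on the number of rejections
NUM_RUNS = 100            # results are averaged over 100 runs

# quoted bound on the privacy parameter: delta <= 0.2 / 800
DELTA = TARGET_FDR / NUM_HYPOTHESES
assert abs(DELTA - 2.5e-4) < 1e-12   # 0.2 / 800 = 2.5 * 10**-4
```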