Zipper: Addressing Degeneracy in Algorithm-Agnostic Inference

Authors: Geng Chen, Yinxu Jia, Guanghui Wang, Changliang Zou

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finite-sample experiments demonstrate that our procedure, with a simple choice of the slider, works well across a wide range of settings."
Researcher Affiliation | Academia | Geng Chen, Yinxu Jia, Guanghui Wang, Changliang Zou; NITFID, School of Statistics and Data Science, LPMC, KLMDASR, and LEBPS, Nankai University; gengchen.stat@gmail.com, yxjia@mail.nankai.edu.cn, ghwang.nk@gmail.com, zoucl@nankai.edu.cn
Pseudocode | Yes | "Algorithm 1: The algorithm for the proposed Zipper testing procedure"
Open Source Code | No | The paper describes the algorithm and the synthetic data generation, but it does not state that the source code is released and provides no repository link.
Open Datasets | Yes | "We apply the Zipper method to the widely used MNIST handwritten digit dataset [32]"; "We expand the application of our Zipper method to the bodyfat dataset [33]".
Dataset Splits | No | The paper describes a K-fold cross-fitting scheme and a within-fold data partition for the Zipper device (see the partition sketch after the table), but it does not provide fixed training/validation/test splits for the full dataset.
Hardware Specification | Yes | "These experiments are executed on an Intel Xeon Gold 5118 CPU @ 2.30GHz."
Software Dependencies | No | The paper mentions ordinary least-squares regression, LASSO, the abess algorithm, a convolutional neural network (CNN), and random forests, but it does not specify software version numbers.
Experiment Setup | Yes | "The significance level is chosen as α = 5%, and our experiments entail 1,000 replications. We specify the slider parameter τ = min{τ0, 0.9} with n0 = 50 as suggested in Section 2.5." (See the configuration sketch after the table.)
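
The Dataset Splits row refers to a K-fold cross-fitting scheme with a within-fold partition controlled by the slider τ. Below is a minimal Python sketch of one plausible reading of that partition, assuming τ denotes the fraction of each half-sample that is shared with the other half; the function name `zipper_split` and its arguments are illustrative, and the authoritative construction is Algorithm 1 of the paper.

```python
import numpy as np

def zipper_split(indices, tau, seed=None):
    """Split one fold's indices into two overlapping half-samples.

    Here ``tau`` is read as the fraction of each half-sample that is shared
    with the other half (the "zipped" portion). This is an illustrative
    reading only; the authoritative construction is Algorithm 1 of the paper.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(np.asarray(indices))
    n = len(idx)

    # Choose the shared block so that shared / (shared + exclusive) ~= tau.
    n_shared = int(np.floor(tau * n / (2 - tau)))
    n_exclusive = (n - n_shared) // 2

    shared = idx[:n_shared]
    half_a = np.concatenate([shared, idx[n_shared:n_shared + n_exclusive]])
    half_b = np.concatenate([shared, idx[n_shared + n_exclusive:n_shared + 2 * n_exclusive]])
    return half_a, half_b
```

For example, `zipper_split(np.arange(200), tau=0.5, seed=0)` returns two half-samples of about 133 indices each, roughly half of which are shared between them.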
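
The Experiment Setup row can be summarized as a small configuration sketch. The constant names and the helper `choose_slider` below are ours; how τ0 is obtained from n0 = 50 (Section 2.5 of the paper) is not reproduced here, so τ0 is taken as an input.

```python
# Illustrative summary of the reported settings; constant names are ours.
ALPHA = 0.05            # significance level, alpha = 5%
N_REPLICATIONS = 1_000  # number of Monte Carlo replications
N0 = 50                 # n0 used when suggesting tau0 (Section 2.5 of the paper)

def choose_slider(tau0: float) -> float:
    """Slider choice tau = min{tau0, 0.9}, as quoted in the Experiment Setup row."""
    return min(tau0, 0.9)
```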