Early Stopping for Nonparametric Testing

Authors: Meimei Liu, Guang Cheng

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we compare our testing method with an oracle version of the stopping rule that uses knowledge of f, as well as the test based on penalized regularization. We further conduct simulation studies to verify our theoretical results. Data were generated from the regression model (2.1) with f(x_i) = c·cos(4πx_i), where x_i iid Unif[0, 1] and c = 0, 1 respectively; c = 0 is used for examining the size of the test, and c = 1 for examining its power. The sample size n ranged from 100 to 1000. We use the Gaussian kernel (i.e., p = 2 in EDK) to fit the data. The significance level was chosen as 0.05. Both size and power were calculated as the proportions of rejections over 500 independent replications.
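The simulation design quoted above can be sketched in code. This is a minimal illustration of the data-generating process and the kernel choice only, not the paper's test itself; the Gaussian kernel bandwidth (0.1) and the random seed are assumed values not given in the quoted text.

```python
import numpy as np

def generate_data(n, c, rng):
    """Simulate regression model (2.1): y_i = f(x_i) + eps_i with
    f(x) = c * cos(4*pi*x). c = 0 gives the null (size of the test),
    c = 1 the alternative (power)."""
    x = rng.uniform(0.0, 1.0, size=n)        # x_i iid Unif[0, 1]
    eps = rng.standard_normal(n)             # eps_i iid N(0, 1)
    y = c * np.cos(4 * np.pi * x) + eps
    return x, y

def gaussian_kernel(x, z, bandwidth=0.1):
    """Gaussian kernel matrix used to fit the data (bandwidth is assumed)."""
    return np.exp(-((x[:, None] - z[None, :]) ** 2) / (2 * bandwidth**2))

rng = np.random.default_rng(0)
x, y = generate_data(200, c=1, rng=rng)      # one draw under the alternative
K = gaussian_kernel(x, x)
```

In the paper, size and power would be estimated by repeating such draws 500 times and recording the rejection proportion at level 0.05.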
Researcher Affiliation | Academia | Meimei Liu, Department of Statistical Science, Duke University, Durham, NC 27705, meimei.liu@duke.edu; Guang Cheng, Department of Statistics, Purdue University, West Lafayette, IN 47907, chengg@purdue.edu
Pseudocode | No | The paper describes algorithms and procedures using mathematical notation and descriptive text, but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for the described methodology is publicly available.
Open Datasets | No | Data were generated from the regression model (2.1) with f(x_i) = c·cos(4πx_i), where x_i iid Unif[0, 1] and c = 0, 1 respectively. Data were also generated via y_i = 0.5·x_i^2 + 0.5·sin(4πx_i) + ε_i with sample size n = 200, {x_i}_{i=1}^n iid Unif[0, 1], and ε_i iid N(0, 1). The paper generates synthetic data and does not provide concrete access information for a publicly available or open dataset.
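The second data-generating process quoted in this row can be reproduced directly; a minimal sketch follows, with an arbitrary seed (the paper does not report one).

```python
import numpy as np

# y_i = 0.5*x_i^2 + 0.5*sin(4*pi*x_i) + eps_i, n = 200
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0.0, 1.0, size=n)    # {x_i} iid Unif[0, 1]
eps = rng.standard_normal(n)         # eps_i iid N(0, 1)
y = 0.5 * x**2 + 0.5 * np.sin(4 * np.pi * x) + eps
```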
Dataset Splits | No | The paper describes data generation in terms of a sample size n, but does not specify explicit training, validation, or test splits, or a method to reproduce them, for its primary experiments. It mentions 10-fold cross-validation for a comparison method, but this is not a general train/validation/test split for the paper's own experiments.
Hardware Specification | No | The paper discusses computational efficiency and presents computational time results (Figure 2c), but it does not provide any specific details about the hardware used for running the experiments (e.g., GPU/CPU models, memory, cloud instances).
Software Dependencies | No | The paper does not list any specific software dependencies or their version numbers that would be necessary to replicate the experiments.
Experiment Setup | Yes | For the ES, we use a bootstrap method to approximate the bias, with B = 10 and a constant step size of 1. For the penalization-based test, we use 10-fold cross-validation (10-fold CV) to select the penalty parameter. For the oracle ES, we follow the stopping rule in (5.1) with a constant step size of 1.
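The early-stopping (ES) estimator described above iterates a kernelized gradient-descent fit and uses the iteration count as the regularizer. The sketch below shows only that generic iteration with a constant step size of 1; the paper's actual stopping rule (5.1) and its bootstrap bias approximation with B = 10 are not reproduced, and the Gaussian kernel bandwidth is an assumed value.

```python
import numpy as np

def kernel_gd(K, y, n_steps, step=1.0):
    """Gradient descent on the kernelized least-squares fit.
    Stopping at iteration n_steps plays the role of regularization;
    the paper's data-driven stopping rule is omitted here."""
    n = len(y)
    f = np.zeros(n)
    for _ in range(n_steps):
        f = f + (step / n) * (K @ (y - f))   # constant step size, as in the setup
    return f

# Toy check on a Gaussian kernel matrix (bandwidth 0.2 is assumed).
rng = np.random.default_rng(0)
x = rng.uniform(size=50)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.2**2))
y = np.cos(4 * np.pi * x) + rng.standard_normal(50)
f_early = kernel_gd(K, y, n_steps=5)
f_late = kernel_gd(K, y, n_steps=200)
```

Because the eigenvalues of K/n lie in [0, 1] here, the residual norm is nonincreasing in the iteration count, which is why running longer fits the data more closely (and eventually overfits, motivating a stopping rule).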