Robust Hypothesis Test for Nonlinear Effect with Gaussian Processes

Authors: Jeremiah Liu, Brent Coull

NeurIPS 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the finite-sample performance of our test under different data-generating functions and estimation strategies for the null model. Our results reveal interesting connections between notions in machine learning (model underfit/overfit) and those in statistical inference (i.e. Type I error/power of hypothesis test), and also highlight unexpected consequences of common model estimating strategies (e.g. estimating kernel hyperparameters using maximum likelihood estimation) on model inference.
Researcher Affiliation | Academia | Jeremiah Zhe Liu, Brent Coull, Department of Biostatistics, Harvard University, Cambridge, MA 02138, {zhl112@mail, bcoull@hsph}.harvard.edu
Pseudocode | Yes | Algorithm 1: Variance Component Test for $h \in \mathcal{H}_0$
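
The variance component test named above can be illustrated with a short score-test sketch. The snippet below is a minimal, hypothetical implementation, assuming the null model is a kernel/GP regression with kernel matrix K0 and the interaction effect is captured by a kernel matrix K12, with a Satterthwaite (scaled chi-square) approximation of the null distribution; it ignores corrections for estimating the null-model hyperparameters and is not the authors' exact Algorithm 1.

```python
import numpy as np
from scipy.stats import chi2


def score_test_interaction(y, K0, K12, lam, sigma2):
    """Hypothetical variance-component score test for an interaction kernel.

    Null model (assumed): y ~ N(0, V0) with V0 = sigma2 * (K0 / lam + I).
    Tests whether the variance component attached to K12 is zero, using a
    Satterthwaite (scaled chi-square) approximation to the null distribution.
    """
    n = len(y)
    V0 = sigma2 * (K0 / lam + np.eye(n))
    V0_inv = np.linalg.inv(V0)

    # Score statistic for the K12 variance component (quadratic form in y).
    q = y @ V0_inv @ K12 @ V0_inv @ y / 2.0

    # Mean and variance of the statistic under H0 (moments of the quadratic form).
    M = V0_inv @ K12
    e = np.trace(M) / 2.0
    v = np.trace(M @ M) / 2.0

    # Satterthwaite matching: q approx kappa * chi2(nu).
    kappa = v / (2.0 * e)
    nu = 2.0 * e ** 2 / v
    p_value = chi2.sf(q / kappa, df=nu)
    return q, p_value
```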
Open Source Code | No | The paper does not include any statements about releasing its source code or provide a link to a code repository.
Open Datasets | No | We generate two groups of input features $(\mathbf{x}_{i,1}, \mathbf{x}_{i,2}) \in \mathbb{R}^{p_1} \times \mathbb{R}^{p_2}$ independently from the standard Gaussian distribution, representing normalized data on a subject's level of exposure to $p_1$ environmental pollutants and the levels of a subject's intake of $p_2$ nutrients during the study. This indicates simulated data, not a publicly available dataset.
Dataset Splits | Yes | $\mathrm{LOOCV}(\lambda \mid k_d) = (\mathbf{I} - \mathrm{diag}(\mathbf{A}_{d,\lambda}))^{-1}(\mathbf{y} - \hat{\mathbf{h}}_{d,\lambda})$, where $\mathbf{A}_{d,\lambda} = \mathbf{K}_d(\mathbf{K}_d + \lambda \mathbf{I})^{-1}$. We denote the final LOOCV error for the $d$-th kernel as $\hat{\boldsymbol{\epsilon}}_d = \mathrm{LOOCV}(\hat{\lambda}_d \mid k_d)$. Using the estimated LOOCV errors $\{\hat{\boldsymbol{\epsilon}}_d\}_{d=1}^D$, estimate the ensemble weights $\mathbf{u} = \{u_d\}_{d=1}^D$ that minimize the overall LOOCV error: $\hat{\mathbf{u}} = \operatorname*{argmin}_{\mathbf{u} \in \Delta} \|\sum_{d=1}^D u_d \hat{\boldsymbol{\epsilon}}_d\|^2$, where $\Delta = \{\mathbf{u} \mid \mathbf{u} \geq 0, \|\mathbf{u}\|_2^2 = 1\}$.
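
The LOOCV residual formula and the constrained weight estimation quoted above translate directly into code. The sketch below assumes candidate kernel matrices $\mathbf{K}_d$ and already-tuned penalties $\hat{\lambda}_d$ are available; the function names and the SLSQP solver choice are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize


def loocv_error(y, K, lam):
    """LOOCV residual vector for kernel ridge regression with kernel K, penalty lam.

    Implements LOOCV(lam | k_d) = (I - diag(A))^{-1} (y - h_hat), where
    A = K (K + lam I)^{-1} is the smoother matrix and h_hat = A y.
    """
    n = len(y)
    A = K @ np.linalg.inv(K + lam * np.eye(n))
    h_hat = A @ y
    return (y - h_hat) / (1.0 - np.diag(A))


def ensemble_weights(loocv_errors):
    """Estimate weights u >= 0 with ||u||_2^2 = 1 minimizing ||sum_d u_d eps_d||^2.

    `loocv_errors` is a list of D LOOCV residual vectors, one per candidate kernel.
    Solved numerically; a sketch under the stated constraint, not the authors' code.
    """
    E = np.column_stack(loocv_errors)            # n x D matrix of LOOCV residuals
    D = E.shape[1]
    objective = lambda u: np.sum((E @ u) ** 2)   # overall LOOCV error
    cons = ({'type': 'eq', 'fun': lambda u: np.sum(u ** 2) - 1.0},)
    res = minimize(objective, x0=np.ones(D) / np.sqrt(D),
                   bounds=[(0.0, None)] * D, constraints=cons, method='SLSQP')
    return res.x
```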
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers.
Experiment Setup | Yes | Throughout the simulation scenarios, we keep $n = 100$ and $p_1 = p_2 = 5$. We generate the outcome $y_i$ as $y_i = h_1(\mathbf{x}_{i,1}) + h_2(\mathbf{x}_{i,2}) + \delta \, h_{12}(\mathbf{x}_{i,1}, \mathbf{x}_{i,2}) + \epsilon_i$ (12), where $h_1, h_2, h_{12}$ are sampled from the RKHSs $\mathcal{H}_1$, $\mathcal{H}_2$, and $\mathcal{H}_1 \otimes \mathcal{H}_2$, generated using a ground-truth kernel $k_{\mathrm{true}}$. We standardized all sampled functions to have unit norm, so that $\delta$ represents the strength of the interaction relative to the main effect. For each simulation scenario, we first generated data using $\delta$ and $k_{\mathrm{true}}$ as above, then selected a $k_{\mathrm{model}}$ to estimate the null model and obtained a p-value using Algorithm 1. We repeated each scenario 1000 times.
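
The simulation setup described above can be reproduced approximately with the sketch below. It assumes an RBF ground-truth kernel, draws of the form $h = \mathbf{K}\boldsymbol{\alpha}$ as a stand-in for sampling functions from the RKHSs, and illustrative values for $\delta$ and the noise scale; beyond $n$, $p_1$, $p_2$, and the structure of equation (12), these specific choices are not taken from the paper.

```python
import numpy as np


def rbf_kernel(X, lengthscale=1.0):
    """Squared-exponential kernel matrix on the rows of X (assumed kernel choice)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * lengthscale ** 2))


def sample_unit_norm_function(K, rng):
    """Draw h = K @ alpha with Gaussian alpha and rescale to unit empirical norm."""
    alpha = rng.standard_normal(K.shape[0])
    h = K @ alpha
    return h / np.linalg.norm(h)


def simulate_dataset(n=100, p1=5, p2=5, delta=0.5, sigma=0.5, seed=0):
    """Generate (X1, X2, y) following the structure of equation (12)."""
    rng = np.random.default_rng(seed)
    X1 = rng.standard_normal((n, p1))    # "pollutant" features x_{i,1}
    X2 = rng.standard_normal((n, p2))    # "nutrient" features x_{i,2}

    K1 = rbf_kernel(X1)
    K2 = rbf_kernel(X2)
    K12 = K1 * K2                        # product kernel for the interaction space

    h1 = sample_unit_norm_function(K1, rng)
    h2 = sample_unit_norm_function(K2, rng)
    h12 = sample_unit_norm_function(K12, rng)

    y = h1 + h2 + delta * h12 + sigma * rng.standard_normal(n)
    return X1, X2, y
```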