Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Inference In High-dimensional Single-Index Models Under Symmetric Designs

Authors: Hamid Eftekhari, Moulinath Banerjee, Ya'acov Ritov

JMLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, we illustrate our approach via carefully designed simulation experiments. Keywords: sparsity, debiased inference, compressed sensing, Hermite polynomials
Researcher Affiliation | Academia | Hamid Eftekhari EMAIL, Moulinath Banerjee EMAIL, Ya'acov Ritov EMAIL; Department of Statistics, University of Michigan, Ann Arbor, MI 48109, USA
Pseudocode | Yes | Algorithm 1: Hermite Estimator with Known Σ; Algorithm 2: Hermite Estimator with Unknown Σ
Open Source Code | Yes | Julia code for these simulations is available at https://github.com/ehamid/sim_debiasing.
Open Datasets | No | The paper defines data-generation processes for its simulations (e.g., y_i = g(⟨x, τ⟩) + ε, where ε ~ N(0, 1) and x ~ N(0, Σ)) rather than utilizing pre-existing public datasets. Therefore, no concrete access information for open datasets is provided.
Dataset Splits | Yes | Suppose a sample of size 2n is given. Then: compute the residuals r_i as defined above on the first sub-sample (x_i, y_i), i = 1, ..., n; compute the lasso (Tibshirani, 1996) estimator β̂ on the second sub-sample (x_i, y_i), i = n+1, ..., 2n. Algorithm 1: ...Set β̂ := argmin_β { ⌊n/2⌋⁻¹ Σ_{i=n+1}^{n+⌊n/2⌋} (y_i − ⟨x_i, β⟩)² ... }. Algorithm 2: ...Set β̌ := argmin_β { (2n)⁻¹ Σ_{i=1}^{n} (y_i − ⟨x_i, β⟩)² ... }; Set β̂ := argmin_β { (2⌊n/3⌋)⁻¹ Σ_{i=n+1}^{n+⌊n/3⌋} (y_i − ⟨x_i, β⟩)² ... }
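The splitting scheme quoted above (residuals on the first half, lasso on the second) can be illustrated with a minimal sketch. The helper `split_halves` is hypothetical, not from the paper, which implements this step in Julia:

```python
import numpy as np

def split_halves(x, y):
    # Given 2n observations, return the first sub-sample (used for the
    # residuals r_i) and the second sub-sample (used for the lasso fit).
    n = len(y) // 2
    return (x[:n], y[:n]), (x[n:], y[n:])

rng = np.random.default_rng(0)
x = rng.standard_normal((10, 3))  # 2n = 10 observations, p = 3 features
y = rng.standard_normal(10)
(x1, y1), (x2, y2) = split_halves(x, y)
```

The two sub-samples are disjoint, so the residuals and the lasso estimate are computed on independent data, which is what the debiasing argument requires.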
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory specifications) used for running its experiments or simulations.
Software Dependencies | No | The paper mentions 'Julia code for these simulations' and references the 'R package glmnet (Friedman et al., 2010)' but does not specify version numbers for Julia, R, or other key software libraries used in the implementation.
Experiment Setup | Yes | Next we describe our setup for the simulations. We consider the effect of up to the tenth-order Hermite expansions on the bias and variance of the debiased estimator. We take the nonlinear link function to be g(t) = 5 sin(t), and define τ = ι/√(ιᵀΣι), where ι_i = 11 − i if i ≤ 10 and 0 otherwise, and Σ_ij = 0.5^|i−j| for 1 ≤ i, j ≤ p = 2000. A simple calculation shows that µ = 5/√e in this case, so that β = 5τ/√e. For each of 1000 Monte Carlo replications, n = 1000 observations were generated from y = g(⟨x, τ⟩) + 0.1ε, where ε ~ N(0, 1) and x ~ N(0, Σ), and β₁ was computed as described in Section 3 for Hermite expansions of up to the tenth degree.
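The quoted data-generating process can be sketched in a few lines. A minimal sketch in Python/NumPy, assuming τ is normalized as ι/√(ιᵀΣι) so that ⟨x, τ⟩ is standard normal (the paper's own implementation is the Julia code linked above; all variable names here are for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 2000, 1000

# Index vector: iota_i = 11 - i for i <= 10, and 0 otherwise.
iota = np.zeros(p)
iota[:10] = 11 - np.arange(1, 11)

# Toeplitz covariance Sigma_ij = 0.5^|i - j|.
idx = np.arange(p)
Sigma = 0.5 ** np.abs(idx[:, None] - idx[None, :])

# Normalize so that <x, tau> ~ N(0, 1), i.e. tau' Sigma tau = 1.
tau = iota / np.sqrt(iota @ Sigma @ iota)

# One Monte Carlo replication: y = g(<x, tau>) + 0.1 * eps.
def g(t):
    return 5 * np.sin(t)

x = rng.multivariate_normal(np.zeros(p), Sigma, size=n, method="cholesky")
y = g(x @ tau) + 0.1 * rng.standard_normal(n)
```

With this normalization, µ = E[g'(Z)] = 5 E[cos(Z)] = 5/√e for Z ~ N(0, 1), matching the quoted value and the target β = 5τ/√e.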