Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Optimal Linear Estimation under Unknown Nonlinear Transform

Authors: Xinyang Yi, Zhaoran Wang, Constantine Caramanis, Han Liu

NeurIPS 2015

Reproducibility Variable Result LLM Response
Research Type Experimental We now turn to the numerical results that support our theory. For the three models introduced in Section 2, we apply Algorithm 1 and Algorithm 2 to do parameter estimation in the classic and high-dimensional regimes. Our simulations are based on synthetic data. For classic recovery, β is randomly chosen from S^{p−1}; for sparse recovery, we set β_j = s^{−1/2}·1(j ∈ S) for all j ∈ [p], where S is a random index subset of [p] with size s. In Figure 1, as predicted by Theorem 3.5, we observe that the same p/n leads to nearly identical estimation error. Figure 2 demonstrates similar results for the predicted rate √(s log p / n) of sparse recovery and thus validates Theorem 3.6.
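The synthetic ground-truth vectors described in the excerpt can be sketched as follows. This is a minimal illustration, not the authors' code: the dimensions p and s and the RNG seed are arbitrary choices; the two constructions follow the quoted setup (uniform draw from the unit sphere S^{p−1} for classic recovery, and β_j = s^{−1/2}·1(j ∈ S) on a random size-s support for sparse recovery).

```python
import numpy as np

rng = np.random.default_rng(0)
p, s = 20, 5  # hypothetical problem sizes for illustration

# Classic recovery: beta drawn uniformly from the unit sphere S^{p-1}
# (a normalized standard Gaussian vector is uniform on the sphere).
beta_classic = rng.standard_normal(p)
beta_classic /= np.linalg.norm(beta_classic)

# Sparse recovery: beta_j = s^{-1/2} * 1(j in S) for a random
# index subset S of [p] with |S| = s; this vector also has unit norm.
support = rng.choice(p, size=s, replace=False)
beta_sparse = np.zeros(p)
beta_sparse[support] = s ** -0.5
```

Both constructions yield unit-norm vectors, so estimation errors across models are directly comparable.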
Researcher Affiliation Academia Xinyang Yi (The University of Texas at Austin), Zhaoran Wang (Princeton University), Constantine Caramanis (The University of Texas at Austin), Han Liu (Princeton University)
Pseudocode Yes Algorithm 1 (Low dimensional recovery); Algorithm 2 (Sparse recovery)
Open Source Code No The paper does not provide any explicit statements or links indicating the availability of open-source code for the described methodology.
Open Datasets No Our simulations are based on synthetic data.
Dataset Splits No The paper mentions generating synthetic data for simulations but does not specify any dataset splits (e.g., training, validation, test percentages or sample counts) used for reproducibility.
Hardware Specification No The paper does not specify any hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies No The paper does not provide specific software dependencies or version numbers needed to replicate the experiments.
Experiment Setup Yes Suppose ρ = C(φ(f) + (1 − µ₀²))·√(p log p / n) with a sufficiently large constant C, where φ(f) and µ₀ are specified in (3.2) and (3.5). Meanwhile, assume the sparsity parameter ŝ in Algorithm 2 is set to ŝ = C·max{1/(√κ − 1)², 1}·s. For n ≥ n_min with n_min defined in (3.10), we have... In Figure 1: p = 10, p = 20, p = 40. In Figure 2: (p, s) = (100, 5), (100, 10), (200, 5), (200, 10).
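Assuming the setup formulas read ρ = C(φ(f) + (1 − µ₀²))·√(p log p / n) and ŝ = C·max{1/(√κ − 1)², 1}·s, the parameter settings can be sketched numerically. Every value below (C, φ(f), µ₀, κ, p, n, s) is a hypothetical stand-in for illustration, not a value reported in the paper; C in particular is only required to be "sufficiently large".

```python
import numpy as np

# Hypothetical stand-ins for quantities defined in the paper's (3.2) and (3.5).
C, phi_f, mu0, kappa = 2.0, 0.5, 0.8, 4.0
p, n, s = 100, 2000, 5  # illustrative problem sizes

# Regularization level: rho = C * (phi(f) + (1 - mu0^2)) * sqrt(p log p / n).
rho = C * (phi_f + (1 - mu0 ** 2)) * np.sqrt(p * np.log(p) / n)

# Sparsity parameter for Algorithm 2: s_hat = C * max{1/(sqrt(kappa)-1)^2, 1} * s.
s_hat = C * max(1.0 / (np.sqrt(kappa) - 1) ** 2, 1.0) * s
```

With these toy values, ρ stays small as n grows relative to p log p, matching the intended scaling of the threshold.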