Closed-form Estimators for High-dimensional Generalized Linear Models
Authors: Eunho Yang, Aurelie C. Lozano, Pradeep K. Ravikumar
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We corroborate the surprising statistical and computational performance of our class of estimators via extensive simulations. |
| Researcher Affiliation | Collaboration | Eunho Yang IBM T.J. Watson Research Center eunhyang@us.ibm.com Aurelie C. Lozano IBM T.J. Watson Research Center aclozano@us.ibm.com Pradeep Ravikumar University of Texas at Austin pradeepr@cs.utexas.edu |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that open-source code for the methodology is available. |
| Open Datasets | No | The paper uses simulated data, which it describes how to generate, rather than a publicly available or open dataset for which access information is provided. |
| Dataset Splits | Yes | While our theorem specified an optimal setting of the regularization parameters λn and εn, this optimal setting depended on unknown model parameters. Thus, as is standard with high-dimensional regularized estimators, we set the tuning parameters λn = c√(log p/n) and εn = c′√(log p/n) in a holdout-validated fashion; finding a parameter that minimizes the ℓ2 error on an independent validation set. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers. |
| Experiment Setup | Yes | We compare against standard ℓ1-regularized MLE estimators with iteration bounds of 50, 100, and 500, denoted by ℓ1-MLE1, ℓ1-MLE2 and ℓ1-MLE3 respectively. Noting that our theoretical results were not sensitive to the setting of εn in M(y), we simply report the results when εn = 10^-4 across all experiments. We set the tuning parameters λn = c√(log p/n) and εn = c′√(log p/n) in a holdout-validated fashion; finding a parameter that minimizes the ℓ2 error on an independent validation set. |
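The holdout-validated tuning described above can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes a plain ℓ1-regularized least-squares baseline solved by ISTA, a hypothetical candidate grid for the constant c in λn = c√(log p/n), and interprets "ℓ2 error on an independent validation set" as ℓ2 prediction error; the function names `lasso_ista` and `holdout_select` are our own.

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    """Minimize (1/2n)||y - X b||^2 + lam * ||b||_1 via ISTA
    (proximal gradient with soft-thresholding)."""
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2      # 1/L for the smooth part
    beta = np.zeros(p)
    for _ in range(n_iter):
        z = beta - step * (X.T @ (X @ beta - y) / n)   # gradient step
        beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox
    return beta

def holdout_select(X_tr, y_tr, X_val, y_val, c_grid):
    """Pick the constant c in lam = c * sqrt(log p / n) that minimizes
    the l2 prediction error on a held-out validation set (assumed
    reading of the paper's tuning scheme)."""
    n, p = X_tr.shape
    best_c, best_err, best_beta = None, np.inf, None
    for c in c_grid:
        lam = c * np.sqrt(np.log(p) / n)
        beta = lasso_ista(X_tr, y_tr, lam)
        err = np.linalg.norm(y_val - X_val @ beta)
        if err < best_err:
            best_c, best_err, best_beta = c, err, beta
    return best_c, best_beta

# Usage on simulated data (hypothetical grid of constants):
rng = np.random.default_rng(0)
n, p = 100, 50
beta_star = np.zeros(p)
beta_star[:5] = 1.0                            # sparse ground truth
X = rng.standard_normal((2 * n, p))
y = X @ beta_star + 0.1 * rng.standard_normal(2 * n)
c_hat, beta_hat = holdout_select(X[:n], y[:n], X[n:], y[n:],
                                 [0.1, 0.5, 1.0, 2.0])
```

The same grid-over-a-constant pattern applies to εn; only the criterion being minimized and the estimator being refit change.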