On the Consistency of Feature Selection With Lasso for Non-linear Targets
Authors: Yue Zhang, Weihong Guo, Soumya Ray
ICML 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper states: 'We also carry out numerical studies to empirically verify the theoretical results and explore the necessity of the conditions under which the proof holds.' Section 4 (Numerical Experiments) opens with: 'In this section, we present some numerical studies exploring the theoretical results described above.' |
| Researcher Affiliation | Academia | Yue Zhang (YUE.ZHANG13@CASE.EDU), Soumya Ray (SRAY@CASE.EDU), Weihong Guo (WEIHONG.GUO@CASE.EDU); Department of Mathematics, Applied Mathematics and Statistics, and Department of Electrical Engineering and Computer Science, Case Western Reserve University, Cleveland, OH 44106, USA |
| Pseudocode | No | The paper describes methods and models but does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper includes footnotes (e.g., 'Further details can be found from https://filer.case.edu/wxg49/') but does not contain an unambiguous statement that the source code for the methodology described in the paper is released or available, nor does it provide a direct link to a code repository. |
| Open Datasets | No | The paper states, 'We generate data using several nonlinear targets...' and describes the data generation process, implying synthetic data. It does not provide access information (link, DOI, formal citation) for a publicly available or open dataset. |
| Dataset Splits | No | The paper mentions using 'training samples' and a 'training sample has 2000 examples,' but it does not specify explicit training/validation/test dataset splits, percentages, or methodology for partitioning the data. |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions that 'The optimization problem is solved using the ADMM algorithm (Boyd, 2010),' but it does not provide specific software dependencies with version numbers (e.g., programming languages, libraries, or solvers with their versions). |
| Experiment Setup | Yes | The regularization parameter λ_n is set to 0.8 during optimization. The noise ε ~ N(0, 0.04I) and w ∈ R^100 with the first 10 entries non-zero. To overcome the difficulty of not knowing the constants in Theorem 3.4, λ is sampled uniformly from n^0.6 to n^1.2. |
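
As a rough illustration of the setup summarized in the Experiment Setup row, the sketch below generates synthetic data with a nonlinear target and fits Lasso with a standard ADMM solver (Boyd, 2010), which is the solver the paper cites. Only the dimensions (2000 training examples, 100 features), the 10-sparse w, the noise N(0, 0.04I), and λ = 0.8 come from the paper; the tanh link, the magnitudes of the non-zero entries of w, the ADMM penalty ρ, the iteration count, and the random seed are assumptions made for illustration.

```python
import numpy as np

def soft_threshold(v, kappa):
    """Element-wise soft-thresholding operator S_kappa(v)."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def lasso_admm(X, y, lam, rho=1.0, n_iter=500):
    """Solve min_w 0.5*||Xw - y||_2^2 + lam*||w||_1 with ADMM."""
    n, p = X.shape
    Xty = X.T @ y
    # Factor (X^T X + rho*I) once; it is reused in every w-update.
    L = np.linalg.cholesky(X.T @ X + rho * np.eye(p))
    w = np.zeros(p)
    z = np.zeros(p)
    u = np.zeros(p)
    for _ in range(n_iter):
        # w-update: ridge-regularized least-squares solve via the Cholesky factor.
        w = np.linalg.solve(L.T, np.linalg.solve(L, Xty + rho * (z - u)))
        # z-update: proximal (soft-thresholding) step for the l1 term.
        z = soft_threshold(w + u, lam / rho)
        # Scaled dual-variable update.
        u = u + w - z
    return z  # z carries the exact zeros produced by soft-thresholding

rng = np.random.default_rng(0)
n, p, k = 2000, 100, 10                     # sizes taken from the paper
w_true = np.zeros(p)
w_true[:k] = rng.uniform(1.0, 2.0, size=k)  # non-zero magnitudes: assumption
X = rng.standard_normal((n, p))
# Hypothetical nonlinear target; the paper's exact link functions are not reproduced here.
y = np.tanh(X @ w_true) + rng.normal(0.0, 0.2, size=n)  # noise ~ N(0, 0.04*I)

w_hat = lasso_admm(X, y, lam=0.8)           # lambda_n = 0.8 as reported in the paper
selected = np.flatnonzero(np.abs(w_hat) > 1e-8)
print("selected features:", selected)       # consistent selection would recover indices 0..9
```

The paper's sweep of λ uniformly between n^0.6 and n^1.2 could be reproduced by looping the call to `lasso_admm` over sampled λ values and recording which features are selected at each value.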