A Universal Analysis of Large-Scale Regularized Least Squares Solutions

Authors: Ashkan Panahi, Babak Hassibi

NeurIPS 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Figure 1a depicts the average value ‖w‖₂²/n over 50 independent realizations of the LASSO, including independent Gaussian sensing matrices with γ = 0.5, sparse true vectors with κ = 0.2 and Gaussian noise realizations with σ² = 0.1. We consider two different problem sizes n = 200, 500.
Researcher Affiliation | Academia | Ashkan Panahi, Department of Electrical and Computer Engineering, North Carolina State University, Raleigh, NC 27606, apanahi@ncsu.edu; Babak Hassibi, Department of Electrical Engineering, California Institute of Technology, Pasadena, CA 91125, hassibi@caltech.edu
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link regarding the availability of open-source code for the described methodology.
Open Datasets | No | The paper describes generating synthetic data from statistical distributions (e.g., 'independent Gaussian sensing matrices', 'sparse true vectors', 'Gaussian noise realizations', 'centered Bernoulli matrix', 'Student's t-distribution', 'asymmetric Bernoulli matrix') rather than using or providing access to a named, publicly available dataset.
Dataset Splits | No | The paper describes simulations averaged over '50 independent realizations' and '1000 independent realizations' for problem sizes n = 200, 500, but it does not specify explicit training, validation, or test dataset splits.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the simulations.
Software Dependencies | No | The paper does not provide specific software dependencies or version numbers.
Experiment Setup | Yes | Figure 1a depicts the average value ‖w‖₂²/n over 50 independent realizations of the LASSO, including independent Gaussian sensing matrices with γ = 0.5, sparse true vectors with κ = 0.2 and Gaussian noise realizations with σ² = 0.1. We consider two different problem sizes n = 200, 500.
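The Experiment Setup and Open Datasets entries above contain the only ingredients needed to re-run the Figure 1a style simulation. The following is a minimal reproduction sketch, not the authors' code: it assumes γ is the measurement-to-dimension ratio, that the sensing matrix is normalized by √m, that the reported quantity ‖w‖₂²/n is computed on the LASSO solution, and it uses scikit-learn's coordinate-descent Lasso with an arbitrary regularization weight lam. The make_sensing_matrix helper and the Student's t degrees of freedom are likewise illustrative choices.

```python
# Minimal sketch (not the authors' code) of the Figure 1a style experiment described above:
# average ||w||_2^2 / n of the LASSO solution over independent synthetic realizations.
import numpy as np
from sklearn.linear_model import Lasso


def make_sensing_matrix(m, n, dist, rng):
    """Sensing matrices with entry distributions of the kind mentioned in the paper's simulations."""
    if dist == "gaussian":
        A = rng.standard_normal((m, n))
    elif dist == "bernoulli":          # centered +/-1 Bernoulli entries
        A = rng.choice([-1.0, 1.0], size=(m, n))
    elif dist == "student_t":          # heavy-tailed entries; 3 degrees of freedom is an assumption
        A = rng.standard_t(df=3, size=(m, n))
    else:
        raise ValueError(dist)
    return A / np.sqrt(m)              # column normalization assumed, not taken from the paper


def run_trial(n, dist="gaussian", gamma=0.5, kappa=0.2, sigma2=0.1, lam=0.1, rng=None):
    """One realization: random sensing matrix, kappa-sparse truth, Gaussian noise, LASSO fit."""
    rng = np.random.default_rng() if rng is None else rng
    m = int(gamma * n)                 # gamma interpreted as measurements / dimension (assumption)
    A = make_sensing_matrix(m, n, dist, rng)
    x0 = np.zeros(n)
    support = rng.choice(n, size=int(kappa * n), replace=False)
    x0[support] = rng.standard_normal(support.size)   # nonzero amplitudes are an illustrative choice
    y = A @ x0 + np.sqrt(sigma2) * rng.standard_normal(m)
    # scikit-learn's Lasso minimizes (1/(2m)) * ||y - A w||_2^2 + lam * ||w||_1
    w = Lasso(alpha=lam, fit_intercept=False, max_iter=10_000).fit(A, y).coef_
    return np.sum(w ** 2) / n          # the averaged quantity ||w||_2^2 / n


def average_over_realizations(n, n_trials=50, **kwargs):
    rng = np.random.default_rng(0)
    return np.mean([run_trial(n, rng=rng, **kwargs) for _ in range(n_trials)])


if __name__ == "__main__":
    for n in (200, 500):
        print(n, average_over_realizations(n))
```

Running this for n = 200 and n = 500, and switching dist between "gaussian", "bernoulli", and "student_t", mirrors the kind of universality comparison the paper's simulations are described as making; the absolute numbers depend on the assumed regularization weight and normalization.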