Feature Adaptation for Sparse Linear Regression

Authors: Jonathan Kelner, Frederic Koehler, Raghu Meka, Dhruv Rohatgi

NeurIPS 2023

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "In Figure 1 we show that Adapted BP(ℓ) significantly outperforms standard Basis Pursuit (i.e., Lasso for noiseless data [7]) on a simple example with n = 1000 variables, d_ℓ = 10 sparse approximate dependencies, and a ground truth regressor with sparsity t = 13. The simulations were done using Python 3.9 and the Gurobi library [17]. Each figure took several minutes to generate using a standard laptop." |
| Researcher Affiliation | Academia | Jonathan A. Kelner (MIT), Frederic Koehler (Stanford), Raghu Meka (UCLA), Dhruv Rohatgi (MIT) |
| Pseudocode | Yes | Algorithm 1: Adapted BP for sparse linear regression with few outlier eigenvalues. Algorithm 2: Solve sparse linear regression when covariate eigenspectrum has few outliers. |
| Open Source Code | Yes | "See the file auglasso.py for code and execution instructions. See Appendix I for implementation details." |
| Open Datasets | No | "In Figure 1 we show that Adapted BP(ℓ) significantly outperforms standard Basis Pursuit ... on a simple example with n = 1000 variables, d_ℓ = 10 sparse approximate dependencies, and a ground truth regressor with sparsity t = 13. The covariates X_{1:1000} are all independent N(0, 1) except for 10 disjoint triplets..." The dataset is synthetic, and no public access information is provided. |
| Dataset Splits | No | The paper mentions using 'samples' and 'out-of-sample prediction error' but does not specify exact training/validation/test split percentages or absolute counts, nor does it reference predefined splits. |
| Hardware Specification | No | "Each figure took several minutes to generate using a standard laptop." No specific hardware model is given. |
| Software Dependencies | Yes | "The simulations were done using Python 3.9 and the Gurobi library [17]." |
| Experiment Setup | Yes | "In Figure 1 we show that Adapted BP(ℓ) significantly outperforms standard Basis Pursuit ... on a simple example with n = 1000 variables, d_ℓ = 10 sparse approximate dependencies, and a ground truth regressor with sparsity t = 13. The covariates X_{1:1000} are all independent N(0, 1) except for 10 disjoint triplets... The (noiseless) responses are y = 6.25(X_1 − X_2) + 2.5 X_3 + (1/10) ∑_{i=991}^{1000} X_i." |
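
The Experiment Setup row above quotes enough detail that a small end-to-end sketch of the benchmark is possible. The sketch below generates synthetic data in the spirit of that description and solves standard Basis Pursuit as a linear program. It is an illustration, not the authors' auglasso.py: the exact form of the 10 "disjoint triplets" is elided in the quote, so the dependency used here is hypothetical; the sample count m, the 0.01 dependency-noise scale, and the use of SciPy's HiGHS solver in place of Gurobi are likewise assumptions; and the minus sign in 6.25(X_1 − X_2) follows the reconstruction in the table above.

```python
# Illustrative sketch of the Figure 1 benchmark, NOT the authors' auglasso.py.
# Assumptions (not stated in the quotes above): the form of the triplet
# dependency, the sample count m, the 0.01 dependency-noise scale, and the
# use of SciPy's HiGHS LP solver in place of Gurobi.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 1000, 200  # n variables; m samples (m is an assumption)

# Covariates: i.i.d. N(0, 1) except for 10 disjoint triplets. The quote elides
# the dependency's exact form; here we hypothesize that the third coordinate
# of each triplet is approximately the difference of the first two.
X = rng.standard_normal((m, n))
for k in range(10):
    i = 3 * k
    X[:, i + 2] = X[:, i] - X[:, i + 1] + 0.01 * rng.standard_normal(m)

# Ground truth regressor with sparsity t = 13, matching the quoted responses
# y = 6.25(X_1 - X_2) + 2.5 X_3 + (1/10) * sum_{i=991}^{1000} X_i
# (1-indexed in the paper, 0-indexed here; the minus sign is a reconstruction).
w_true = np.zeros(n)
w_true[[0, 1, 2]] = [6.25, -6.25, 2.5]
w_true[990:] = 0.1
y = X @ w_true  # noiseless responses

# Standard Basis Pursuit: min ||w||_1 s.t. Xw = y, written as an LP over
# w = u - v with u, v >= 0 and objective sum(u) + sum(v).
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([X, -X]), b_eq=y,
              bounds=(0, None), method="highs")
assert res.status == 0, res.message
w_hat = res.x[:n] - res.x[n:]
print(f"||w_hat - w_true||_2 = {np.linalg.norm(w_hat - w_true):.4f}")
```

With the near-dependency in place, plain Basis Pursuit can plausibly shift mass onto the nearly dependent coordinates rather than recover w_true, which is the failure mode the quoted Figure 1 comparison illustrates; the paper's Adapted BP first adapts the feature representation to such sparse approximate dependencies before solving the ℓ1 program.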