Lower Bounds on Randomly Preconditioned Lasso via Robust Sparse Designs

Authors: Jonathan Kelner, Frederic Koehler, Raghu Meka, Dhruv Rohatgi

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We prove a stronger lower bound that rules out randomized preconditioners. For an appropriate covariance matrix, we construct a single signal distribution on which any invertibly-preconditioned Lasso program fails with high probability unless it receives a linear number of samples. Surprisingly, at the heart of our lower bound is a new robustness result in compressed sensing. In particular, we study recovering a sparse signal when a few measurements can be erased adversarially. (An illustrative sketch of a preconditioned Lasso program is given after the table.)
Researcher Affiliation | Academia | Jonathan A. Kelner (MIT), Frederic Koehler (Stanford), Raghu Meka (UCLA), Dhruv Rohatgi (MIT)
Pseudocode | No | The paper is theoretical and focuses on proofs and lower bounds. It does not provide any pseudocode or algorithm blocks for its own methods.
Open Source Code | No | The paper does not contain any statements about releasing code or links to a source code repository.
Open Datasets | No | The paper is theoretical and does not involve empirical training or evaluation on datasets. Therefore, no dataset access information is provided.
Dataset Splits | No | The paper is theoretical and does not involve empirical experiments with data. No dataset split information (training, validation, test) is provided.
Hardware Specification | No | The paper is theoretical and does not describe any empirical experiments; therefore, no hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical and does not describe any software implementation or dependencies with specific version numbers.
Experiment Setup | No | The paper is theoretical and focuses on proofs and lower bounds. It does not describe any experimental setup details such as hyperparameters or training configurations.
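As a reading aid for the "Research Type" row, the following is a minimal Python sketch of what an invertibly-preconditioned Lasso program does: choose an invertible matrix S, run the Lasso on the transformed design XS, and map the solution back through S. The sample sizes, the isotropic design, and the identity preconditioner below are illustrative assumptions only; they are not the paper's construction, whose lower bound relies on a specially chosen covariance matrix and signal distribution.

    # Illustrative sketch (not from the paper): a preconditioned Lasso program.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, d, k = 50, 200, 5          # sample size, dimension, sparsity (assumed values)

    # k-sparse ground-truth signal
    w_star = np.zeros(d)
    w_star[rng.choice(d, size=k, replace=False)] = 1.0

    # toy isotropic design; the paper instead uses a carefully constructed covariance
    X = rng.standard_normal((n, d))
    y = X @ w_star + 0.01 * rng.standard_normal(n)

    # any invertible preconditioner S; the identity recovers the ordinary Lasso
    S = np.eye(d)

    # solve the Lasso in the preconditioned coordinates, then map back through S
    lasso = Lasso(alpha=0.05, fit_intercept=False, max_iter=10_000)
    lasso.fit(X @ S, y)
    w_hat = S @ lasso.coef_

    print("estimated support:", np.flatnonzero(np.abs(w_hat) > 1e-6))

The paper's result says that for its hard covariance and signal distribution, no choice of invertible S (even a randomized one) makes this style of program succeed without a linear number of samples.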