Adaptive Feature Selection: Computationally Efficient Online Sparse Linear Regression under RIP
Authors: Satyen Kale, Zohar Karnin, Tengyuan Liang, Dávid Pál
ICML 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We design polynomial-time algorithms for online sparse linear regression for two models of the sequence (x_1, y_1), (x_2, y_2), …, (x_T, y_T). The algorithm for the agnostic setting relies on the theory of submodular optimization. The main result in this section provides a logarithmic regret bound under the following assumptions. |
| Researcher Affiliation | Collaboration | 1Google Research, New York. 2Amazon, New York. 3University of Chicago, Booth School of Business, Chicago. 4Yahoo Research, New York. |
| Pseudocode | Yes | Algorithm 1 Dantzig Selector for POSLR |
| Open Source Code | No | The paper does not contain any statement about releasing source code or a link to a code repository for the described methodology. |
| Open Datasets | No | This is a theoretical paper that defines abstract sequences of data (x_1, y_1), (x_2, y_2), …, (x_T, y_T) and does not specify or use any publicly available datasets for empirical training or evaluation. |
| Dataset Splits | No | This is a theoretical paper and does not describe experiments that involve training, validation, or test data splits. |
| Hardware Specification | No | This is a theoretical paper focusing on algorithm design and analysis; it does not mention any specific hardware used for running experiments. |
| Software Dependencies | No | The paper describes algorithms and their theoretical properties but does not mention specific software dependencies with version numbers required for replication. |
| Experiment Setup | No | This is a theoretical paper and does not include details about an experimental setup, hyperparameters, or training configurations. |
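The setting described above (observing examples one at a time and predicting with only a small subset of features) can be illustrated with a minimal sketch. This is not the paper's Algorithm 1 (the Dantzig selector for POSLR); it is a simplified stand-in that maintains a ridge-regularized least-squares estimate and, at each round, predicts using only the top-k coordinates by magnitude. The function name, the `lam` parameter, and the top-k selection rule are all illustrative assumptions, not details from the paper.

```python
import numpy as np

def online_sparse_regression(X, y, k, lam=0.1):
    """Illustrative online sparse regression loop (NOT the paper's Algorithm 1).

    At each round t, the learner forms a ridge-regularized least-squares
    estimate from past data, selects the k coordinates of largest magnitude
    (a crude proxy for a sparse selection rule), and predicts y_t using
    only those k features of x_t before updating its statistics.
    """
    T, d = X.shape
    A = lam * np.eye(d)       # regularized Gram matrix of past examples
    b = np.zeros(d)           # accumulated X^T y
    losses = []
    for t in range(T):
        w = np.linalg.solve(A, b)          # current dense estimate
        S = np.argsort(-np.abs(w))[:k]     # top-k support by magnitude
        pred = X[t, S] @ w[S]              # predict with only k features
        losses.append((pred - y[t]) ** 2)
        A += np.outer(X[t], X[t])          # full-information update
        b += y[t] * X[t]
    return np.array(losses)
```

On well-conditioned data generated from a truly sparse model, the per-round squared loss of this sketch shrinks as the estimate converges, which is the qualitative behavior the paper's regret bounds formalize under much weaker assumptions.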