Sparsity-Agnostic Lasso Bandit
Authors: Min-Hwan Oh, Garud Iyengar, Assaf Zeevi
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We also comprehensively evaluate our proposed algorithm numerically and show that it consistently outperforms existing methods, even when the correct sparsity index is revealed to them but is kept hidden from our algorithm. |
| Researcher Affiliation | Academia | ¹Seoul National University, Seoul, South Korea; ²Columbia University, New York, NY, USA. |
| Pseudocode | Yes | Algorithm 1 SA LASSO BANDIT |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | The paper describes generating features from distributions (multivariate Gaussian, uniform, elliptical) for its experiments but does not refer to or provide concrete access information for a specific, pre-existing publicly available dataset. |
| Dataset Splits | No | The paper is on contextual bandits, an online learning setting, and does not describe traditional train/validation/test dataset splits for reproducibility. |
| Hardware Specification | No | The paper does not provide any specific hardware details used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers used for its implementation or experiments. |
| Experiment Setup | No | The paper states that it conducts 20 independent runs per experimental configuration and that the input parameter λ₀ for SA LASSO BANDIT is derived as λ₀ = 2σx_max, but it does not report concrete numerical values for hyperparameters or other typical configuration details such as learning rates, batch sizes, or epochs. |
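The paper's Algorithm 1 is not reproduced in this report. As a rough illustration of the kind of procedure being assessed, below is a minimal NumPy sketch of a greedy Lasso contextual bandit loop: at each round the arm maximizing the current Lasso estimate's predicted reward is pulled, and the estimate is refit with a time-decaying regularization weight. The regularization schedule, problem sizes, and the ISTA solver here are illustrative assumptions, not the paper's actual specification or parameter values.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the L1 norm (elementwise soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=200):
    """Solve min_b (1/n)||y - Xb||^2 + lam*||b||_1 via plain ISTA."""
    n, d = X.shape
    # Lipschitz constant of the smooth part's gradient: (2/n) * sigma_max(X)^2.
    L = 2.0 * np.linalg.norm(X, 2) ** 2 / n + 1e-12
    b = np.zeros(d)
    for _ in range(n_iter):
        grad = (2.0 / n) * X.T @ (X @ b - y)
        b = soft_threshold(b - grad / L, lam / L)
    return b

def greedy_lasso_bandit(T=300, K=5, d=20, s0=3, sigma=0.1, lam0=0.2, seed=0):
    # Sparse linear bandit simulation with synthetic Gaussian contexts.
    # All sizes and lam0 are placeholder values, not taken from the paper.
    rng = np.random.default_rng(seed)
    beta = np.zeros(d)
    beta[:s0] = rng.uniform(0.5, 1.0, s0)   # unknown sparse parameter
    X_hist, y_hist = [], []
    b_hat = np.zeros(d)
    regret = 0.0
    for t in range(1, T + 1):
        arms = rng.normal(size=(K, d))       # fresh context per arm
        a = int(np.argmax(arms @ b_hat))     # greedy w.r.t. Lasso estimate
        reward = arms[a] @ beta + sigma * rng.normal()
        regret += np.max(arms @ beta) - arms[a] @ beta
        X_hist.append(arms[a])
        y_hist.append(reward)
        # Assumed decaying regularization schedule (illustrative only).
        lam_t = lam0 * np.sqrt((np.log(t) + np.log(d)) / t)
        b_hat = lasso_ista(np.array(X_hist), np.array(y_hist), lam_t)
    return b_hat, regret
```

Note that this sketch requires no knowledge of the sparsity index s₀ at decision time, which is the sparsity-agnostic property the paper's title refers to; only the simulator uses s₀ to generate the ground-truth parameter.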