Thresholded Lasso Bandit

Authors: Kaito Ariu, Kenshi Abe, Alexandre Proutiere

ICML 2022

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Through numerical experiments, we confirm that our algorithm outperforms existing methods. ... In this section, we empirically evaluate the TH Lasso bandit algorithm." |
| Researcher Affiliation | Collaboration | 1. EECS and Digital Futures, KTH Royal Institute of Technology, Stockholm, Sweden; 2. CyberAgent, Inc., Tokyo, Japan. |
| Pseudocode | Yes | Algorithm 1: TH Lasso Bandit |
| Open Source Code | Yes | "An implementation of our method is available at https://github.com/CyberAgentAILab/thresholded-lasso-bandit." |
| Open Datasets | Yes | "We use the R6A dataset (https://webscope.sandbox.yahoo.com) that contains a part of the user view/click log for articles displayed on Yahoo!'s Today Module." |
| Dataset Splits | No | For the real-world dataset, the paper states: "we subsampled the data so that each event is used with probability 0.9 for each instance." However, it provides no specific training/validation/test splits for reproducibility, nor does it define such splits for the synthetic data generation. |
| Hardware Specification | No | The paper does not report hardware details such as the GPU or CPU models used to run the experiments. |
| Software Dependencies | No | The paper mentions the "Lasso Bandit," "Doubly-Robust Lasso bandit," and "SA Lasso bandit" baselines and refers to hyperparameters tuned from their GitHub repositories, but it does not specify version numbers for any key software components or libraries (e.g., Python, PyTorch, TensorFlow). |
| Experiment Setup | Yes | "For the SA Lasso bandit and TH Lasso bandit algorithms, we tune the hyperparameter λ0 in [0.01, 0.5] to roughly optimize the algorithm performance when K = 2, d = 1000, Amax = 10, and s0 = 5. As a result, we set λ0 = 0.16 for SA Lasso bandit, and set λ0 = 0.02 for TH Lasso bandit." |
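The Pseudocode row above points to Algorithm 1 (TH Lasso Bandit), whose core idea is support estimation by thresholding a Lasso fit before acting greedily. Below is a minimal illustrative sketch only, assuming scikit-learn: the toy problem sizes, the random warm-up, the regularization schedule, and the plain least-squares refit on the estimated support are simplifications I introduce for brevity, not the paper's exact Algorithm 1 (which re-runs Lasso on the estimated support with a second thresholding pass).

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Hypothetical toy sizes, far smaller than the paper's K = 2, d = 1000, s0 = 5 setup.
K, d, s0, T = 2, 50, 5, 200
beta = np.zeros(d)
beta[rng.choice(d, size=s0, replace=False)] = rng.uniform(0.5, 1.0, size=s0)

lam0 = 0.02  # the value the table reports being tuned for TH Lasso bandit
contexts, rewards = [], []

for t in range(1, T + 1):
    arms = rng.normal(size=(K, d))  # fresh context vector per arm each round
    if t <= 5:
        a = int(rng.integers(K))  # a few random pulls to seed the regression
    else:
        X, y = np.asarray(contexts), np.asarray(rewards)
        # Decaying regularization level (shape follows the usual sqrt(log t log d / t) rate).
        lam_t = lam0 * np.sqrt(2.0 * np.log(t) * np.log(d) / t)
        # Step 1: Lasso estimate from all observed (context, reward) pairs so far.
        b_hat = Lasso(alpha=lam_t, max_iter=5000).fit(X, y).coef_
        # Step 2: threshold small coefficients to estimate the support.
        support = np.flatnonzero(np.abs(b_hat) > 4.0 * lam_t)
        theta = np.zeros(d)
        if support.size:
            # Step 3 (simplified): least-squares refit restricted to the support.
            theta[support] = np.linalg.lstsq(X[:, support], y, rcond=None)[0]
        a = int(np.argmax(arms @ theta))  # greedy arm choice, no explicit exploration
    contexts.append(arms[a])
    rewards.append(float(arms[a] @ beta + 0.05 * rng.normal()))
```

The thresholding step is what removes the Lasso's spurious small coefficients, so the refit acts on a low-dimensional estimated support; the tuned λ0 = 0.02 from the Experiment Setup row scales the whole schedule.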