Fast and Sample Efficient Multi-Task Representation Learning in Stochastic Contextual Bandits

Authors: Jiabin Lin, Shana Moothedath, Namrata Vaswani

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We presented experiments and compared the performance of our algorithm against benchmark algorithms."
Researcher Affiliation | Academia | "1 Department of Electrical and Computer Engineering, Iowa State University, Ames IA 50011-1250, USA."
Pseudocode | Yes | "Algorithm 1 LRRL-Alt GDMin Algorithm"
Open Source Code | No | The paper does not provide an explicit statement or link for open-source code implementing the described methodology.
Open Datasets | Yes | "MNIST data: We used the MNIST dataset to validate the performance of our algorithm when implemented with real-world data."
Dataset Splits | No | The paper describes an online learning process with epochs and sample-splitting for internal algorithm updates, but does not specify traditional train/validation/test splits for model evaluation.
Hardware Specification | No | No specific hardware details (GPU/CPU models, processor types, or memory amounts) used for running the experiments are provided.
Software Dependencies | No | The paper states "All experiments were conducted using Python." but does not provide version numbers for Python or any other key software components.
Experiment Setup | Yes | "We set the parameters as d = 100 and K = 5. ... We considered a noise model with a mean of 0 and a variance of 10^-6 for the bandit feedback noise. ... We ran for L = 2000 GD iterations. We considered M = 4 epochs with 50 data samples each."
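The paper's Algorithm 1 is an alternating GD-min routine for learning a low-rank representation shared across tasks. As a minimal illustration of that style of update, the sketch below simulates noisy linear rewards from a shared low-rank subspace and alternates a closed-form least-squares step for the per-task weights with a gradient step and QR re-orthonormalization on the shared subspace. It reuses the reported parameters where the summary states them (d = 100, noise variance 10^-6, 4 epochs of 50 samples); the rank k, number of tasks, step size, and iteration count are illustrative assumptions, and this is a reconstruction of the general AltGDmin pattern, not the authors' exact Algorithm 1.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 100, 2          # ambient dimension (from the paper); rank k is assumed
n_tasks = 20           # number of tasks (assumed; not stated in this summary)
n_samples = 200        # M = 4 epochs x 50 samples per task (from the paper)
noise_std = np.sqrt(1e-6)  # bandit feedback noise variance 10^-6 (from the paper)

# Ground-truth shared low-rank representation and per-task weights.
B_star, _ = np.linalg.qr(rng.standard_normal((d, k)))
W_star = rng.standard_normal((k, n_tasks))

# Per-task contexts and noisy linear rewards y = x^T B* w* + noise.
X = rng.standard_normal((n_tasks, n_samples, d))
Y = np.einsum("tnd,dk,kt->tn", X, B_star, W_star)
Y += noise_std * rng.standard_normal(Y.shape)

# Alternating GD-min: closed-form least squares for each task's weights,
# then one gradient step on the shared subspace, re-orthonormalized by QR.
U, _ = np.linalg.qr(rng.standard_normal((d, k)))
eta = 0.5 / (n_tasks * n_samples)  # illustrative step size
for _ in range(300):
    # Minimization step: w_t = argmin_w ||X_t U w - y_t||^2 for each task t.
    W = np.stack(
        [np.linalg.lstsq(X[t] @ U, Y[t], rcond=None)[0] for t in range(n_tasks)],
        axis=1,
    )  # shape (k, n_tasks)
    # Gradient step on U for the summed squared-error objective.
    grad = np.zeros((d, k))
    for t in range(n_tasks):
        resid = X[t] @ U @ W[:, t] - Y[t]
        grad += X[t].T @ np.outer(resid, W[:, t])
    U, _ = np.linalg.qr(U - eta * grad)

# Subspace recovery error relative to the ground-truth representation.
err = np.linalg.norm(B_star - U @ (U.T @ B_star))
print(f"subspace recovery error: {err:.2e}")
```

The closed-form weight step is what distinguishes GD-min from plain alternating gradient descent: with U fixed, each task's weights have an exact least-squares solution, so only the shared subspace needs iterative updates.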