AdaLinUCB: Opportunistic Learning for Contextual Bandits

Authors: Xueying Guo, Xiaoxiao Wang, Xin Liu

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Moreover, based on both synthetic and real-world datasets, we show that AdaLinUCB significantly outperforms other contextual bandit algorithms under large exploration cost fluctuations."
Researcher Affiliation | Academia | "Xueying Guo, Xiaoxiao Wang and Xin Liu, University of California, Davis; guoxueying@outlook.com, {xxwa, xinliu}@ucdavis.edu"
Pseudocode | Yes | "Algorithm 1 AdaLinUCB"
Open Source Code | Yes | "The supplementary material of this paper is available at: https://github.com/xiaoxiao01/IJCAI19/blob/master/Supplementary.pdf"
Open Datasets | Yes | "We also test the performance of the algorithms using the data from Yahoo! Today Module. This dataset contains over 4 million user visits to the Today module in a ten-day period in May 2009 [Li et al., 2010]. For the variation factor, we use a real trace: the sales of a popular store. It includes everyday turnover over two years [Rossman, 2015]."
Dataset Splits | No | The paper mentions using a dataset but does not specify exact train/validation/test split percentages, sample counts, or predefined splits that would allow the data partitioning to be reproduced.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used to run its experiments.
Software Dependencies | No | The paper does not list ancillary software dependencies, such as library or solver names with version numbers.
Experiment Setup | Yes | "In all the algorithms, we set α = 1.5 to make a fair comparison." (A minimal sketch of where such an exploration parameter enters a LinUCB-style arm score follows this table.)
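
For readers unfamiliar with the quoted α = 1.5, the sketch below illustrates where an exploration parameter of this kind appears in standard LinUCB-style arm selection. It is not the paper's Algorithm 1 (AdaLinUCB), which additionally adapts the amount of exploration to an external variation factor; the function names (linucb_select, linucb_update) and data layout are assumptions made purely for illustration.

```python
import numpy as np

# Minimal LinUCB-style arm selection sketch (NOT the paper's AdaLinUCB pseudocode).
# alpha is the exploration parameter quoted in the Experiment Setup row; the
# opportunistic adaptation to exploration-cost fluctuations studied in the paper
# is omitted here.

def linucb_select(contexts, A, b, alpha=1.5):
    """Return the index of the arm with the largest upper confidence bound.

    contexts: list of d-dimensional feature vectors, one per arm
    A, b:     per-arm ridge-regression statistics (A[i] is d x d, b[i] is length d)
    """
    scores = []
    for x, A_i, b_i in zip(contexts, A, b):
        A_inv = np.linalg.inv(A_i)
        theta = A_inv @ b_i                              # estimated coefficient vector
        scores.append(float(x @ theta + alpha * np.sqrt(x @ A_inv @ x)))
    return int(np.argmax(scores))


def linucb_update(A, b, arm, x, reward):
    """Standard LinUCB sufficient-statistic update for the arm that was played."""
    A[arm] += np.outer(x, x)
    b[arm] += reward * x
```

In a simulation loop one would initialize A[i] = np.eye(d) and b[i] = np.zeros(d) for every arm, call linucb_select on the current per-arm contexts, observe the reward of the chosen arm, and pass it to linucb_update. The opportunistic variant evaluated in the paper would, per its title and abstract, additionally modulate how aggressively it explores depending on the current exploration cost.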