Contextual-Bandit Based Personalized Recommendation with Time-Varying User Interests

Authors: Xiao Xu, Fang Dong, Yanghua Li, Shaojian He, Xin Li (pp. 6518-6525)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical experiments are conducted on real-world datasets to verify the advantages of the proposed learning algorithms against baseline ones in both settings.
Researcher Affiliation | Collaboration | 1) Cornell University, Ithaca, NY, USA; 2) Alibaba Group, Hangzhou, Zhejiang, China.
Pseudocode | Yes | Algorithm 1: Piecewise-Stationary LinUCB under the Disjoint Payoff Model (PSLinUCB-Disjoint); Algorithm 2: Piecewise-Stationary LinUCB under the Hybrid Payoff Model (PSLinUCB-Hybrid).
Open Source Code | No | The paper provides no concrete access information (e.g., a repository link, an explicit release statement, or mention of code in supplementary materials) for the source code of the described methodology.
Open Datasets | Yes | The first real-world dataset is a collection of user-visit logs from the Yahoo! front page, widely used for algorithm evaluation in the contextual bandit setting (Li et al. 2010a; 2011). The second dataset is extracted from the Last.fm online music system and was made available at the HetRec 2011 workshop.
Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, or a detailed splitting methodology) for training, validation, or testing.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or other machine specifications) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiments.
Experiment Setup | No | The paper lists the hyperparameters (α, ω, δ) as inputs to the algorithms and states that sensitivity-analysis results appear in the appendix, but the main text does not give the concrete values used for the reported experiments.
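For context, the PSLinUCB algorithms named above extend the standard LinUCB scoring rule under the disjoint payoff model (Li et al. 2010a). A minimal sketch of that base rule is shown below; the function name and variable names are illustrative, not taken from the paper, and this is not the paper's piecewise-stationary variant.

```python
import numpy as np

def linucb_disjoint_score(A, b, x, alpha=1.0):
    """UCB score for one arm under the disjoint payoff model (Li et al. 2010a).

    A     : d x d design matrix for this arm (identity plus sum of x x^T
            over the arm's past observations)
    b     : d-vector of context-weighted accumulated rewards for this arm
    x     : current context feature vector of length d
    alpha : exploration parameter (one of the hyperparameters the paper
            takes as input but does not report a concrete value for)
    """
    A_inv = np.linalg.inv(A)
    theta = A_inv @ b  # ridge-regression estimate of the arm's coefficients
    # Estimated payoff plus an upper-confidence exploration bonus
    return float(theta @ x + alpha * np.sqrt(x @ A_inv @ x))

# Usage: score every arm with its own (A, b), play the argmax arm,
# then update that arm's A += x x^T and b += reward * x.
d = 3
A = np.eye(d)           # fresh arm: no observations yet
b = np.zeros(d)
x = np.array([1.0, 0.5, -0.2])
score = linucb_disjoint_score(A, b, x, alpha=0.5)
```

The piecewise-stationary extensions in the paper additionally monitor for changes in user interests and reset or re-weight these per-arm statistics; that machinery is omitted here.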