Regret in Online Recommendation Systems

Authors: Kaito Ariu, Narae Ryu, Se-Young Yun, Alexandre Proutière

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | This paper presents a theoretical analysis of recommendation systems in an online setting, where items are sequentially recommended to users over time. The authors investigate various structural assumptions on the item click probabilities, derive regret lower bounds for each structure, and devise algorithms achieving these limits; the performance of these algorithms is illustrated through experiments presented in the appendix. (A toy illustration of the regret notion follows the table.)
Researcher Affiliation | Academia | Kaito Ariu (KTH, Stockholm, Sweden; ariu@kth.se), Narae Ryu (KAIST, Daejeon, South Korea; nrryu@kaist.ac.kr), Se-Young Yun (KAIST, Daejeon, South Korea; yunseyoung@kaist.ac.kr), Alexandre Proutière (KTH, Stockholm, Sweden; alepro@kth.se)
Pseudocode | Yes | The paper states: "Due to space constraints, we present the pseudo-codes of our algorithms, all proofs, numerical experiments, as well as some insightful discussions in the appendix."
Open Source Code | No | No explicit statement or link indicating the public availability of the source code for the described methodology was found.
Open Datasets | No | The paper mentions "numerical experiments presented in the appendix" but does not specify any publicly available datasets or provide access information for them in the main text.
Dataset Splits | No | No specific information about dataset splits (e.g., training, validation, or test percentages or counts) was found in the main text.
Hardware Specification | No | No specific hardware (e.g., GPU models, CPU types, or cloud resources) used for running the experiments was mentioned.
Software Dependencies | No | No specific software names with version numbers were mentioned as dependencies for the experiments or implementation.
Experiment Setup | No | The paper describes algorithm phases and some parameters for item selection, but does not provide specific experimental setup details such as hyperparameters (learning rates, batch sizes, epochs) or optimizer settings.
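For readers unfamiliar with the regret framework summarized in the Research Type row, the following is a minimal, hypothetical sketch of how cumulative regret accrues in a sequential recommendation loop. The click probabilities and the uniform-random policy are invented for illustration only; they are not the paper's model or its algorithms, which appear in the paper's appendix.

```python
import random

# Hypothetical toy setup: three items with assumed click probabilities.
# These values and the policy below are illustrative assumptions, not
# the paper's model or algorithms.
CLICK_PROB = {"item_a": 0.7, "item_b": 0.4, "item_c": 0.1}
ORACLE = max(CLICK_PROB.values())  # expected click rate of the best item

def recommend():
    """Placeholder policy: recommend an item uniformly at random."""
    return random.choice(list(CLICK_PROB))

T = 10_000
regret = 0.0
for _ in range(T):
    item = recommend()
    # Expected regret grows by the gap between the oracle's click
    # probability and that of the recommended item.
    regret += ORACLE - CLICK_PROB[item]

print(f"Expected cumulative regret after {T} rounds: {regret:.1f}")
```

Under this toy setup, the uniform policy accumulates regret linearly in T; the point of the paper's lower bounds and algorithms is to characterize and approach the best achievable growth rate under each structural assumption.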