Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Regret in Online Recommendation Systems

Authors: Kaito Ariu, Narae Ryu, Se-Young Yun, Alexandre Proutière

NeurIPS 2020 | Venue PDF | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | This paper proposes a theoretical analysis of recommendation systems in an online setting, where items are sequentially recommended to users over time. We investigate various structural assumptions on these probabilities: we derive for each structure regret lower bounds, and devise algorithms achieving these limits. We illustrate the performance of our algorithms through experiments presented in the appendix. |
| Researcher Affiliation | Academia | Kaito Ariu (KTH, Stockholm, Sweden); Narae Ryu (KAIST, Daejeon, South Korea); Se-Young Yun (KAIST, Daejeon, South Korea); Alexandre Proutière (KTH, Stockholm, Sweden) |
| Pseudocode | Yes | Due to space constraints, we present the pseudo-codes of our algorithms, all proofs, numerical experiments, as well as some insightful discussions in the appendix. |
| Open Source Code | No | No explicit statement or link indicating the public availability of the source code for the described methodology was found. |
| Open Datasets | No | The paper mentions "numerical experiments presented in the appendix" but does not specify any publicly available datasets or provide access information for them in the main text. |
| Dataset Splits | No | No specific information about dataset splits (e.g., training, validation, or test percentages or counts) was found in the main text. |
| Hardware Specification | No | No specific hardware (e.g., GPU models, CPU types, or cloud resources) used for running the experiments was mentioned. |
| Software Dependencies | No | No specific software names with version numbers were mentioned as dependencies for the experiments or implementation. |
| Experiment Setup | No | The paper describes algorithm phases and some parameters for item selection but does not provide specific experimental setup details such as hyperparameters (learning rates, batch sizes, epochs) or optimizer settings. |
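The notice above states that the automated LLM labels were validated against a manually labeled dataset. A minimal sketch of that kind of validation step, computing per-variable agreement between LLM and manual labels, is shown below; the function name, data layout, and toy labels are illustrative assumptions, not the actual pipeline from [1].

```python
# Hypothetical sketch: estimating per-variable accuracy of LLM-assigned
# reproducibility labels against manual annotations. All names and data
# here are illustrative, not taken from the actual pipeline in [1].
from collections import defaultdict


def per_variable_accuracy(llm_labels, manual_labels):
    """Fraction of papers where the LLM label matches the manual label,
    computed separately for each reproducibility variable.

    Both inputs map (paper_id, variable) -> label string.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for (paper, variable), llm_value in llm_labels.items():
        manual_value = manual_labels.get((paper, variable))
        if manual_value is None:
            continue  # this variable was not manually annotated for this paper
        total[variable] += 1
        if llm_value == manual_value:
            correct[variable] += 1
    return {v: correct[v] / total[v] for v in total}


# Toy example: two papers, two variables.
llm = {("p1", "Open Source Code"): "No", ("p1", "Pseudocode"): "Yes",
       ("p2", "Open Source Code"): "Yes", ("p2", "Pseudocode"): "Yes"}
manual = {("p1", "Open Source Code"): "No", ("p1", "Pseudocode"): "No",
          ("p2", "Open Source Code"): "Yes", ("p2", "Pseudocode"): "Yes"}
print(per_variable_accuracy(llm, manual))
# -> {'Open Source Code': 1.0, 'Pseudocode': 0.5}
```

A real validation would likely also report per-class metrics (e.g., precision and recall for "Yes" vs. "No"), since the variables here are heavily imbalanced toward "No".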