Incentivizing High Quality User Contributions: New Arm Generation in Bandit Learning

Authors: Yang Liu, Chien-Ju Ho

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we provide simulation results to demonstrate the intuitions behind the design of Rand UCB. Rand UCB has two advantages over the standard UCB algorithm. First, it collects a good amount of content in the early stages (p_t = min{1, M/t}) and gradually decreases the probability of adding newly contributed content to the exploration phase. This allows the platform to obtain good-enough content early with high probability, without sacrificing continued exploration of new content. Second, as shown in Theorem 5, Rand UCB incentivizes high-quality contributions, which naturally improves the algorithm's performance, since the arms themselves are better. Below we use simulations to demonstrate the effects of these two components.
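The admission schedule described above (p_t = min{1, M/t}) can be sketched in a few lines of Python; the function name `admit_probability` is ours, not the paper's:

```python
# Exploration-probability schedule from the Rand UCB description:
# p_t = min{1, M/t} -- new content is always admitted for the first M
# rounds, then admitted with a probability that decays like M/t.
def admit_probability(t: int, M: int) -> float:
    """Probability that a newly contributed arm enters the active set at round t."""
    return min(1.0, M / t)

# With M = 10: rounds 1..10 always admit; by round 100 only 10% of
# new contributions are admitted.
print([round(admit_probability(t, 10), 2) for t in (1, 10, 20, 100)])
# → [1.0, 1.0, 0.5, 0.1]
```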
Researcher Affiliation | Academia | Yang Liu (yangl@seas.harvard.edu), Harvard University; Chien-Ju Ho (chienju.ho@wustl.edu), Washington University in St. Louis
Pseudocode | Yes | Algorithm 1 Rand UCB
Input: {p_t : t = 1, ..., T}
for t = 1, ..., T do
    select arms to display according to UCB1
    if a new arm is contributed then
        add the new arm to A(t + 1) with probability p_t
    end if
end for
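A minimal runnable sketch of Algorithm 1. The paper's excerpt does not specify the reward model or the initial arm set, so this version assumes Bernoulli rewards with mean equal to arm quality, one seed arm, and one new contribution per round; arm qualities are drawn from F = Uniform[0, 1] as in the paper's simulations:

```python
import math
import random

def rand_ucb(T, M, seed=0):
    """Sketch of Rand UCB: each round, UCB1 picks one arm from the current
    set; a newly contributed arm is admitted with probability p_t = min(1, M/t).
    Assumptions (not in the excerpt): Bernoulli rewards, one initial arm,
    one contribution per round."""
    rng = random.Random(seed)
    qualities = [rng.random()]   # assumed seed arm so UCB1 has something to play
    pulls = [0]
    sums = [0.0]
    total_reward = 0.0
    for t in range(1, T + 1):
        # UCB1 index: empirical mean + sqrt(2 ln t / n_i); untried arms first
        def ucb_index(i):
            if pulls[i] == 0:
                return float("inf")
            return sums[i] / pulls[i] + math.sqrt(2.0 * math.log(t) / pulls[i])
        i = max(range(len(qualities)), key=ucb_index)
        reward = 1.0 if rng.random() < qualities[i] else 0.0
        pulls[i] += 1
        sums[i] += reward
        total_reward += reward
        # A new arm arrives; admit it into A(t + 1) with probability p_t
        if rng.random() < min(1.0, M / t):
            qualities.append(rng.random())
            pulls.append(0)
            sums.append(0.0)
    return total_reward, len(qualities)
```

Because p_t decays like M/t, the number of admitted arms grows only logarithmically in T, which keeps the exploration cost of new content bounded.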
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | No | The paper states, 'We also assume the quality distribution F is an uniform distribution in [0, 1]' for its simulations. This describes a synthetic data distribution for the simulation, but it does not refer to a publicly available or open dataset with access information or a formal citation.
Dataset Splits | No | The paper describes simulation settings, such as 'We run each algorithm 100 times', but it does not specify explicit training, validation, or test dataset splits in terms of percentages or sample counts.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU or GPU models, memory) used for running the simulations.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., programming languages, libraries, frameworks) used for the simulations.
Experiment Setup | Yes | We set K = 1, T = 10,000, and M = 10. We also assume the quality distribution F is a uniform distribution on [0, 1]. We run each algorithm 100 times and plot the mean performance in Figure 1.
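The reported setup can be expressed as a small averaging harness. Here `one_run` is a hypothetical stand-in (the paper's actual per-run logic is the Rand UCB simulation); it only draws T qualities from F = Uniform[0, 1] and reports the best seen, to show the 100-run structure:

```python
import random
import statistics

# Settings reported in the paper: K = 1 arm displayed per round,
# T = 10,000 rounds, M = 10, F = Uniform[0, 1], 100 independent runs.
K, T, M, N_RUNS = 1, 10_000, 10, 100

def one_run(seed):
    """Hypothetical stand-in for one simulation run: draw T qualities
    from F and return the best, just to exercise the averaging harness."""
    rng = random.Random(seed)
    return max(rng.random() for _ in range(T))

# The paper averages performance over 100 runs and plots the mean (Figure 1).
mean_perf = statistics.fmean(one_run(s) for s in range(N_RUNS))
# With T = 10,000 uniform draws per run, the best quality is close to 1
assert mean_perf > 0.99
```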