Contextual Combinatorial Multi-armed Bandits with Volatile Arms and Submodular Reward

Authors: Lixing Chen, Jie Xu, Zhuo Lu

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The performance of CC-MAB is evaluated by experiments conducted on a real-world crowdsourcing dataset, and the results show that the algorithm outperforms the prior art.
Researcher Affiliation | Academia | Lixing Chen, Jie Xu, Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL 33146, {lx.chen, jiexu}@miami.edu; Zhuo Lu, Department of Electrical Engineering, University of South Florida, Tampa, FL 33620, zhuolu@usf.edu
Pseudocode | Yes | Algorithm 1: Greedy Algorithm; Algorithm 2: CC-MAB
Open Source Code | No | The paper does not provide concrete access to source code for the methodology it describes.
Open Datasets | Yes | We evaluate the performance of CC-MAB in a crowdsourcing application based on the data published by Yelp¹. The dataset provides abundant real-world traces for emulating spatial crowdsourcing tasks in which Yelp users are assigned tasks to review local businesses. The dataset contains 61,184 businesses, 366,715 users, and 1,569,264 reviews. ¹Yelp dataset challenge: www.yelp.com/dataset/challenge
Dataset Splits | No | The paper mentions dividing the time span into daily instances, but it does not specify training, validation, or test splits, nor any cross-validation setup.
Hardware Specification | No | The paper does not describe the hardware used to run its experiments, nor does it mention specific hardware models or cloud resources.
Software Dependencies | No | The paper does not name the supporting software, such as libraries or solvers with version numbers, needed to replicate the experiments.
Experiment Setup | No | The paper describes the general experimental setting, including the dataset and reward function, and mentions some parameters such as the budget B and the maximum number of arms. However, it does not provide concrete hyperparameter values, optimizer settings, or other system-level configurations.
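The Pseudocode row pairs a greedy subroutine (Algorithm 1) with the bandit loop (Algorithm 2). As an illustration of the greedy step only, here is a minimal sketch of budgeted greedy submodular maximization; the function names, the coverage reward, and the toy data are assumptions for illustration, not the paper's implementation:

```python
def greedy_select(arms, budget, gain):
    """Greedily pick up to `budget` arms, each round taking the arm
    with the largest marginal gain given what is already selected."""
    selected = []
    remaining = list(arms)
    while len(selected) < budget and remaining:
        best = max(remaining, key=lambda a: gain(selected, a))
        if gain(selected, best) <= 0:  # no arm adds value; stop early
            break
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical example: set coverage, a classic submodular reward.
coverage = {"a": {1, 2, 3, 4}, "b": {4, 5, 6}, "c": {7}}

def marginal_coverage(selected, arm):
    covered = set().union(*(coverage[s] for s in selected)) if selected else set()
    return len(coverage[arm] - covered)

print(greedy_select(coverage, 2, marginal_coverage))  # ['a', 'b']
```

For monotone submodular rewards this greedy rule is the standard (1 - 1/e)-approximation, which is why CMAB-style algorithms commonly use it as their per-round oracle.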
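The Yelp challenge data referenced in the Open Datasets row ships as line-delimited JSON (one record per line). A small sketch of streaming such a file, assuming that format; file names and field names vary by release, and the sample record below is invented for illustration:

```python
import json

def load_records(path, limit=None):
    """Stream up to `limit` line-delimited JSON records from a Yelp-style dump."""
    records = []
    with open(path, encoding="utf-8") as f:
        for i, line in enumerate(f):
            if limit is not None and i >= limit:
                break
            records.append(json.loads(line))
    return records

# Tiny in-memory stand-in for the ~1.5M-review file (fields are hypothetical):
sample = '{"review_id": "r1", "stars": 4}\n{"review_id": "r2", "stars": 5}\n'
reviews = [json.loads(line) for line in sample.splitlines()]
print(len(reviews), reviews[0]["stars"])  # 2 4
```

Parsing line by line avoids holding the full multi-gigabyte dump in memory, which matters when emulating per-day task instances from 1,569,264 reviews.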