Bonus or Not? Learn to Reward in Crowdsourcing
Authors: Ming Yin, Yiling Chen
IJCAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on Amazon Mechanical Turk show that our approach leads to higher utility for the requester than fixed and random bonus schemes do. Simulations on synthesized data sets further demonstrate the robustness of our approach against different worker populations and worker behaviors in improving requester utility. |
| Researcher Affiliation | Academia | Ming Yin, Harvard University, Cambridge, MA, USA (mingyin@fas.harvard.edu); Yiling Chen, Harvard University, Cambridge, MA, USA (yiling@seas.harvard.edu) |
| Pseudocode | No | The paper describes algorithms (n-step look-ahead, MLS-MDP, Q-MDP) but does not present them in pseudocode blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code. |
| Open Datasets | No | In the first phase, we collect a training data set by recruiting 50 MTurk workers to participate in our experiment. For each of the 9 tasks that a worker completes in the HIT, we randomly set it as a bonus task with a 20% chance; whether that task is a bonus task and whether the worker submits a high-quality answer to it (i.e. finds out the target word at least 9 times) is recorded. |
| Dataset Splits | No | The paper describes a two-phase experiment with a training phase and a testing phase, but no explicit validation dataset split is mentioned. |
| Hardware Specification | No | The paper does not specify the hardware used for experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers. |
| Experiment Setup | Yes | Specifically, we run the expectation-maximization algorithm with 100,000 random restarts, and each run is terminated after convergence or 500 iterations, whichever is reached earlier. In searching for a parsimonious model, we experiment on a range of values for the number of hidden states (K = 1–7) to train different IOHMMs, and the IOHMM with the maximized Bayesian information criterion (BIC) score [Schwarz et al., 1978] is selected to be used in the second phase. In our experiment, K = 2 for the selected IOHMM. The utility parameters we use in the experiment are w_h = 0.15, w_l = 0, and c = 0.052. |
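
The Open Datasets row above quotes the paper's data-collection protocol: each of a worker's 9 tasks is independently designated a bonus task with probability 0.2, and the pair (bonus task, high-quality answer) is recorded. A minimal Python sketch of that logging step, where `is_high_quality` is a hypothetical callback standing in for the paper's word-search quality check:

```python
import random

def record_session(is_high_quality, n_tasks=9, bonus_prob=0.2):
    """Log (is_bonus, high_quality) pairs for one worker's HIT."""
    records = []
    for task in range(n_tasks):
        is_bonus = random.random() < bonus_prob  # 20% chance per task
        # is_high_quality is a hypothetical stand-in for the paper's check
        # (the worker finds the target word at least 9 times).
        records.append((is_bonus, is_high_quality(task, is_bonus)))
    return records
```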
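
The Experiment Setup row describes a standard model-selection loop: fit input-output HMMs with K = 1–7 hidden states via EM (many random restarts, each run capped at 500 iterations) and keep the model with the best BIC score. The paper releases no code, so in the sketch below `train_iohmm` and its `log_likelihood`/`n_params` attributes are hypothetical placeholders for one EM fit:

```python
import math

def bic_score(log_likelihood, n_params, n_obs):
    # BIC in "higher is better" form: fit minus a model-size penalty.
    return log_likelihood - 0.5 * n_params * math.log(n_obs)

def select_iohmm(data, max_states=7, restarts=100_000, max_iter=500):
    best_model, best_score = None, float("-inf")
    for k in range(1, max_states + 1):      # K = 1..7 as in the paper
        for _ in range(restarts):           # random EM restarts
            model = train_iohmm(data, n_states=k, max_iter=max_iter)  # hypothetical
            score = bic_score(model.log_likelihood, model.n_params, len(data))
            if score > best_score:
                best_model, best_score = model, score
    return best_model
```

The BIC comparison across all fits is what the row's excerpt means by "the IOHMM with the maximized BIC score"; in the paper's experiment the selected model had K = 2.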