Dynamic Task Allocation Algorithm for Hiring Workers that Learn

Authors: Shengying Pan, Kate Larson, Josh Bradshaw, Edith Law

IJCAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Using a medical time series classification task as a case study, we conducted experiments to show that workers' performance does improve with experience and that it is possible to model and predict their learning rate. Through both simulation and real-world crowdsourcing data, we show that our hiring procedure can lead to high-accuracy outcomes at lower cost compared to other mechanisms."
Researcher Affiliation | Academia | Shengying Pan, University of Waterloo, Canada (s5pan@uwaterloo.ca); Kate Larson, University of Waterloo, Canada (kate.larson@uwaterloo.ca); Josh Bradshaw, University of Waterloo, Canada (jabradsh@uwaterloo.ca); Edith Law, University of Waterloo, Canada (edith.law@uwaterloo.ca)
Pseudocode | No | The paper describes its algorithms and models using mathematical equations and prose (e.g., the MDP definition and the MC-VOI modifications), but it does not include a distinct, structured block of pseudocode or a clearly labeled algorithm figure.
Open Source Code | No | The paper does not provide any statement or link indicating that source code for the described methodology is publicly available.
Open Datasets | Yes | "All the EEG recordings and ground truth sleep spindle identifications used in our experiment come from Devuyst's DREAMS Sleep Spindle Database [Devuyst et al., 2011]."
Dataset Splits | No | The paper mentions assigning "the first 20 tasks/spindles as training tasks" and later states "After removing the first 20 sleep spindles, there are a total of 81 tasks left for testing." It does not explicitly mention a separate validation split. (A minimal sketch of the stated train/test split follows this table.)
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, RAM, cloud instances) used to run the experiments or simulations.
Software Dependencies | No | The paper does not provide specific software dependency details with version numbers (e.g., Python 3.x, PyTorch 1.x, or specific solvers with versions).
Experiment Setup | Yes | "For all experiments, when scoring workers we set δ_now = δ_future = 0.5 and n to be the number of tasks remaining. We set k = 3 for both Random K and Top K so they are not hiring excessive workers and there is no need for any tie breaking. We set the horizon of Dynamic Hiring, l, equal to 5 so it can explore a bit more at the beginning. For the reward functions we set β = 7.0, b_t = 0.85, and γ = 100.0. We ran the experiments 30 times and reported the average performance." (These settings are collected in the configuration sketch after this table.)
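The quoted split implies 101 ordered spindle tasks in total: the first 20 are training tasks and the remaining 81 are used for testing. Below is a minimal Python sketch of that partition; the integer list is a placeholder, not a loader for the DREAMS Sleep Spindle Database, and no function here comes from the authors' code.

```python
# Minimal sketch of the train/test partition quoted in the Dataset Splits
# row: the first 20 sleep-spindle tasks are training tasks and the
# remaining 81 are used for testing (20 + 81 = 101 tasks total).
# The integer list below is a stand-in for the real spindle tasks.

def split_tasks(tasks, n_train=20):
    """Split an ordered task list into training and testing portions."""
    return tasks[:n_train], tasks[n_train:]

tasks = list(range(101))  # placeholder for the 101 ordered spindle tasks
train_tasks, test_tasks = split_tasks(tasks)
assert len(train_tasks) == 20
assert len(test_tasks) == 81  # matches "81 tasks left for testing"
```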
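The experiment settings quoted above can be gathered into one place. This is a bookkeeping sketch assuming nothing beyond the quoted values: the field names are our own labels for the paper's symbols, not identifiers from any released code, and n (the number of tasks remaining) is deliberately omitted because it changes per round.

```python
from dataclasses import dataclass

# Hedged sketch: the hyperparameters quoted in the Experiment Setup row,
# collected into a single configuration object. Field names are our own
# labels for the paper's symbols, not identifiers from any released code.

@dataclass(frozen=True)
class ExperimentConfig:
    delta_now: float = 0.5     # δ_now, weight on current worker performance
    delta_future: float = 0.5  # δ_future, weight on predicted future performance
    k: int = 3                 # pool size for the Random K and Top K baselines
    horizon_l: int = 5         # l, lookahead horizon of Dynamic Hiring
    beta: float = 7.0          # β, reward-function parameter
    b_t: float = 0.85          # b_t, reward-function parameter
    gamma: float = 100.0       # γ, reward-function parameter
    n_runs: int = 30           # each experiment repeated 30 times, averaged

config = ExperimentConfig()
# n, the number of tasks remaining, is set per round rather than fixed here.
```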