What’s Hot in Crowdsourcing and Human Computation
Authors: Jeffrey Bigham
AAAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this short paper, I highlight three topics that stand out to me as representing areas newly at the forefront of the field. Understanding the Crowd Worker Experience: crowdsourcing and human computation rely on workers, yet many of us in technical fields focused first on aspects of computational support for eliciting input, rather than on workers' experience during tasks. New work is shedding light on the worker experience and is yielding insights into the who, how, where, and why of crowd work, which is likely to make crowd-powered systems more effective going forward (Martin et al. 2014; Gupta et al. 2014; Zyskowski et al. 2015). Broadening Definitions of the Crowd: recent work has also started to show how the lessons learned in microtask crowdsourcing can be applied to more broadly defined crowds. For example, social media, citizen science, learning at scale, and ever-expanding sources of paid online workers can all be leveraged to find crowds with different interests, motivations, backgrounds, and skills (Ross et al. 2010; Christoforaki and Ipeirotis 2014; Kittur et al. 2013). Expert and Creative Work: crowdsourcing and human computation are increasingly recognized as being able to contribute to creative and expert work, e.g., helping to explain why a joke is funny (Lin et al. 2014), or amplifying one's creative ability by providing skilled assistance (Kulkarni et al. 2014). Integrating computational support and mediation, in the form of AI planners and prompting methods, can even reduce the overhead involved in organizing and coordinating teams of expert workers (Retelny et al. 2014). |
| Researcher Affiliation | Academia | Jeffrey P. Bigham Human-Computer Interaction and Language Technology Institutes Carnegie Mellon University Pittsburgh, PA 15217 |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper is a review/commentary and does not describe a new methodology for which open-source code would be released. It makes no mention of source code related to its content. |
| Open Datasets | No | This paper is a review/commentary and does not present new experiments. Therefore, it does not use any datasets for training or provide access information for a publicly available dataset. |
| Dataset Splits | No | This paper is a review/commentary and does not present new experiments. Therefore, it does not provide specific dataset split information (like train/validation/test splits) for reproducibility. |
| Hardware Specification | No | This paper is a review/commentary and does not present new experiments. Therefore, it does not provide specific hardware details used for running experiments. |
| Software Dependencies | No | This paper is a review/commentary and does not present new experiments. Therefore, it does not provide specific ancillary software details with version numbers. |
| Experiment Setup | No | This paper is a review/commentary and does not present new experiments. Therefore, it does not provide specific experimental setup details such as hyperparameters or training configurations. |