HirePeer: Impartial Peer-Assessed Hiring at Scale in Expert Crowdsourcing Markets
Authors: Yasmine Kotturi, Anson Kahng, Ariel D. Procaccia, Chinmay Kulkarni
AAAI 2020, pp. 2577–2584 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | This paper reports on three studies that investigate both the costs and the benefits to workers and employers of impartial peer-assessed hiring. |
| Researcher Affiliation | Academia | Yasmine Kotturi (1), Anson Kahng (2), Ariel D. Procaccia (2), Chinmay Kulkarni (1); (1) Human-Computer Interaction Institute, (2) Computer Science Department, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh PA 15213; {ykotturi, akahng, arielpro, chinmayk}@cs.cmu.edu |
| Pseudocode | No | The paper describes a system workflow and uses algorithms but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described, nor does it state that the code is open-source. |
| Open Datasets | No | The paper conducts experiments on Amazon Mechanical Turk, where data is generated by participants (e.g., product reviews, advice pieces). While it describes the data collection, it does not provide concrete access information (link, DOI, repository, or formal citation) for a publicly available dataset that others could use to reproduce the results. |
| Dataset Splits | No | The paper describes experimental setups with participants on Amazon Mechanical Turk but does not provide specific training, validation, or test dataset splits in terms of percentages or sample counts for data partitioning. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment. |
| Experiment Setup | Yes | We conducted a between-subjects randomized experiment in early 2017 on Amazon Mechanical Turk (AMT) to test which of three communications of an impartial mechanism minimized strategic behavior compared to our control condition (n = 170). We used AMT as an experimental setting for two reasons: first, it can be challenging to discern strategic behavior from low-quality work on AMT (Ipeirotis, Provost, and Wang 2010), providing a rich experimental setting to evaluate decision making; second, AMT is a representative sample of a typical online labor market, and has been shown to be a reliable environment for behavioral studies (Mason and Suri 2012). Participants were randomly assigned to one of four between-subjects conditions. The control condition made no mention of an impartial mechanism, and instead simply reminded participants to read instructions carefully (this has been shown in previous crowd work to have no effect). The other three conditions described the algorithm as above (with consequences, policing, or responsibility externalization). We displayed each in a reminder (in bold) at the bottom of the task instructions on AMT, depending on the condition to which a participant was randomly assigned. We also included this reminder a second time, immediately before the task. |
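
The experiment-setup row above describes a four-arm, between-subjects randomization with a condition-specific reminder shown twice per participant. The sketch below illustrates that structure in Python; it is not the authors' code, and the condition labels, reminder wording, and function names are illustrative assumptions rather than details taken from the paper.

```python
"""Minimal sketch (assumed, not the authors' implementation) of the
between-subjects assignment described in the Experiment Setup row:
each AMT participant is uniformly assigned to one of four conditions,
and that condition's reminder is shown in bold at the bottom of the
instructions and again immediately before the task."""
import random

# Hypothetical condition labels: a control plus the three framings of the
# impartial mechanism mentioned in the paper.
CONDITIONS = [
    "control",
    "consequences",
    "policing",
    "responsibility_externalization",
]

# Placeholder reminder text for each condition (assumed wording, not quotes).
REMINDERS = {
    "control": "Please read the task instructions carefully.",
    "consequences": "Reviews are scored by an impartial mechanism; strategic reviews have consequences.",
    "policing": "An impartial mechanism checks reviews for strategic behavior.",
    "responsibility_externalization": "The impartial mechanism, not you, determines the final ranking.",
}


def assign_condition(rng: random.Random) -> str:
    """Uniformly assign a participant to one of the four conditions."""
    return rng.choice(CONDITIONS)


def build_task_page(instructions: str, condition: str) -> str:
    """Append the condition's reminder in bold below the instructions and
    repeat it immediately before the task, mirroring the placement the
    paper describes."""
    reminder = f"<b>{REMINDERS[condition]}</b>"
    return f"{instructions}\n\n{reminder}\n\n[task]\n\n{reminder}"


if __name__ == "__main__":
    rng = random.Random(0)  # fixed seed so the demo assignment is reproducible
    for participant_id in range(5):
        condition = assign_condition(rng)
        page = build_task_page("Review the product description below.", condition)
        print(participant_id, condition)
```

In practice, an assignment like this would be generated server-side when a worker accepts the HIT and logged alongside the worker ID, so that responses can later be analyzed by condition.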