Optimal Posted-Price Mechanism in Microtask Crowdsourcing

Authors: Zehong Hu, Jie Zhang

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We also conduct extensive experiments using real price data to verify the advantages and practicability of our mechanism. To empirically validate the advantages of our mechanism over existing ones, we conduct extensive experiments using three popular worker models as well as the real-world price data collected from MTurk, a widely-adopted microtask crowdsourcing platform."
Researcher Affiliation | Collaboration | Zehong Hu, Jie Zhang; Rolls-Royce@NTU Corporate Lab, School of Computer Science and Engineering, Nanyang Technological University, Singapore (huze0004@e.ntu.edu.sg)
Pseudocode | Yes | Algorithm 1 (Optimal MAB Algorithm, OA-MAB) and Algorithm 2 (Optimal Posted-Price Mechanism, OPPM) are presented with structured steps; a generic bandit-style sketch follows this table.
Open Source Code | No | The paper does not provide any statement or link indicating that its source code is publicly available.
Open Datasets | Yes | "The data (Fig. 1e) collected using MTurk-Tracker [Difallah et al., 2015]"
Dataset Splits | No | The paper does not explicitly specify training, validation, or test dataset splits (e.g., percentages, counts, or predefined splits).
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments (e.g., GPU/CPU models, memory).
Software Dependencies | No | The paper does not list any specific software dependencies with version numbers.
Experiment Setup | Yes | "The settings of our experiments are shown in Table 1." The table details the worker model, price range, and B/N ratio for each experiment (e.g., Expt. #1, private cost model with c_i ~ U[5, 200], price range [5, 200], B/N = 40; Expt. #2, discrete choice model with M_i = 2000). It also states: "The price unit in all experiments is the cent. The expected revenue is estimated with the mean of 100 runs." A simulation sketch of the Expt. #1 setting follows this table.
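
The Pseudocode row above refers to Algorithms 1 and 2, whose exact update rules are not reproduced in this report. The following is a minimal, hypothetical sketch of the general technique such mechanisms build on: a budget-limited posted-price mechanism run as a UCB-style multi-armed bandit over a discretized price grid. The price grid, acceptance model, UCB bonus, and names (posted_price_ucb, accepts) are illustrative assumptions, not the authors' OA-MAB/OPPM.

```python
import math

# Hypothetical sketch, not the paper's OA-MAB/OPPM: a budget-limited
# posted-price mechanism driven by a UCB-style bandit over price arms.
def posted_price_ucb(prices, budget, n_workers, accepts):
    """prices: candidate posted prices (the bandit arms)
    budget: total payment budget B
    n_workers: number of sequentially arriving workers N
    accepts(price) -> bool: a worker's (random) accept decision
    Returns the number of completed tasks."""
    counts = [0] * len(prices)        # times each price was posted
    rates = [0.0] * len(prices)       # empirical acceptance rate per price
    completed, spent = 0, 0.0
    for t in range(1, n_workers + 1):
        untried = [i for i, c in enumerate(counts) if c == 0]
        if untried:
            i = untried[0]            # post every price once first
        else:
            # UCB index: tasks-per-cent estimate plus exploration bonus
            i = max(range(len(prices)),
                    key=lambda j: rates[j] / prices[j]
                    + math.sqrt(2 * math.log(t) / counts[j]))
        price = prices[i]
        if spent + price > budget:
            break                     # cannot afford another acceptance
        accepted = accepts(price)
        counts[i] += 1
        rates[i] += (accepted - rates[i]) / counts[i]
        if accepted:
            spent += price
            completed += 1
    return completed
```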
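
To make the Experiment Setup row concrete, here is a minimal simulation sketch under the stated Table 1 settings for Expt. #1: private costs c_i ~ U[5, 200] cents, posted prices in [5, 200] cents, and budget B = 40 * N, with the expected outcome estimated as the mean of 100 runs. The fixed-price policy, N = 1000, and the price step are assumptions made for illustration; the paper's mechanism posts prices adaptively.

```python
import random

# Sketch of the Expt. #1 setting (Table 1): c_i ~ U[5, 200] cents,
# price range [5, 200], B/N = 40. A worker accepts a posted price p
# iff p >= c_i. The fixed-price policy and N = 1000 are assumptions.
def run_once(price, n_workers=1000, budget_per_worker=40):
    budget = budget_per_worker * n_workers   # B = 40 * N
    spent = completed = 0
    for _ in range(n_workers):
        cost = random.uniform(5, 200)        # private cost model
        if price >= cost and spent + price <= budget:
            spent += price
            completed += 1
    return completed

# Mean of 100 runs, matching the paper's estimation protocol.
for price in range(5, 201, 39):
    mean = sum(run_once(price) for _ in range(100)) / 100
    print(f"price={price:3d}c  mean completed tasks={mean:.1f}")
```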