On the Cost Complexity of Crowdsourcing

Authors: Yili Fang, Hailong Sun, Pengpeng Chen, Jinpeng Huai

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We have verified our work theoretically and empirically. Through a set of case studies, we have verified our method through theoretical analysis and experimental evaluation on real-world datasets.
Researcher Affiliation | Academia | Yili Fang, Hailong Sun, Pengpeng Chen, Jinpeng Huai; SKLSDE, School of Computer Science and Engineering, Beihang University, Beijing, China; Beijing Advanced Innovation Center for Big Data and Brain Computing, Beijing, China
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. It primarily uses mathematical derivations and formulas.
Open Source Code | No | The paper does not include any statements about releasing open-source code or links to a code repository.
Open Datasets | Yes | Here we present the experimental analysis of error rates with two real-world crowdsourcing datasets: dog [Zhou et al., 2012] and temp [Snow et al., 2008]. (An illustrative error-rate sketch for such data appears after this table.)
Dataset Splits | No | The paper mentions using the "dog" and "temp" datasets but does not provide specific details on training, validation, or test splits, or any cross-validation setup.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not list any specific software dependencies or their version numbers used in the experiments or for implementation.
Experiment Setup | No | The paper does not provide specific experimental setup details such as hyperparameters, learning rates, batch sizes, or optimizer settings. It describes varying the number of tasks processed by workers in the empirical analysis but lacks typical machine learning training configurations.
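
The Open Datasets row cites an experimental analysis of error rates on the dog and temp crowdsourcing datasets, which consist of worker-provided labels plus gold answers. As a minimal illustrative sketch, assuming a simple (worker_id, task_id, label) annotation file and a separate gold-label file, the snippet below computes a majority-vote error rate; the file names, format, and helper functions are assumptions made here for illustration, not the authors' code or data layout.

    # Illustrative sketch only: the file layout, names, and majority_vote
    # helper are assumptions, not code or data formats from the paper.
    from collections import Counter, defaultdict

    def load_annotations(path):
        """Read worker annotations as (worker_id, task_id, label) triples.
        Assumes a whitespace-separated file, one annotation per line."""
        triples = []
        with open(path) as f:
            for line in f:
                worker, task, label = line.split()
                triples.append((worker, task, label))
        return triples

    def majority_vote(triples):
        """Aggregate each task's labels by simple majority voting."""
        votes = defaultdict(Counter)
        for worker, task, label in triples:
            votes[task][label] += 1
        return {task: counts.most_common(1)[0][0] for task, counts in votes.items()}

    def error_rate(predictions, gold):
        """Fraction of tasks whose aggregated label disagrees with the gold label."""
        wrong = sum(1 for task, label in predictions.items() if gold.get(task) != label)
        return wrong / len(predictions)

    if __name__ == "__main__":
        # Hypothetical file names for a dataset such as dog or temp.
        triples = load_annotations("dog_annotations.txt")
        gold = dict(line.split() for line in open("dog_gold.txt"))
        preds = majority_vote(triples)
        print("majority-vote error rate:", error_rate(preds, gold))

Running the same computation on subsets of the annotations (for example, keeping only the first k labels from each worker) would mirror the paper's reported setup of varying the number of tasks processed by workers, though the paper itself does not supply such code.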