Cost-Saving Effect of Crowdsourcing Learning

Authors: Lu Wang, Zhi-Hua Zhou

IJCAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our results provide an understanding about how to allocate crowd labels efficiently, and are verified empirically. In the following we start with preliminaries and then present our main results, followed by experiments and conclusions."
Researcher Affiliation | Academia | Lu Wang and Zhi-Hua Zhou, National Key Laboratory for Novel Software Technology, Nanjing University; Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing 210023, China. {wangl, zhouzh}@lamda.nju.edu.cn
Pseudocode | No | The paper contains mathematical derivations and definitions but no structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any statement about releasing source code or a link to a code repository for the methodology described.
Open Datasets | Yes | "Two real-world datasets are adopted, of which the dataset Mushrooms has 112 features and 8124 instances, while the dataset Splice has 60 features and 3175 instances. For each dataset, 30% of instances are used as testing data and the others as a pool from which instances are sampled for training."
Dataset Splits | Yes | "For each dataset, 30% of instances are used as testing data and the others as a pool from which instances are sampled for training."
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU or GPU model, memory) used for the experiments.
Software Dependencies | No | The paper mentions that "J48 decision trees in Weka [Witten and Frank, 1999] are used in our experiments," but it does not provide specific version numbers for Weka or J48.
Experiment Setup | No | The paper describes the allocation schemes for crowd labels (e.g., "numbers of crowd labels per instance are [1, 5, 9, 15] respectively and the total number of crowd labels is fixed to be 1800"), but it does not specify hyperparameters or detailed training configurations for the J48 decision trees (e.g., pruning confidence factor, minimum instances per leaf).
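The allocation schemes quoted above embody the paper's central trade-off: with the total crowd-label budget fixed at 1800, raising the number of labels per instance improves each aggregated label but shrinks the training set. A minimal sketch of that arithmetic, assuming majority-vote aggregation over i.i.d. workers with an illustrative per-label accuracy of 0.7 (the worker accuracy value is our assumption, not a figure from the paper):

```python
# Illustration only (not the authors' code): under a fixed budget of 1800
# crowd labels, each per-instance allocation k buys 1800 // k labeled
# training instances, at a majority-vote label quality that grows with k.
from math import comb

BUDGET = 1800            # total crowd labels (from the paper's setup)
SCHEMES = [1, 5, 9, 15]  # crowd labels per instance (from the paper)
P = 0.7                  # assumed per-label worker accuracy (illustrative)

def majority_vote_accuracy(k: int, p: float) -> float:
    """Probability that the majority vote of k i.i.d. binary labels,
    each correct with probability p, is correct (k odd, so no ties)."""
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(k // 2 + 1, k + 1))

for k in SCHEMES:
    n_instances = BUDGET // k
    acc = majority_vote_accuracy(k, P)
    print(f"k={k:2d}: {n_instances:4d} instances, "
          f"aggregated label accuracy = {acc:.3f}")
```

For example, k=1 yields 1800 instances with 0.700 label accuracy, while k=15 yields only 120 instances with labels correct about 98.7% of the time; which end of this spectrum trains the better classifier is exactly the question the paper's analysis and experiments address.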