Graph Mining Meets Crowdsourcing: Extracting Experts for Answer Aggregation

Authors: Yasushi Kawase, Yuko Kuroki, Atsushi Miyauchi

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Computational experiments using synthetic and real-world datasets demonstrate that our proposed answer aggregation algorithms outperform state-of-the-art algorithms.
Researcher Affiliation | Academia | Yasushi Kawase (Tokyo Institute of Technology, RIKEN AIP), Yuko Kuroki (The University of Tokyo, RIKEN AIP), Atsushi Miyauchi (RIKEN AIP)
Pseudocode | Yes | Algorithm 1: Peeling algorithm (an illustrative greedy-peeling sketch appears after this table)
Open Source Code | No | No explicit statement or link providing access to the source code for the described methodology is present in the paper.
Open Datasets | Yes | Table 2 summarizes six datasets that we use as real-world datasets. They were recently collected by Li et al. [2017] using Lancers, a commercial crowdsourcing platform in Japan.
Dataset Splits | No | The paper uses synthetic and real-world datasets, but it does not specify how these datasets were split into training, validation, and testing subsets, or describe any cross-validation setup.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as CPU or GPU models, or memory specifications.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies (e.g., programming languages, libraries, or frameworks) used in the experiments.
Experiment Setup | Yes | Throughout the experiments, we set s = 5. ... We set α ∼ N(1, 1) and β ∼ N(1, 1) as in Li et al. [2017]. ... We performed the sampling procedure with k = 5 for r = 100 times, as suggested by Li et al. [2017].
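
The only pseudocode reported in the paper is its peeling routine (Algorithm 1). For orientation, below is a minimal sketch of the standard greedy peeling heuristic for the densest-subgraph problem (repeatedly delete a minimum-degree vertex and keep the densest subset seen). This is not the authors' exact Algorithm 1: their objective and the worker graph it peels are specific to the paper, so the plain average-degree objective and all names here are illustrative assumptions.

```python
from collections import defaultdict

def greedy_peeling(edges):
    """Generic greedy peeling sketch for densest subgraph.

    edges: iterable of (u, v) pairs. Returns the vertex subset with the
    highest average degree (|E(S)| / |S|) found while peeling, plus that density.
    """
    # Build a weighted adjacency map (parallel edges accumulate weight).
    adj = defaultdict(dict)
    for u, v in edges:
        adj[u][v] = adj[u].get(v, 0.0) + 1.0
        adj[v][u] = adj[v].get(u, 0.0) + 1.0

    remaining = set(adj)
    degree = {v: sum(adj[v].values()) for v in remaining}
    total_weight = sum(degree.values()) / 2.0  # total edge weight in the current subgraph

    best_density, best_subset = -1.0, set(remaining)
    while remaining:
        density = total_weight / len(remaining)
        if density > best_density:
            best_density, best_subset = density, set(remaining)
        # Peel the vertex of minimum (weighted) degree.
        v = min(remaining, key=degree.get)
        for u, w in adj[v].items():
            if u in remaining:
                degree[u] -= w
                total_weight -= w
        remaining.remove(v)
    return best_subset, best_density
```

A call such as `greedy_peeling([(1, 2), (2, 3), (1, 3), (3, 4)])` returns the densest subset found along the peeling order; the paper's algorithm follows the same peel-and-track pattern but scores worker subsets with its own objective.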
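The Experiment Setup row quotes a sampling protocol (draw k = 5 workers, repeat r = 100 times, following Li et al. [2017]). The snippet below is a minimal sketch of such an evaluation loop under stated assumptions: the answer data is laid out as a worker-to-answers mapping, and a simple majority vote stands in for the aggregator. Neither the data layout nor majority vote is the paper's proposed aggregation method; both are placeholders for illustration.

```python
import random
from collections import Counter

def majority_vote(per_task_answers):
    """Aggregate each task's answers by plurality (ties broken arbitrarily)."""
    return [Counter(ans).most_common(1)[0][0] for ans in per_task_answers]

def sampled_accuracy(answer_matrix, truth, k=5, r=100, seed=0):
    """Average accuracy over r random draws of k workers.

    answer_matrix: dict mapping worker_id -> list of answers (one per task).
    truth: list of ground-truth answers, same task order.
    """
    rng = random.Random(seed)
    workers = list(answer_matrix)
    n_tasks = len(truth)
    accuracies = []
    for _ in range(r):
        sample = rng.sample(workers, k)
        # Gather the sampled workers' answers for every task, then aggregate.
        per_task = [[answer_matrix[w][t] for w in sample] for t in range(n_tasks)]
        predicted = majority_vote(per_task)
        accuracies.append(sum(p == g for p, g in zip(predicted, truth)) / n_tasks)
    return sum(accuracies) / r
```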