Optimality of Belief Propagation for Crowdsourced Classification

Authors: Jungseul Ok, Sewoong Oh, Jinwoo Shin, Yung Yi

ICML 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results suggest that BP is close to optimal for all regimes considered, while existing state-of-the-art algorithms exhibit suboptimal performance. In this section, we evaluate the performance of BP using both synthetic datasets and real-world Amazon Mechanical Turk datasets to study how our theoretical findings are demonstrated in practice.
Researcher Affiliation | Academia | EE Department, Korea Advanced Institute of Science and Technology, Daejeon 34141, South Korea; IESE Department, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
Pseudocode | No | The paper describes the iterative update rules for Belief Propagation using mathematical equations (6), (7), (8), and (9), but does not present them in a structured pseudocode or algorithm block (a hedged sketch of such a message-passing loop appears after this table).
Open Source Code | No | The paper does not provide any concrete access information (a specific link, an explicit statement of code release, or a mention of code in supplementary materials) for the source code of the methodology described in the paper.
Open Datasets | Yes | We use two real-world Amazon Mechanical Turk datasets from (Karger et al., 2011) and (Snow et al., 2008): the SIM dataset and the TEMP dataset (a loader sketch for such task-worker-answer data follows the table).
Dataset Splits | No | The paper does not provide the dataset split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology) needed to reproduce the data partitioning. It mentions using synthetic datasets and subsampling the real datasets, but gives no explicit splits.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or other machine specifications) used to run its experiments.
Software Dependencies | No | The paper does not provide the ancillary software details (e.g., library or solver names with version numbers, such as Python 3.8 or CPLEX 12.4) needed to replicate the experiments.
Experiment Setup | Yes | We terminate all algorithms that run in an iterative manner (i.e., all algorithms except MV) at a maximum of 100 iterations or at a 10^-5 message convergence tolerance; all results are averaged over 100 random samples (a sketch of this stopping rule follows below).
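
Since the paper presents the BP update rules only as equations (6)-(9), a minimal message-passing sketch may help illustrate what a pseudocode block would contain. This is a sketch, not the paper's exact recursion: the one-coin worker model, the discretized uniform reliability prior, and the toy data below are all assumptions for illustration.

```python
import numpy as np

# (task, worker, answer) triples with answers in {+1, -1}; a toy instance,
# not the SIM/TEMP data.
answers = [(0, 0, +1), (0, 1, +1), (0, 2, +1),
           (1, 0, -1), (1, 1, +1), (1, 2, -1)]
n_tasks, n_workers = 2, 3

grid = np.linspace(0.05, 0.95, 19)      # discretized worker reliability p
prior = np.ones_like(grid) / len(grid)  # uniform prior over p (an assumption)

by_task = {t: [(u, a) for (ti, u, a) in answers if ti == t] for t in range(n_tasks)}
by_worker = {u: [(t, a) for (t, ui, a) in answers if ui == u] for u in range(n_workers)}

def lik(p, a, s):
    """One-coin likelihood of answer a when the true label is s."""
    return p if a == s else 1.0 - p

# Task-to-worker messages, stored as P(label = +1); initialized uniformly.
m_tw = {(t, u): 0.5 for (t, u, _) in answers}

for _ in range(100):
    # Worker-to-task messages: average the one-coin likelihood over the
    # reliability grid, weighted by the other tasks' incoming messages.
    m_wt = {}
    for (t, u, a) in answers:
        w = prior.copy()
        for (j, aj) in by_worker[u]:
            if j != t:
                q = m_tw[(j, u)]
                w *= q * lik(grid, aj, +1) + (1 - q) * lik(grid, aj, -1)
        pos = np.sum(w * lik(grid, a, +1))
        neg = np.sum(w * lik(grid, a, -1))
        m_wt[(t, u)] = pos / (pos + neg)

    # Task-to-worker messages: product of the other workers' messages.
    for (t, u, _) in answers:
        pos, neg = 1.0, 1.0
        for (v, _) in by_task[t]:
            if v != u:
                pos *= m_wt[(t, v)]
                neg *= 1.0 - m_wt[(t, v)]
        m_tw[(t, u)] = pos / (pos + neg)

# Final beliefs: combine all workers' messages for each task.
for t in range(n_tasks):
    pos, neg = 1.0, 1.0
    for (v, _) in by_task[t]:
        pos *= m_wt[(t, v)]
        neg *= 1.0 - m_wt[(t, v)]
    print(f"task {t}: P(label = +1) ~= {pos / (pos + neg):.3f}")
```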
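For the dataset rows above: AMT datasets like SIM and TEMP are typically distributed as (task, worker, answer) triples, but the tab-separated layout and the file name below are assumptions, not a documented format. Majority voting (MV), the one non-iterative baseline the setup row mentions, reduces to a sign of summed answers per task.

```python
import csv
from collections import defaultdict

def load_triples(path):
    """Read (task, worker, answer) rows; the TSV layout is an assumption."""
    with open(path) as f:
        return [(task, worker, int(answer))
                for task, worker, answer in csv.reader(f, delimiter="\t")]

def majority_vote(triples):
    """MV baseline: sign of the summed answers per task (answers in {+1, -1})."""
    totals = defaultdict(int)
    for task, _, answer in triples:
        totals[task] += answer
    return {task: (1 if total >= 0 else -1) for task, total in totals.items()}

# Hypothetical usage; "sim.tsv" is a placeholder file name.
# labels = majority_vote(load_triples("sim.tsv"))
```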
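The stopping rule in the setup row (100 iterations maximum, 10^-5 message convergence tolerance, results averaged over 100 random samples) translates directly into code. A minimal sketch, assuming messages live in a NumPy array and `update` performs one BP sweep; the contraction map used below is a stand-in for illustration, not the BP update itself.

```python
import numpy as np

MAX_ITERS, TOL, N_SAMPLES = 100, 1e-5, 100   # values stated in the paper

def run_until_converged(update, msgs):
    """Iterate until the largest message change drops below TOL,
    or until MAX_ITERS sweeps, whichever comes first."""
    for it in range(1, MAX_ITERS + 1):
        new_msgs = update(msgs)
        if np.max(np.abs(new_msgs - msgs)) < TOL:
            return new_msgs, it
        msgs = new_msgs
    return msgs, MAX_ITERS

# Average over 100 random samples, as the paper does; the toy contraction
# map here stands in for one BP sweep.
rng = np.random.default_rng(0)
iters = [run_until_converged(lambda m: 0.5 * m + 0.1, rng.random(8))[1]
         for _ in range(N_SAMPLES)]
print("mean iterations:", np.mean(iters))
```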