An Interpretable Joint Graphical Model for Fact-Checking From Crowds

Authors: An Nguyen, Aditya Kharosekar, Matthew Lease, Byron Wallace

AAAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Evaluation across two real-world datasets and three scenarios shows that: (1) joint modeling of sources, claims and crowd annotators in a PGM improves the predictive performance and interpretability for predicting claim veracity; and (2) our variational inference method achieves scalably fast parameter estimation, with only modest degradation in performance compared to Gibbs sampling." |
| Researcher Affiliation | Academia | An T. Nguyen (University of Texas at Austin), Aditya Kharosekar (University of Texas at Austin), Matthew Lease (University of Texas at Austin), Byron C. Wallace (Northeastern University) |
| Pseudocode | No | The paper describes its algorithms (Gibbs sampling, variational inference) through mathematical equations and descriptive text, but contains no structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | "We share our web demo, model source code, and the 13K crowd labels we collected." |
| Open Datasets | Yes | "We report results on two datasets. First, we use the Emergent dataset (Ferreira and Vlachos 2016)... For our second dataset, we use Snopes (Popat et al. 2017) for our transfer scenario." |
| Dataset Splits | Yes | "Ferreira and Vlachos (2016) split the dataset into train and test sets of 240 and 60 claims... We further split their training set into our own training and validation sets of 180 and 60 claims, respectively." |
| Hardware Specification | Yes | "In transferring to the Snopes dataset, Variational takes nearly 2 hours (on a 3.50GHz machine)." |
| Software Dependencies | No | The paper does not list the ancillary software details, such as library names with version numbers, that would be needed to replicate the experiments. |
| Experiment Setup | Yes | "We explored setting these to values over {0, 1, 10} and we report results for the best configuration λc = 10, λs = 0 and λr = 1 for all methods." |
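The hyperparameter search quoted in the Experiment Setup row amounts to a small grid search: each of the three regularization weights (λc, λs, λr) is tried over {0, 1, 10}, and the configuration with the best validation score is kept. A minimal sketch of that procedure is below; the `validation_accuracy` function is a hypothetical stand-in for training the authors' joint model on the 180-claim training split and scoring it on the 60-claim validation split, and its scoring rule here is a placeholder, not the paper's actual results.

```python
from itertools import product

# Grid reported in the paper: each regularization weight is searched
# over {0, 1, 10}. (lambda_c, lambda_s, lambda_r) correspond to the
# claim, source, and annotator-reliability terms.
GRID = [0, 1, 10]

def validation_accuracy(lam_c, lam_s, lam_r):
    """Placeholder scorer. In the real experiment this would train the
    joint graphical model with the given weights and return accuracy on
    the validation claims; here it is a dummy that simply prefers the
    configuration the paper reports as best (lambda_c=10, lambda_s=0,
    lambda_r=1), so the loop structure can be demonstrated end to end."""
    target = (10, 0, 1)
    mismatches = sum(a != b for a, b in zip((lam_c, lam_s, lam_r), target))
    return 1.0 - 0.1 * mismatches

configs = list(product(GRID, repeat=3))  # 3^3 = 27 candidate configurations
best = max(configs, key=lambda cfg: validation_accuracy(*cfg))
print(len(configs), best)  # 27 configurations; dummy scorer selects (10, 0, 1)
```

Exhaustive search is feasible here only because the grid is tiny (27 configurations); the paper's note that variational inference is much faster than Gibbs sampling is what makes repeating training across the grid practical.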