Neural User Response Generator: Fake News Detection with Collective User Intelligence

Authors: Feng Qian, Chengyue Gong, Karishma Sharma, Yan Liu

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct experiments on one available dataset and a larger dataset collected by ourselves. Experimental results show that TCNN-URG outperforms the baselines based on prior approaches that detect fake news from article text alone.
Researcher Affiliation | Academia | Feng Qian (1), Chengyue Gong (1), Karishma Sharma (2), Yan Liu (2); (1) Peking University, (2) University of Southern California; nickqian@pku.edu.cn, cygong@pku.edu.cn, krsharma@usc.edu, yanliu.cs@usc.edu
Pseudocode | No | The paper describes the model architecture and training procedure in text and diagrams (Figures 2 and 3), but does not provide structured pseudocode or algorithm blocks.
Open Source Code | No | The paper states, 'We plan to publish this dataset along with the collected list of websites,' but does not mention making the source code for the methodology publicly available.
Open Datasets | Yes | We chose to conduct experiments on a public Weibo (a Chinese social network) dataset [Ma et al., 2016]. ... We plan to publish this dataset along with the collected list of websites.
Dataset Splits | Yes | We use ten-fold cross validation for evaluation of the model. (A cross-validation sketch follows the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., GPU/CPU models, memory specifications).
Software Dependencies | No | The paper states 'We build and train the model using TensorFlow' and mentions using the 'Stanford CoreNLP tool' and 'Stanford Parser', but does not provide specific version numbers for any of these software dependencies.
Experiment Setup | Yes | In the experiments, we set the word embedding dimension to 128 and the filter sizes to 2, 4, and 5. For each filter size, 64 filters are initialized randomly and trained. When generating user responses from URG, we use the average of 100 samples to get accurate estimates of the expectation over the distribution of generated user responses. For training, we use a mini-batch size of 64, and articles of similar length are organized in the same batch. (Sketches of this setup follow the table.)
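
The paper reports ten-fold cross validation but gives no further split details. Below is a minimal sketch of how such an evaluation is commonly set up, using scikit-learn's KFold; the use of scikit-learn, the shuffling seed, and the names `articles`, `labels`, and `train_and_evaluate` are illustrative assumptions, not the authors' code.

```python
# Minimal ten-fold cross-validation loop (assumption: numpy arrays as inputs).
# `train_and_evaluate` is a hypothetical stand-in for training TCNN-URG on one
# fold and returning an evaluation score on the held-out fold.
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(articles, labels, train_and_evaluate, n_splits=10, seed=0):
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in kf.split(articles):
        score = train_and_evaluate(
            articles[train_idx], labels[train_idx],  # training folds
            articles[test_idx], labels[test_idx],    # held-out fold
        )
        scores.append(score)
    return np.mean(scores), np.std(scores)  # report mean score across folds
```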
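The experiment-setup row pins down the convolutional hyperparameters: embedding dimension 128, filter sizes 2, 4, and 5 with 64 randomly initialized filters each, and mini-batches of 64. The sketch below wires those numbers into a generic text CNN using current tf.keras; the paper's TensorFlow version is unspecified, and `vocab_size`, `max_len`, and the single-level architecture shown here are assumptions, not the authors' exact two-part TCNN-URG model.

```python
# Sketch of a text CNN with the stated hyperparameters, assuming tf.keras.
import tensorflow as tf

def build_text_cnn(vocab_size=50000, max_len=300, num_classes=2):
    inputs = tf.keras.Input(shape=(max_len,), dtype="int32")
    # Word embedding dimension 128, as stated in the paper.
    x = tf.keras.layers.Embedding(vocab_size, 128)(inputs)
    branches = []
    for width in (2, 4, 5):  # filter sizes from the paper
        # 64 randomly initialized filters per filter size.
        b = tf.keras.layers.Conv1D(64, width, activation="relu")(x)
        b = tf.keras.layers.GlobalMaxPooling1D()(b)
        branches.append(b)
    x = tf.keras.layers.Concatenate()(branches)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Training would use the paper's mini-batch size of 64, e.g.:
# model.fit(token_ids, labels, batch_size=64)
# The paper additionally buckets articles of similar length into the
# same batch, which this sketch does not implement.
```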
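The same row says the expectation over generated user responses is estimated by averaging 100 samples from the URG. A minimal Monte Carlo sketch of that step follows; `sample_response` is a hypothetical hook into a trained generator and is not part of the paper's published interface.

```python
# Monte Carlo estimate of the expected user-response representation:
# draw 100 samples from the generator for one article and average them.
# `sample_response` is a hypothetical callable returning one response
# vector per call; only the 100-sample averaging comes from the paper.
import numpy as np

def expected_response(article, sample_response, num_samples=100):
    samples = [sample_response(article) for _ in range(num_samples)]
    return np.mean(samples, axis=0)
```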