Weak Supervision for Fake News Detection via Reinforcement Learning

Authors: Yaqing Wang, Weifeng Yang, Fenglong Ma, Jin Xu, Bin Zhong, Qiang Deng, Jing Gao

AAAI 2020

Reproducibility assessment (variable, result, and LLM response):
Research Type: Experimental. LLM response: "We tested the proposed framework on a large collection of news articles published via WeChat official accounts and associated user reports. Extensive experiments on this dataset show that the proposed WeFEND model achieves the best performance compared with the state-of-the-art methods."
Researcher Affiliation: Collaboration. LLM response: "Yaqing Wang (1), Weifeng Yang (2), Fenglong Ma (3), Jin Xu (2), Bin Zhong (2), Qiang Deng (2), Jing Gao (1)*; (1) State University of New York at Buffalo, New York, USA; (2) Data Quality Team, WeChat, Tencent Inc., China; (3) Pennsylvania State University, Pennsylvania, USA"
Pseudocode: No. LLM response: The paper describes its methods verbally and with architectural diagrams (Figure 1) and mathematical equations, but it does not include structured pseudocode or a labeled algorithm block.
Open Source Code: Yes. LLM response: "Moreover, we will publicly release this dataset (footnote 2) to the community to encourage further research on fake news detection with user reports." Footnote 2: https://github.com/yaqingwang/WeFEND-AAAI20
Open Datasets: Yes. LLM response: "Moreover, we will publicly release this dataset (footnote 2) to the community to encourage further research on fake news detection with user reports." Footnote 2: https://github.com/yaqingwang/WeFEND-AAAI20
Dataset Splits: Yes. LLM response: "We split the fake news and real news into training and testing sets according to the post timestamp. The news in the training data were posted from March 2018 to September 2018, and the testing data from September 2018 to October 2018. ... Towards this end, we first extract a validation dataset from the whole labeled dataset. Note that all the trained models will be tested on this extracted validation dataset. ... The detailed statistics are shown in Table 1." Table 1 (excerpt): the labeled training set contains 1,220 fake and 1,220 real news articles. A sketch of this timestamp-based split is given after the table.
Hardware Specification: No. LLM response: The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed machine specifications) used for running its experiments. It only mentions the software framework PyTorch 1.2.
Software Dependencies: Yes. LLM response: "We implement all the deep learning baselines and the proposed framework with PyTorch 1.2. For training models, we use Adam (Kingma and Ba 2014) in the default setting."
Experiment Setup: Yes. LLM response: "The 200-dimensional pre-trained word-embedding weights (Song et al. 2018) are used to initialize the parameters of the embedding layer. In the annotator, the weight w_r ∈ R^{40×20}. In the reinforced selector, w_{s1} ∈ R^{8×8} and w_{s2} ∈ R^{8×1}. We set the bag size B equal to the mini-batch size, τ = 0.001, and K = 200. We use Adam (Kingma and Ba 2014) in the default setting. The learning rate α is 0.0001. We use a mini-batch size of 100 and 100 training epochs." A hedged configuration sketch using these values appears below.
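
For readers reconstructing the dataset split, the following is a minimal sketch of the timestamp-based partition described under Dataset Splits. The articles structure, the posted_at field, and the exact boundary dates are assumptions for illustration; the paper states only the month ranges, and the released dataset may use a different schema.

    from datetime import datetime

    # Assumed boundary dates; the paper gives only month ranges
    # (training: March-September 2018, testing: September-October 2018).
    TRAIN_START = datetime(2018, 3, 1)
    SPLIT_POINT = datetime(2018, 9, 1)
    TEST_END = datetime(2018, 11, 1)

    def temporal_split(articles):
        """Partition news articles by post timestamp.

        `articles` is assumed to be a list of dicts carrying a
        `posted_at` datetime field.
        """
        train = [a for a in articles
                 if TRAIN_START <= a["posted_at"] < SPLIT_POINT]
        test = [a for a in articles
                if SPLIT_POINT <= a["posted_at"] < TEST_END]
        return train, test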
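
Likewise, here is a minimal PyTorch sketch instantiating the hyperparameters quoted in the Experiment Setup row. Only the weight shapes, the optimizer choice, the learning rate, the batch size, and the epoch count come from the paper; the variable names, the Xavier initialization, and the way these parameters would be wired into the full WeFEND model are assumptions, not the authors' released code.

    import torch
    import torch.nn as nn

    # Hyperparameters quoted in the Experiment Setup row.
    EMBED_DIM = 200   # pre-trained word-embedding dimension (Song et al. 2018)
    BATCH_SIZE = 100  # mini-batch size; bag size B is set equal to this
    EPOCHS = 100
    LR = 1e-4         # learning rate alpha
    TAU = 0.001
    K = 200

    # Weight shapes as reported: w_r in R^{40x20} (annotator),
    # w_s1 in R^{8x8} and w_s2 in R^{8x1} (reinforced selector).
    w_r = nn.Parameter(torch.empty(40, 20))
    w_s1 = nn.Parameter(torch.empty(8, 8))
    w_s2 = nn.Parameter(torch.empty(8, 1))
    for w in (w_r, w_s1, w_s2):
        nn.init.xavier_uniform_(w)  # initialization scheme is an assumption

    # Adam in its default setting, apart from the reported learning rate.
    optimizer = torch.optim.Adam([w_r, w_s1, w_s2], lr=LR)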