Rumor Detection on Social Media with Bi-Directional Graph Convolutional Networks

Authors: Tian Bian, Xi Xiao, Tingyang Xu, Peilin Zhao, Wenbing Huang, Yu Rong, Junzhou Huang

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Encouraging empirical results on several benchmarks confirm the superiority of the proposed method over the state-of-the-art approaches." "Experimental results on three real-world datasets show that our Bi-GCN method outperforms several state-of-the-art approaches; and for the task of early detection of rumors, which is quite crucial to identify rumors in real time and prevent them from spreading, Bi-GCN also achieves much higher effectiveness."
Researcher Affiliation | Collaboration | Tian Bian (1,2), Xi Xiao (1), Tingyang Xu (2), Peilin Zhao (2), Wenbing Huang (2), Yu Rong (2), Junzhou Huang (2); affiliations: (1) Tsinghua University, (2) Tencent AI Lab
Pseudocode | No | The paper describes the steps of its model but does not include a formally labeled pseudocode block or algorithm.
Open Source Code | No | The paper mentions and links to the third-party libraries (scikit-learn, Keras, PyTorch) used to implement the models, but it does not provide or link to source code for its own proposed method.
Open Datasets | Yes | "We evaluate our proposed method on three real-world datasets: Weibo (Ma et al. 2016), Twitter15 (Ma, Gao, and Wong 2017), and Twitter16 (Ma, Gao, and Wong 2017)."
Dataset Splits | Yes | "To make a fair comparison, we randomly split the datasets into five parts, and conduct 5-fold cross-validation to obtain robust results. The training process is iterated upon 200 epochs, and early stopping (Yao, Rosasco, and Caponnetto 2007) is applied when the validation loss stops decreasing by 10 epochs."
Hardware Specification | No | The paper does not specify the hardware used to run the experiments, such as CPU or GPU models or memory details.
Software Dependencies | No | "We implement DTC and SVM-based models with scikit-learn; PPC_RNN+CNN with Keras; RvNN and our method with PyTorch." While the paper names this software, it does not provide specific version numbers for these dependencies.
Experiment Setup | Yes | "The dimension of each node's hidden feature vectors is 64. The dropping rate in DropEdge is 0.2 and the rate of dropout is 0.5. The training process is iterated upon 200 epochs, and early stopping (Yao, Rosasco, and Caponnetto 2007) is applied when the validation loss stops decreasing by 10 epochs."
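Since the paper provides neither pseudocode nor released source code, the core Bi-GCN idea can still be sketched from its description: one GCN runs over the top-down propagation tree (adjacency A), a second runs over the transposed graph (A^T) for bottom-up dispersion, and the two node representations are concatenated. The minimal NumPy sketch below is illustrative only; the helper names, the toy tree, and the single-layer formulation are our assumptions, not the authors' implementation (which uses PyTorch, DropEdge, and root-feature enhancement).

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN layer: ReLU(D^{-1/2} (A + I) D^{-1/2} X W).
    Hypothetical helper, not the authors' code.
    A: (N, N) adjacency, X: (N, F) node features, W: (F, H) weights."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)

def bi_gcn_forward(A, X, W_td, W_bu):
    """Bi-directional pass: top-down uses A (rumor propagation),
    bottom-up uses A^T (rumor dispersion); concatenate both."""
    h_td = gcn_layer(A, X, W_td)
    h_bu = gcn_layer(A.T, X, W_bu)
    return np.concatenate([h_td, h_bu], axis=1)

# Toy propagation tree: source post 0 -> replies {1, 2}, 1 -> 3
A = np.zeros((4, 4))
for src, dst in [(0, 1), (0, 2), (1, 3)]:
    A[src, dst] = 1.0
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))
# Hidden size 64 per direction, matching the paper's setup
H = bi_gcn_forward(A, X, rng.standard_normal((8, 64)),
                   rng.standard_normal((8, 64)))
print(H.shape)  # (4, 128): 64-dim top-down + 64-dim bottom-up per node
```

In the paper, the two directional representations are mean-pooled and fed to a softmax classifier; that readout is omitted here for brevity.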
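The 5-fold cross-validation protocol quoted in the Dataset Splits row can be sketched as follows; the seed and the helper name are hypothetical, and the authors' exact fold assignments are not published.

```python
import random

def five_fold_splits(n_items, seed=42):
    """Randomly partition item indices into 5 parts; each part serves
    once as the held-out set while the other four are used for training.
    A sketch of the paper's protocol, not the authors' actual split."""
    idx = list(range(n_items))
    random.Random(seed).shuffle(idx)            # random split, as described
    folds = [idx[k::5] for k in range(5)]       # 5 near-equal parts
    for k in range(5):
        test = folds[k]
        train = [i for j, f in enumerate(folds) if j != k for i in f]
        yield train, test

# Example: 100 labeled events -> five 80/20 train/test splits
splits = list(five_fold_splits(100))
```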
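The training schedule in the Experiment Setup row (at most 200 epochs, stopping when validation loss has not decreased for 10 epochs) amounts to standard patience-based early stopping. A minimal sketch, assuming a hypothetical `step(epoch)` callback that trains one epoch and returns the validation loss:

```python
def train_with_early_stopping(step, max_epochs=200, patience=10):
    """Run up to `max_epochs` epochs; stop once the validation loss
    has not improved for `patience` consecutive epochs, matching the
    paper's stated setup. `step` is a hypothetical callback."""
    best, stale, epochs_run = float("inf"), 0, 0
    for epoch in range(max_epochs):
        val_loss = step(epoch)
        epochs_run += 1
        if val_loss < best - 1e-8:   # strict improvement resets patience
            best, stale = val_loss, 0
        else:
            stale += 1
            if stale >= patience:
                break                # validation loss stopped decreasing
    return best, epochs_run

# Simulated validation losses: improve for 20 epochs, then plateau
losses = [1.0 / (e + 1) for e in range(20)] + [0.05] * 200
best, ran = train_with_early_stopping(lambda e: losses[e])
print(best, ran)  # 0.05 30  (20 improving epochs + 10 patience epochs)
```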