Interpretable Rumor Detection in Microblogs by Attending to User Interactions
Authors: Ling Min Serena Khoo, Hai Leong Chieu, Zhong Qian, Jing Jiang
AAAI 2020, pp. 8783-8790 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our model based on two rumour detection data sets (i) Twitter15 and Twitter16 and (ii) PHEME 5 events data set. We show that our best models outperform current state-of-the-art models for both data sets. |
| Researcher Affiliation | Academia | Ling Min Serena Khoo, Hai Leong Chieu DSO National Laboratories 12 Science Park Drive Singapore 118225 {klingmin, chaileon}@dso.org.sg; Zhong Qian, Jing Jiang Singapore Management University 80 Stamford Road Singapore 178902 qianzhongqz@163.com, jingjiang@smu.edu.sg |
| Pseudocode | No | The paper describes the models and their components, but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include any explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We conducted experiments on two rumor detection data sets, namely the Twitter15 and Twitter16 data, and the PHEME 5 data. |
| Dataset Splits | Yes | For the PHEME 5 data set, we follow the experimental setting of Kumar and Carley (2019) in using event-wise cross-validation. We used the original splits released by Ma, Gao, and Wong (2018b) to split our data. (An illustrative event-wise split sketch follows the table.) |
| Hardware Specification | No | The paper only states 'due to memory limitations of the GPUs used' without providing specific hardware details such as GPU models (e.g., NVIDIA V100), CPU types, or memory sizes. |
| Software Dependencies | No | The paper mentions 'GLOVE 300d' and 'BERT' embeddings, and 'ADAM optimizer', but does not provide specific version numbers for any software libraries or frameworks used (e.g., PyTorch 1.9, TensorFlow 2.x). |
| Experiment Setup | Yes | Our model dimension is 300 and the dimension of intermediate output is 600. We used 12 post-level MHA layers and 2 token-level MHA layers. For training of the model, we used the ADAM optimizer with 6000 warm start-up steps. We used an initial learning rate of 0.01 with 0.3 dropout. We used a batch size of 32 for PLAN and StA-PLAN, and 16 for StA-HiTPLAN due to memory limitations of the GPUs used. (A hedged configuration sketch follows the table.) |
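
Since no code is released, the snippet below is a minimal sketch of the event-wise (leave-one-event-out) cross-validation protocol reported for the PHEME 5-events data set. The event names follow the standard PHEME 5-events release; the `event` field layout and the function name are illustrative assumptions, not the authors' implementation.

```python
# Sketch of event-wise (leave-one-event-out) cross-validation on PHEME 5 events.
# The thread dictionaries and their "event" key are assumed for illustration.
from typing import Dict, Iterator, List, Tuple

PHEME_EVENTS = ["charliehebdo", "ferguson", "germanwings-crash",
                "ottawashooting", "sydneysiege"]

def event_wise_splits(threads: List[Dict]) -> Iterator[Tuple[List[Dict], List[Dict]]]:
    """Yield (train, test) pairs, holding out one event per fold."""
    for held_out in PHEME_EVENTS:
        train = [t for t in threads if t["event"] != held_out]
        test = [t for t in threads if t["event"] == held_out]
        yield train, test

# Usage: train and evaluate a fresh model on each fold.
# for fold, (train, test) in enumerate(event_wise_splits(all_threads)):
#     ...
```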
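The experiment-setup row can also be read as a concrete hyperparameter configuration. The sketch below assumes a PyTorch Transformer-style encoder for the post-level attention stack and an inverse-square-root warm-up schedule; the module choices, head count, Adam betas, and scheduler shape are assumptions (the paper specifies only the values quoted above), so this is a reconstruction, not the authors' code.

```python
# Hedged sketch of the reported training configuration (values from the paper;
# module names, nhead, betas, and the exact warm-up schedule are assumptions).
import torch
import torch.nn as nn

MODEL_DIM = 300          # reported model dimension
FFN_DIM = 600            # reported intermediate output dimension
POST_MHA_LAYERS = 12     # post-level multi-head attention layers
TOKEN_MHA_LAYERS = 2     # token-level multi-head attention layers (StA-HiTPLAN)
DROPOUT = 0.3
WARMUP_STEPS = 6000
BATCH_SIZE_PLAN = 32     # PLAN / StA-PLAN
BATCH_SIZE_HITPLAN = 16  # StA-HiTPLAN, reduced for GPU memory

def make_post_encoder() -> nn.TransformerEncoder:
    """Stack of post-level self-attention layers (Transformer-encoder style, assumed)."""
    layer = nn.TransformerEncoderLayer(
        d_model=MODEL_DIM, nhead=6, dim_feedforward=FFN_DIM, dropout=DROPOUT)
    return nn.TransformerEncoder(layer, num_layers=POST_MHA_LAYERS)

model = make_post_encoder()
# Betas follow the common Transformer recipe; the paper does not state them.
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, betas=(0.9, 0.98))

def warmup_lr(step: int, warmup: int = WARMUP_STEPS) -> float:
    # Linear warm-up for the first `warmup` steps, then inverse-sqrt decay
    # (a common choice; the exact schedule is not specified in the paper).
    step = max(step, 1)
    return min(step / warmup, (warmup / step) ** 0.5)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=warmup_lr)
```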