KAN: Knowledge-aware Attention Network for Fake News Detection
Authors: Yaqian Dun, Kefei Tu, Chen Chen, Chunyan Hou, Xiaojie Yuan
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on three public datasets show that our model outperforms the state-of-the-art methods, and also validate the effectiveness of knowledge attention. |
| Researcher Affiliation | Academia | (1) College of Computer Science, Nankai University, Tianjin, China; (2) Tianjin Key Laboratory of Network and Data Security Technology, Tianjin, China; (3) School of Computer Science and Engineering, Tianjin University of Technology, Tianjin, China |
| Pseudocode | No | The paper describes the model architecture and mathematical formulas but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | In order to evaluate the performance of KAN, we conduct experiments on three benchmark datasets. The first two datasets, PolitiFact and GossipCop, are from the benchmark collection FakeNewsNet (Shu et al. 2018, 2017). The third dataset is PHEME (Zubiaga, Liakata, and Procter 2017). |
| Dataset Splits | Yes | We hold out 10% of the news in each dataset as validation sets for tuning the hyperparameters, and for the rest of the news, we conduct 5-fold cross-validation and use Precision, Recall, F1, Accuracy and AUC as evaluation metrics. (A sketch of this protocol appears after the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details such as CPU or GPU models, memory, or cloud instance types used for running the experiments. |
| Software Dependencies | No | The paper mentions software components and tools (e.g., Transformer Encoder, Adam optimizer, word2vec, TagMe, Wikidata) but does not provide specific version numbers for these or for any programming languages or libraries used. |
| Experiment Setup | Yes | The word embeddings and entity embeddings are pre-trained on each dataset using word2vec (Mikolov et al. 2013), and the dimension of both is set to 100. We use a one-layer Transformer Encoder as the news and knowledge encoder. The Transformer FFN inner representation size is set to dff = 2048 and the number of attention heads is h = 4. An Adam optimizer (Kingma and Ba 2015) is adopted to train KAN by optimizing the log loss, and the dropout rate is set to 0.5. (A configuration sketch appears after the table.) |
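
The Dataset Splits row describes the evaluation protocol (10% held out per dataset for hyperparameter tuning, 5-fold cross-validation on the remainder, with Precision, Recall, F1, Accuracy and AUC). The sketch below is a minimal, hypothetical rendering of that protocol using scikit-learn; `build_and_train_model` and its `predict_proba`-style interface are assumptions, not the authors' evaluation script (no code was released).

```python
# Hypothetical sketch of the paper's evaluation protocol: hold out 10% of
# each dataset for hyperparameter tuning, then run 5-fold cross-validation
# on the rest and average Precision, Recall, F1, Accuracy and AUC.
import numpy as np
from sklearn.model_selection import train_test_split, StratifiedKFold
from sklearn.metrics import (precision_score, recall_score, f1_score,
                             accuracy_score, roc_auc_score)

def evaluate(news, labels, build_and_train_model, seed=42):
    # 10% validation split, reserved for hyperparameter tuning only
    rest_x, val_x, rest_y, val_y = train_test_split(
        news, labels, test_size=0.1, stratify=labels, random_state=seed)

    fold_metrics = []
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    for train_idx, test_idx in skf.split(rest_x, rest_y):
        # build_and_train_model is an assumed trainer returning a fitted model
        model = build_and_train_model(rest_x[train_idx], rest_y[train_idx])
        probs = model.predict_proba(rest_x[test_idx])      # assumed: fake-news probability per article
        preds = (probs >= 0.5).astype(int)
        y = rest_y[test_idx]
        fold_metrics.append((precision_score(y, preds), recall_score(y, preds),
                             f1_score(y, preds), accuracy_score(y, preds),
                             roc_auc_score(y, probs)))
    # averaged Precision, Recall, F1, Accuracy, AUC across the 5 folds
    return np.mean(fold_metrics, axis=0)
```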
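The Experiment Setup row lists the reported hyperparameters: 100-dimensional word2vec embeddings, a one-layer Transformer encoder with 4 attention heads and a 2048-dimensional feed-forward layer, dropout 0.5, and an Adam optimizer minimizing the log loss. The following PyTorch sketch wires those numbers into a single-encoder classifier under stated assumptions (mean pooling, a linear head, and an illustrative vocabulary size); it is not the authors' KAN implementation, which combines separate news and knowledge encoders with knowledge attention.

```python
# Minimal sketch of the reported training configuration, not the full KAN model.
import torch
import torch.nn as nn

EMB_DIM, N_HEADS, FFN_DIM, DROPOUT = 100, 4, 2048, 0.5   # values reported in the paper

class EncoderClassifier(nn.Module):
    def __init__(self, pretrained_embeddings):
        # pretrained_embeddings: word2vec matrix of shape (vocab_size, 100)
        super().__init__()
        self.embed = nn.Embedding.from_pretrained(pretrained_embeddings, freeze=False)
        layer = nn.TransformerEncoderLayer(d_model=EMB_DIM, nhead=N_HEADS,
                                           dim_feedforward=FFN_DIM,
                                           dropout=DROPOUT, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)   # one-layer encoder
        self.classifier = nn.Linear(EMB_DIM, 1)                     # assumed linear head

    def forward(self, token_ids):
        h = self.encoder(self.embed(token_ids))            # (batch, seq_len, 100)
        return self.classifier(h.mean(dim=1)).squeeze(-1)  # mean-pooled logit per article

# Training skeleton: Adam optimizer and log (binary cross-entropy) loss, as reported.
# model = EncoderClassifier(torch.randn(30000, EMB_DIM))   # hypothetical vocabulary size
# optimizer = torch.optim.Adam(model.parameters())
# loss_fn = nn.BCEWithLogitsLoss()
```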