Unified Evidence Enhancement Inference Framework for Fake News Detection
Authors: Lianwei Wu, Linyong Wang, Yongqiang Zhao
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on three public datasets confirm the effectiveness and interpretability of our UEEI. From Section 4 (Experiments): In this section, we endeavor to answer the following questions: Q1: Could UEEI achieve more excellent performance? Q2: Does each layer contribute to improving detection? Q3: How much does the exploration of potentially suspicious fragments of news boost model performance? Q4: What are the advantages of our multi-view coherence inference compared to existing reasoning ways? Q5: Is the obtained evidence reasonable and interpretable? |
| Researcher Affiliation | Academia | Lianwei Wu (1,2), Linyong Wang (1,2), Yongqiang Zhao (3); (1) ASGO, School of Computer Science, Northwestern Polytechnical University, Xi'an, China; (2) Research & Development Institute of Northwestern Polytechnical University in Shenzhen; (3) School of Computer Science, Peking University, Beijing, China |
| Pseudocode | No | The paper describes methods and processes but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' block with structured code-like steps. |
| Open Source Code | No | The paper does not contain any statement regarding the release of open-source code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | Yes | PolitiFact and GossipCop are two English datasets [Shu et al., 2020] and Weibo is a Chinese dataset [Liu et al., 2018]. |
| Dataset Splits | No | In dataset partitioning, we hold out 75% of the news as training set and the remaining 25% as test set. (A hold-out split of this form is sketched below the table.) |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions using 'BERT model' and 'BART' as well as 'Adam optimizer', but it does not specify version numbers for any software dependencies or libraries like Python, PyTorch, or TensorFlow. |
| Experiment Setup | Yes | For model configuration, we adopt BERT-base as embeddings. The embedding size d is 768. K clusters of viewpoints are from [3, 4, 5], and α is 0.65. K in the top-K words is 10. In self-attention networks, attention heads and blocks are set to 6 and 4, respectively, and the dropout of multi-head attention is 0.6. We adopt Adam optimizer as the model optimizer. The learning rate is uniformly set to 10⁻⁴. We utilize L2-regularizers with the fully-connected layers and the mini-batch size is 32. (The reported configuration is collected in the sketch below the table.) |
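
The 75%/25% hold-out reported under Dataset Splits can be illustrated with a simple split. Below is a minimal sketch assuming a Python/scikit-learn environment; the variable names (`news_items`, `labels`), the label convention, and the random seed are placeholders, since the paper reports neither a seed nor a separate validation set.

```python
from sklearn.model_selection import train_test_split

# Minimal sketch of the reported 75%/25% hold-out split.
# `news_items`, `labels`, and the random seed are illustrative assumptions;
# the paper does not report a seed or a validation split.
news_items = ["news text 1", "news text 2", "news text 3", "news text 4"]
labels = [0, 1, 0, 1]  # 0 = real, 1 = fake (label convention assumed)

train_texts, test_texts, train_labels, test_labels = train_test_split(
    news_items, labels, test_size=0.25, random_state=42
)
```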
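
The Experiment Setup row can also be read as a training configuration. The sketch below collects the reported hyperparameters and pairs them with an Adam optimizer whose weight decay stands in for the L2 regularization on the fully-connected layers. It assumes a PyTorch implementation; the placeholder classifier and the weight-decay strength are assumptions, since the authors release no code for UEEI.

```python
import torch

# Hyperparameters reported in the paper's experiment setup.
config = {
    "embeddings": "bert-base",        # BERT-base token embeddings
    "embedding_dim": 768,             # d = 768
    "viewpoint_clusters": (3, 4, 5),  # candidate values for K clusters of viewpoints
    "alpha": 0.65,
    "top_k_words": 10,
    "attention_heads": 6,
    "attention_blocks": 4,
    "attention_dropout": 0.6,
    "learning_rate": 1e-4,            # 10^-4
    "batch_size": 32,
}

# Placeholder classifier head; the real UEEI architecture is not released,
# so a single fully-connected layer stands in here.
model = torch.nn.Linear(config["embedding_dim"], 2)

# Adam with weight decay approximates the L2 regularization applied to the
# fully-connected layers; the regularization strength is an assumption.
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=config["learning_rate"],
    weight_decay=1e-5,
)
```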