Towards Fine-Grained Reasoning for Fake News Detection

Authors: Yiqiao Jin, Xiting Wang, Ruichao Yang, Yizhou Sun, Wei Wang, Hao Liao, Xing Xie

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments show that our model outperforms the state-of-the-art methods and demonstrate the explainability of our approach.
Researcher Affiliation | Collaboration | Yiqiao Jin1*, Xiting Wang2, Ruichao Yang3, Yizhou Sun1, Wei Wang1, Hao Liao4, Xing Xie2 (1 University of California, Los Angeles; 2 Microsoft Research Asia; 3 Hong Kong Baptist University; 4 Shenzhen University)
Pseudocode | No | The paper describes its methods using text and mathematical equations but does not include explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | To evaluate the performance of FinerFact, we conduct experiments on two benchmark datasets, PolitiFact and GossipCop (Shu et al. 2020), which contain 815 and 7,612 news articles respectively, along with the social context information about the news and their labels provided by journalists and domain experts.
Dataset Splits | Yes | We conduct 5-fold cross validation and the average performance is reported. (See the cross-validation sketch below.)
Hardware Specification | Yes | Without evidence ranking, performing fine-grained reasoning on the same data causes an out-of-memory issue on an NVIDIA Tesla V100.
Software Dependencies | No | The paper mentions the use of BERT, LDA, Kernel Graph Attention Network (KGAT), and APPNP without specifying their version numbers.
Experiment Setup | Yes | To choose the number of topics, we conducted a grid search within the range [2, 10] and picked the number that resulted in the smallest perplexity. BERT is fine-tuned during training. (See the topic-count grid-search sketch below.)
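
The Dataset Splits row states only that 5-fold cross validation is used and the average performance is reported. Below is a minimal sketch of that evaluation protocol, assuming scikit-learn; the classifier and synthetic data are placeholders, since FinerFact itself is a BERT-based graph-reasoning model whose code the paper does not release.

```python
# Sketch of the 5-fold cross-validation protocol (scikit-learn assumed).
# LogisticRegression and make_classification are stand-ins, not the
# paper's model or data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import StratifiedKFold

# Placeholder data, sized like the PolitiFact split (815 articles).
X, y = make_classification(n_samples=815, n_features=32, random_state=0)

scores = []
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(f1_score(y[test_idx], clf.predict(X[test_idx])))

# The paper reports the performance averaged over the 5 folds.
print(f"mean F1 over 5 folds: {np.mean(scores):.3f}")
```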
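The Experiment Setup row describes choosing the number of LDA topics by a grid search over [2, 10], keeping the count with the smallest perplexity. The sketch below assumes scikit-learn's LatentDirichletAllocation and a toy corpus; the paper does not name its LDA implementation or release its preprocessing.

```python
# Grid search over the number of LDA topics in [2, 10], selecting the
# count with the smallest perplexity, as the paper describes.
# scikit-learn and the tiny toy corpus are assumptions for illustration.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "senator claims the new bill will cut taxes for families",
    "fact checkers rate the tax claim as mostly false",
    "celebrity couple rumored to split according to a tabloid",
    "the studio denies the rumor about the celebrity couple",
    "viral post spreads vaccine misinformation on social media",
    "health officials debunk the viral vaccine post",
]
X = CountVectorizer(stop_words="english").fit_transform(docs)

best_k, best_perplexity = None, float("inf")
for k in range(2, 11):  # the paper's search range [2, 10]
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
    perplexity = lda.perplexity(X)  # lower is better
    if perplexity < best_perplexity:
        best_k, best_perplexity = k, perplexity

print(f"chosen number of topics: {best_k} (perplexity {best_perplexity:.1f})")
```

On a corpus this small the perplexity values are not meaningful; the point is only the selection loop, which on the paper's real news corpora would pick the topic count used downstream.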