Unveiling Implicit Deceptive Patterns in Multi-Modal Fake News via Neuro-Symbolic Reasoning
Authors: Yiqi Dong, Dongxiao He, Xiaobao Wang, Youzhu Jin, Meng Ge, Carl Yang, Di Jin
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on two real-world datasets demonstrate that our NSLM achieves the best performance in fake news detection while providing insightful explanations of deceptive patterns. |
| Researcher Affiliation | Academia | 1School of New Media and Communication, Tianjin University, Tianjin, China, 2Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, China, 3Beijing-Dublin International College, Beijing University of Technology, Beijing, China, 4Saw Swee Hock School of Public Health, National University of Singapore, Singapore, 5Department of Computer Science, Emory University, Georgia, USA. |
| Pseudocode | No | The paper describes the model architecture and training process but does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code for their proposed method, nor does it include a link to a code repository. |
| Open Datasets | Yes | We evaluate the proposed NSLM on two real-world datasets called Fakeddit (Nakamura, Levy, and Wang 2020) and Weibo (Jin et al. 2017), respectively. |
| Dataset Splits | No | Fakeddit comprises 31,011 news samples for training and 6,181 for testing, whereas Weibo consists of 5,455 news samples for training and 1,493 for testing. The paper provides training and testing split sizes, but no explicit validation split information or percentages. |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU types, or other specific computing resources used for the experiments. |
| Software Dependencies | No | The paper mentions models like Inception V3, ResNet34, and RoBERTa, and optimizers like Adam, but does not provide specific version numbers for any software libraries, frameworks, or programming languages. |
| Experiment Setup | Yes | In our NSLM, we adopt a randomly sampled Gaussian distribution as the prior distribution p(z | x). We set the dimension d to 256 and the tradeoff weight µ to 0.5. During training, we use a batch size of 8, while for testing, the batch size is set to 16. We employ a learning rate of 1e-5 for both datasets. The Fakeddit dataset allows a maximum text length of 45 and an image contexts length of 12, while for the Weibo dataset, the respective maximum lengths are 110 for text and 10 for image contexts. The whole model is trained with the Adaptive Moment Estimation (Adam) optimizer (Kingma and Ba 2014). |
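
The experiment-setup row above reduces to a small set of reported hyperparameters. Since the authors release no code, the sketch below is only an illustrative way to collect those values; the names `NSLMConfig` and `make_optimizer` are hypothetical, and only the numeric settings (dimension d, tradeoff weight µ, batch sizes, learning rate, maximum lengths, Adam optimizer) come from the paper.

```python
# Hedged sketch of the reported training configuration, assuming a PyTorch setup.
# Class and function names are illustrative, not the authors'.
from dataclasses import dataclass

import torch


@dataclass
class NSLMConfig:
    latent_dim: int = 256          # dimension d of the latent representation
    tradeoff_mu: float = 0.5       # tradeoff weight µ in the loss
    train_batch_size: int = 8
    test_batch_size: int = 16
    learning_rate: float = 1e-5    # same for both datasets
    max_text_len: int = 45         # Fakeddit: 45; Weibo: 110
    max_image_ctx_len: int = 12    # Fakeddit: 12; Weibo: 10


def make_optimizer(model: torch.nn.Module, cfg: NSLMConfig) -> torch.optim.Optimizer:
    # The paper states the whole model is trained with Adam (Kingma and Ba 2014).
    return torch.optim.Adam(model.parameters(), lr=cfg.learning_rate)
```

For the Weibo dataset one would override `max_text_len=110` and `max_image_ctx_len=10`, keeping the remaining values unchanged.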