Reinforced Adaptive Knowledge Learning for Multimodal Fake News Detection

Authors: Litian Zhang, Xiaoming Zhang, Ziyi Zhou, Feiran Huang, Chaozhuo Li

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our proposal is extensively evaluated over three popular datasets, and experimental results demonstrate the superiority of AKA-Fake. ... Our proposal consistently outperforms all baselines over three datasets, achieving +4.7% on PolitiFact, +2.4% on GossipCop, and +2.8% on Pheme in terms of accuracy. The superiority of AKA-Fake verifies the effectiveness of adaptive knowledge incorporation. ... In this section, ablation studies are conducted to investigate the importance of different modules."
Researcher Affiliation | Academia | 1 Beihang University, 2 Jinan University, 3 Beijing University of Posts and Telecommunications
Pseudocode | No | The paper describes various algorithms and processes (e.g., reinforcement learning, graph refinement) but does not include any pseudocode blocks or sections explicitly labeled 'Pseudocode' or 'Algorithm'.
Open Source Code | No | The paper contains no explicit statement about releasing the source code for the proposed methodology and provides no link to a code repository.
Open Datasets | Yes | "We conduct experiments on three popular datasets: PolitiFact, GossipCop (Shu et al. 2020), and Pheme (Qi et al. 2019), which contain 495, 15,707 and 2,099 news, respectively."
Dataset Splits | Yes | "The dataset is randomly split into training, validation, and testing sets with a ratio of 7:1:2." A split sketch following this table illustrates the reported ratio.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments.
Software Dependencies | No | The paper mentions using 'CLIP (Radford et al. 2021)' and 'Adam (Kingma and Ba 2014) as the optimizer' but does not specify version numbers for these or any other software libraries/dependencies. A hedged CLIP encoding sketch follows the table.
Experiment Setup | Yes | "The respective field size is set to [5, 3, 2]. We employ Adam (Kingma and Ba 2014) as the optimizer. The batch size is set as 64. The initial learning rate is set to 2e-4. The embedding size for entities, relations, and document is 1024." A training-setup sketch reflecting these values closes this report.
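
Reported split, as a sketch. The paper states only a random 7:1:2 train/validation/test split; the splitting library and random seed below are assumptions, not details taken from the paper.

```python
# Minimal sketch of the reported 7:1:2 split (assumed: scikit-learn, seed 42).
from sklearn.model_selection import train_test_split

def split_dataset(samples, seed=42):
    """Randomly split news samples into train/val/test with ratio 7:1:2."""
    train, rest = train_test_split(samples, test_size=0.3, random_state=seed)  # 70% / 30%
    val, test = train_test_split(rest, test_size=2/3, random_state=seed)       # 10% / 20%
    return train, val, test
```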
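
Multimodal encoding, as a sketch. The paper names CLIP (Radford et al. 2021) but no checkpoint or library version, so the Hugging Face `transformers` API and the `openai/clip-vit-base-patch32` checkpoint below are assumptions; note that this checkpoint projects to 512 dimensions, not the 1024-dimensional embeddings the paper reports.

```python
# Hedged sketch of CLIP-based news encoding; the checkpoint choice is assumed.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def encode_news(text: str, image: Image.Image):
    """Return CLIP text and image embeddings for one news item."""
    inputs = processor(text=[text], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    return out.text_embeds, out.image_embeds  # each (1, 512) for this checkpoint
```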
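
Training configuration, as a sketch. The hyperparameters below (Adam, batch size 64, learning rate 2e-4, 1024-dimensional embeddings) come from the quoted setup; the classifier head is a placeholder, since the paper releases no code and the AKA-Fake architecture is not reproduced here.

```python
# Hedged sketch of the reported optimization setup (PyTorch assumed).
import torch
from torch import nn

EMBED_DIM = 1024       # reported embedding size for entities, relations, documents
BATCH_SIZE = 64        # reported batch size
LEARNING_RATE = 2e-4   # reported initial learning rate

# Placeholder binary real/fake classifier standing in for the AKA-Fake model.
model = nn.Sequential(nn.Linear(EMBED_DIM, 256), nn.ReLU(), nn.Linear(256, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
```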