Natural Language-centered Inference Network for Multi-modal Fake News Detection
Authors: Qiang Zhang, Jiawei Liu, Fanrui Zhang, Jingyi Xie, Zheng-Jun Zha
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that our CFND dataset is challenging and the proposed NLIN outperforms state-of-the-art methods. |
| Researcher Affiliation | Academia | University of Science and Technology of China, China {zq126, zfr888, hsfzxjy}@mail.ustc.edu.cn, {jwliu6, zhazj}@ustc.edu.cn |
| Pseudocode | No | The paper describes the proposed architecture and methodology in narrative text and figures (Figure 3) but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the proposed methodology is openly available. |
| Open Datasets | No | To support the research in the field of multi-modal fake news detection, we produce a challenging multi-modal Chinese Fake News Detection (CFND) dataset, which is collected from multiple platforms and contains news from multiple domains. The paper mentions creating a new dataset (CFND) and using existing ones (Pheme, Weibo) but does not provide explicit public access links or repository names for any of them. |
| Dataset Splits | Yes | Furthermore, we divide the whole dataset into training set, validation set and test set with the ratio of 60%, 20% and 20% respectively, while maintaining the proportion of real and fake news in each set. |
| Hardware Specification | Yes | We implement all models in PyTorch and experiment on a Tesla V100 GPU. |
| Software Dependencies | No | The paper mentions 'PyTorch' but does not provide a specific version number for it or any other software libraries. |
| Experiment Setup | Yes | For the unified-modal context encoder, the input lengths for both the visual text input Tv and the news text input Tt are set to a maximum of 200 words. For the multi-modal mapping network, the number of blocks is 2 and the length of the learnable prompt Pm is 16. For other hyperparameters, we set the batch size to 16, the number of epochs to 60, the learning rate to 5e-4, employ the Adam optimizer, and apply the learning rate warm-up strategy. |
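The 60/20/20 split with preserved real/fake proportions described in the Dataset Splits row amounts to a stratified split. A minimal sketch of such a split, assuming binary labels (1 = fake, 0 = real) and a fixed shuffle seed (the paper does not specify its splitting code):

```python
import random

def stratified_split(items, labels, ratios=(0.6, 0.2, 0.2), seed=0):
    """Split items into train/val/test sets, preserving the label
    proportions (here, the real/fake ratio) within each split."""
    rng = random.Random(seed)
    by_label = {}
    for item, label in zip(items, labels):
        by_label.setdefault(label, []).append(item)
    train, val, test = [], [], []
    # Split each label group with the same ratios, then merge.
    for group in by_label.values():
        rng.shuffle(group)
        n_train = int(len(group) * ratios[0])
        n_val = int(len(group) * ratios[1])
        train += group[:n_train]
        val += group[n_train:n_train + n_val]
        test += group[n_train + n_val:]
    return train, val, test

# Example: 100 news items, the first 40 fake (label 1), the rest real (label 0).
items = list(range(100))
labels = [1] * 40 + [0] * 60
train, val, test = stratified_split(items, labels)
```

With these hypothetical labels, each split keeps the 40:60 fake/real ratio of the full set.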
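The Experiment Setup row reports a learning rate of 5e-4 with the Adam optimizer and a learning-rate warm-up strategy, but not the warm-up schedule itself. A minimal sketch of a common linear warm-up, where the number of warm-up steps is an assumed value not given in the paper:

```python
def warmup_lr(step, base_lr=5e-4, warmup_steps=500):
    """Linear warm-up: ramp the learning rate from near zero up to
    base_lr over the first `warmup_steps` optimizer steps, then hold
    it constant. base_lr matches the paper; warmup_steps is assumed."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    return base_lr
```

In a PyTorch training loop, a function like this would typically be wrapped in `torch.optim.lr_scheduler.LambdaLR` and stepped once per batch alongside the Adam optimizer.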