MFAN: Multi-modal Feature-enhanced Attention Networks for Rumor Detection
Authors: Jiaqi Zheng, Xi Zhang, Sanchuan Guo, Quan Wang, Wenyu Zang, Yongdong Zhang
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper includes a full experiments section (Section 5: Datasets, Baselines, Implementation Details, Results and Discussion): "Table 3 shows the performance of the comparison methods. On both datasets, our model MFAN significantly outperforms all the other approaches in all the metrics." |
| Researcher Affiliation | Collaboration | Jiaqi Zheng¹, Xi Zhang¹, Sanchuan Guo¹, Quan Wang¹, Wenyu Zang², and Yongdong Zhang³,¹. ¹Key Laboratory of Trustworthy Distributed Computing and Service (MoE), Beijing University of Posts and Telecommunications, China; ²China Electronics Corporation; ³University of Science and Technology of China. |
| Pseudocode | No | The paper describes the proposed method but does not include any formal pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about open-sourcing the code or links to a code repository for the methodology. |
| Open Datasets | Yes | We evaluate our model on two real-world datasets: Weibo [Song et al., 2019] and PHEME [Zubiaga et al., 2017]. |
| Dataset Splits | Yes | We split the datasets for training, validation, and testing with a ratio of 7:1:2. (See the split sketch after the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using 'Adam [Kingma and Ba, 2014]' for optimization but does not provide specific version numbers for software dependencies. |
| Experiment Setup | Yes | The number of heads H is set to 8. λc and λa are set to 2.15 and 1.55. The learning rate used in the training process is 0.002. (See the configuration sketch after the table.) |
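
The 7:1:2 train/validation/test split reported in the Dataset Splits row can be reproduced with a standard two-stage split. The sketch below is a minimal illustration of that ratio only; the use of scikit-learn, stratification, and the `split_dataset` helper are assumptions, not details taken from the paper.

```python
# Hypothetical sketch: splitting a rumor-detection dataset 7:1:2
# (train / validation / test), matching the ratio reported in the paper.
# Library choice, stratification, and field names are assumptions.
from sklearn.model_selection import train_test_split

def split_dataset(samples, labels, seed=42):
    """Split samples into 70% train, 10% validation, 20% test."""
    # First carve off the 20% test set.
    train_val_x, test_x, train_val_y, test_y = train_test_split(
        samples, labels, test_size=0.2, stratify=labels, random_state=seed)
    # 10% of the full dataset equals 1/8 (0.125) of the remaining 80%.
    train_x, val_x, train_y, val_y = train_test_split(
        train_val_x, train_val_y, test_size=0.125,
        stratify=train_val_y, random_state=seed)
    return (train_x, train_y), (val_x, val_y), (test_x, test_y)
```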
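
The hyperparameters reported in the Experiment Setup and Software Dependencies rows (H = 8 attention heads, λc = 2.15, λa = 1.55, Adam with learning rate 0.002) can be collected into a small training configuration. The sketch below assumes a PyTorch implementation; the placeholder model and the reading of λc and λa as loss weights are assumptions, and only the numeric values come from the paper's stated setup.

```python
# Minimal sketch of the reported training configuration, assuming PyTorch.
# The placeholder model and the interpretation of lambda_c / lambda_a as
# loss weights are assumptions; only the numbers below come from the paper.
import torch
import torch.nn as nn

NUM_HEADS = 8          # number of attention heads H
LAMBDA_C = 2.15        # lambda_c (assumed: weight of one auxiliary loss term)
LAMBDA_A = 1.55        # lambda_a (assumed: weight of another auxiliary loss term)
LEARNING_RATE = 0.002  # Adam learning rate

class PlaceholderMFAN(nn.Module):
    """Stand-in for the MFAN model; only the attention-head count is from the paper."""
    def __init__(self, dim=256, num_heads=NUM_HEADS):
        super().__init__()
        self.attention = nn.MultiheadAttention(dim, num_heads)
        self.classifier = nn.Linear(dim, 2)  # rumor vs. non-rumor

    def forward(self, x):
        attended, _ = self.attention(x, x, x)
        return self.classifier(attended.mean(dim=0))

model = PlaceholderMFAN()
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)

def total_loss(cls_loss, aux_loss_c, aux_loss_a):
    # Assumed overall objective: classification loss plus weighted auxiliary terms.
    return cls_loss + LAMBDA_C * aux_loss_c + LAMBDA_A * aux_loss_a
```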