Bootstrapping Multi-View Representations for Fake News Detection

Authors: Qichao Ying, Xiaoxiao Hu, Yangming Zhou, Zhenxing Qian, Dan Zeng, Shiming Ge

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments conducted on typical fake news detection datasets prove that BMR outperforms state-of-the-art schemes."
Researcher Affiliation | Academia | "Qichao Ying1, Xiaoxiao Hu1, Yangming Zhou1, Zhenxing Qian1*, Dan Zeng2, Shiming Ge3; 1 Fudan University, 2 Shanghai University, 3 Chinese Academy of Sciences. {qcying20@, xxhu21@m., ymzhou21@m., zxqian@}fudan.edu.cn, dzeng@shu.edu.cn, geshiming@iie.ac.cn"
Pseudocode | Yes | "Algorithm 1: PyTorch-style training code of BMR"
Open Source Code | No | The paper does not provide an explicit statement about, or a link to, open-source code for the BMR methodology.
Open Datasets | Yes | "We use Weibo (Jin et al. 2017), GossipCop (Shu et al. 2020a) and Weibo-21 (Nan et al. 2021) for training and testing."
Dataset Splits | No | The paper specifies training and testing splits (e.g., "Weibo contains 3749 real news and 3783 fake news for training, 1000 fake news and 996 real news for testing."), but does not describe a separate validation split.
Hardware Specification | Yes | "It costs around 12 hours to train BMR for 50 epochs on a single NVIDIA RTX 3090 GPU."
Software Dependencies | No | The paper mentions models such as 'mae-pretrain-vit-base' and 'bert-base-chinese/uncased' and the Adam optimizer, but does not specify software dependencies with version numbers (e.g., 'PyTorch 1.9', 'Python 3.8').
Experiment Setup | Yes | "The batch size is 24 and the learning rate is 1×10⁻⁴ with cosine annealing decay. Images are resized into size 224×224, and the max length of text is set as 197. The hyper-parameters α and β are empirically set as α = 1 and β = 4."
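The quoted setup pairs a base learning rate of 1×10⁻⁴ with cosine annealing decay over the 50 training epochs reported above. A minimal sketch of that schedule (pure Python, mirroring the closed-form used by PyTorch's CosineAnnealingLR; the zero minimum learning rate is an assumption, since the paper does not state an eta_min):

```python
import math

def cosine_annealing_lr(epoch: int,
                        base_lr: float = 1e-4,   # learning rate quoted in the paper
                        total_epochs: int = 50,  # training length quoted in the paper
                        min_lr: float = 0.0) -> float:  # assumed floor (not stated)
    """Learning rate at a given epoch under cosine annealing decay.

    Follows the standard schedule:
        lr(t) = min_lr + 0.5 * (base_lr - min_lr) * (1 + cos(pi * t / T))
    which starts at base_lr (t = 0) and decays smoothly to min_lr (t = T).
    """
    return min_lr + 0.5 * (base_lr - min_lr) * (
        1 + math.cos(math.pi * epoch / total_epochs)
    )

# The full 50-epoch schedule: 1e-4 at epoch 0, 5e-5 at the midpoint, 0 at the end.
schedule = [cosine_annealing_lr(e) for e in range(51)]
```

The schedule decays fastest around the midpoint and flattens near both ends, which is why cosine annealing is a common default for fixed-length training runs like this one.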