A General Black-box Adversarial Attack on Graph-based Fake News Detectors

Authors: Peican Zhu, Zechen Pan, Yang Liu, Jiwei Tian, Keke Tang, Zhen Wang

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on empirical datasets demonstrate the effectiveness of GAFSI.
Researcher Affiliation | Academia | 1 School of Artificial Intelligence, Optics and Electronics, Northwestern Polytechnical University; 2 School of Computer Science, Northwestern Polytechnical University; 3 Air Traffic Control and Navigation College, Air Force Engineering University; 4 Cyberspace Institute of Advanced Technology, Guangzhou University
Pseudocode | No | The paper describes its method in prose within the "Method" section but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement about making its source code publicly available, nor a link to a code repository.
Open Datasets | Yes | "We adopt two real-world datasets [Shu et al., 2017; Fey and Lenssen, 2019], i.e., Politifact and Gossipcop, from the PyTorch-Geometric library." (A loading sketch follows this table.)
Dataset Splits | Yes | "To train detectors and our surrogate model, we split the data into 20% for the training, 10% for the validation, and 70% for the testing."
Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU models, or memory used for running the experiments.
Software Dependencies | No | The paper mentions the PyTorch-Geometric library, GloVe, and BERT but does not specify version numbers for these or any other software components.
Experiment Setup | No | The paper describes the general experimental settings, including dataset splits and the types of GNN models used, but does not provide specific hyperparameters such as learning rates, batch sizes, or optimizer settings needed for reproducibility.
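For context, the sketch below shows one way to load the Politifact and Gossipcop graph datasets through PyTorch-Geometric's UPFD dataset class, which ships with predefined train/val/test splits consistent with the 20%/10%/70% split quoted above. This is a minimal illustration, not the authors' code: the root directory, the "bert" node-feature variant, and the batch size are assumptions, since the paper does not state them.

```python
# Minimal sketch: loading the Politifact / Gossipcop fake-news graph datasets
# via PyTorch-Geometric's UPFD dataset class. The root path and the "bert"
# node-feature variant are illustrative assumptions, not details from the paper.
from torch_geometric.datasets import UPFD
from torch_geometric.loader import DataLoader

def load_upfd(name="politifact", feature="bert", root="data/UPFD"):
    # UPFD provides predefined train/val/test splits (roughly 20%/10%/70%),
    # consistent with the split reported in the paper.
    splits = {s: UPFD(root, name, feature, split=s) for s in ("train", "val", "test")}
    total = sum(len(d) for d in splits.values())
    for s, d in splits.items():
        print(f"{name}/{s}: {len(d)} graphs ({len(d) / total:.0%})")
    return splits

if __name__ == "__main__":
    splits = load_upfd("politifact")
    # Batch size of 32 is an arbitrary choice for illustration.
    train_loader = DataLoader(splits["train"], batch_size=32, shuffle=True)
```

A detector or surrogate model would then be trained on the train split, tuned on the val split, and evaluated (and attacked) on the test split, mirroring the protocol quoted in the table.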