Blindfolded Attackers Still Threatening: Strict Black-Box Adversarial Attacks on Graphs

Authors: Jiarong Xu, Yizhou Sun, Xin Jiang, Yanhao Wang, Chunping Wang, Jiangang Lu, Yang Yang

AAAI 2022, pp. 4299-4307

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments demonstrate that, even with no exposure to the model, the Macro-F1 drops by 5.5% in node classification and 29.5% in graph classification, a significant result compared with existing works.
Researcher Affiliation | Collaboration | 1) Zhejiang University, 2) University of California, Los Angeles, 3) FinVolution Group
Pseudocode | No | The paper describes the attack strategy in steps (i), (ii), (iii) in Section 3.3, but it is presented as a descriptive paragraph rather than a formally structured pseudocode block or algorithm.
Open Source Code | Yes | Our codes are available at: https://github.com/galina0217/stack.
Open Datasets | Yes | For node-level attacks, we adopt Cora-ML, Citeseer and Polblogs for node classification task, and follow the preprocessing in (Dai et al. 2018).
Dataset Splits | Yes | We set the training/validation/test split ratio as 0.1:0.1:0.8. (A minimal split sketch is given after the table.)
Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU types, or memory used for the experiments.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the implementation or experimentation.
Experiment Setup | Yes | We set the spatial coefficient k = 1 and the restart threshold τ = 0.03. The candidate set of adversarial edges is randomly sampled in every trial, and its size is set as 20K. (A candidate-sampling sketch is given after the table.)
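
The Dataset Splits row reports a 0.1:0.1:0.8 train/validation/test ratio. A minimal sketch of such a random node split is below; the paper does not specify the splitting code, so the function name, the random seed, and the example graph size are placeholder assumptions, not the authors' implementation.

```python
import numpy as np

def split_indices(num_nodes, train_ratio=0.1, val_ratio=0.1, seed=0):
    """Randomly partition node indices into train/val/test sets
    following the 0.1 : 0.1 : 0.8 ratio reported in the paper."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)
    n_train = int(train_ratio * num_nodes)
    n_val = int(val_ratio * num_nodes)
    train_idx = perm[:n_train]
    val_idx = perm[n_train:n_train + n_val]
    test_idx = perm[n_train + n_val:]  # remaining ~0.8 of the nodes
    return train_idx, val_idx, test_idx

# Example usage with a hypothetical graph size:
train_idx, val_idx, test_idx = split_indices(num_nodes=2708)
```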
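The Experiment Setup row states that a candidate set of 20K adversarial edges is randomly sampled in every trial. The sketch below only illustrates that sampling step for edge additions on a dense adjacency matrix; how the candidates are then scored using the spatial coefficient k and the restart threshold τ follows the paper's attack procedure and is not reproduced here. All names (`sample_candidate_edges`, `adj`, `seed`) are assumptions for illustration.

```python
import numpy as np

def sample_candidate_edges(adj, num_candidates=20_000, seed=0):
    """Randomly sample candidate adversarial edges: node pairs that are
    not self-loops and not already connected. `adj` is a dense 0/1
    adjacency matrix; 20K matches the size reported in the paper."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    candidates = set()
    while len(candidates) < num_candidates:
        u, v = rng.integers(0, n, size=2)
        if u == v or adj[u, v] == 1:
            continue  # skip self-loops and existing edges
        candidates.add((min(u, v), max(u, v)))  # store undirected pair once
    return list(candidates)
```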