A Restricted Black-Box Adversarial Framework Towards Attacking Graph Embedding Models

Authors: Heng Chang, Yu Rong, Tingyang Xu, Wenbing Huang, Honglei Zhang, Peng Cui, Wenwu Zhu, Junzhou Huang

AAAI 2020, pp. 3389-3396 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results validate the effectiveness of our attacker on several benchmark datasets.
Researcher Affiliation | Collaboration | (1) Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, China; (2) Tencent AI Lab, China; (3) Department of Computer Science and Technology, Tsinghua University, China
Pseudocode | Yes | Algorithm 1: Graph Filter Attack (GF-Attack), the adversarial attack algorithm under the RBA setting (a schematic sketch of the single-edge attack loop appears after this table).
Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | We evaluate our approach on three real-world datasets: Cora, Citeseer and Pubmed. In all three citation network datasets, vertices are documents with corresponding bag-of-words features and edges are citation links. The data preprocessing closely follows the benchmark setup in (Kipf and Welling 2017).
Dataset Splits | Yes | Following the setting in (Zügner, Akbarnejad, and Günnemann 2018), we split the graph into labeled (20%) and unlabeled (80%) vertices. The labeled vertices are further split into equal parts for training and validation (see the split sketch after this table).
Hardware Specification | No | The paper mentions a "running time (s) comparison" but does not provide any specific hardware details such as GPU/CPU models, memory, or cloud instance types used for the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library names such as PyTorch, TensorFlow, or scikit-learn along with their exact versions).
Experiment Setup | Yes | For DeepWalk, the hyperparameters are set to commonly used values: window size 5, number of negative samples in skip-gram 5, and the top-128 largest singular values/vectors. A logistic regression classifier is connected to the output embeddings of sampling-based methods for classification. Unless otherwise stated, all Graph Convolutional Networks contain two layers. Attack configuration: a small budget β is applied to regulate all the attackers. To make the attacking task more challenging, β is set to 1; that is, the attacker is limited to adding/deleting a single edge given a target vertex t. For our method, we set the parameter T in the general attack model to T = 128, meaning we choose the top-T smallest eigenvalues for the T-rank approximation in the embedding quality measure (a spectrum-computation sketch follows this table). Unless otherwise indicated, the order of the graph filter in the GF-Attack model is set to K = 2.
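
For concreteness, the single-edge RBA attack described by Algorithm 1 can be pictured as an exhaustive search over edge flips incident to the target vertex. The sketch below is schematic only: `embedding_quality` is a hypothetical stand-in for the paper's T-rank embedding quality measure, and the selection rule (keep the flip that most degrades quality) is inferred from the attack setting rather than copied from the paper.

```python
import scipy.sparse as sp

def gf_attack_single_edge(adj, target, embedding_quality):
    """Try every single-edge flip incident to `target` (budget beta = 1)
    and return the flip that most degrades the surrogate embedding quality."""
    n = adj.shape[0]
    best_flip, best_score = None, float("inf")
    for u in range(n):
        if u == target:
            continue
        perturbed = adj.tolil()                 # fresh, editable copy of the graph
        new_val = 0 if adj[target, u] else 1    # delete the edge if present, else add it
        perturbed[target, u] = perturbed[u, target] = new_val
        score = embedding_quality(perturbed.tocsr(), target)
        if score < best_score:                  # lower quality = stronger attack
            best_flip, best_score = (target, u), score
    return best_flip

# Example wiring on a toy 4-vertex path graph; `my_quality` is a dummy stand-in.
if __name__ == "__main__":
    adj = sp.csr_matrix(
        ([1, 1, 1, 1, 1, 1], ([0, 1, 1, 2, 2, 3], [1, 0, 2, 1, 3, 2])), shape=(4, 4)
    )
    my_quality = lambda a, t: a.sum()  # dummy measure: fewer edges = "worse" embedding
    print(gf_attack_single_edge(adj, target=0, embedding_quality=my_quality))
```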
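The reported 20%/80% labeled/unlabeled split, with the labeled part divided equally into training and validation, could be reproduced along these lines. This is a minimal sketch; the function name and the random seeding are our own, not from the paper.

```python
import numpy as np

def split_vertices(num_vertices, labeled_frac=0.2, seed=0):
    """Randomly mark 20% of vertices as labeled, then halve the
    labeled set into training and validation parts."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_vertices)
    n_labeled = int(labeled_frac * num_vertices)
    labeled, unlabeled = perm[:n_labeled], perm[n_labeled:]
    train, val = labeled[: n_labeled // 2], labeled[n_labeled // 2:]
    return train, val, unlabeled

train_idx, val_idx, unlabeled_idx = split_vertices(2708)  # e.g., Cora has 2708 vertices
```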
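The T = 128, K = 2 configuration amounts to taking a low-rank view of an order-K graph filter's spectrum. Below is a minimal sketch, assuming the symmetrically normalized adjacency of (Kipf and Welling 2017) as the filter base; the paper's full embedding quality measure also involves the feature matrix, which is omitted here.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def filter_spectrum(adj, T=128, K=2):
    """Eigenpairs used for the T-rank approximation.

    Builds the normalized adjacency A_hat = D^{-1/2} (A + I) D^{-1/2},
    takes its T smallest eigenvalues/eigenvectors, and raises the
    eigenvalues to the K-th power, since an order-K polynomial filter
    acts as lambda^K along each eigen-direction.
    """
    n = adj.shape[0]
    a_hat = adj + sp.eye(n)
    deg = np.asarray(a_hat.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(deg))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
    vals, vecs = eigsh(a_norm, k=T, which="SA")  # T smallest (algebraic) eigenvalues
    return vals ** K, vecs
```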