Local-Global Defense against Unsupervised Adversarial Attacks on Graphs

Authors: Di Jin, Bingdao Feng, Siqi Guo, Xiaobao Wang, Jianguo Wei, Zhen Wang

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that our strategies can enhance the robustness of representations against various adversarial attacks on three benchmark graphs.
Researcher Affiliation | Academia | Di Jin¹, Bingdao Feng¹, Siqi Guo¹, Xiaobao Wang¹*, Jianguo Wei¹, Zhen Wang²; ¹College of Intelligence and Computing, Tianjin University, Tianjin, China; ²School of Cybersecurity, Northwestern Polytechnical University, Xi'an, Shaanxi, China
Pseudocode | Yes | Algorithm 1: Optimization algorithm
Open Source Code | No | The paper does not provide any explicit statement or link regarding the public availability of its source code.
Open Datasets | Yes | We use three real-world datasets in our experiments, i.e., Cora, Citeseer (Sen et al. 2008) and Polblogs (Adamic and Glance 2005).
Dataset Splits | Yes | For Cora and Citeseer, nodes are randomly assigned to the training, validation, and test sets in a 1:1:8 ratio (see the split sketch after this table).
Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments.
Software Dependencies | No | The paper mentions various models and frameworks (e.g., GCN, GAT, DGI) but does not specify version numbers for any software dependencies.
Experiment Setup | Yes | For the proposed model, the parameters are set to h = 0.2, α = 1, and β = 0.4. At the evaluation stage, both the performance and the robustness of the model are considered, so the four attack methods considered in the paper are employed with perturbation ratios increasing in steps of 10%. All contrastive-learning baselines use a two-layer GCN as the encoder with default settings (see the configuration sketch after this table).
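
To make the reported 1:1:8 split concrete, below is a minimal NumPy sketch of a random node split. The 1:1:8 ratio comes from the paper; the seed, the function name, and the use of Cora's 2,708 nodes as an example are illustrative assumptions, not details from the paper.

```python
import numpy as np

def random_split(num_nodes: int, train_frac: float = 0.1,
                 val_frac: float = 0.1, seed: int = 42):
    """Randomly assign node indices to train/val/test in a 1:1:8 ratio."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    # First 10% train, next 10% validation, remaining 80% test.
    return perm[:n_train], perm[n_train:n_train + n_val], perm[n_train + n_val:]

# Example: Cora has 2,708 nodes, giving roughly 270 / 270 / 2,168 nodes.
idx_train, idx_val, idx_test = random_split(2708)
```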
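Similarly, here is a hedged sketch of the evaluation setup. The hyperparameters h = 0.2, α = 1, β = 0.4, the 10%-step perturbation ratios, and the two-layer GCN encoder are reported in the paper; the 50% upper bound of the sweep, the layer sizes, and all identifiers are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hyperparameters reported in the paper.
CONFIG = {"h": 0.2, "alpha": 1.0, "beta": 0.4}

# Perturbation ratios in steps of 10%; the 50% upper bound is assumed.
PERTURBATION_RATIOS = [0.1, 0.2, 0.3, 0.4, 0.5]

class TwoLayerGCN(nn.Module):
    """Minimal two-layer GCN encoder of the kind used by the
    contrastive-learning baselines (layer sizes are assumptions)."""

    def __init__(self, in_dim: int, hid_dim: int = 512, out_dim: int = 512):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w2 = nn.Linear(hid_dim, out_dim, bias=False)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # adj_norm: symmetrically normalized adjacency D^{-1/2}(A + I)D^{-1/2},
        # precomputed by the caller as in standard GCN pipelines.
        h = F.relu(adj_norm @ self.w1(x))
        return adj_norm @ self.w2(h)
```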