Unsupervised Adversarially Robust Representation Learning on Graphs
Authors: Jiarong Xu, Yang Yang, Junru Chen, Xin Jiang, Chunping Wang, Jiangang Lu, Yizhou Sun
AAAI 2022, pp. 4290-4298
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In the experiments, we train our model in a fully unsupervised manner, and then apply the output representations to three graph learning tasks. Compared with non-robust and other robust graph representation models, the proposed model produces more robust representations to defend against adversarial attacks. Furthermore, the superiority of our model still holds under different strengths of attacks and under various attack strategies. |
| Researcher Affiliation | Collaboration | Jiarong Xu1, Yang Yang1, Junru Chen1, Xin Jiang3, Chunping Wang2, Jiangang Lu1, Yizhou Sun3 — 1Zhejiang University, 2FinVolution Group, 3University of California, Los Angeles |
| Pseudocode | No | The paper describes various methods and algorithms (e.g., 'we adopt a projected gradient descent topology attack') but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Codes are available at: https://github.com/galina0217/robustgraph. |
| Open Datasets | Yes | For evaluation, we use three datasets, Cora, Citeseer and Polblogs, and compare our model with the following baselines. |
| Dataset Splits | No | The paper does not explicitly provide details about a validation dataset split (e.g., percentages, sample counts, or specific pre-defined splits for validation). |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU, CPU models, or cloud instance types) used for running the experiments. |
| Software Dependencies | No | The paper refers to deep learning models and GNN architectures but does not list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | In the training phase, we adopt the projected gradient descent topology attack (Xu et al. 2019a) and the PGD attack (Madry et al. 2018) to construct adversarial examples of A and X, respectively. We set γ = 5e-3, δ = 0.4\|E\|, and ε = 0.1. (A hedged illustrative sketch of the feature-side PGD step appears below the table.) |
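
The setup row above mentions two attacks used to build adversarial examples during training: a PGD topology attack on the adjacency matrix A and a PGD attack on the node features X. Below is a minimal, hedged sketch of the feature-side PGD step only, written in PyTorch and assuming an L-infinity ball of radius ε = 0.1 around X. The `ToyEncoder`, the stand-in loss, the step size, and the iteration count are illustrative assumptions, not taken from the paper or its released code.

```python
# Hedged sketch (not the authors' code): PGD on node features X (Madry et al. 2018),
# ascending an unsupervised loss and projecting back into an eps-ball around X.
import torch
import torch.nn as nn

def pgd_feature_attack(encoder, X, A_hat, loss_fn, eps=0.1, step=0.01, iters=10):
    """Projected gradient ascent on node features inside an L-inf ball of radius eps."""
    X_adv = X.clone().detach()
    for _ in range(iters):
        X_adv.requires_grad_(True)
        loss = loss_fn(encoder(X_adv, A_hat))        # maximize the (unsupervised) loss
        grad = torch.autograd.grad(loss, X_adv)[0]
        with torch.no_grad():
            X_adv = X_adv + step * grad.sign()       # ascent step
            X_adv = X + (X_adv - X).clamp(-eps, eps) # project back into the eps-ball
    return X_adv.detach()

# Hypothetical one-layer GCN-style encoder, used only to make the sketch runnable.
class ToyEncoder(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)
    def forward(self, X, A_hat):
        return torch.relu(A_hat @ self.lin(X))       # propagate, then activate

n, d = 8, 16
X = torch.rand(n, d)
A_hat = torch.eye(n)                                 # placeholder normalized adjacency
enc = ToyEncoder(d, 32)
loss_fn = lambda Z: Z.norm()                         # stand-in for the model's objective
X_adv = pgd_feature_attack(enc, X, A_hat, loss_fn, eps=0.1)
```

In the paper's actual pipeline, the placeholder loss would be the model's unsupervised objective, and the adjacency A would be perturbed separately by the topology attack of Xu et al. (2019a) under the edge budget δ = 0.4|E|.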