USER: Unsupervised Structural Entropy-Based Robust Graph Neural Network
Authors: Yifei Wang, Yupan Wang, Zeyu Zhang, Song Yang, Kaiqi Zhao, Jiamou Liu
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on clustering and link prediction tasks under random-perturbation and meta-attack over three datasets show that USER outperforms benchmarks and is robust to heavier perturbations. We conducted a series of experiments to validate the effectiveness of our USER framework. |
| Researcher Affiliation | Academia | School of Computer Science, The University of Auckland, New Zealand; {wany107, ywan980, zzha669, syan382}@aucklanduni.ac.nz, {kaiqi.zhao, jiamou.liu}@auckland.ac.nz |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Full proof, experimental details, and code of our work are available at https://github.com/wangyifeibeijing/USER. |
| Open Datasets | Yes | We evaluate all models on three benchmark datasets: Cora, Citeseer, Wiki (Kipf and Welling 2017; Yang et al. 2015; Jin et al. 2020b). They have different structures and node features. Dataset statistics are in Table 1. (A data-loading sketch follows this table.) |
| Dataset Splits | No | The paper mentions evaluating models on the datasets and discusses 'Ptb' (perturbation ratio), but it does not specify explicit train/validation/test splits (e.g., 80/10/10) or a cross-validation methodology. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as CPU/GPU models, memory, or cloud computing specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions the Adam optimizer but does not specify any programming languages, libraries, or other software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions) that would be needed for replication. |
| Experiment Setup | Yes | We train USER for 400 epochs using the Adam optimizer with a learning rate η. The two hyperparameters, α and β, are selected through a grid search; a detailed analysis can be found in the Parameter Analysis section. The dimension d(1), learning rate η, α, and β are selected accordingly based on that analysis. (A training-loop sketch follows this table.) |
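Cora and Citeseer are widely distributed benchmarks, so the graphs are easy to obtain even though the paper does not name its data pipeline. Below is a minimal loading sketch assuming PyTorch Geometric's `Planetoid` loader; this choice of library is an assumption, not the authors' documented pipeline.

```python
# Assumption: PyTorch Geometric is installed (pip install torch_geometric).
# The paper does not name its data pipeline; Planetoid is just one common
# way to obtain the Cora and Citeseer graphs.
from torch_geometric.datasets import Planetoid

for name in ("Cora", "CiteSeer"):
    data = Planetoid(root="data", name=name)[0]
    # Print node count, edge count, and feature dimensionality for a
    # quick sanity check against the paper's Table 1.
    print(name, data.num_nodes, data.num_edges, data.x.shape[1])
```

Wiki is not part of the Planetoid collection; it would have to come from the release of Yang et al. (2015) or from the authors' repository.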
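The quoted setup (Adam, 400 epochs, grid search over α and β) is concrete enough to sketch, even though the paper names no software stack. Below is a minimal sketch assuming PyTorch; `StubUSER`, its placeholder loss, the learning rate, the hidden dimension (64), and the candidate grids are all illustrative stand-ins for the authors' released code, not their actual values.

```python
import itertools
import torch
import torch.nn as nn

class StubUSER(nn.Module):
    """Stand-in for the real USER model (see the authors' GitHub repo)."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.enc = nn.Linear(in_dim, hid_dim)

    def loss(self, x, alpha, beta):
        # Placeholder objective: alpha and beta weight two loss terms,
        # mirroring how the paper's hyperparameters enter its objective.
        z = torch.tanh(self.enc(x))
        return alpha * z.pow(2).mean() + beta * z.abs().mean()

def train_once(model, x, lr, alpha, beta, epochs=400):
    # 400 epochs with Adam, as stated in the paper.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss = None
    for _ in range(epochs):
        opt.zero_grad()
        loss = model.loss(x, alpha, beta)
        loss.backward()
        opt.step()
    return loss.item()

x = torch.randn(2708, 1433)  # Cora-sized dummy features (2708 nodes, 1433 dims)
grid = [0.01, 0.1, 1.0]      # illustrative candidates; the paper's grid is not given
results = []
for a, b in itertools.product(grid, grid):
    final = train_once(StubUSER(x.shape[1], 64), x, lr=1e-3, alpha=a, beta=b)
    results.append((final, a, b))

best_loss, best_a, best_b = min(results)
print(f"best final loss {best_loss:.4f} at alpha={best_a}, beta={best_b}")
```

In practice the selection criterion would be a validation metric on the downstream task (clustering or link prediction) rather than the training loss itself; the loss is used here only to keep the sketch self-contained.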