A Self-Supervised Mixed-Curvature Graph Neural Network
Authors: Li Sun, Zhongbao Zhang, Junda Ye, Hao Peng, Jiawei Zhang, Sen Su, Philip S. Yu (pp. 4146-4155)
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate SELFMGNN with the link prediction and node classification tasks against 10 strong baselines on 5 benchmark datasets. We report the mean with the standard deviations of 10 independent runs for each model to achieve fair comparisons. |
| Researcher Affiliation | Academia | 1School of Control and Computer Engineering, North China Electric Power University, Beijing 102206, China 2School of Computer Science, Beijing University of Posts and Telecommunications, China 3Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, Beijing 100191, China 4IFM Lab, Department of Computer Science, University of California, Davis, CA, USA 5Department of Computer Science, University of Illinois at Chicago, IL, USA |
| Pseudocode | Yes | Algorithm 1: Self-supervised Learning SELFMGNN |
| Open Source Code | No | The paper does not provide an explicit statement about releasing the source code or a direct link to a code repository for the SELFMGNN model. |
| Open Datasets | Yes | Datasets: We utilize 5 benchmark datasets, i.e., the widely-used Citeseer, Cora, and Pubmed (Kipf and Welling 2017; Veličković et al. 2019), and the latest Amazon and Airport (Zhang et al. 2021). |
| Dataset Splits | Yes | For all the comparison models, we perform a hyperparameter search on a validation set to obtain the best results, and the κ-GCN is trained with positive curvature in particular to evaluate the representation ability of the spherical space. Supervised models were trained and tested by following Chami et al. (2019). |
| Hardware Specification | No | The paper mentions 'computing infrastructure provided by Huawei Mind Spore platform' but does not provide specific hardware details such as GPU models, CPU types, or memory. |
| Software Dependencies | No | The paper mentions software like 'Python' and the 'Mind Spore platform' but does not specify version numbers for any key software components or libraries. |
| Experiment Setup | Yes | In SELFMGNN, we stack the attentive aggregation layer twice to learn the component embedding. We employ a two-layer MLPκ in the Riemannian projector to reveal the Riemannian views for the self-supervised learning... In the experiments, we set the weight γ to be 1... The grid search is performed over the learning rate in [0.001, 0.003, 0.005, 0.008, 0.01] as well as the dropout probability in [0, 0.8] with the step size of 0.1. |
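The grid search described in the experiment-setup quote can be sketched as a plain exhaustive loop over the two reported hyperparameter ranges. This is a minimal illustration, not the authors' code: the `train_and_evaluate` callable is a hypothetical placeholder for training SELFMGNN and scoring it on the validation set.

```python
from itertools import product

# Search ranges quoted from the paper's experiment setup.
learning_rates = [0.001, 0.003, 0.005, 0.008, 0.01]
dropout_probs = [round(0.1 * i, 1) for i in range(9)]  # 0.0 to 0.8, step 0.1

def grid_search(train_and_evaluate):
    """Return the (lr, dropout) pair with the best validation score.

    `train_and_evaluate` is a hypothetical callable standing in for
    training the model once and returning its validation metric.
    """
    best_cfg, best_score = None, float("-inf")
    for lr, dropout in product(learning_rates, dropout_probs):
        score = train_and_evaluate(lr=lr, dropout=dropout)
        if score > best_score:
            best_cfg, best_score = (lr, dropout), score
    return best_cfg, best_score
```

With 5 learning rates and 9 dropout values, the search trains 45 configurations per dataset; the paper additionally averages 10 independent runs for the reported numbers.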