Eliciting Structural and Semantic Global Knowledge in Unsupervised Graph Contrastive Learning

Authors: Kaize Ding, Yancheng Wang, Yingzhen Yang, Huan Liu

AAAI 2023

Reproducibility Variable — Result — LLM Response

Research Type: Experimental
    "Our experiments demonstrate that the node representations learned by S3-CL achieve superior performance on different downstream tasks compared with the state-of-the-art unsupervised GCL methods."

Researcher Affiliation: Academia
    Arizona State University, School of Computing and Augmented Intelligence. Emails: kaize.ding@asu.edu, yancheng.wang@asu.edu, yingzhen.yang@asu.edu, huan.liu@asu.edu

Pseudocode: Yes
    "Algorithm 1 outlines the learning process of the proposed framework."

Open Source Code: Yes
    "Implementation and more experimental details are publicly available at https://github.com/kaize0409/S-3-CL."

Open Datasets: Yes
    "In our experiments, we evaluate S3-CL on six public benchmark datasets that are widely used for node representation learning, including Cora (Sen et al. 2008), Citeseer (Sen et al. 2008), Pubmed (Namata et al. 2012), Amazon-P (Shchur et al. 2018), Coauthor-CS (Shchur et al. 2018), and ogbn-arxiv (Hu et al. 2020)."

Dataset Splits: Yes
    "We follow the evaluation protocols in previous works (Veličković et al. 2019; Hu et al. 2020) for node classification."

Hardware Specification: No
    The paper provides no hardware details (e.g., GPU/CPU models, memory, or processor types) used for running the experiments.

Software Dependencies: No
    The paper does not list software dependencies with version numbers (e.g., Python or library versions).

Experiment Setup: Yes
    "To demonstrate the power of our approach in utilizing structural global knowledge, we compare S3-CL against GRACE, MVGRL, MERIT, and SUGRL with different numbers of layers L. The node clustering accuracy of different methods is shown in Figure 3. ... To validate the effectiveness of the structural contrastive learning and semantic contrastive learning in S3-CL, we conduct an ablation study on Citeseer, Cora, and Pubmed with two variants of S3-CL, each of which has one of the contrastive learning components removed."