Neighbor Contrastive Learning on Learnable Graph Augmentation

Authors: Xiao Shen, Dewang Sun, Shirui Pan, Xi Zhou, Laurence T. Yang

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on the benchmark datasets demonstrate that NCLA yields the state-of-the-art node classification performance on self-supervised GCL and even exceeds the supervised ones, when the labels are extremely limited."
Researcher Affiliation | Academia | Xiao Shen (1), Dewang Sun (1), Shirui Pan (2), Xi Zhou (1), Laurence T. Yang (1,3); (1) Hainan University, (2) Griffith University, (3) St. Francis Xavier University
Pseudocode | Yes | Algorithm 1: NCLA
Open Source Code | Yes | "Our code is released at https://github.com/shenxiaocam/NCLA."
Open Datasets | Yes | "Extensive experiments have been conducted on five benchmark datasets for semi-supervised node classification, including three widely-used citation networks, i.e., Cora, Citeseer, Pubmed (Sen et al. 2008), a co-authorship network, i.e., Coauthor-CS (Shchur et al. 2018), and a product co-purchase network, i.e., Amazon-Photo (Shchur et al. 2018)."
Dataset Splits | Yes | "For Cora, Citeseer and Pubmed, we followed (Yang, Cohen, and Salakhudinov 2016) to randomly select 20 nodes per class for training, 500 nodes for validation and the remaining nodes for test. For Coauthor-CS and Amazon-Photo, we followed (Liu, Gao, and Ji 2020) to randomly select 20 nodes per class for training, 30 nodes per class for validation, and the remaining nodes for test."
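The split protocol quoted above can be sketched as follows. This is a minimal illustration, not code from the NCLA repository; the helper name `semi_supervised_split` and the NumPy-based implementation are assumptions. It selects a fixed number of training nodes per class, then draws the validation set from the remaining nodes and leaves the rest for test (for Coauthor-CS and Amazon-Photo, the per-class validation selection would be done analogously to the training selection).

```python
import numpy as np

def semi_supervised_split(labels, num_train_per_class=20, num_val=500, seed=0):
    """Hypothetical sketch of the Cora/Citeseer/Pubmed split protocol:
    randomly pick `num_train_per_class` nodes per class for training,
    `num_val` of the remaining nodes for validation, the rest for test."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)

    # Per-class training selection.
    train_idx = []
    for c in np.unique(labels):
        class_nodes = np.flatnonzero(labels == c)
        train_idx.extend(
            rng.choice(class_nodes, size=num_train_per_class, replace=False)
        )
    train_idx = np.array(train_idx)

    # Validation and test come from the nodes not used for training.
    remaining = rng.permutation(np.setdiff1d(np.arange(len(labels)), train_idx))
    val_idx = remaining[:num_val]
    test_idx = remaining[num_val:]
    return train_idx, val_idx, test_idx
```

For example, on a toy graph with 3 classes of 100 nodes each, `semi_supervised_split(labels, 20, 50)` yields 60 training, 50 validation, and 190 test nodes, with no overlap between the three sets.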
Hardware Specification | No | The paper states that NCLA was implemented in PyTorch and Deep Graph Library, but it does not specify the hardware used to run the experiments, such as GPU or CPU models.
Software Dependencies | Yes | "The proposed NCLA was implemented in PyTorch 1.10.1 (Paszke et al. 2019) and Deep Graph Library 0.6.1 (Wang et al. 2019)."
Experiment Setup | Yes | "The hyperparameters of NCLA on the five datasets are specified in Table 2."