HeterGCL: Graph Contrastive Learning Framework on Heterophilic Graph

Authors: Chenhao Wang, Yong Liu, Yan Yang, Wei Li

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on homophilic and heterophilic graphs demonstrate that HeterGCL outperforms existing self-supervised and semi-supervised baselines across various downstream tasks.
Researcher Affiliation | Collaboration | Chenhao Wang (Heilongjiang University), Yong Liu (Heilongjiang University), Yan Yang (Heilongjiang University) and Wei Li (Harbin Engineering University)
Pseudocode | No | The paper does not contain any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Our source code is available at https://github.com/Incendio1/HeterGCL.
Open Datasets | Yes | We evaluated the performance of HeterGCL and existing methods on seven representative homophilic or heterophilic datasets. Specifically, for the heterophilic datasets, we select three webpage datasets, Cornell, Texas, Wisconsin, and an actor co-occurrence network, Actor [Pei et al., 2020]. For the homophilic datasets, we select the widely used standard citation networks Cora, Citeseer, and Pubmed [Sen et al., 2008]. (A loading sketch follows the table.)
Dataset Splits | Yes | Each dataset is randomly split into training/validation/test sets with 10%/10%/80%. (A split sketch follows the table.)
Hardware Specification | Yes | The experiments are conducted on a single NVIDIA GeForce RTX 3090 machine.
Software Dependencies | No | The paper mentions the use of an Adam optimizer and MLP, but does not provide specific version numbers for any software libraries, programming languages, or other dependencies.
Experiment Setup | Yes | In addition, we perform a grid search to tune the hyperparameters and use the Adam optimizer, selecting the learning rate from {5e-3, 6e-3, 2e-2}. For HeterGCL, we search for λ and α in steps of 0.01 from 0 to 5. The dropout rate is searched from {0, 0.1, 0.5}. The weight decay is adjusted from {5e-4, 5e-3, 3e-3}. L is searched in steps of 1 from 1 to 10. (A search-space sketch follows the table.)
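The seven benchmarks named in the Open Datasets row are all available through PyTorch Geometric's built-in loaders. A minimal loading sketch, assuming the torch_geometric package; the authors' repository may ship its own data-loading code:

```python
# Load the seven benchmark datasets named in the paper via PyTorch Geometric.
# Assumes torch_geometric is installed; the authors' repo may load data differently.
from torch_geometric.datasets import Planetoid, WebKB, Actor

homophilic = {name: Planetoid(root="data", name=name)
              for name in ("Cora", "CiteSeer", "PubMed")}
heterophilic = {name: WebKB(root="data", name=name)
                for name in ("Cornell", "Texas", "Wisconsin")}
heterophilic["Actor"] = Actor(root="data/Actor")

for name, dataset in {**homophilic, **heterophilic}.items():
    data = dataset[0]  # each benchmark is a single graph
    print(name, data.num_nodes, data.num_edges, dataset.num_classes)
```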
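The 10%/10%/80% random node split from the Dataset Splits row can be reproduced along these lines; the seed handling and boolean-mask style are assumptions, not the authors' code:

```python
# Random 10%/10%/80% train/validation/test node split, as described in the paper.
import torch

def random_split(num_nodes: int, train_frac: float = 0.1,
                 val_frac: float = 0.1, seed: int = 0):
    """Return boolean train/val/test masks over the nodes of one graph."""
    perm = torch.randperm(num_nodes, generator=torch.Generator().manual_seed(seed))
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    train_mask = torch.zeros(num_nodes, dtype=torch.bool)
    val_mask = torch.zeros(num_nodes, dtype=torch.bool)
    test_mask = torch.zeros(num_nodes, dtype=torch.bool)
    train_mask[perm[:n_train]] = True
    val_mask[perm[n_train:n_train + n_val]] = True
    test_mask[perm[n_train + n_val:]] = True  # remaining 80% of nodes
    return train_mask, val_mask, test_mask
```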
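The search space from the Experiment Setup row can be written down directly. In the sketch below, only the value ranges come from the paper; train_and_evaluate is a hypothetical stand-in for the HeterGCL training loop. Because λ and α each take 501 values, the full grid has roughly 6.8e7 points, so random sampling is shown instead of an exhaustive sweep:

```python
# Hyperparameter search space reported in the paper. Only the ranges are
# from the paper; train_and_evaluate is a hypothetical training routine.
import random

grid = {
    "lr": [5e-3, 6e-3, 2e-2],                          # Adam learning rate
    "lam": [round(0.01 * i, 2) for i in range(501)],   # lambda: 0..5, step 0.01
    "alpha": [round(0.01 * i, 2) for i in range(501)], # alpha: 0..5, step 0.01
    "dropout": [0.0, 0.1, 0.5],
    "weight_decay": [5e-4, 5e-3, 3e-3],
    "L": list(range(1, 11)),                           # 1..10, step 1
}

best_params, best_acc = None, float("-inf")
for _ in range(100):  # sample the grid rather than enumerating ~6.8e7 points
    params = {k: random.choice(v) for k, v in grid.items()}
    acc = train_and_evaluate(**params)  # hypothetical: train HeterGCL, return val accuracy
    if acc > best_acc:
        best_params, best_acc = params, acc
```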