Finding the Missing-half: Graph Complementary Learning for Homophily-prone and Heterophily-prone Graphs

Authors: Yizhen Zheng, He Zhang, Vincent Lee, Yu Zheng, Xiao Wang, Shirui Pan

ICML 2023

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | The experimental results show that GOAL consistently outperforms all baselines on eight real-world datasets. The authors first conduct node classification experiments to evaluate the effectiveness of GOAL on eight real-world datasets, then perform ablation studies of GOAL. |
| Researcher Affiliation | Academia | Monash University, La Trobe University, Beihang University, Griffith University. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | "Our code is available at https://github.com/zyzisastudyreallyhardguy/GOAL-Graph-Complementary-Learning" |
| Open Datasets | Yes | The adopted datasets include three citation networks (Cora, Citeseer, and Pubmed) (Yang et al., 2016), two Amazon co-purchasing networks (Computers and Photo) (Shchur et al., 2018), two Wikipedia graphs (Chameleon and Squirrel) (Rozemberczki et al., 2021), and the Actor co-occurrence graph (Pei et al., 2020). |
| Dataset Splits | Yes | The node set of each dataset is randomly split according to a 60%/20%/20% ratio into training, validation, and test sets. |
| Hardware Specification | No | The paper does not specify the hardware used to run the experiments, such as GPU or CPU models or memory capacity. |
| Software Dependencies | No | The paper discusses methods and models such as GNNs, GAT, MLP, and the ListNet loss, but it does not list software dependencies with version numbers (e.g., specific Python, PyTorch, or TensorFlow versions) needed for replication. |
| Experiment Setup | Yes | The node set of each dataset is randomly split 60%/20%/20% into training, validation, and test sets; the split is regenerated 10 times, and each baseline and GOAL is run on every split. The proposed Complemented Graph Convolution method may be hard to tune, as it has four hyper-parameters, α, β, γ, and δ, which control the weight of each convolution component in Equation 8. |
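The split protocol described above (a random 60%/20%/20% node split, regenerated 10 times) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name `random_split`, the seeding scheme, and the use of NumPy are assumptions.

```python
import numpy as np

def random_split(num_nodes, seed, ratios=(0.6, 0.2, 0.2)):
    """Sketch of the 60%/20%/20% random node split described in the paper.

    The exact seeding and index ordering used by the authors are unknown;
    this is one plausible implementation.
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)          # shuffle all node indices
    n_train = int(ratios[0] * num_nodes)
    n_val = int(ratios[1] * num_nodes)
    train = perm[:n_train]
    val = perm[n_train:n_train + n_val]
    test = perm[n_train + n_val:]
    return train, val, test

# The paper regenerates the split 10 times and runs each baseline and GOAL
# on every split; e.g., Cora has 2708 nodes.
splits = [random_split(2708, seed) for seed in range(10)]
```

Averaging results over the 10 splits (as the evaluation protocol implies) reduces the variance introduced by any single random partition.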
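The tuning difficulty noted for the Complemented Graph Convolution comes from the four weights α, β, γ, and δ in Equation 8. Since that equation is not reproduced in this report, the sketch below only illustrates the general shape of such a weighted combination of four convolution components; the specific components (identity, low-pass, heterophilous, and high-pass terms) are hypothetical stand-ins, not the paper's actual definitions.

```python
import numpy as np

def complemented_graph_convolution(X, A_homo, A_hetero,
                                   alpha, beta, gamma, delta):
    """Hypothetical sketch of a four-term weighted convolution.

    X: node feature matrix (n x d); A_homo / A_hetero: normalized adjacency
    matrices of the homophily-prone and heterophily-prone graphs. The four
    components below are illustrative assumptions about Equation 8.
    """
    n = A_homo.shape[0]
    I = np.eye(n)
    H = (alpha * X                        # self features
         + beta * (A_homo @ X)            # homophilous propagation
         + gamma * (A_hetero @ X)         # heterophilous propagation
         + delta * ((I - A_homo) @ X))    # high-pass (difference) term
    return H
```

With four continuous weights, a grid search grows quartically in the number of values tried per weight, which is consistent with the report's remark that the method "may be hard to tune."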