Proximity Enhanced Graph Neural Networks with Channel Contrast

Authors: Wei Zhuo, Guang Tan

IJCAI 2022

Each entry below gives a reproducibility variable, the assessed result, and the supporting LLM response.

Research Type: Experimental. "Extensive experiments on six assortative graphs and three disassortative graphs demonstrate the effectiveness of our approach."
Researcher Affiliation: Academia. Shenzhen Campus of Sun Yat-sen University, China (zhuow5@mail2.sysu.edu.cn, tanguang@mail.sysu.edu.cn).
Pseudocode: Yes. "The details of PE-GCL are described in Algorithm 1" (Appendix A, "The Pseudocode of PE-GCL").
Open Source Code: Yes. "The source code, hyper-parameter settings and complexity analysis are available at https://github.com/JhuoW/PE-GCL."
Open Datasets: Yes. "We conduct node classification experiments on the following commonly used six assortative graphs (Cora, CiteSeer, PubMed, WikiCS, ACM and Coauthor CS) [Kipf and Welling, 2017; Mernyei and Cangea, 2020; Sinha et al., 2015] and three disassortative graphs (Texas, Chameleon and Actor) [Pei et al., 2019]."
Dataset Splits: Yes. "For assortative graphs, we use ten splits of the nodes into 10%/10%/80% for train/validation/test nodes over 10 random seeds, and run 10 times for each split to report average accuracy with standard deviation."
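As a concrete illustration of this protocol, the sketch below regenerates 10%/10%/80% train/validation/test node masks over 10 random seeds. It assumes PyTorch Geometric and uses Cora as the example dataset; the authors' released code may implement the splits differently.

```python
# Hedged sketch of the split protocol quoted above: 10%/10%/80% node splits
# over 10 random seeds. PyTorch Geometric and the helper below are our
# assumptions, not taken from the authors' repository.
import torch
from torch_geometric.datasets import Planetoid

dataset = Planetoid(root="data/Cora", name="Cora")
data = dataset[0]

def random_split(num_nodes, seed, train_frac=0.1, val_frac=0.1):
    """Return boolean train/val/test masks for one random seed."""
    gen = torch.Generator().manual_seed(seed)
    perm = torch.randperm(num_nodes, generator=gen)
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    train_mask = torch.zeros(num_nodes, dtype=torch.bool)
    val_mask = torch.zeros(num_nodes, dtype=torch.bool)
    test_mask = torch.zeros(num_nodes, dtype=torch.bool)
    train_mask[perm[:n_train]] = True
    val_mask[perm[n_train:n_train + n_val]] = True
    test_mask[perm[n_train + n_val:]] = True  # remaining ~80% of nodes
    return train_mask, val_mask, test_mask

# Ten independent splits, one per seed, as in the evaluation protocol.
splits = [random_split(data.num_nodes, seed) for seed in range(10)]
```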
Hardware Specification: No. The paper does not specify the hardware used for the experiments, such as GPU or CPU models, or any cloud computing resources.
Software Dependencies: No. The paper mentions using GAT as the encoder and an MLP as the projection head, but it does not give software versions for these components or for supporting libraries such as PyTorch or TensorFlow.
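Given the missing version information, the following is a minimal sketch of the two components the paper does name: a GAT encoder feeding an MLP projection head. It assumes PyTorch and PyTorch Geometric; the depth, attention heads, and dimensions are illustrative choices, not the authors' configuration.

```python
# Minimal sketch of the named components: GAT encoder + MLP projection head.
# Framework (PyTorch / PyTorch Geometric) and all sizes are assumptions.
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import GATConv

class GATEncoder(nn.Module):
    """Two-layer GAT producing node embeddings."""
    def __init__(self, in_dim, hid_dim, heads=8):
        super().__init__()
        self.conv1 = GATConv(in_dim, hid_dim, heads=heads)
        self.conv2 = GATConv(hid_dim * heads, hid_dim, heads=1)

    def forward(self, x, edge_index):
        x = F.elu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

class ProjectionHead(nn.Module):
    """Two-layer MLP mapping embeddings into the contrastive space."""
    def __init__(self, hid_dim, proj_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hid_dim, proj_dim),
            nn.ELU(),
            nn.Linear(proj_dim, proj_dim),
        )

    def forward(self, z):
        return self.net(z)
```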
Experiment Setup: Yes. "In practice, kmax is a hyperparameter and kmax < 10 is found to work well." On sensitivity analysis: "Figure 3(b) shows the impact of the hidden layer sizes of the GNN encoder and the MLP on the classification results of PE-GCL on Cora. As the dimensionality increases, performance first improves and then stabilizes once the dimensionality reaches 256."
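The reported plateau can be checked with a simple sweep over hidden sizes, sketched below. train_and_eval is a hypothetical placeholder for one full PE-GCL training run; it is not part of the paper's released code.

```python
# Hypothetical sensitivity sweep over hidden sizes; per Figure 3(b),
# accuracy should rise and then plateau once the dimensionality hits 256.
def train_and_eval(hid_dim: int) -> float:
    """Placeholder: train PE-GCL with this hidden size and return mean
    test accuracy. Replace with a real training/evaluation run."""
    raise NotImplementedError

for hid_dim in (64, 128, 256, 512):
    acc = train_and_eval(hid_dim)
    print(f"hidden={hid_dim}: accuracy={acc:.4f}")
```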