Curriculum-Enhanced Residual Soft An-Isotropic Normalization for Over-Smoothness in Deep GNNs

Authors: Jin Li, Qirong Zhang, Shuling Xu, Xinlong Chen, Longkun Guo, Yang-Geng Fu

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, extensive experiments are carried out to demonstrate the effectiveness and potential of the proposed model and learning framework through comparison with twelve existing baselines including the state-of-the-art methods on twelve real-world node classification benchmarks."
Researcher Affiliation | Academia | (1) College of Computer and Data Science, Fuzhou University, Fuzhou, China; (2) AI Thrust, Information Hub, HKUST (Guangzhou), Guangzhou, China; (3) Shandong Provincial Key Laboratory of Computer Networks, Shandong Fundamental Research Center for Computer Science, Jinan, China
Pseudocode | Yes | "A GCN-based implementation of the whole structure is summarized in Algorithm 1 in supplementary materials with a line-by-line description therein."
Open Source Code | Yes | "The complete version of this paper can be found at https://arxiv.org/abs/2312.08221 with all codes at https://github.com/jslijin/Research-Paper-Codes/."
Open Datasets | Yes | "For a fair comparison, twelve real-world public benchmarks are chosen, including two kinds: 1) eight graphs with homophily: four widely used scientific citation networks (i.e., Cora, Citeseer, Pubmed (Sen et al. 2008), and a large-scale graph OGBN-ArXiv (Hu et al. 2020)), scientific co-authorship networks Physics and CS (Mernyei and Cangea 2020), as well as Amazon purchasing networks Computers and Photo (Shchur et al. 2018); 2) four graphs with heterophily: webpage datasets Texas, Wisconsin, and Cornell (Pei et al. 2020) as well as an actor co-occurrence network Actor (Tang et al. 2009)."
Dataset Splits | Yes | "Their statistics and adopted splits are summarized in Tab. 4 in supplementary materials. We adopt the standard semi-supervised training/validation/testing splits for them following prior works (Kipf and Welling 2017; Chen et al. 2020, 2022)."
Hardware Specification | Yes | "They are performed on an Ubuntu system with a single GeForce RTX 2080Ti GPU (12GB memory) and 40 Intel(R) Xeon(R) Silver 4210 CPUs."
Software Dependencies | No | "And the proposed model is implemented by PyTorch (Paszke et al. 2019) and optimized with Adam Optimizer." The paper names PyTorch and the Adam optimizer but does not provide specific version numbers for them.
Experiment Setup | No | "Due to space limitations, some experimental details are given in supplementary materials including dataset descriptions, implementations, omitted results (e.g., with other layers, with different splits, comparisons with more baselines on heterophilous graphs, as well as standard errors), hyper-parameters (searching spaces and specific configurations), and some more visualizations."