Gapformer: Graph Transformer with Graph Pooling for Node Classification

Authors: Chuang Liu, Yibing Zhan, Xueqi Ma, Liang Ding, Dapeng Tao, Jia Wu, Wenbin Hu

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on 13 node classification datasets, including homophilic and heterophilic graph datasets, demonstrate the competitive performance of Gapformer over existing Graph Neural Networks and GTs.
Researcher Affiliation | Collaboration | Chuang Liu (1), Yibing Zhan (2), Xueqi Ma (3), Liang Ding (2), Dapeng Tao (4,5), Jia Wu (6), and Wenbin Hu (1). 1: School of Computer Science, Wuhan University, Wuhan, China; 2: JD Explore Academy, JD.com, China; 3: School of Computing and Information Systems, The University of Melbourne, Melbourne, Australia; 4: School of Computer Science, Yunnan University, Kunming, China; 5: Yunnan Key Laboratory of Media Convergence, Kunming, China; 6: School of Computing, Macquarie University, Sydney, Australia
Pseudocode | No | The paper describes its methodology using mathematical equations and prose but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide a concrete link or explicit statement about the availability of its own open-source code for the described methodology.
Open Datasets | Yes | All the adopted graph datasets, except ogbn-arxiv, can be downloaded from PyTorch Geometric (PyG) [Fey and Lenssen, 2019], and ogbn-arxiv can be downloaded from the Open Graph Benchmark (OGB). (A loading sketch follows the table.)
Dataset Splits | Yes | Specifically, for the Cora, Citeseer, and Pubmed datasets, we follow the 48%/32%/20% split proposed in [Pei et al., 2020]. The same splits used by [Zhu et al., 2020] and [Liu et al., 2022a] are adopted for the four heterophilic graph datasets. All other datasets are randomly split into 60%/20%/20% training/validation/test sets following [Zhang et al., 2022]. (A split sketch follows the table.)
Hardware Specification | Yes | All experiments are conducted on a Linux server with two NVIDIA A100 GPUs.
Software Dependencies | Yes | Our implementation of Gapformer is developed using Python (3.7.0), PyTorch (1.11.0), and PyTorch Geometric (2.2.0).
Experiment Setup | Yes | For ease of tuning, some hyperparameters are fixed: dropout at 0.5, weight decay at 5e-4, and position encoding dimension at 20; the hidden dimension is chosen from {64, 128, 256}. (A configuration sketch follows the table.)
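
The dataset sources cited in the table map onto standard public loaders. Below is a minimal sketch, assuming PyG's `Planetoid` loader for the citation graphs and OGB's `PygNodePropPredDataset` for ogbn-arxiv; the root directories are illustrative and this is not the authors' data pipeline.

```python
# Hedged sketch: fetch two of the named datasets with their public loaders.
from torch_geometric.datasets import Planetoid
from ogb.nodeproppred import PygNodePropPredDataset

# Cora/Citeseer/Pubmed are available through PyG's Planetoid loader.
cora = Planetoid(root="data/planetoid", name="Cora")
# ogbn-arxiv is distributed through the Open Graph Benchmark package.
arxiv = PygNodePropPredDataset(name="ogbn-arxiv", root="data/ogb")

print(cora[0])   # Data(x=[2708, 1433], edge_index=[2, 10556], y=[2708], ...)
print(arxiv[0])  # Data(num_nodes=169343, x=[169343, 128], ...)
```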
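For the datasets that are split randomly into 60%/20%/20% training/validation/test sets, the sketch below shows one way such node masks can be generated. The helper name and seed handling are assumptions for illustration, not the paper's split code.

```python
# Hedged sketch of a random 60%/20%/20% node split into boolean masks.
import torch

def random_node_split(num_nodes, train_ratio=0.6, val_ratio=0.2, seed=0):
    # Shuffle node indices with a fixed seed so the split is reproducible.
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(num_nodes, generator=g)
    n_train = int(train_ratio * num_nodes)
    n_val = int(val_ratio * num_nodes)

    train_mask = torch.zeros(num_nodes, dtype=torch.bool)
    val_mask = torch.zeros(num_nodes, dtype=torch.bool)
    test_mask = torch.zeros(num_nodes, dtype=torch.bool)
    train_mask[perm[:n_train]] = True
    val_mask[perm[n_train:n_train + n_val]] = True
    test_mask[perm[n_train + n_val:]] = True
    return train_mask, val_mask, test_mask

# Usage on a Cora-sized graph (2708 nodes); the remaining 20% becomes the test set.
train_mask, val_mask, test_mask = random_node_split(2708)
```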
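The reported hyperparameters translate directly into a small search grid. The sketch below assumes a plain dictionary-and-product layout; the variable names are illustrative and do not come from the authors' tuning script.

```python
# Hedged sketch of the reported hyperparameter settings and the tuned grid.
from itertools import product

fixed = {"dropout": 0.5, "weight_decay": 5e-4, "pe_dim": 20}   # values stated in the paper
grid = {"hidden_dim": [64, 128, 256]}                          # tuned over this set

for values in product(*grid.values()):
    config = {**fixed, **dict(zip(grid.keys(), values))}
    print(config)  # e.g., {'dropout': 0.5, 'weight_decay': 0.0005, 'pe_dim': 20, 'hidden_dim': 64}
```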