Less is More: on the Over-Globalizing Problem in Graph Transformers

Authors: Yujie Xing, Xiao Wang, Yibo Li, Hai Huang, Chuan Shi

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on various graphs well validate the effectiveness of our proposed CoBFormer. Table 2 reports the experimental results on node classification.
Researcher Affiliation | Academia | (1) School of Computer Science, Beijing University of Posts and Telecommunications, Beijing, China; (2) School of Software, Beihang University, Beijing, China.
Pseudocode | No | The paper describes the proposed method verbally and mathematically but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | The source code is available for reproducibility at: https://github.com/null-xyj/CoBFormer.
Open Datasets | Yes | We select seven datasets to evaluate, including homophilic graphs, i.e., Cora, CiteSeer, Pubmed (Yang et al., 2016), Ogbn-Arxiv, Ogbn-Products (Hu et al., 2020), and heterophilic graphs, i.e., Actor and Deezer (Lim et al., 2021b).
Dataset Splits | Yes | For Actor and Deezer, we perform five random splits of the nodes into train/valid/test sets, with the ratio of 50%:25%:25% (Lim et al., 2021b).
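The 50%:25%:25% random node split quoted above can be sketched as follows. This is an illustrative helper, not code from the paper; the function name, seed handling, and node count are assumptions.

```python
import random

def random_node_split(num_nodes, ratios=(0.5, 0.25, 0.25), seed=0):
    """Randomly partition node indices into train/valid/test sets."""
    rng = random.Random(seed)          # fixed seed => reproducible split
    idx = list(range(num_nodes))
    rng.shuffle(idx)
    n_train = int(ratios[0] * num_nodes)
    n_valid = int(ratios[1] * num_nodes)
    train = idx[:n_train]
    valid = idx[n_train:n_train + n_valid]
    test = idx[n_train + n_valid:]     # remainder goes to the test set
    return train, valid, test

# One of the five random splits would use a different seed each time.
train, valid, test = random_node_split(1000, seed=0)
```

Repeating this with five different seeds and averaging the resulting test accuracies would match the five-random-splits protocol described in the quote.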
Hardware Specification | No | The paper mentions 'GPU memory' but does not provide specific hardware details such as GPU or CPU models, processor types, or memory amounts used for experiments.
Software Dependencies | No | The paper mentions software like PyG and GammaGL but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | Yes | The hyperparameters are selected through grid search within the following search space: learning rate within {5e-4, 1e-3, 5e-3, 1e-2, 5e-2}; GCN layers within {2, 3}; weight decay of GCN within {1e-4, 5e-4, 1e-3, 5e-3, 1e-2}; weight decay of BGA within {1e-5, 5e-5, 1e-4, 5e-4, 1e-3}.
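The grid search quoted above enumerates the Cartesian product of the four hyperparameter ranges. A minimal sketch, assuming a simple exhaustive enumeration (the dictionary keys and helper name are illustrative, not from the paper):

```python
from itertools import product

# Search space mirroring the grid reported in the quote above.
search_space = {
    "lr":         [5e-4, 1e-3, 5e-3, 1e-2, 5e-2],
    "gcn_layers": [2, 3],
    "wd_gcn":     [1e-4, 5e-4, 1e-3, 5e-3, 1e-2],
    "wd_bga":     [1e-5, 5e-5, 1e-4, 5e-4, 1e-3],
}

def grid(space):
    """Yield every hyperparameter combination as a dict."""
    keys = list(space)
    for values in product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(grid(search_space))
# 5 * 2 * 5 * 5 = 250 candidate configurations in total
```

Each configuration would then be trained and scored on the validation split, keeping the best-performing combination.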