Gradformer: Graph Transformer with Exponential Decay
Authors: Chuang Liu, Zelin Yao, Yibing Zhan, Xueqi Ma, Shirui Pan, Wenbin Hu
IJCAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on various benchmarks demonstrate that Gradformer consistently outperforms the Graph Neural Network and GT baseline models in various graph classification and regression tasks. Additionally, Gradformer has proven to be an effective method for training deep GT models, maintaining or even enhancing accuracy compared to shallow models as the network deepens, in contrast to the significant accuracy drop observed in other GT models. Codes are available at https://github.com/LiuChuang0059/Gradformer. (A sketch of the exponential-decay attention mask follows this table.) |
| Researcher Affiliation | Collaboration | Chuang Liu¹, Zelin Yao¹, Yibing Zhan², Xueqi Ma³, Shirui Pan⁴, Wenbin Hu¹. ¹School of Computer Science, Wuhan University, Wuhan, China; ²JD Explore Academy, JD.com, China; ³School of Computing and Information Systems, The University of Melbourne, Melbourne, Australia; ⁴School of Information and Communication Technology, Griffith University, Brisbane, Australia |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Codes are available at https://github.com/LiuChuang0059/Gradformer. |
| Open Datasets | Yes | We utilize nine commonly-used real-world datasets from various sources to ensure diversity, including five graph datasets from the TU database [Morris et al., 2020] (i.e., NCI1, PROTEINS, MUTAG, IMDB-B, and COLLAB), three datasets from Benchmarking GNN [Dwivedi et al., 2023] (i.e., PATTERN, CLUSTER, and ZINC), and one dataset from OGB [Hu et al., 2020] (i.e., OGBG-MOLHIV), involving diverse domains (e.g., synthetic, social, biology, and chemistry), sizes (e.g., ZINC and OGBG-MOLHIV are large datasets), and tasks (e.g., node classification, graph classification and regression). (A dataset-loading sketch follows this table.) |
| Dataset Splits | Yes | For all datasets, we strictly follow the evaluation metrics and dataset split recommended by the given benchmarks [Ying et al., 2021]. Accordingly, we report the average test accuracy/AUROC/MAE based on the epoch when the best validation accuracy/AUROC/MAE is achieved. (An evaluation-protocol sketch follows this table.) |
| Hardware Specification | Yes | Furthermore, all the experiments are conducted on a server equipped with 8 NVIDIA A100s. |
| Software Dependencies | No | The paper does not specify the software dependencies or library versions required to run the experiments. |
| Experiment Setup | No | The paper mentions running 10 trials with different random seeds and utilizing recommended settings for baselines, but does not provide a complete description of the experimental setup (e.g., hyperparameter values). |
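The Research Type row quotes the abstract's results claims; the mechanism behind them is the exponential decay mask named in the title, which damps attention between nodes as their graph distance grows. Below is a minimal PyTorch sketch of that idea, assuming the mask is a decay rate gamma raised to the shortest-path distance and multiplied element-wise into the attention weights. The function name, the sigmoid parameterization of gamma, and the post-softmax placement of the mask are illustrative assumptions, not the authors' exact implementation (see their repository for that).

```python
import torch
import torch.nn.functional as F

def decay_masked_attention(q, k, v, spd, log_gamma):
    """Single-head self-attention with an exponential decay mask (sketch).

    q, k, v:   [n, d] per-node queries, keys, values
    spd:       [n, n] float tensor of shortest-path distances
    log_gamma: scalar tensor; sigmoid keeps the decay rate gamma in (0, 1)
    """
    d = q.size(-1)
    scores = (q @ k.transpose(-2, -1)) / d ** 0.5   # [n, n] raw attention scores
    gamma = torch.sigmoid(log_gamma)
    mask = gamma ** spd                             # decays exponentially with distance
    attn = F.softmax(scores, dim=-1) * mask         # damp attention to distant nodes
    attn = attn / attn.sum(dim=-1, keepdim=True)    # renormalize each row
    return attn @ v

# Toy usage: 4 nodes on a path graph, 8-dim features, hypothetical SPD matrix.
n, d = 4, 8
q = k = v = torch.randn(n, d)
spd = torch.tensor([[0., 1., 2., 3.],
                    [1., 0., 1., 2.],
                    [2., 1., 0., 1.],
                    [3., 2., 1., 0.]])
out = decay_masked_attention(q, k, v, spd, torch.tensor(1.0))
```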
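All nine datasets named in the Open Datasets row are obtainable through standard loaders. As a convenience, here is a minimal sketch using PyTorch Geometric and the OGB package for one dataset from each source; the `root` paths are placeholders, and `subset=True` for ZINC is an assumption based on the 12k subset commonly used with the Benchmarking GNN suite.

```python
from torch_geometric.datasets import TUDataset, ZINC
from ogb.graphproppred import PygGraphPropPredDataset

# TU database graph classification dataset (one of NCI1/PROTEINS/MUTAG/IMDB-B/COLLAB)
nci1 = TUDataset(root="data/TU", name="NCI1")

# Benchmarking GNN regression dataset, with its predefined splits
zinc_train = ZINC(root="data/ZINC", subset=True, split="train")

# OGB molecular dataset; get_idx_split() returns the official split indices
molhiv = PygGraphPropPredDataset(name="ogbg-molhiv", root="data/OGB")
split = molhiv.get_idx_split()
```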
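The Dataset Splits and Experiment Setup rows together describe the evaluation protocol: run 10 trials with different random seeds and, per trial, report the test metric from the epoch with the best validation metric. A minimal sketch of that bookkeeping, assuming higher-is-better metrics such as accuracy or AUROC (for ZINC's MAE the comparison flips):

```python
import statistics

def test_at_best_val(val_history, test_history, higher_is_better=True):
    """Return the test metric at the epoch where validation peaked."""
    pick = max if higher_is_better else min
    best_epoch = pick(range(len(val_history)), key=val_history.__getitem__)
    return test_history[best_epoch]

# Hypothetical per-seed histories: one (val, test) metric list pair per run.
runs = [([0.80, 0.84, 0.83], [0.78, 0.82, 0.81]),
        ([0.79, 0.81, 0.85], [0.77, 0.80, 0.83])]
scores = [test_at_best_val(v, t) for v, t in runs]
print(f"{statistics.mean(scores):.3f} ± {statistics.stdev(scores):.3f}")
```

The paper would average such scores over 10 seeded runs rather than the two shown here.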