Feature Expansion for Graph Neural Networks
Authors: Jiaqi Sun, Lin Zhang, Guangyi Chen, Peng Xu, Kun Zhang, Yujiu Yang
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments verify the effectiveness of our proposed more comprehensive feature space, with comparable inference time to the baseline, and demonstrate its efficient convergence capability. Extensive experiments are performed on both homophilic and heterophilic datasets, and our proposal achieves significant improvements, e.g. an average accuracy increase of 32% on heterophilic graphs. |
| Researcher Affiliation | Collaboration | ¹Shenzhen International Graduate School, Tsinghua University, China; ²International Digital Economy Academy, China; ³Carnegie Mellon University, USA; ⁴Mohamed bin Zayed University of Artificial Intelligence, UAE; ⁵Chinese University of Hong Kong, China. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/sajqavril/Feature-Extension-Graph-Neural-Networks.git |
| Open Datasets | Yes | Cora, CiteSeer, and PubMed are commonly used homophilic citation networks (Yang et al., 2016). Computers and Photo are homophilic co-bought networks from Amazon (Shchur et al., 2018). For heterophilic datasets, we utilize the hyperlinked networks Squirrel and Chameleon from (Pei et al., 2020), and Actor, a subgraph from the film-director-actor network (Rozemberczki et al., 2021). PyG is employed to get these data. |
| Dataset Splits | Yes | Each dataset is split into three parts using random selection: 60% as the training set, 20% as the validation set, and 20% as the test set. |
| Hardware Specification | Yes | We conduct all the experiments on the machine with NVIDIA 3090 GPU (24G) and Intel(R) Xeon(R) Platinum 8260L CPU @ 2.30GHz. |
| Software Dependencies | No | The paper mentions using PyG and Adam for optimization, but does not specify version numbers for these or other software dependencies. |
| Experiment Setup | Yes | For FE-GNN, we tune the following hyper-parameters by grid search. Learning rate: {0.01, 0.05, 0.1}; weight decay: {0.0005, 0.001, 0.005, 0.01, 0.02, 0.05}; \|S\| for homophilic graphs: {0, 10, 50, 100, 200, 500, 1000, 2000}; \|S\| for heterophilic graphs: {500, 600, 700, 800, 900, 1000, 1500, 2000}; suggested \|S\|: the whole hundred from the 94% singular values; hidden size: 64; ranks k of the polynomial Pk(ˆL): {0, 1, 2, 3}. We employ Adam for optimization and set the early stopping criteria as a warmup of 50 plus a patience of 200 for a maximum of 100 epochs. |
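The reported split protocol and search grid are simple to reproduce. Below is a minimal sketch in pure Python (no PyG dependency) of the random 60/20/20 node split and the enumeration of the hyper-parameter grid; the function name `random_split`, the seed, and the use of Cora's node count are illustrative assumptions, not code from the paper's repository.

```python
import itertools
import random

def random_split(num_nodes, train_frac=0.6, val_frac=0.2, seed=0):
    """Randomly partition node indices into 60/20/20 train/val/test sets,
    mirroring the split protocol reported in the table above."""
    idx = list(range(num_nodes))
    random.Random(seed).shuffle(idx)  # fixed seed for reproducibility
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# Grid reported for FE-GNN (|S| values shown are the homophilic setting).
grid = {
    "lr": [0.01, 0.05, 0.1],
    "weight_decay": [0.0005, 0.001, 0.005, 0.01, 0.02, 0.05],
    "S": [0, 10, 50, 100, 200, 500, 1000, 2000],
    "k": [0, 1, 2, 3],
}

train, val, test = random_split(2708)  # Cora has 2708 nodes
configs = [dict(zip(grid, vals)) for vals in itertools.product(*grid.values())]
print(len(train), len(val), len(test), len(configs))  # → 1624 541 543 576
```

Each of the 576 configurations would then be trained with Adam under the stated early-stopping rule, keeping the configuration with the best validation accuracy.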