Graph Neural Networks with Learnable and Optimal Polynomial Bases

Authors: Yuhe Guo, Zhewei Wei

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we conduct a series of comprehensive experiments to demonstrate the effectiveness of the proposed methods. Experiments consist of node classification tasks on small and large graphs, the learning of multi-channel filters, and a comparison of FavardGNN and OptBasisGNN.
Researcher Affiliation | Academia | 1 Gaoling School of Artificial Intelligence, Renmin University of China; 2 Peng Cheng Laboratory; 3 Beijing Key Laboratory of Big Data Management and Analysis Methods; 4 MOE Key Lab of Data Engineering and Knowledge Engineering. Correspondence to: Zhewei Wei <zhewei@ruc.edu.cn>.
Pseudocode | Yes | Algorithm 1: FAVARDFILTERING; Algorithm 2: FAVARDGNN (for classification); Algorithm 3: An Unreachable Algorithm for Utilizing Optimal Basis; Algorithm 4: OPTBASISFILTERING; Algorithm 5: OBTAINNEXTBASISVECTOR; Algorithm 6: OPTBASISFILTERING; Algorithm 7: FavardGNN (PyTorch style); Algorithm 8: OptBasisGNN (PyTorch style). [A hedged recurrence sketch follows the table.]
Open Source Code | Yes | Our code is available at https://github.com/yuziGuo/FarOptBasis.
Open Datasets | Yes | We include medium-sized graph datasets conventionally used in preceding graph filtering works, including three heterophilic datasets (Chameleon, Squirrel, Actor) provided by Pei et al. (2020) and two citation datasets (PubMed, Citeseer) provided by Yang et al. (2016) and Sen et al. (2008). We perform node classification tasks on two large citation networks: ogbn-arxiv and ogbn-papers100M (Hu et al., 2020), and five large non-homophilic networks from the LINKX datasets (Lim et al., 2021). [A dataset-loading sketch follows the table.]
Dataset Splits | Yes | For all these graphs, we take a 60%/20%/20% train/validation/test split proportion following former works, e.g. Chien et al. (2021). For ogbn datasets, we run repeating experiments on the given split with ten random model seeds... We choose hyperparameters on the validation sets. [A split-generation sketch follows the table.]
Hardware Specification | No | The paper does not specify any hardware details such as GPU/CPU models or memory used for experiments.
Software Dependencies | No | The paper mentions "PyTorch-styled pseudocode" and the use of the Adam optimizer but does not specify version numbers for PyTorch or any other software dependency.
Experiment Setup | Yes | The hidden size h of the first MLP layer is set to 64, which is also the number of filter channels. For the scaled-up OptBasisGNN, we drop the first MLP layer to fix the basis vectors needed for precomputing, and, following the scaled-up version of ChebNetII (He et al., 2022), we add a three-layer MLP with weight matrices of shape F×h, h×h and h×c after the filtering process. For the optimization process on the training sets, we tune all the parameters with the Adam (Kingma & Ba, 2015) optimizer. We use early stopping with a patience of 300 epochs. We choose hyperparameters on the validation sets. To accelerate hyperparameter choosing, we use Optuna (Akiba et al., 2019) to select hyperparameters from the ranges below, with a maximum of 100 complete trials: 1. Truncated order of the polynomial series: K ∈ {2, 4, 8, 12, 16, 20}; 2. Learning rates: {0.0005, 0.001, 0.005, 0.1, 0.2, 0.3, 0.4, 0.5}; 3. Weight decays: {1e-8, ..., 1e-3}; 4. Dropout rates: {0.0, 0.1, ..., 0.9}. [An Optuna search sketch follows the table.]
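
The algorithm list in the Pseudocode row centers on three-term recurrences over propagated node signals. The sketch below is only an orientation for that pattern, not the authors' pseudocode: the tensor names, the dense normalized adjacency `A_hat`, and the single-channel signal shape are all assumptions.

```python
import torch

def favard_style_filtering(A_hat, x, gammas, sqrt_betas, alphas):
    """Hedged sketch of a three-term-recurrence polynomial filter (Favard-style).

    A_hat      : (N, N) dense propagation matrix, assumed to be a normalized adjacency.
    x          : (N,) one channel of the node signal.
    gammas     : (K,) learnable recurrence shifts.
    sqrt_betas : (K+1,) learnable, positive recurrence scales.
    alphas     : (K+1,) learnable filter coefficients.

    Basis vectors follow the generic recurrence
        sqrt_betas[k] * v_k = (A_hat - gammas[k-1] * I) v_{k-1} - sqrt_betas[k-1] * v_{k-2},
    with v_{-1} = 0 and v_0 = x / sqrt_betas[0]; the output is sum_k alphas[k] * v_k.
    """
    v_prev = torch.zeros_like(x)        # v_{-1}
    v_curr = x / sqrt_betas[0]          # v_0
    out = alphas[0] * v_curr
    for k in range(1, alphas.shape[0]):
        v_next = (A_hat @ v_curr
                  - gammas[k - 1] * v_curr
                  - sqrt_betas[k - 1] * v_prev) / sqrt_betas[k]
        out = out + alphas[k] * v_next
        v_prev, v_curr = v_curr, v_next
    return out


def next_basis_vector(A_hat, v_curr, v_prev):
    """Sketch in the spirit of OBTAINNEXTBASISVECTOR: propagate the current basis
    vector, orthogonalize it against the previous two, then normalize."""
    w = A_hat @ v_curr
    w = w - (w @ v_curr) * v_curr - (w @ v_prev) * v_prev
    return w / (w.norm() + 1e-12)
```

Read loosely, a learnable-basis variant trains the recurrence coefficients together with the filter weights, while an optimal-basis variant derives them on the fly from inner products as in `next_basis_vector`; this is how we read the algorithm list, and the repository linked above is the authoritative reference.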
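All datasets named in the Open Datasets row are obtainable through standard loaders. The snippet below is one plausible way to fetch them with PyTorch Geometric and OGB; it is an assumption about tooling rather than a description of the authors' data pipeline, and the `root` cache path is arbitrary.

```python
# One plausible way to fetch the datasets named above; the authors' repository
# may obtain them differently (loaders and root path are assumptions).
from torch_geometric.datasets import WikipediaNetwork, Actor, Planetoid, LINKXDataset
from ogb.nodeproppred import PygNodePropPredDataset

root = "data"  # arbitrary local cache directory

chameleon = WikipediaNetwork(root, name="chameleon")   # heterophilic graphs (Pei et al., 2020)
squirrel  = WikipediaNetwork(root, name="squirrel")
actor     = Actor(f"{root}/actor")
pubmed    = Planetoid(root, name="PubMed")             # citation graphs
citeseer  = Planetoid(root, name="CiteSeer")

arxiv = PygNodePropPredDataset(name="ogbn-arxiv", root=root)
# ogbn-papers100M has over 100M nodes and needs far more disk and memory:
# papers100m = PygNodePropPredDataset(name="ogbn-papers100M", root=root)

# One of the LINKX non-homophilous graphs; the larger LINKX graphs may
# require the original LINKX release rather than this PyG wrapper.
penn94 = LINKXDataset(root, name="penn94")
```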
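For the 60%/20%/20% proportion quoted in the Dataset Splits row, a random node-level split can be generated as sketched below; the authors' exact seeding and any per-class balancing are not restated in the quote, so treat this as an assumption.

```python
import torch

def random_split_masks(num_nodes, train_ratio=0.6, val_ratio=0.2, seed=0):
    """Hedged sketch of a 60%/20%/20% random node split (remainder goes to test)."""
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(num_nodes, generator=g)
    n_train = int(train_ratio * num_nodes)
    n_val = int(val_ratio * num_nodes)
    train_mask = torch.zeros(num_nodes, dtype=torch.bool)
    val_mask = torch.zeros(num_nodes, dtype=torch.bool)
    test_mask = torch.zeros(num_nodes, dtype=torch.bool)
    train_mask[perm[:n_train]] = True
    val_mask[perm[n_train:n_train + n_val]] = True
    test_mask[perm[n_train + n_val:]] = True
    return train_mask, val_mask, test_mask
```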
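The Experiment Setup row fully specifies the Optuna search space except for the elided weight-decay and dropout grids. The skeleton below wires that space into a standard Optuna study; `train_and_eval` is a hypothetical placeholder for the training loop (Adam, early stopping with patience 300), and the two elided grids are filled with assumed values.

```python
# Skeleton of the hyperparameter search quoted above, using Optuna's standard API.
import optuna

def train_and_eval(K, lr, weight_decay, dropout):
    """Hypothetical placeholder: train the model with Adam, apply early stopping
    with a patience of 300 epochs, and return the best validation accuracy."""
    raise NotImplementedError("plug in the actual training loop here")

def objective(trial):
    params = {
        # Ranges quoted in the Experiment Setup row:
        "K":  trial.suggest_categorical("K", [2, 4, 8, 12, 16, 20]),
        "lr": trial.suggest_categorical("lr", [0.0005, 0.001, 0.005, 0.1, 0.2, 0.3, 0.4, 0.5]),
        # The quote abbreviates these two grids with "..."; the values below are assumptions:
        "weight_decay": trial.suggest_categorical("weight_decay", [1e-8, 1e-7, 1e-6, 1e-5, 1e-4, 1e-3]),
        "dropout": trial.suggest_categorical("dropout", [round(i / 10, 1) for i in range(10)]),
    }
    return train_and_eval(**params)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=100)  # "a maximum of 100 complete trials"
print(study.best_params)
```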