Going Deeper into Permutation-Sensitive Graph Neural Networks
Authors: Zhongyu Huang, Yingheng Wang, Chaozhuo Li, Huiguang He
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments on multiple synthetic and real-world datasets demonstrate the superiority of our model. In this section, we evaluate PG-GNN on multiple synthetic and real-world datasets from a wide range of domains. |
| Researcher Affiliation | Collaboration | 1National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China 2School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China 3Department of Electronic Engineering, Tsinghua University, Beijing, China 4Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA 5Microsoft Research Asia, Beijing, China 6Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Beijing, China. |
| Pseudocode | No | The paper describes its methods using mathematical equations and prose but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is publicly available at https://github.com/zhongyu1998/PG-GNN. |
| Open Datasets | Yes | PROTEINS and NCI1 are bioinformatics datasets; IMDB-BINARY, IMDB-MULTI, and COLLAB are social network datasets. They are all popular graph classification tasks from the classical TUDataset (Morris et al., 2020). ... MNIST is a computer vision dataset for the graph classification task, and ZINC is a chemistry dataset for the graph regression task. They are both modern benchmark datasets, and we obtain the features from the original paper (Dwivedi et al., 2020) |
| Dataset Splits | Yes | For TUDataset, we follow the same data split and evaluation protocol as Xu et al. (2019). We perform 10-fold cross-validation with random splitting and report our results (the average and standard deviation of testing accuracies) at the epoch with the best average accuracy across the 10 folds. MNIST has 55,000 training, 5,000 validation, and 10,000 testing graphs, where the 5,000 graphs for the validation set are randomly sampled from the training set. ZINC has 10,000 training, 1,000 validation, and 1,000 testing graphs. |
| Hardware Specification | Yes | The experiments are conducted on Linux servers equipped with an Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz, 256GB RAM and 8 NVIDIA TITAN RTX GPUs. |
| Software Dependencies | Yes | All models are implemented using Python version 3.6, NetworkX version 2.4 (Hagberg et al., 2008), PyTorch version 1.4.0 (Paszke et al., 2019) with CUDA version 10.0.130, and cuDNN version 7.6.5. In addition, the benchmark datasets are loaded by Deep Graph Library (DGL) version 0.4.2 (Wang et al., 2019). |
| Experiment Setup | Yes | We report the hyper-parameters chosen by our model selection procedure as follows. For all tasks and datasets, 5 GNN layers (including the input layer) are applied, and the LSTMs with 2 layers are used as the aggregation functions. Batch normalization (Ioffe & Szegedy, 2015) is applied to every hidden layer. All models are initialized using Glorot initialization (Glorot & Bengio, 2010) and trained using the Adam SGD optimizer (Kingma & Ba, 2015) with an initial learning rate of 0.001. |
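
The 10-fold evaluation protocol quoted in the Dataset Splits row (following Xu et al., 2019) can be summarized in a short sketch. The snippet below is a minimal illustration, not the authors' released code: the helper names `make_folds` and `select_best_epoch`, the accuracy-array shape, and the use of scikit-learn's `StratifiedKFold` are assumptions made for clarity; the paper only states that test accuracies are averaged across the 10 folds and reported at the epoch with the best average accuracy.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def make_folds(labels, seed=0):
    """Hypothetical 10-fold random split over graph labels (1-D array)."""
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    return list(skf.split(np.zeros(len(labels)), labels))

def select_best_epoch(test_acc):
    """test_acc: (10, num_epochs) array of per-fold test accuracies.

    Average over the 10 folds at every epoch, then report the mean and
    standard deviation at the single epoch with the best fold-averaged
    accuracy, as described in the Dataset Splits row above.
    """
    best_epoch = int(test_acc.mean(axis=0).argmax())
    return best_epoch, test_acc[:, best_epoch].mean(), test_acc[:, best_epoch].std()
```

Reporting at the single epoch with the best fold-averaged accuracy (rather than per-fold early stopping on a validation set) is a feature of the Xu et al. (2019) protocol, which matters when comparing TUDataset numbers across papers.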
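Similarly, the hyper-parameters in the Experiment Setup row translate into a small amount of configuration code. The sketch below is an illustration under stated assumptions, not the PG-GNN implementation: `LSTMAggregationLayer`, `build_model`, the hidden width of 64, and the ReLU placement are hypothetical, while the 5 layers, 2-layer LSTM aggregator, batch normalization, Glorot initialization, and Adam with learning rate 0.001 come from the quoted setup.

```python
import torch
import torch.nn as nn

class LSTMAggregationLayer(nn.Module):
    """Hypothetical layer using a 2-layer LSTM as the aggregation function,
    with batch normalization on the hidden representation."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden_dim, num_layers=2, batch_first=True)
        self.bn = nn.BatchNorm1d(hidden_dim)

    def forward(self, neighbor_feats):
        # neighbor_feats: (num_nodes, max_neighbors, in_dim), ordered by
        # whatever permutation-sampling scheme is applied upstream.
        _, (h_n, _) = self.lstm(neighbor_feats)
        return self.bn(torch.relu(h_n[-1]))

def build_model(in_dim=64, hidden_dim=64, num_layers=5):
    layers = nn.ModuleList(
        [LSTMAggregationLayer(in_dim if i == 0 else hidden_dim, hidden_dim)
         for i in range(num_layers)]
    )
    # Glorot (Xavier) initialization for all weight matrices.
    for p in layers.parameters():
        if p.dim() > 1:
            nn.init.xavier_uniform_(p)
    return layers

model = build_model()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
```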