Multi-Channel Graph Neural Networks
Authors: Kaixiong Zhou, Qingquan Song, Xiao Huang, Daochen Zha, Na Zou, Xia Hu
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on real-world datasets demonstrate the superiority of MuchGNN over the state-of-the-art methods. |
| Researcher Affiliation | Academia | Kaixiong Zhou¹, Qingquan Song¹, Xiao Huang², Daochen Zha¹, Na Zou³ and Xia Hu¹ (¹Department of Computer Science and Engineering, Texas A&M University; ²Department of Computing, The Hong Kong Polytechnic University; ³Department of Industrial and Systems Engineering, Texas A&M University) |
| Pseudocode | No | The paper does not contain a clearly labeled pseudocode or algorithm block. |
| Open Source Code | No | The paper does not provide a direct link or explicit statement about the availability of the source code for the methodology described. |
| Open Datasets | Yes | We use 7 graph classification benchmarks as suggested in [Yanardag and Vishwanathan, 2015], including 3 bioinformatic datasets (PTC, DD, PROTEINS [Borgwardt et al., 2005]) and 4 social network datasets (COLLAB, IMDB-BINARY, IMDB-MULTI, REDDIT-MULTI-12K [Dobson and Doig, 2003]). |
| Dataset Splits | Yes | We evaluate MuchGNN with 10-fold cross validation, for which the average classification accuracy and standard deviation are reported. The model is trained for a total of 100 epochs on each fold. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware used to run its experiments (e.g., GPU/CPU models, memory). |
| Software Dependencies | No | The paper mentions "Adam optimizer is adopted to train MuchGNN" but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | Considering the intra-channel and inter-channel graph convolutions as shown in Equations (3) and (5), we apply K = 3 for the message passing step, and d = 64 for the hidden dimension. The Global Pool function is given by maximization pooling to read out the graph representation. Batch normalization [Ioffe and Szegedy, 2015] and l2 normalization are applied after each step of graph convolutions to make the training more stable. We regularize the objective function by the entropy of the cluster matrix to make the cluster pooling more sparse [Ying et al., 2018]. The Adam optimizer is adopted to train MuchGNN, and the gradient is clipped when its norm exceeds 2.0. We evaluate MuchGNN with 10-fold cross validation, for which the average classification accuracy and standard deviation are reported. The model is trained for a total of 100 epochs on each fold. |
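
The experiment-setup row above describes the training configuration only in prose. The sketch below is a minimal illustration of that configuration in PyTorch with scikit-learn's `StratifiedKFold`, assuming those two libraries; the `PlaceholderGNN` class is a hypothetical stand-in (it ignores graph adjacency and omits the multi-channel cluster pooling and its entropy regularizer), since the paper does not release code.

```python
# Minimal sketch of the reported setup: K = 3 message-passing steps, hidden
# dimension d = 64, batch norm and l2 normalization after each step, max-pooling
# readout, Adam optimizer, gradient clipping at norm 2.0, 100 epochs per fold,
# and 10-fold cross validation. NOT the authors' MuchGNN implementation.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.model_selection import StratifiedKFold

K = 3            # message-passing steps (Equations (3) and (5) in the paper)
HIDDEN_DIM = 64  # hidden dimension d
EPOCHS = 100     # training epochs per fold
CLIP_NORM = 2.0  # gradient norm threshold for clipping
FOLDS = 10       # 10-fold cross validation


class PlaceholderGNN(nn.Module):
    """Hypothetical single-channel stand-in: one linear transform per step,
    batch norm + l2 normalization after each step, max pooling as readout.
    It ignores the adjacency matrix for brevity."""

    def __init__(self, in_dim: int, num_classes: int):
        super().__init__()
        dims = [in_dim] + [HIDDEN_DIM] * K
        self.steps = nn.ModuleList([nn.Linear(dims[k], dims[k + 1]) for k in range(K)])
        self.norms = nn.ModuleList([nn.BatchNorm1d(HIDDEN_DIM) for _ in range(K)])
        self.classifier = nn.Linear(HIDDEN_DIM, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (num_nodes, in_dim)
        for lin, bn in zip(self.steps, self.norms):
            x = F.relu(bn(lin(x)))
            x = F.normalize(x, p=2, dim=-1)   # l2 normalization after each step
        graph_repr = x.max(dim=0).values      # max-pooling readout over nodes
        return self.classifier(graph_repr).unsqueeze(0)  # (1, num_classes)


def run_cross_validation(graphs, labels, in_dim, num_classes):
    """graphs: list of (num_nodes, in_dim) node-feature tensors; labels: 1-D int array."""
    labels = np.asarray(labels)
    accs = []
    splitter = StratifiedKFold(n_splits=FOLDS, shuffle=True)
    for train_idx, test_idx in splitter.split(np.zeros(len(labels)), labels):
        model = PlaceholderGNN(in_dim, num_classes)
        optimizer = torch.optim.Adam(model.parameters())
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(EPOCHS):
            for i in train_idx:
                optimizer.zero_grad()
                loss = loss_fn(model(graphs[i]), torch.tensor([labels[i]]))
                loss.backward()
                # clip the gradient when its norm exceeds 2.0, as reported
                nn.utils.clip_grad_norm_(model.parameters(), CLIP_NORM)
                optimizer.step()
        model.eval()
        with torch.no_grad():
            preds = [model(graphs[i]).argmax(dim=-1).item() for i in test_idx]
        accs.append(float(np.mean([p == labels[i] for p, i in zip(preds, test_idx)])))
    # the paper reports the mean accuracy and standard deviation over the 10 folds
    return float(np.mean(accs)), float(np.std(accs))
```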