Multi-Channel Pooling Graph Neural Networks
Authors: Jinlong Du, Senzhang Wang, Hao Miao, Jiaqiang Zhang
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on six benchmark datasets demonstrate the superior performance of MuchPool. The results show that our proposal achieves significant performance improvement on graph classification compared with current state-of-the-art models. |
| Researcher Affiliation | Academia | Jinlong Du¹, Senzhang Wang², Hao Miao¹ and Jiaqiang Zhang¹; ¹Nanjing University of Aeronautics and Astronautics, Nanjing, China; ²Central South University, Changsha, China. {kingloon, haomiao, zhangjq}@nuaa.edu.cn, szwang@csu.edu.cn |
| Pseudocode | No | The paper describes algorithms and methods using prose and mathematical equations but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code of this work is publicly available on GitHub: https://github.com/kingloon/MultiChannelPooling |
| Open Datasets | Yes | We use the following 6 widely used datasets in the classification tasks to evaluate the performance of our proposed model. D&D and PROTEINS [Dobson and Doig, 2003] are two protein graph datasets... NCI1 and NCI109 [Shervashidze et al., 2011] are two biological datasets... COLLAB [Leskovec et al., 2005] is a scientific collaboration dataset... (a hedged loading sketch follows the table) |
| Dataset Splits | Yes | We follow the experiment setting of the state-of-the-art model Graph U-Nets [Gao and Ji, 2019] and evaluate our model over 20 random seeds using 10-fold cross validation. (a protocol sketch follows the table) |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models used for running its experiments. |
| Software Dependencies | No | We implement our MuchPool model with the PyTorch framework. The paper mentions PyTorch but does not specify its version number or any other software dependencies with version numbers. |
| Experiment Setup | Yes | The dimensions of node representation and graph representation are set as 64. The node retention ratio r is set as 0.5 for three channels in all layers. ... An MLP consisting of two fully connected layers with 128 neurons is set to follow the final MuchPool GCN layer... We use the Xavier normal distribution [Glorot and Bengio, 2010] for weight initialization, the Adam optimizer to optimize our model, and a negative log-likelihood loss function to train it. For all datasets, we train our model for 300 epochs and the batch size is set to 16 or 32 (depending on the graph size). (a configuration sketch follows the table) |
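
The excerpt names five of the paper's six benchmarks (D&D, PROTEINS, NCI1, NCI109, COLLAB); the sixth is elided above. All five are distributed through the TU Dortmund graph-kernel collection, so a minimal loading sketch, assuming PyTorch Geometric's `TUDataset` wrapper rather than the authors' own (unpublished) loading code, looks like this:

```python
# Minimal sketch, assuming PyTorch Geometric's TUDataset wrapper; the paper
# does not publish its data-loading code, so this is illustrative only.
from torch_geometric.datasets import TUDataset

# Five of the six benchmarks named in the paper's dataset list
# (D&D is distributed under the name "DD").
NAMES = ["DD", "PROTEINS", "NCI1", "NCI109", "COLLAB"]

for name in NAMES:
    ds = TUDataset(root="data/TU", name=name)
    print(f"{name}: {len(ds)} graphs, {ds.num_classes} classes")
```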
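
The stated protocol is 10-fold cross validation repeated over 20 random seeds. A minimal sketch of that protocol follows; whether the folds were stratified is an assumption, and `train_and_eval` is a hypothetical stand-in for training and testing one MuchPool model on a given split:

```python
# Sketch of 10-fold cross validation over 20 random seeds, per the
# Graph U-Nets evaluation setting the paper says it follows.
import numpy as np
from sklearn.model_selection import StratifiedKFold

def evaluate(labels, train_and_eval, n_seeds=20, n_folds=10):
    accs = []
    for seed in range(n_seeds):
        skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
        # StratifiedKFold only needs the labels to build folds; a dummy X
        # of matching length is enough here.
        for train_idx, test_idx in skf.split(np.zeros(len(labels)), labels):
            accs.append(train_and_eval(train_idx, test_idx, seed))
    # Mean and standard deviation over all 200 fold-seed runs.
    return float(np.mean(accs)), float(np.std(accs))
```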
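
The reported hyperparameters translate into the hedged PyTorch sketch below. The MuchPool GCN layers themselves are not reproduced; `num_classes`, the learning rate, and the exact layout of the classification head are assumptions beyond what the excerpt states:

```python
# Hedged sketch of the reported training configuration; only the setup
# around the (not reproduced) MuchPool layers is shown.
import torch
import torch.nn as nn

HIDDEN_DIM = 64        # node and graph representation dimension (reported)
RETENTION_RATIO = 0.5  # node retention ratio r for all three channels (reported)
EPOCHS = 300           # reported training length
BATCH_SIZE = 32        # the paper uses 16 or 32 depending on graph size
num_classes = 2        # assumption: binary task, e.g. PROTEINS

# One reading of "an MLP consisting of two fully connected layers with
# 128 neurons"; the activation and output layout are assumptions.
head = nn.Sequential(
    nn.Linear(HIDDEN_DIM, 128),
    nn.ReLU(),
    nn.Linear(128, num_classes),
    nn.LogSoftmax(dim=-1),  # pairs with NLLLoss below
)

def init_weights(m):
    # Xavier normal weight initialization, as reported in the paper
    if isinstance(m, nn.Linear):
        nn.init.xavier_normal_(m.weight)
        nn.init.zeros_(m.bias)

head.apply(init_weights)
optimizer = torch.optim.Adam(head.parameters())  # learning rate not reported
criterion = nn.NLLLoss()  # negative log-likelihood loss, as reported
```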