Graph Cross Networks with Vertex Infomax Pooling

Authors: Maosen Li, Siheng Chen, Ya Zhang, Ivor Tsang

NeurIPS 2020

Reproducibility assessment. Each entry below lists the variable, the extracted result, and the supporting LLM response from the paper.
Research Type: Experimental
LLM Response: "Experimental results show that the proposed GXN improves the classification accuracy by 2.12% and 1.15% on average for graph classification and vertex classification, respectively." (Section 5, Experimental Results)
Researcher Affiliation: Collaboration
LLM Response: Maosen Li (Shanghai Jiao Tong University, maosen_li@sjtu.edu.cn); Siheng Chen (Shanghai Jiao Tong University, sihengc@sjtu.edu.cn, corresponding author); Ya Zhang (Shanghai Jiao Tong University, ya_zhang@sjtu.edu.cn, corresponding author); Ivor Tsang (Australian Artificial Intelligence Institute, University of Technology Sydney, Ivor.Tsang@uts.edu.au). Footnote: this work was done while Siheng Chen was working at Mitsubishi Electric Research Laboratories (MERL).
Pseudocode: No
LLM Response: The paper describes the methodology in text and uses diagrams (e.g., Figure 3) but does not include structured pseudocode or algorithm blocks.
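Since the paper provides no algorithm block, the following minimal sketch reconstructs only the vertex-selection step of VIPool from the paper's textual description (keep the highest-scoring vertices at each scale). The function name `vipool_select` and the top-k formulation are assumptions of this note, not the authors' code.

```python
import torch

def vipool_select(scores, keep_ratio):
    """Hypothetical sketch: keep the vertices with the highest
    vertex-infomax scores to form the coarser scale.

    scores     -- per-vertex infomax scores, shape [num_vertices]
    keep_ratio -- fraction of vertices preserved (0.5 to 1.0 in GXN)
    """
    num_keep = max(1, int(keep_ratio * scores.numel()))
    _, idx = torch.topk(scores, num_keep)  # indices of retained vertices
    return idx
```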
Open Source Code: Yes
LLM Response: "The code could be downloaded at https://github.com/limaosen0/GXN"
Open Datasets: Yes
LLM Response: "For graph classification, we use social network datasets: IMDB-B, IMDB-M and COLLAB [52], and bioinformatic datasets: D&D [17], PROTEINS [21], and ENZYMES [4]. ... For vertex classification, we use three classical citation networks: Cora, Citeseer and Pubmed [32]."
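All nine datasets are standard public benchmarks. As a convenience for reproduction, here is a hedged loading sketch using torch_geometric; this library is an assumption of this note (the authors report implementing GXN in PyTorch 1.1 directly), and the dataset names follow the `TUDataset`/`Planetoid` conventions.

```python
# Sketch only: loads the benchmarks named in the paper via torch_geometric,
# which is not a stated dependency of the GXN codebase.
from torch_geometric.datasets import TUDataset, Planetoid

# Graph-classification benchmarks (social and bioinformatic).
graph_datasets = {
    name: TUDataset(root='data/TU', name=name)
    for name in ['IMDB-BINARY', 'IMDB-MULTI', 'COLLAB',
                 'DD', 'PROTEINS', 'ENZYMES']
}

# Vertex-classification citation networks.
vertex_datasets = {
    name: Planetoid(root='data/Planetoid', name=name)
    for name in ['Cora', 'CiteSeer', 'PubMed']
}
```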
Dataset Splits: Yes
LLM Response: "We use the same dataset separation as in [23], perform 10-fold cross-validation, and show the average accuracy for evaluation. ... We perform both full-supervised and semi-supervised vertex classification; that is, for full-supervised classification, we label all the vertices in training sets for model training, while for semi-supervised, we only label a few vertices (around 7% on average) in training sets. We use the default separations of training/validation/test subsets."
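The 10-fold protocol can be approximated with a standard stratified split. The snippet below is a generic scikit-learn sketch; it does not reproduce the exact separation of [23] that the authors reuse.

```python
# Generic 10-fold cross-validation sketch over graph-level labels;
# the paper follows the exact separation of [23], which this does not.
import numpy as np
from sklearn.model_selection import StratifiedKFold

def ten_fold_indices(labels, seed=0):
    """Return a list of (train_idx, test_idx) pairs for 10-fold CV."""
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    return list(skf.split(np.zeros(len(labels)), labels))
```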
Hardware Specification: Yes
LLM Response: "We implement GXN with PyTorch 1.1 on one GTX-1080Ti GPU."
Software Dependencies: Yes
LLM Response: "We implement GXN with PyTorch 1.1 on one GTX-1080Ti GPU. ... We use Adam optimizer [16]."
Experiment Setup: Yes
LLM Response: "For graph classification, we consider three scales, which preserve 50% to 100% of vertices from the original scales, respectively. For both input and readout layers, we use 1-layer GCNs; for multiscale feature extraction, we use two GCN layers followed by ReLUs at each scale, and feature-crossing layers between any two consecutive scales at any layer. ... In the VIPool, we use a 2-layer MLP and an R-layer GCN (R = 1 or 2) as E_w(·) and P_w(·), and a linear layer as S_w(·, ·). The hidden dimensions are 48. ... For vertex classification, we use a similar architecture as in graph classification, while the hidden features are 128-dimensional. ... In the loss function L, α decays from 2 to 0 during training. ... We use the Adam optimizer [16], and the learning rates range from 0.0001 to 0.001 for different datasets."
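To make the quoted hyperparameters concrete, here is a hedged PyTorch sketch of the three VIPool scoring networks and the optimizer settings: the layer sizes follow the excerpt (hidden dimension 48, 2-layer MLP for E_w, linear S_w), while the `GCNLayer` class and the linear α-decay schedule are placeholders for details the excerpt does not pin down. This is a sketch under those assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

HIDDEN = 48  # hidden dimension quoted for VIPool

class GCNLayer(nn.Module):
    """Placeholder dense GCN layer: H' = A_hat @ H @ W.
    The excerpt does not pin down the exact GCN form."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # adj is assumed to be a normalized dense adjacency matrix.
        return adj @ self.lin(x)

class VIPoolScorers(nn.Module):
    """Hedged sketch of the three VIPool components quoted above:
    E_w (2-layer MLP), P_w (R-layer GCN, R = 1 or 2), S_w (linear)."""
    def __init__(self, in_dim, R=1):
        super().__init__()
        self.E_w = nn.Sequential(
            nn.Linear(in_dim, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, HIDDEN),
        )
        self.P_w = nn.ModuleList(
            [GCNLayer(in_dim if r == 0 else HIDDEN, HIDDEN) for r in range(R)]
        )
        self.S_w = nn.Linear(2 * HIDDEN, 1)

    def forward(self, x, adj):
        h_v = self.E_w(x)                  # per-vertex embedding
        h_n = x
        for gcn in self.P_w:               # neighborhood aggregation
            h_n = torch.relu(gcn(h_n, adj))
        # S_w scores each vertex against its aggregated neighborhood.
        return self.S_w(torch.cat([h_v, h_n], dim=-1)).squeeze(-1)

# Optimizer as quoted: Adam, learning rates 1e-4 to 1e-3 per dataset.
model = VIPoolScorers(in_dim=64, R=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def alpha_at(epoch, total_epochs):
    # The paper only says alpha decays from 2 to 0 during training;
    # a linear schedule is an assumption of this sketch.
    return 2.0 * (1.0 - epoch / total_epochs)
```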