GLASS: GNN with Labeling Tricks for Subgraph Representation Learning

Authors: Xiyuan Wang, Muhan Zhang

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on eight benchmark datasets show that GLASS outperforms the strongest baseline by 14.8% on average. Ablation analysis shows that the max-zero-one labeling trick can boost the performance of a plain GNN by up to 105%, illustrating the effectiveness of the labeling trick on subgraph tasks. Furthermore, training a GLASS model takes only 37% of the time needed by SubGNN on average.
Researcher Affiliation | Academia | Institute for Artificial Intelligence, Peking University; Beijing Institute for General Artificial Intelligence; {wangxiyuan,muhan}@pku.edu.cn
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at https://github.com/Xi-yuanWang/GLASS.
Open Datasets | Yes | We use four synthetic datasets (density, cut ratio, coreness, component) and four real-world subgraph datasets (ppi-bp, em-user, hpo-metab, hpo-neuro). The four synthetic datasets are introduced by Alsentzer et al. (2020)... The four real-world datasets are also provided by Alsentzer et al. (2020).
Dataset Splits | Yes | As for dataset division, the real-world datasets take an 80:10:10 split, and the synthetic datasets follow a 50:25:25 split, following Alsentzer et al. (2020). (See the split sketch after the table.)
Hardware Specification | Yes | Models were trained on an Nvidia V100 GPU to measure training time and were tested on an Nvidia A40 GPU on a Linux server.
Software Dependencies | No | The paper mentions using 'PyTorch Geometric and PyTorch' but does not provide specific version numbers for these software dependencies.
Experiment Setup | Yes | Fixed hyperparameters were batch size = 131072, learning rate = 1e-3, hidden dimension = 64. Dropout is selected from [0.0, 0.5] and l ranges from 1 to 5. ... For GLASS, we select the learning rate from {1e-4, 2e-4, 5e-4, 1e-3, 2e-3, 5e-3}; number of layers from {1, 2}; hidden dimension, 64 for real-world datasets and {1, 5, 9, 13, 17} for synthetic datasets; dropout, 0.5 for real-world datasets and {0.1, 0.2, 0.3} for synthetic datasets; aggregation, {mean, sum, gcn}; pool, {mean, sum, max, size}; batch size, {ns/80, ns/40, ns/20, ns/10}, where ns is the size of the dataset. (A hedged configuration sketch follows the table.)
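The reported split ratios are simple to reproduce. The following is a minimal sketch, not taken from the GLASS repository: `split_indices`, the random seeding, and the dataset sizes are illustrative assumptions, used only to show an 80:10:10 (real-world) and 50:25:25 (synthetic) random partition over subgraph indices.

```python
import numpy as np

def split_indices(num_subgraphs: int, ratios=(0.8, 0.1, 0.1), seed: int = 0):
    """Randomly partition subgraph indices into train/val/test by the given ratios."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_subgraphs)
    n_train = int(ratios[0] * num_subgraphs)
    n_val = int(ratios[1] * num_subgraphs)
    return perm[:n_train], perm[n_train:n_train + n_val], perm[n_train + n_val:]

# Real-world datasets (ppi-bp, em-user, hpo-metab, hpo-neuro): 80:10:10 split.
train_idx, val_idx, test_idx = split_indices(1000, ratios=(0.8, 0.1, 0.1))
# Synthetic datasets (density, cut ratio, coreness, component): 50:25:25 split.
train_idx, val_idx, test_idx = split_indices(250, ratios=(0.5, 0.25, 0.25))
```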
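For the experiment setup, the hyperparameter choices quoted above can be written out as a search grid. The sketch below is an assumption-laden illustration, not code from the paper or repository: the dictionary mirrors the quoted ranges for the synthetic datasets (real-world datasets fix hidden dimension at 64 and dropout at 0.5), `ns` is a placeholder dataset size, and the exhaustive grid loop and `train_and_evaluate` are hypothetical, since the paper does not state the exact search strategy.

```python
from itertools import product

ns = 1000  # dataset size (number of subgraphs); placeholder value

glass_search_space = {
    "learning_rate": [1e-4, 2e-4, 5e-4, 1e-3, 2e-3, 5e-3],
    "num_layers": [1, 2],
    # Hidden dimension: fixed at 64 for real-world datasets,
    # chosen from {1, 5, 9, 13, 17} for synthetic datasets.
    "hidden_dim": [1, 5, 9, 13, 17],
    # Dropout: 0.5 for real-world datasets, {0.1, 0.2, 0.3} for synthetic datasets.
    "dropout": [0.1, 0.2, 0.3],
    "aggregation": ["mean", "sum", "gcn"],
    "pool": ["mean", "sum", "max", "size"],
    # Batch size expressed as a fraction of the dataset size ns.
    "batch_size": [ns // 80, ns // 40, ns // 20, ns // 10],
}

# Enumerate every combination in the grid (illustrative only).
for values in product(*glass_search_space.values()):
    hyperparams = dict(zip(glass_search_space.keys(), values))
    # train_and_evaluate(hyperparams)  # hypothetical training call
```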