Label Attentive Distillation for GNN-Based Graph Classification

Authors: Xiaobin Hong, Wenzhong Li, Chaoqun Wang, Mingkai Lin, Sanglu Lu

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments with 7 GNN backbones on 10 benchmark datasets show that LAD-GNN improves the graph classification accuracy of SOTA GNNs. We empirically evaluate our method on graph classification tasks over 10 benchmark datasets with 7 commonly used GNN backbones and compare performance against 9 other GNN training methods.
Researcher Affiliation | Academia | Xiaobin Hong¹, Wenzhong Li¹*, Chaoqun Wang², Mingkai Lin¹, Sanglu Lu¹; ¹State Key Laboratory for Novel Software Technology, Nanjing University; ²The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), China
Pseudocode | Yes | Algorithm 1: LAD-GNN Algorithm
Open Source Code | Yes | The source code of LAD-GNN is publicly available at https://github.com/XiaobinHong/LAD-GNN.
Open Datasets | Yes | We assess graph classification performance on 10 open datasets, which include the chemical molecule datasets MUTAG, PTC (Debnath et al. 1991), and NCI1 (Wale, Watson, and Karypis 2008); the bioinformatics graph datasets PROTEINS and ENZYMES (Borgwardt et al. 2005); and the social network datasets COLLAB, IMDB-BINARY, IMDB-MULTI, REDDIT-BINARY, and REDDIT-MULTI-5K (Yanardag and Vishwanathan 2015).
Dataset Splits | Yes | For a fair comparison, all datasets are randomly split into train/validation/test sets following the 0.8/0.1/0.1 protocol for each model (see the sketch after the table).
Hardware Specification | Yes | We implement LAD-GNN in PyTorch v1.12, and the experiments are conducted on a GPU-equipped PC with an NVIDIA GeForce RTX 3090 Ti.
Software Dependencies | Yes | We implement LAD-GNN in PyTorch v1.12.
Experiment Setup | Yes | We tune the values of λ from 0.001 to 1000 and τ from 0.1 to 1.0, and test graph classification performance on 4 datasets (PTC, NCI109, PROTEINS, and IMDB-BINARY). The number of node aggregation layers is set to 2 in all cases. We report the average and standard deviation of test accuracy across the ten folds of cross-validation.
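
To make the quoted protocol concrete, below is a minimal, hypothetical sketch of the evaluation setup described in the Dataset Splits and Experiment Setup rows: random 0.8/0.1/0.1 splits and a search over λ (0.001 to 1000) and τ (0.1 to 1.0). The function names, the grid values between the stated endpoints, and the training callback are illustrative assumptions, not code from the LAD-GNN repository.

```python
# Hypothetical sketch of the quoted evaluation protocol; not taken from the
# LAD-GNN repository. Intermediate grid values between the stated endpoints
# are assumed for illustration.
import random

def split_dataset(graphs, seed=0):
    """Randomly split a list of graphs into 80% train, 10% val, 10% test."""
    indices = list(range(len(graphs)))
    random.Random(seed).shuffle(indices)
    n_train = int(0.8 * len(indices))
    n_val = int(0.1 * len(indices))
    train = [graphs[i] for i in indices[:n_train]]
    val = [graphs[i] for i in indices[n_train:n_train + n_val]]
    test = [graphs[i] for i in indices[n_train + n_val:]]
    return train, val, test

# Search ranges reported in the paper (λ: 0.001-1000, τ: 0.1-1.0).
lambda_grid = [0.001, 0.01, 0.1, 1, 10, 100, 1000]  # distillation loss weight
tau_grid = [0.1, 0.25, 0.5, 0.75, 1.0]              # temperature

def grid_search(graphs, train_and_eval):
    """Evaluate each (lambda, tau) pair.

    `train_and_eval` is a user-supplied function that trains one model on the
    train split and returns its validation accuracy.
    """
    train, val, _test = split_dataset(graphs)
    best = None
    for lam in lambda_grid:
        for tau in tau_grid:
            acc = train_and_eval(train, val, lam=lam, tau=tau)
            if best is None or acc > best[0]:
                best = (acc, lam, tau)
    return best
```

The ten-fold cross-validation mentioned in the quote would wrap this split and search in an outer loop over folds, averaging test accuracy across folds.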