Fine-Tuning Graph Neural Networks by Preserving Graph Generative Patterns

Authors: Yifei Sun, Qi Zhu, Yang Yang, Chunping Wang, Tianyu Fan, Jiajun Zhu, Lei Chen

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Compared with existing algorithms, G-TUNING demonstrates consistent performance improvement in 7 in-domain and 7 out-of-domain transfer learning experiments. From Section 5 (Experiments): "In this section, we answer the following questions: Q1. (Effectiveness) Does G-TUNING improve the performance of fine-tuning? Q2. (Transferability) Can G-TUNING enable better transferability than the baselines? Q3. (Integrity) How does each component of G-TUNING contribute to the performance? Q4. (Efficiency) Can G-TUNING improve the performance of fine-tuning at an acceptable time cost?"
Researcher Affiliation | Collaboration | Zhejiang University, Hangzhou, China; University of Illinois Urbana-Champaign, USA; FinVolution Group, Shanghai, China
Pseudocode | Yes | The overall learning process of G-TUNING can be found in Algorithm 1 in Appendix A.3.
Open Source Code | Yes | Supplementary materials: https://github.com/zjunet/G-Tuning
Open Datasets | Yes | The backbone GIN (Xu et al. 2019) is pre-trained with the self-supervised Context Prediction task on the ZINC15 dataset of 2 million unlabeled molecules (Sterling and Irwin 2015), then fine-tuned on 7 binary classification datasets from MoleculeNet (Wu et al. 2018). For out-of-domain transfer, the authors pre-train on 7 datasets ranging from academic to social domains and evaluate on 7 downstream graph classification benchmarks from the TUDataset collection (Morris et al. 2020): IMDB-M, IMDB-B, MUTAG, PROTEINS, ENZYMES, MSRC_21, and RDT-M12K. (See the dataset-loading sketch after this table.)
Dataset Splits | Yes | A scaffold split at an 8:1:1 ratio is used, and results are reported under 10-fold cross-validation. (See the scaffold-split sketch after this table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor speeds, or memory sizes) for running its experiments; it reports running times without hardware context.
Software Dependencies | No | The paper does not list ancillary software with version numbers (e.g., Python 3.8, PyTorch 1.9).
Experiment Setup | Yes | G-TUNING has two major hyper-parameters: the number of learnable bases C and the graphon size M. Fig. 4 shows that performance increases as the number of bases grows from 2 to 32. (See the hyper-parameter sweep sketch after this table.)
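
The downstream benchmarks named in the Open Datasets row are all publicly available. A minimal sketch of loading a few of them with PyTorch Geometric's built-in wrappers is shown below; this is an assumption about tooling, not necessarily the loaders the authors used, and the dataset names shown are only a subset of the 7 + 7 benchmarks.

```python
# Sketch (not the authors' exact pipeline): loading downstream benchmarks
# with PyTorch Geometric's dataset wrappers.
from torch_geometric.datasets import MoleculeNet, TUDataset

# In-domain molecular benchmarks (a subset of the 7 MoleculeNet tasks).
bbbp = MoleculeNet(root="data/moleculenet", name="BBBP")
tox21 = MoleculeNet(root="data/moleculenet", name="Tox21")

# Out-of-domain graph classification benchmarks from the TUDataset collection.
mutag = TUDataset(root="data/tudataset", name="MUTAG")
proteins = TUDataset(root="data/tudataset", name="PROTEINS")

print(bbbp, tox21, mutag, proteins)
```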
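For the Dataset Splits row, the paper only states "scaffold split at an 8:1:1 ratio". The sketch below shows the common Bemis-Murcko scaffold split used by MoleculeNet-style benchmarks, implemented with RDKit; the authors' exact implementation may differ, and the `scaffold_split` helper is illustrative.

```python
# Sketch of an approximate 8:1:1 Bemis-Murcko scaffold split (assumed to match
# the standard MoleculeNet-style procedure; not taken from the paper's code).
from collections import defaultdict
from rdkit.Chem.Scaffolds import MurckoScaffold

def scaffold_split(smiles_list, frac_train=0.8, frac_valid=0.1):
    # Group molecule indices by their Bemis-Murcko scaffold.
    scaffolds = defaultdict(list)
    for idx, smi in enumerate(smiles_list):
        scaffold = MurckoScaffold.MurckoScaffoldSmiles(smiles=smi, includeChirality=True)
        scaffolds[scaffold].append(idx)

    # Assign whole scaffold groups greedily, largest groups first, so that
    # no scaffold is shared across the train/valid/test partitions.
    groups = sorted(scaffolds.values(), key=len, reverse=True)
    n = len(smiles_list)
    train, valid, test = [], [], []
    for group in groups:
        if len(train) + len(group) <= frac_train * n:
            train.extend(group)
        elif len(valid) + len(group) <= frac_valid * n:
            valid.extend(group)
        else:
            test.extend(group)
    return train, valid, test
```

The 10-fold cross-validation on the TUDataset benchmarks can be reproduced with any standard splitter (e.g., scikit-learn's `StratifiedKFold`), since the paper does not specify a custom procedure.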
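For the Experiment Setup row, the paper reports sensitivity to the number of learnable bases C (2 to 32) and the graphon size M. The loop below is a purely illustrative sweep over those two hyper-parameters: `train_and_evaluate` is a hypothetical stand-in for the actual G-TUNING fine-tuning routine, and the candidate values for M are assumptions rather than values taken from the paper.

```python
# Illustrative sweep over the two G-TUNING hyper-parameters discussed in the
# paper. `train_and_evaluate` is a placeholder, not the authors' code.
import random
from itertools import product

num_bases_grid = [2, 4, 8, 16, 32]   # number of learnable bases C (range per Fig. 4)
graphon_size_grid = [8, 16, 32]      # graphon size M (assumed candidate values)

def train_and_evaluate(num_bases, graphon_size):
    """Placeholder for a G-TUNING fine-tuning run; returns a dummy validation
    score so the sweep loop itself is executable."""
    return random.random()

best = None
for c, m in product(num_bases_grid, graphon_size_grid):
    score = train_and_evaluate(num_bases=c, graphon_size=m)
    if best is None or score > best[0]:
        best = (score, {"C": c, "M": m})
print("Best setting:", best)
```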