Universal Prompt Tuning for Graph Neural Networks

Authors: Taoran Fang, Yunchao Zhang, Yang Yang, Chunping Wang, Lei Chen

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results under various pre-training strategies indicate that our method performs better than fine-tuning, with an average improvement of about 1.4% in full-shot scenarios and about 3.2% in few-shot scenarios. Moreover, our method significantly outperforms existing specialized prompt-based tuning methods when applied to models utilizing the pre-training strategy they specialize in. These numerous advantages position our method as a compelling alternative to fine-tuning for downstream adaptations. Our code is available at: https://github.com/zjunet/GPF.
Researcher Affiliation | Collaboration | Taoran Fang¹, Yunchao Zhang¹, Yang Yang¹, Chunping Wang², Lei Chen² (¹Zhejiang University, ²FinVolution Group)
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at: https://github.com/zjunet/GPF.
Open Datasets | Yes | As for the benchmark datasets, we employ the chemistry and biology datasets published by Hu et al. [2020a]. A comprehensive description of these datasets can be found in the appendix. [...] The chemistry dataset comprises 2 million unlabeled molecules sampled from the ZINC15 database [Sterling and Irwin, 2015]. [...] For graph-level multi-task supervised pre-training, a preprocessed ChEMBL dataset [Mayr et al., 2018, Gaulton et al., 2012] is employed.
Dataset Splits | No | The paper limits the number of training samples for few-shot scenarios and tunes hyper-parameters, which typically implies a validation set, but it does not explicitly report train/validation/test splits, percentages, or sample counts (e.g., 'X% for validation').
Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU models, or cloud computing instance types used for running experiments.
Software Dependencies | No | The paper mentions using a '5-layer GIN' model and provides hyper-parameter settings, but it does not specify any software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup | Yes | The projection head θ is selected from 1-, 2-, and 3-layer MLPs with equal widths. The hyper-parameter k of GPF-plus is chosen from [5, 10, 20]. Further details on the hyper-parameter settings can be found in the appendix. [...] Table 11: The hyper-parameter settings.
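
To make the quoted setup concrete, below is a minimal PyTorch sketch of a GPF-plus-style prompt module (k learnable basis prompt vectors combined per node by attention and added to the input node features) together with an equal-width projection head whose depth is chosen from {1, 2, 3}. This is an illustrative sketch under those assumptions; the class and function names are hypothetical and not taken from the authors' released code.

# Hedged sketch of the setup described in the row above.
# Hyper-parameter ranges (head depth in {1, 2, 3}, k in {5, 10, 20}) follow
# the quoted text; everything else is an illustrative assumption.
import torch
import torch.nn as nn


class GPFPlusPrompt(nn.Module):
    """Adds an attention-weighted mix of k learnable basis prompt vectors
    to every node feature (k chosen from {5, 10, 20} in the paper's setup)."""

    def __init__(self, feat_dim: int, k: int = 10):
        super().__init__()
        self.basis = nn.Parameter(torch.empty(k, feat_dim))
        self.attn = nn.Linear(feat_dim, k)
        nn.init.xavier_uniform_(self.basis)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [num_nodes, feat_dim]; per-node weights over the k basis prompts
        score = torch.softmax(self.attn(x), dim=-1)   # [N, k]
        prompt = score @ self.basis                   # [N, feat_dim]
        return x + prompt                             # prompted node features


def projection_head(dim: int, num_layers: int) -> nn.Sequential:
    """Equal-width MLP head; num_layers chosen from {1, 2, 3}."""
    layers = []
    for i in range(num_layers):
        layers.append(nn.Linear(dim, dim))
        if i < num_layers - 1:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)


# Illustrative usage: prompt 300-dim node features with k = 10 basis vectors,
# mean-pool them into a graph embedding, then apply a 2-layer projection head.
x = torch.randn(32, 300)
prompted = GPFPlusPrompt(feat_dim=300, k=10)(x)
graph_emb = prompted.mean(dim=0, keepdim=True)
logits = projection_head(dim=300, num_layers=2)(graph_emb)

In prompt tuning of this kind, only the prompt parameters and the projection head are trained while the pre-trained GNN stays frozen, which is what makes it a lighter downstream-adaptation step than full fine-tuning.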