Measuring Task Similarity and Its Implication in Fine-Tuning Graph Neural Networks

Authors: Renhong Huang, Jiarong Xu, Xin Jiang, Chenglu Pan, Zhiming Yang, Chunping Wang, Yang Yang

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The superiority of the presented fine-tuning strategy is validated via numerous experiments with different pre-trained models and downstream tasks.
Researcher Affiliation | Collaboration | Renhong Huang (1,2), Jiarong Xu (2), Xin Jiang (3), Chenglu Pan (1), Zhiming Yang (2), Chunping Wang (4), Yang Yang (1); 1 Zhejiang University, 2 Fudan University, 3 Lehigh University, 4 FinVolution Group
Pseudocode | No | The paper describes the steps of its method verbally and with equations but does not include a formal pseudocode block or an 'Algorithm' section.
Open Source Code | Yes | Our codes are available at https://github.com/zjunet/Bridge-Tune.
Open Datasets | Yes | Datasets. We use a total of 12 downstream datasets for evaluation: US-Airport, Brazil-Airport, Europe-Airport, H-index, Wisconsin, Texas, Cora, Cornell, DD242, DD68, DD687, and the large-scale dataset Ogbn-arxiv.
Dataset Splits | No | The paper mentions using 12 downstream datasets but does not explicitly provide the training, validation, and test dataset splits (e.g., percentages or sample counts).
Hardware Specification | No | The paper does not specify any hardware details such as GPU or CPU models used for the experiments.
Software Dependencies | No | The paper mentions pre-trained models and learning rates but does not list specific software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions).
Experiment Setup | Yes | We set the learning rate as 5, 0.1, 0.1, and 0.1 when fine-tuning GCC (Qiu et al. 2020), GraphCL (You et al. 2020), EdgePred (Hamilton, Ying, and Leskovec 2017), and ContextPred (Hu et al. 2020b), respectively. We use mini-batch training with a batch size of 32. Fine-tuning runs for 30 iterations in total, alternating between one iteration of pre-trained model refinement and one iteration of downstream fine-tuning.
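
For illustration only, the alternating schedule described in the quoted setup could be sketched in PyTorch as below. This is not the authors' Bridge-Tune implementation: the encoder, classification head, refinement loss, data, and the choice of SGD are placeholder assumptions; only the per-model learning rates, the batch size of 32, and the 30 alternating iterations come from the reported setup.

# Minimal sketch of the reported alternating fine-tuning schedule.
# Assumptions (not from the paper): the encoder/head are stand-in nn.Linear
# modules, the refinement loss is a placeholder, and the data is random.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Learning rates reported for each pre-trained model.
LEARNING_RATES = {"GCC": 5.0, "GraphCL": 0.1, "EdgePred": 0.1, "ContextPred": 0.1}

encoder = nn.Linear(64, 32)   # stand-in for the pre-trained GNN encoder
head = nn.Linear(32, 7)       # stand-in for the downstream classifier

lr = LEARNING_RATES["GraphCL"]  # pick the rate matching the chosen pre-trained model
opt_refine = torch.optim.SGD(encoder.parameters(), lr=lr)
opt_downstream = torch.optim.SGD(list(encoder.parameters()) + list(head.parameters()), lr=lr)

# Random placeholder data; in practice these would be graph representations and labels.
features = torch.randn(1024, 64)
labels = torch.randint(0, 7, (1024,))
loader = DataLoader(TensorDataset(features, labels), batch_size=32, shuffle=True)
batches = iter(loader)

def next_batch():
    # Cycle through the mini-batches indefinitely.
    global batches
    try:
        return next(batches)
    except StopIteration:
        batches = iter(loader)
        return next(batches)

ce = nn.CrossEntropyLoss()

for it in range(30):  # 30 total iterations, alternating the two phases
    # (1) one iteration of pre-trained model refinement
    x, _ = next_batch()
    z = encoder(x)
    refine_loss = z.pow(2).mean()  # placeholder for the paper's refinement objective
    opt_refine.zero_grad()
    refine_loss.backward()
    opt_refine.step()

    # (2) one iteration of downstream fine-tuning
    x, y = next_batch()
    logits = head(encoder(x))
    task_loss = ce(logits, y)
    opt_downstream.zero_grad()
    task_loss.backward()
    opt_downstream.step()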