Transfer Learning of Graph Neural Networks with Ego-graph Information Maximization

Authors: Qi Zhu, Carl Yang, Yidan Xu, Haonan Wang, Chao Zhang, Jiawei Han

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct controlled synthetic experiments to directly justify our theoretical conclusions. Comprehensive experiments on two real-world network datasets show consistent results in the analyzed setting of direct-transfering, while those on large-scale knowledge graphs show promising results in the more practical setting of transfering with fine-tuning.
Researcher Affiliation | Academia | University of Illinois Urbana-Champaign, Emory University, University of Washington, Georgia Institute of Technology
Pseudocode | No | The paper does not contain any sections or figures explicitly labeled 'Pseudocode' or 'Algorithm', nor does it present any structured algorithmic blocks.
Open Source Code | Yes | Code and processed data are available at https://github.com/GentleZhu/EGI.
Open Datasets | Yes | We use two real-world network datasets with role-based node labels: (1) Airport [45] contains three networks from different regions Brazil, USA and Europe. ... (2) Gene [68] contains the gene interactions regarding 50 different cancers. ... The source graph contains a cleaned full dump of 579K entities from YAGO [49]
Dataset Splits | No | The paper describes using source and target graphs for training and testing, and mentions concepts like 'pre-training' and 'fine-tuning'. However, it does not provide specific details on train/validation/test splits, such as percentages, sample counts, or references to predefined splits within the datasets used.
Hardware Specification | Yes | Our experiments were run on an AWS g4dn.2xlarge machine with 1 Nvidia T4 GPU.
Software Dependencies | No | The paper mentions software components such as 'Adam as optimizer' and GNN encoders such as 'GIN' and 'GCN', but it does not specify version numbers for these software dependencies (e.g., 'Adam 1.0' or 'PyTorch 1.9').
Experiment Setup | Yes | The main hyperparameter k is set to 2 in EGI as a common practice. We use Adam [27] as the optimizer with a learning rate of 0.01. All baselines are set with the default parameters. The GNN parameters are frozen during the MLP training.
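
For context on the setup quoted above, the following is a minimal PyTorch-style sketch, not the authors' released code, of the evaluation protocol it describes: a pre-trained GNN encoder is frozen and only an MLP classifier head is trained with Adam at learning rate 0.01. The PretrainedGNNEncoder class, the feature and label dimensions, and the training loop are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the pre-trained GNN encoder (e.g., the GIN/GCN
# pre-trained with EGI); in practice this would be loaded from a checkpoint.
class PretrainedGNNEncoder(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.ReLU(),
            nn.Linear(hid_dim, hid_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layers(x)

encoder = PretrainedGNNEncoder(in_dim=64, hid_dim=32)  # dimensions assumed
classifier = nn.Sequential(                            # MLP head for node labels
    nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 4),    # 4 classes, assumed
)

# Freeze the GNN parameters; only the MLP head is trained, as stated in the paper.
for p in encoder.parameters():
    p.requires_grad = False
encoder.eval()

optimizer = torch.optim.Adam(classifier.parameters(), lr=0.01)  # Adam, lr = 0.01
loss_fn = nn.CrossEntropyLoss()

# Dummy node features and labels, only to make the sketch runnable end to end.
x = torch.randn(128, 64)
y = torch.randint(0, 4, (128,))

for epoch in range(100):
    with torch.no_grad():          # frozen encoder produces fixed embeddings
        z = encoder(x)
    logits = classifier(z)
    loss = loss_fn(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In the paper's actual pipeline, the frozen encoder would be the EGI-pre-trained GIN or GCN from the released repository, and the node features and role labels would come from the Airport or Gene datasets rather than the random tensors used here.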