Zero-shot Transfer Learning within a Heterogeneous Graph via Knowledge Transfer Networks

Authors: Minji Yoon, John Palowitch, Dustin Zelle, Ziniu Hu, Ruslan Salakhutdinov, Bryan Perozzi

NeurIPS 2022

Each entry below gives a reproducibility variable, its result, and the LLM response:
Research Type — Experimental. KTN improves the performance of 6 different types of HGNN models by up to 960% for inference on zero-labeled node types and outperforms state-of-the-art transfer learning baselines by up to 73% across 18 different transfer learning tasks on HGs.
Researcher Affiliation — Collaboration. Minji Yoon (Carnegie Mellon University); John Palowitch (Google Research); Dustin Zelle (Google Research); Ziniu Hu (University of California, Los Angeles); Ruslan Salakhutdinov (Carnegie Mellon University); Bryan Perozzi (Google Research).
Pseudocode — Yes. Algorithm 1: training step on a source domain; Algorithm 2: test step on a target domain.
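The two algorithms appear in the paper only as pseudocode. As a rough PyTorch illustration of the training/test structure they describe (a source-side classification loss plus an embedding-matching transfer loss, then reuse of the source classifier on transformed target embeddings), the sketch below uses made-up dimensions, random stand-in embeddings, and a simple linear transfer module. The names (`h_src`, `adj_st`, `ktn`), the adjacency handling, and the loss weight `lam` are assumptions, not the paper's exact algorithm.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
d, n_src, n_tgt, n_cls = 32, 100, 80, 5

# Random stand-ins for HGNN outputs; in the real pipeline these embeddings
# come from a heterogeneous GNN run over the graph at every step.
h_src = torch.randn(n_src, d)              # labeled source-type node embeddings
h_tgt = torch.randn(n_tgt, d)              # zero-labeled target-type node embeddings
y_src = torch.randint(0, n_cls, (n_src,))  # source labels
adj_st = torch.rand(n_src, n_tgt)          # assumed source<-target adjacency block

classifier = nn.Linear(d, n_cls)   # task classifier, trained on the source type only
ktn = nn.Linear(d, d, bias=False)  # transfer module mapping target -> source space
opt = torch.optim.Adam(list(classifier.parameters()) + list(ktn.parameters()), lr=1e-3)
lam = 1.0                          # assumed weight on the transfer-matching loss

# Sketch of a training step on the source domain (cf. Algorithm 1).
for step in range(200):
    opt.zero_grad()
    loss_cls = F.cross_entropy(classifier(h_src), y_src)
    # Match source embeddings against target embeddings propagated through
    # the cross-type adjacency and the transfer module.
    loss_ktn = (h_src - adj_st @ ktn(h_tgt)).pow(2).sum().sqrt()
    (loss_cls + lam * loss_ktn).backward()
    opt.step()

# Sketch of a test step on the target domain (cf. Algorithm 2): map target
# embeddings into the source space and reuse the source classifier.
with torch.no_grad():
    y_tgt_pred = classifier(ktn(h_tgt)).argmax(dim=-1)
```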
Open Source Code — No. The paper does not include an explicit statement about releasing its source code or provide a link to a code repository for the methodology described.
Open Datasets — Yes. Open Academic Graph (OAG), a dataset introduced in (44), is composed of five types of nodes: papers (P), authors (A), institutions (I), venues (V), and fields (F), and their corresponding relationships. ... PubMed (39) is a network composed of four types of nodes: genes (G), diseases (D), chemicals (C), and species (S), and their corresponding relationships.
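Both datasets are typed graphs. As a minimal, hypothetical illustration of how such node and edge types are represented, the snippet below builds an OAG-style toy graph with DGL's `heterograph` constructor; the edges and relation names are invented, not taken from the paper.

```python
import torch
import dgl

# Invented toy edges for an OAG-style typed graph (papers, authors, venues,
# fields). Each key is a (source type, relation, destination type) triple.
graph = dgl.heterograph({
    ("author", "writes", "paper"):      (torch.tensor([0, 1]), torch.tensor([0, 0])),
    ("paper", "published_in", "venue"): (torch.tensor([0]), torch.tensor([0])),
    ("paper", "has_field", "field"):    (torch.tensor([0]), torch.tensor([1])),
})
print(graph.ntypes, graph.etypes)
```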
Dataset Splits — No. The paper describes training and test phases but does not specify exact percentages or counts for training, validation, and test splits, nor does it mention cross-validation. For example, it does not state '80% for training, 10% for validation, 10% for testing'.
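For contrast, an explicit split specification of the kind the paper omits takes only a few lines; the 80/10/10 proportions below come from the row's own example, not from the paper.

```python
import torch

# Illustrative only: the paper does not report its splits.
n_nodes = 1000
perm = torch.randperm(n_nodes)
train_idx = perm[: int(0.8 * n_nodes)]                   # 80% train
val_idx = perm[int(0.8 * n_nodes): int(0.9 * n_nodes)]   # 10% validation
test_idx = perm[int(0.9 * n_nodes):]                     # 10% test
```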
Hardware Specification — No. The paper mentions that 'GPUs are partially supported by AWS Cloud Credit for Research program', but it does not specify any particular GPU models, CPU models, memory sizes, or cloud instance types used for the experiments.
Software Dependencies — No. The paper mentions the use of HMPNN as its base HGNN model and cites PyG (7), TF-GNN (6), and DGL (34) as popular GNN libraries, but it does not specify version numbers for any of these software dependencies.
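Both of the preceding rows (hardware and software) could be settled by a short environment record logged at run time. The snippet below is a generic sketch, not something the paper provides; the same `__version__` query applies to DGL or PyG when installed.

```python
import platform
import torch

# Generic environment record; nothing here reflects the paper's actual setup.
print("python:", platform.python_version())
print("cpu:", platform.processor() or platform.machine())
print("gpu:", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "none")
print("torch:", torch.__version__)
```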
Experiment Setup — No. The paper describes the general experimental process (training/testing phases, problem definition) but does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed system-level training configurations.
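An explicit configuration of the kind this row describes might look like the following dictionary; every value is hypothetical, chosen only to show the shape of a complete setup specification.

```python
# Purely illustrative values; the paper reports none of these numbers.
config = {
    "hidden_dim": 128,            # HGNN embedding width
    "num_layers": 2,              # message-passing depth
    "learning_rate": 1e-3,
    "batch_size": 256,
    "epochs": 100,
    "transfer_loss_weight": 1.0,  # weight on the KTN matching term
    "seed": 0,
}
```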