Graph Transplant: Node Saliency-Guided Graph Mixup with Local Structure Preservation

Authors: Joonhyung Park, Hajin Shim, Eunho Yang (pp. 7966-7974)

AAAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show the consistent superiority of our method over other basic data augmentation baselines. We also demonstrate that Graph Transplant enhances the performance in terms of robustness and model calibration.
Researcher Affiliation | Collaboration | Joonhyung Park*1, Hajin Shim*1, Eunho Yang 1,2; 1: Korea Advanced Institute of Science and Technology (KAIST), 2: AITRICS
Pseudocode | Yes | Algorithm 1: Partial K-hop, Algorithm 2: Graph Transplant EP, Algorithm 3: Train with Graph Transplant EP (see the first sketch after the table)
Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a direct link to a code repository for the described methodology.
Open Datasets | Yes | To demonstrate that Graph Transplant brings consistent improvement across various domains and dataset sizes, we conduct the experiments on 8 benchmark datasets: COLLAB (Yanardag and Vishwanathan 2015) for social networks, ENZYMES (Schomburg et al. 2004) and ogbg-ppa (Hu et al. 2020) for bioinformatics, COIL-DEL (Riesen and Bunke 2008) for computer vision, and NCI1 (Wale, Watson, and Karypis 2008), Mutagenicity (Kazius, McGuire, and Bursi 2005), NCI-H23, MOLT-4, P388 (Yan et al. 2008) for molecules. (A loading sketch follows the table.)
Dataset Splits | Yes | We evaluate the model with 5-fold cross-validation. For small-scale datasets, the experiments are repeated three times, yielding a total of 15 different stratified train/validation/test splits with a ratio of 3:1:1. (A split-generation sketch follows the table.)
Hardware Specification | No | The paper does not specify the hardware used for running the experiments, such as specific GPU or CPU models.
Software Dependencies | No | The paper mentions GNN architectures (GCN, GCS, GAT, GIN) but does not provide specific version numbers for any software dependencies or libraries used in the implementation.
Experiment Setup | Yes | For small- and medium-scale datasets, we train the GNNs for 1000 epochs with early stopping: training is terminated when there is no further increase in validation accuracy for 1500 iterations. Similarly, the learning rate is decayed by 0.5 if there is no improvement in validation loss for 1000 iterations. (A training-loop sketch follows the table.)
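The Pseudocode row above only names Algorithm 1 (Partial K-hop); the paper's pseudocode is not reproduced here. The following is a minimal sketch of what a partial K-hop extraction could look like, assuming the subgraph is grown by breadth-first expansion from an anchor node up to K hops and truncated at a node budget. The function name `partial_k_hop`, the adjacency-dict input, and the budget parameter are illustrative assumptions, not the authors' implementation.

```python
import random
from collections import deque

def partial_k_hop(adj, anchor, k, budget):
    """Illustrative sketch (not the paper's code): grow a node set around `anchor`
    by BFS, expanding at most `k` hops and stopping once `budget` nodes are kept.
    `adj` maps node -> list of neighbour nodes."""
    selected = {anchor}
    frontier = deque([(anchor, 0)])
    while frontier and len(selected) < budget:
        node, hop = frontier.popleft()
        if hop == k:
            continue
        neighbours = [n for n in adj[node] if n not in selected]
        random.shuffle(neighbours)  # "partial": neighbours are taken in random order
        for n in neighbours:
            if len(selected) >= budget:
                break
            selected.add(n)
            frontier.append((n, hop + 1))
    return selected

# Toy usage on a 5-node graph
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(partial_k_hop(adj, anchor=0, k=2, budget=4))
```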
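The eight benchmarks in the Open Datasets row are publicly available. The paper does not state which loaders were used; a plausible way to fetch them is through PyTorch Geometric's TUDataset collection and the OGB package (both assumed dependencies, with dataset names as registered in those collections):

```python
from torch_geometric.datasets import TUDataset
from ogb.graphproppred import PygGraphPropPredDataset

# TU Dortmund benchmarks named in the paper (names as assumed in the TU collection)
tu_names = ["COLLAB", "ENZYMES", "COIL-DEL", "NCI1",
            "Mutagenicity", "NCI-H23", "MOLT-4", "P388"]
tu_datasets = {name: TUDataset(root="data/TUD", name=name) for name in tu_names}

# ogbg-ppa ships with its own standard split, retrievable via get_idx_split()
ppa = PygGraphPropPredDataset(name="ogbg-ppa", root="data/ogb")
split_idx = ppa.get_idx_split()
```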
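The 3:1:1 ratio quoted in the Dataset Splits row is consistent with 5-fold cross-validation in which each held-out fold is the test set and a quarter of the remaining data is set aside for validation; repeating this three times gives the 15 splits. A minimal sketch of that protocol, assuming scikit-learn's stratified utilities (not mentioned in the paper), might look like:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, train_test_split

def make_splits(labels, n_folds=5, n_repeats=3, seed=0):
    """Hypothetical reconstruction of the 3:1:1 stratified splits: each test fold
    leaves 4/5 of the data, which is split 3:1 into train/validation; repeating
    3 times yields 15 (train_idx, val_idx, test_idx) triples."""
    labels = np.asarray(labels)
    splits = []
    for r in range(n_repeats):
        skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed + r)
        for train_val_idx, test_idx in skf.split(np.zeros(len(labels)), labels):
            train_idx, val_idx = train_test_split(
                train_val_idx,
                test_size=0.25,  # 1/4 of the remaining 4/5 == 1/5 overall
                stratify=labels[train_val_idx],
                random_state=seed + r,
            )
            splits.append((train_idx, val_idx, test_idx))
    return splits
```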
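The schedule quoted in the Experiment Setup row (up to 1000 epochs, early stopping after 1500 iterations without a validation-accuracy gain, learning rate halved after 1000 iterations without a validation-loss improvement) can be sketched as below. The optimizer, base learning rate, once-per-epoch validation, and the `evaluate` callable are assumptions for illustration; the paper does not specify them here.

```python
import torch

def train(model, train_loader, evaluate, max_epochs=1000,
          es_patience=1500, lr_patience=1000, base_lr=1e-3):
    """Sketch of the quoted schedule. `model(batch)` is assumed to return a loss
    tensor and `evaluate(model)` a (val_acc, val_loss) pair; both are placeholders."""
    opt = torch.optim.Adam(model.parameters(), lr=base_lr)  # optimizer and LR assumed
    best_acc, best_loss = 0.0, float("inf")
    it = last_acc_it = last_loss_it = 0
    for epoch in range(max_epochs):
        for batch in train_loader:
            opt.zero_grad()
            loss = model(batch)  # placeholder forward pass returning a scalar loss
            loss.backward()
            opt.step()
            it += 1
        val_acc, val_loss = evaluate(model)  # validated once per epoch for brevity
        if val_acc > best_acc:
            best_acc, last_acc_it = val_acc, it
        if val_loss < best_loss:
            best_loss, last_loss_it = val_loss, it
        if it - last_loss_it >= lr_patience:  # decay LR by 0.5 on a loss plateau
            for g in opt.param_groups:
                g["lr"] *= 0.5
            last_loss_it = it
        if it - last_acc_it >= es_patience:   # early stopping on accuracy plateau
            break
    return model
```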