Subgraph Pooling: Tackling Negative Transfer on Graphs

Authors: Zehong Wang, Zheyuan Zhang, Chuxu Zhang, Yanfang Ye

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments to evaluate its superiority under various settings. The proposed SP methods are effective yet elegant, which can be easily applied on top of any backbone Graph Neural Networks (GNNs). Our code and data are available at: https://github.com/Zehong-Wang/Subgraph-Pooling. (Section 4.1, Experimental Setup) Datasets. We use Citation [Wu et al., 2020], consisting of ACMv9 and DBLPv8; Airport [Ribeiro et al., 2017], including Brazil, USA, and Europe; Twitch [Rozemberczki et al., 2021], collected from six countries (DE, EN, ES, FR, PT, RU); Arxiv [Hu et al., 2020], consisting of papers with varying publication times; and the dynamic financial network Elliptic [Weber et al., 2019], which contains dozens of graph snapshots where each node is a Bitcoin transaction. Baselines. We include four GNN backbones: GCN [Kipf and Welling, 2017], SAGE [Hamilton et al., 2017], GAT [Veličković et al., 2018], and SGC [Wu et al., 2019]. Settings. We pre-train the model on the source with 60 percent of labeled nodes and adapt the model to the target. The adaptation involves three settings: (1) directly applying the pre-trained model without any fine-tuning (Without FT); (2) fine-tuning the last layer (classifier) of the model (FT Last Layer); (3) fine-tuning all parameters (Fully FT). We take a 10/10/80 split to form train/valid/test sets on the target graph.
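The three adaptation settings differ only in which parameters remain trainable on the target graph. A minimal PyTorch sketch of that protocol, assuming the model exposes its classification head as `model.classifier` (a hypothetical attribute name, not taken from the paper):

```python
import torch

def configure_adaptation(model: torch.nn.Module, setting: str):
    """Freeze parameters according to one of the three adaptation settings."""
    if setting == "without_ft":
        # (1) Apply the pre-trained model as-is: nothing is updated on the target.
        for p in model.parameters():
            p.requires_grad = False
    elif setting == "ft_last_layer":
        # (2) Freeze the encoder, fine-tune only the classifier head.
        for p in model.parameters():
            p.requires_grad = False
        for p in model.classifier.parameters():  # hypothetical attribute
            p.requires_grad = True
    elif setting == "fully_ft":
        # (3) Fine-tune every parameter.
        for p in model.parameters():
            p.requires_grad = True
    else:
        raise ValueError(f"unknown setting: {setting}")
    return [p for p in model.parameters() if p.requires_grad]
```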
Researcher Affiliation | Academia | University of Notre Dame, Indiana, USA; Brandeis University, Massachusetts, USA
Pseudocode | No | The paper describes the methods (Subgraph Pooling and Subgraph Pooling++) using prose and mathematical equations, but it does not include a dedicated pseudocode block or algorithm listing.
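Since no algorithm listing exists to copy, the operation the prose describes, smoothing each node's embedding by pooling over its surrounding subgraph after GNN encoding, can only be sketched. The k-hop neighborhood choice and mean pooling below are assumptions drawn from that description, not the authors' implementation:

```python
import torch
from torch_geometric.utils import k_hop_subgraph

def subgraph_pool(h: torch.Tensor, edge_index: torch.Tensor, num_hops: int = 2):
    """Illustrative sketch: replace each node's embedding with the mean of the
    embeddings inside its k-hop subgraph (the node itself is included)."""
    pooled = torch.empty_like(h)
    for v in range(h.size(0)):
        # Nodes reachable from v within `num_hops` hops.
        nodes, _, _, _ = k_hop_subgraph(v, num_hops, edge_index,
                                        num_nodes=h.size(0))
        pooled[v] = h[nodes].mean(dim=0)
    return pooled
```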
Open Source Code | Yes | Our code and data are available at: https://github.com/Zehong-Wang/Subgraph-Pooling.
Open Datasets | Yes | Datasets. We use Citation [Wu et al., 2020], consisting of ACMv9 and DBLPv8; Airport [Ribeiro et al., 2017], including Brazil, USA, and Europe; Twitch [Rozemberczki et al., 2021], collected from six countries (DE, EN, ES, FR, PT, RU); Arxiv [Hu et al., 2020], consisting of papers with varying publication times; and the dynamic financial network Elliptic [Weber et al., 2019], which contains dozens of graph snapshots where each node is a Bitcoin transaction.
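Several of these benchmarks ship with standard packages, which supports the "open" verdict: Airport, Twitch, and Elliptic are in PyTorch Geometric, and Arxiv is in OGB. A loading sketch (the root paths are placeholders, and the Citation pair ACMv9/DBLPv8 is not in PyG's catalogue, so it presumably comes from the authors' repository):

```python
from torch_geometric.datasets import Airports, Twitch, EllipticBitcoinDataset
from ogb.nodeproppred import PygNodePropPredDataset

usa      = Airports(root="data/airports", name="USA")    # also: "Brazil", "Europe"
twitch   = Twitch(root="data/twitch", name="DE")         # also: EN, ES, FR, PT, RU
elliptic = EllipticBitcoinDataset(root="data/elliptic")  # Bitcoin transaction snapshots
arxiv    = PygNodePropPredDataset(name="ogbn-arxiv", root="data/ogb")
```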
Dataset Splits | Yes | We take a 10/10/80 split to form train/valid/test sets on the target graph.
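A 10/10/80 random node partition is straightforward to reproduce; a minimal sketch (the fixed seed is an illustrative assumption, not reported in the paper):

```python
import torch

def random_split(num_nodes: int, train: float = 0.1, valid: float = 0.1, seed: int = 0):
    """Return train/valid/test node index tensors under a 10/10/80 split."""
    perm = torch.randperm(num_nodes, generator=torch.Generator().manual_seed(seed))
    n_train, n_valid = int(train * num_nodes), int(valid * num_nodes)
    return perm[:n_train], perm[n_train:n_train + n_valid], perm[n_train + n_valid:]
```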
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types) used for running the experiments.
Software Dependencies | No | The paper does not explicitly list software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions) that would allow for a reproducible setup of the environment.
Experiment Setup | Yes | We pre-train the model on the source with 60 percent of labeled nodes and adapt the model to the target. The adaptation involves three settings: (1) directly applying the pre-trained model without any fine-tuning (Without FT); (2) fine-tuning the last layer (classifier) of the model (FT Last Layer); (3) fine-tuning all parameters (Fully FT). We take a 10/10/80 split to form train/valid/test sets on the target graph.