EvoluNet: Advancing Dynamic Non-IID Transfer Learning on Graphs

Authors: Haohui Wang, Yuzhen Mao, Yujun Yan, Yaoqing Yang, Jianhui Sun, Kevin Choi, Balaji Veeramani, Alison Hu, Edward Bowen, Tyler Cody, Dawei Zhou

Venue: ICML 2024

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "EVOLUNET outperforms the state-of-the-art models by up to 12.1%, demonstrating its effectiveness in transferring knowledge from dynamic source graphs to dynamic target graphs. In this section, we evaluate the performance of EVOLUNET on six benchmark datasets." |
| Researcher Affiliation | Collaboration | "1 Department of Computer Science, Virginia Tech, Blacksburg, VA, USA. 2 Department of Computer Science, Dartmouth College, Hanover, NH, USA. 3 Deloitte & Touche LLP, USA. 4 Virginia Tech National Security Institute, Arlington, VA, USA." |
| Pseudocode | Yes | "We provide the pseudo-code of EVOLUNET in Algorithm 1 and we employ Adam (Kingma & Ba, 2015) as the optimizer. Algorithm 1 The EVOLUNET Learning Framework." (See the training-loop sketch after the table.) |
| Open Source Code | Yes | "We publish our data and code at https://github.com/wanghh7/EvoluNet. Reproducibility: We have released our code and data at https://github.com/wanghh7/EvoluNet." |
| Open Datasets | Yes | "Datasets: We evaluate EVOLUNET on our benchmark which is composed of three real-world graphs, including two graphs extracted from Digital Bibliography & Library Project: DBLP-3 and DBLP-5 (Fan et al., 2021)... and one graph generated from human connectome project: HCP (Fan et al., 2021)" |
| Dataset Splits | No | "We conduct experiments with only five labeled samples in each class of the target dataset and test model performance based on all the rest of the unlabeled nodes." This indicates how samples are used for training/testing but does not specify explicit dataset splits (e.g., percentages or counts for train/validation/test sets). (See the split sketch after the table.) |
| Hardware Specification | Yes | "The experiments are performed on an Ubuntu 20 machine with 16 3.8 GHz AMD cores and a single 24 GB NVIDIA GeForce RTX 3090." |
| Software Dependencies | No | The paper mentions using Adam as the optimizer but does not specify version numbers for other software dependencies (e.g., Python, PyTorch, TensorFlow, specific libraries). |
| Experiment Setup | Yes | "For a fair comparison, the output dimensions of all GNNs including baselines and EVOLUNET are set to 16. We conduct experiments with only five labeled samples in each class of the target dataset... for classical GNNs, they are trained on the target dataset for 1000 epochs; for transfer learning models, after training on the source dataset for 2000 epochs, they are fine-tuned on the target dataset for 600 epochs... For EVOLUNET, it is firstly pre-trained for 2000 epochs, then fine-tuned on the target dataset for 600 epochs... We use Adam optimizer with learning rate 3e-3. We run all the experiments with 25 random seeds." (See the seed-repetition sketch after the table.) |
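Algorithm 1 itself is not reproduced on this page. As a rough illustration of the optimizer setup and training schedule the excerpts describe (Adam at learning rate 3e-3, 2000 pre-training epochs on the source, 600 fine-tuning epochs on the target, output dimension 16), here is a minimal PyTorch sketch. The model and losses below are dummy placeholders, not the authors' code; the real implementation lives at https://github.com/wanghh7/EvoluNet.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the released EvoluNet model.
model = nn.Linear(100, 16)  # output dimension 16, as reported in the paper
optimizer = torch.optim.Adam(model.parameters(), lr=3e-3)  # Adam, lr 3e-3

def step(loss_fn):
    """One optimization step of the generic pre-train/fine-tune loop."""
    optimizer.zero_grad()
    loss = loss_fn()
    loss.backward()
    optimizer.step()

# Dummy objectives standing in for the paper's source/target losses.
dummy_input = torch.randn(8, 100)
pretrain_loss = lambda: model(dummy_input).pow(2).mean()
finetune_loss = lambda: model(dummy_input).abs().mean()

for epoch in range(2000):  # pre-train on the dynamic source graphs
    step(pretrain_loss)
for epoch in range(600):   # fine-tune on the dynamic target graph
    step(finetune_loss)
```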
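The split protocol is given only in prose, which is why the Dataset Splits variable is marked "No". One plausible reading is: sample five labeled nodes per class for training and treat every remaining node as test data, with no validation set. The function below is our own NumPy sketch of that reading, not the authors' code.

```python
import numpy as np

def few_shot_split(labels: np.ndarray, shots_per_class: int = 5,
                   seed: int = 0) -> tuple[np.ndarray, np.ndarray]:
    """Pick `shots_per_class` labeled nodes per class for training;
    all remaining nodes form the test set (no validation set is
    described in the excerpt)."""
    rng = np.random.default_rng(seed)
    train_idx = []
    for c in np.unique(labels):
        nodes_c = np.flatnonzero(labels == c)
        train_idx.extend(rng.choice(nodes_c, size=shots_per_class,
                                    replace=False))
    train_idx = np.array(sorted(train_idx))
    test_idx = np.setdiff1d(np.arange(len(labels)), train_idx)
    return train_idx, test_idx

# Example: 3 classes of 20 nodes -> 15 training nodes, 45 test nodes.
labels = np.repeat([0, 1, 2], 20)
train_idx, test_idx = few_shot_split(labels)
assert len(train_idx) == 15 and len(test_idx) == 45
```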
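Finally, the paper reports running all experiments with 25 random seeds. A sketch of that repetition protocol follows; the body of `run_experiment` is a placeholder, and whether the paper reports the mean, the standard deviation, or both is not stated in the excerpt, so the aggregation below is an assumption.

```python
import statistics
import torch

def run_experiment(seed: int) -> float:
    """Placeholder for one full pre-train + fine-tune + evaluate run,
    returning test accuracy. The real pipeline is in the authors' repo."""
    torch.manual_seed(seed)
    return torch.rand(()).item()  # dummy accuracy for illustration

accs = [run_experiment(seed) for seed in range(25)]  # 25 random seeds
print(f"accuracy: {statistics.mean(accs):.3f} "
      f"+/- {statistics.stdev(accs):.3f}")
```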