PRES: Toward Scalable Memory-Based Dynamic Graph Neural Networks

Authors: Junwei Su, Difan Zou, Chuan Wu

ICLR 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To validate our theoretical analysis and the effectiveness of our proposed method, we conduct an extensive experimental study. The experimental results (Sec. 6.1) demonstrate that our approach enables the utilization of up to a 4× larger temporal batch (3.4× speed-up) during MDGNN training, without compromising model generalization performance." (from Section 1, page 2); "We present an experimental study to validate our theoretical results and evaluate the effectiveness of our proposed method, PRES." (from Section 6, page 5)
Researcher Affiliation | Academia | Junwei Su, Difan Zou & Chuan Wu, Department of Computer Science, University of Hong Kong, {jwsu,dzou,cwu}@cs.hku.hk
Pseudocode | Yes | "Appendix A contains pseudocode for the training procedure, along with a more detailed description." (from Section 3, page 4); Algorithm 1, "Standard Training Procedure for Memory-based DGNN" (from Appendix A, page 8); Algorithm 2, "PRES" (from Appendix A, page 9). A minimal illustrative sketch of such a training loop is given after this table.
Open Source Code | Yes | The implementation is available at: https://github.com/jwsu825/MDGNN_BS
Open Datasets | Yes | "We use four public dynamic graph benchmark datasets (Kumar et al., 2019), namely REDDIT, WIKI, MOOC and LASTFM, as well as GDELT. Details of these datasets are described in the appendix."
Dataset Splits | Yes | "The training procedure of a memory-based dynamic graph neural network (MDGNN) involves several steps. First, the dataset is divided into training, validation, and test sets using a chronological split." A sketch of such a split follows the table.
Hardware Specification | Yes | All experiments in the paper were conducted on the following machine. CPU: two Intel Xeon Gold 6230 2.1GHz (20C/40T, 10.4GT/s, 27.5M cache, Turbo, HT, 125W), DDR4-2933; GPU: four NVIDIA Tesla V100 SXM2 32GB accelerators with NVLink; Memory: 256GB (8 x 32GB) RDIMM, 3200MT/s, dual rank; OS: Ubuntu 18.04 LTS
Software Dependencies | No | The paper mentions the operating system (Ubuntu 18.04 LTS) but does not provide specific version numbers for other software dependencies, such as programming languages (e.g., Python), deep learning frameworks (e.g., PyTorch, TensorFlow), or libraries.
Experiment Setup | Yes | "We closely follow the settings of (Zhou et al., 2022) for hyperparameters and the chronological split of datasets, as described in more detail in the appendix." (from Section 6, page 6); "The results are averaged over five independent trials with β = 0.1 for PRES and 50 epochs." (from Table 1 caption, page 5). The quoted settings are collected in a short sketch after this table.
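
The paper's Algorithm 1 (the standard training procedure for a memory-based DGNN) lives in its appendix and repository; as a rough orientation only, below is a minimal, self-contained sketch of how a memory-based training loop over temporal batches is commonly structured. The toy architecture and every name in it (ToyMemoryDGNN, update_memory, the GRU memory cell) are illustrative assumptions, not the authors' implementation; see https://github.com/jwsu825/MDGNN_BS for the real code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMemoryDGNN(nn.Module):
    """Hypothetical toy model: a per-node memory plus a link predictor."""
    def __init__(self, num_nodes, msg_dim, mem_dim):
        super().__init__()
        self.register_buffer("memory", torch.zeros(num_nodes, mem_dim))
        self.cell = nn.GRUCell(msg_dim, mem_dim)     # memory update function
        self.predictor = nn.Linear(2 * mem_dim, 1)   # link prediction head

    def score(self, src, dst):
        # Score an interaction from the current (possibly stale) memory.
        h = torch.cat([self.memory[src], self.memory[dst]], dim=-1)
        return self.predictor(h).squeeze(-1)

    @torch.no_grad()
    def update_memory(self, src, dst, msg):
        # Fold this batch's events into the endpoints' memory states.
        # (Real implementations also backpropagate through this update;
        # it is detached here only to keep the sketch short.)
        self.memory[src] = self.cell(msg, self.memory[src])
        self.memory[dst] = self.cell(msg, self.memory[dst])

# A chronologically ordered toy event stream: (source, destination, message).
num_nodes, num_events, msg_dim, mem_dim, batch_size = 100, 1000, 8, 16, 32
src = torch.randint(num_nodes, (num_events,))
dst = torch.randint(num_nodes, (num_events,))
msg = torch.randn(num_events, msg_dim)

model = ToyMemoryDGNN(num_nodes, msg_dim, mem_dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for start in range(0, num_events, batch_size):     # one temporal batch at a time
    s = src[start:start + batch_size]
    d = dst[start:start + batch_size]
    m = msg[start:start + batch_size]
    neg = torch.randint(num_nodes, d.shape)        # random negative destinations
    logits = torch.cat([model.score(s, d), model.score(s, neg)])
    labels = torch.cat([torch.ones(len(s)), torch.zeros(len(s))])
    loss = F.binary_cross_entropy_with_logits(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    model.update_memory(s, d, m)                   # memory carries across batches
```

The batch_size variable here is the temporal batch size whose enlargement PRES targets: each batch is scored against memory that was last written by earlier batches, which is why larger batches normally increase staleness.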
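For the chronological split quoted under Dataset Splits, a minimal sketch of the idea follows. The 70/15/15 ratios are illustrative placeholders (the paper follows the split of Zhou et al., 2022), and chronological_split is a hypothetical helper, not the authors' API.

```python
# Minimal sketch of a chronological split for a timestamped event stream.
def chronological_split(events, val_ratio=0.15, test_ratio=0.15):
    """events: iterable of (src, dst, timestamp) tuples."""
    events = sorted(events, key=lambda e: e[2])    # order events by time
    n = len(events)
    n_test = int(n * test_ratio)
    n_val = int(n * val_ratio)
    train = events[: n - n_val - n_test]           # earliest interactions
    val = events[n - n_val - n_test : n - n_test]  # middle slice
    test = events[n - n_test :]                    # most recent interactions
    return train, val, test

# Example: 10 toy events with increasing timestamps.
toy = [(i % 3, (i + 1) % 3, float(i)) for i in range(10)]
train, val, test = chronological_split(toy)
print(len(train), len(val), len(test))             # 8 1 1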
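Finally, the quoted experiment settings can be summarized in one place. This hypothetical config dict merely collects the values stated in the paper; it is not a file from the repository, and the remaining hyperparameters follow Zhou et al. (2022) and the paper's appendix.

```python
# Hypothetical summary of the quoted settings, not a file from the repository.
config = {
    "beta": 0.1,              # PRES hyperparameter β (Table 1 caption)
    "epochs": 50,             # training epochs per trial
    "num_trials": 5,          # results averaged over independent trials
    "split": "chronological", # train/val/test split strategy
}
```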