Neural Temporal Walks: Motif-Aware Representation Learning on Continuous-Time Dynamic Graphs

Authors: Ming Jin, Yuan-Fang Li, Shirui Pan

NeurIPS 2022

Reproducibility assessment: each entry below lists the variable, the result, and the supporting LLM response.

Research Type: Experimental
"Our method demonstrates overwhelming superiority under both transductive and inductive settings on six real-world datasets."

Researcher Affiliation: Academia
Ming Jin (Monash University, ming.jin@monash.edu), Yuan-Fang Li (Monash University, yuanfang.li@monash.edu), Shirui Pan (Griffith University, s.pan@griffith.edu.au)

Pseudocode: Yes
"Algorithm 1: Sampling Temporal Walks"

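To illustrate what a temporal walk sampler of this kind looks like, here is a minimal, hypothetical sketch. It is not the paper's exact Algorithm 1: the function names, the `alpha` decay parameter, and the specific softmax bias toward temporally closer interactions are assumptions for illustration only.

```python
import math
import random
from collections import defaultdict

def build_adjacency(interactions):
    """interactions: list of (src, dst, timestamp) tuples."""
    adj = defaultdict(list)
    for u, v, t in interactions:
        adj[u].append((v, t))
        adj[v].append((u, t))  # treat interactions as undirected
    return adj

def sample_temporal_walk(adj, start_node, start_time, length, alpha=1e-6):
    """Sample one backward-in-time walk of up to `length` hops.

    Candidate edges must predate the current time; they are drawn with
    probability proportional to exp(alpha * time_gap), so temporally
    closer interactions are preferred. Returns (node, time) pairs.
    """
    walk = [(start_node, start_time)]
    node, t = start_node, start_time
    for _ in range(length):
        candidates = [(v, tv) for v, tv in adj[node] if tv < t]
        if not candidates:
            break
        # Shift by the newest candidate time for numerical stability.
        newest = max(tv for _, tv in candidates)
        weights = [math.exp(alpha * (tv - newest)) for _, tv in candidates]
        node, t = random.choices(candidates, weights=weights, k=1)[0]
        walk.append((node, t))
    return walk

# Example: one 3-hop walk rooted at an interaction observed at time 100.
events = [(0, 1, 10), (1, 2, 40), (0, 2, 70), (2, 3, 90)]
adj = build_adjacency(events)
print(sample_temporal_walk(adj, start_node=2, start_time=100, length=3))
```
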
Open Source Code: Yes
"Code is available at https://github.com/KimMeen/Neural-Temporal-Walks"

Open Datasets: Yes
"We evaluate model performance on six real-world datasets. CollegeMsg [17] is a social network dataset... Enron [17] is an email communication network. Taobao [46] is an attributed user behavior dataset... MOOC [17] is an attributed network... Wikipedia and Reddit [15] are two bipartite interaction networks..."

Dataset Splits: Yes
"In transductive link prediction, we sort and divide all N interactions in a dataset by time into three separate sets for training, validation, and testing. Specifically, the ranges of the training, validation, and testing sets are [0, N_trn), [N_trn, N_val), and [N_val, N], where N_trn/N and N_val/N are 0.7 and 0.85."

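As a concrete illustration of this chronological 70/15/15 split, here is a minimal sketch; it assumes interactions stored as (src, dst, timestamp) tuples, and the function name is hypothetical.

```python
def chronological_split(interactions):
    """Sort interactions by time and split 70% / 15% / 15%."""
    events = sorted(interactions, key=lambda e: e[2])  # sort by timestamp
    n = len(events)
    n_trn, n_val = int(0.70 * n), int(0.85 * n)        # split boundaries
    return events[:n_trn], events[n_trn:n_val], events[n_val:]
```
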
Hardware Specification: No
The paper states that "some computing resources for this project are supported by MASSIVE" and provides a URL to the MASSIVE website, but it does not specify exact GPU models, CPU models, or other detailed hardware used to run the experiments.

Software Dependencies: No
The paper mentions using the Adam optimizer and the Runge-Kutta method for solving ODEs, but it does not specify version numbers for any programming languages, libraries, or other key software components required for replication.

Experiment Setup: Yes
"Training Details. We implement and train all models under a unified evaluation framework with the Adam optimizer. The tuning of primary hyperparameters is discussed in Appendix C.3. In solving ODEs, we use the Runge-Kutta method with the number of function evaluations set to 8 by default. For fair comparisons and simplicity, we use sum-pooling when calculating node representations in both our method and CAWs. We also test a NeurTWs variant equipped with binary anonymization, while the default NeurTWs adopts unitary anonymization. All methods are tuned thoroughly with nonlinear 2-layer and 3-layer perceptrons to conduct downstream link prediction and node classification tasks, and we adopt the commonly used Area Under the ROC Curve (AUC) and Average Precision (AP) as the evaluation metrics."

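To make these solver and metric choices concrete, here is a hedged sketch assuming PyTorch with the torchdiffeq package and scikit-learn. The paper does not pin down its exact solver configuration, so the 8-evaluation budget is approximated below by an 8-point fixed-step RK4 integration grid, and the ODE vector field is a toy stand-in rather than the paper's learned dynamics.

```python
import torch
from torchdiffeq import odeint  # assumed dependency: pip install torchdiffeq
from sklearn.metrics import roc_auc_score, average_precision_score

class ODEFunc(torch.nn.Module):
    """Toy vector field standing in for the paper's learned dynamics."""
    def __init__(self, dim):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, dim), torch.nn.Tanh(),
            torch.nn.Linear(dim, dim),
        )

    def forward(self, t, h):
        return self.net(h)

dim, batch = 32, 16
func = ODEFunc(dim)
h0 = torch.randn(batch, dim)          # batch of initial node states
t_grid = torch.linspace(0.0, 1.0, 8)  # 8-point grid approximating the
                                      # paper's 8-evaluation budget
h_traj = odeint(func, h0, t_grid, method='rk4')  # fixed-step RK4 solve
h_final = h_traj[-1]                  # state at the final time point

# Adam optimizer, as named in the training details (placeholder loss).
optimizer = torch.optim.Adam(func.parameters(), lr=1e-3)
loss = h_final.pow(2).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()

# AUC and AP, the evaluation metrics named in the paper (dummy labels).
labels = torch.randint(0, 2, (batch,)).numpy()
scores = torch.sigmoid(h_final.sum(dim=1)).detach().numpy()
print(roc_auc_score(labels, scores), average_precision_score(labels, scores))
```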