Improving Temporal Link Prediction via Temporal Walk Matrix Projection

Authors: Xiaodong Lu, Leilei Sun, Tongyu Zhu, Weifeng Lv

NeurIPS 2024

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on 13 benchmark datasets verify the effectiveness and efficiency of TPNet, where TPNet outperforms other baselines on most datasets and achieves a maximum speedup of 33.3× compared to the SOTA baseline. |
| Researcher Affiliation | Academia | Xiaodong Lu, CCSE Lab, Beihang University, Beijing, China, xiaodonglu@buaa.edu.cn |
| Pseudocode | Yes | Algorithm 1: Node Representation Maintaining (G, λ, k, n, d_R) |
| Open Source Code | Yes | Our code can be found at https://github.com/lxd99/TPNet. |
| Open Datasets | Yes | Experiments are conducted on the following 13 benchmark datasets collected by [16]. |
| Dataset Splits | Yes | For dataset splitting, we chronologically split each dataset with 70%/15%/15% for training/validating/testing. |
| Hardware Specification | Yes | Experiments are conducted on an Ubuntu server, whose CPU and GPU devices are one Intel(R) Xeon(R) Gold 6226R CPU @ 2.90GHz with 64 CPU cores and four GeForce RTX 3090 GPUs with 24 GB memory, respectively. |
| Software Dependencies | No | The paper mentions software such as DyGLib and PyTorch but does not provide specific version numbers for these or other software dependencies required for replication. |
| Experiment Setup | Yes | For TPNet, the layer l of node representations, the number of recent interactions m, and the dimension d_R of the node representations are set to 3, 20, and 10·log(2E), where E is the number of interactions. We find the best time decay weight λ via grid search within a range of 10^-4 to 10^-7. |
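The report quotes Algorithm 1 ("Node Representation Maintaining") only by its signature, not its body. The sketch below illustrates the general idea behind temporal walk matrix projection: each node keeps a low-dimensional sketch that is decayed by exp(-λ·Δt) and accumulated as interactions stream in. The class name and the exact update rule are assumptions made for illustration, not the paper's algorithm.

```python
import numpy as np

class TemporalWalkProjector:
    """Illustrative sketch: per-node d_R-dimensional projections of
    time-decayed temporal walk matrices, maintained incrementally.
    The update rule is a plausible simplification, not TPNet's exact one."""

    def __init__(self, num_nodes: int, d_r: int, k: int, lam: float, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Layer 0: fixed random projection of node identities.
        self.base = rng.standard_normal((num_nodes, d_r)) / np.sqrt(d_r)
        # Layers 1..k: accumulated, time-decayed walk information.
        self.reps = np.zeros((k, num_nodes, d_r))
        self.last_t = np.zeros(num_nodes)
        self.lam = lam
        self.k = k

    def update(self, u: int, v: int, t: float) -> None:
        """Process interaction (u, v, t): decay each endpoint's sketch by
        exp(-λ·Δt), then push the neighbor's layer l-1 info into layer l."""
        for node, other in ((u, v), (v, u)):
            decay = np.exp(-self.lam * (t - self.last_t[node]))
            self.reps[:, node] *= decay
            for layer in range(self.k - 1, 0, -1):
                self.reps[layer, node] += self.reps[layer - 1, other]
            self.reps[0, node] += self.base[other]
            self.last_t[node] = t

# Tiny usage example on a 5-node stream
proj = TemporalWalkProjector(num_nodes=5, d_r=8, k=2, lam=1e-4)
proj.update(0, 1, t=10.0)
proj.update(1, 2, t=20.0)
```

The appeal of such a scheme is that each interaction costs O(k·d_R) regardless of graph size, which is consistent with the speedups the report quotes.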
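The 70%/15%/15% chronological split quoted above can be sketched as follows (a minimal illustration assuming interactions are stored as (src, dst, timestamp) tuples; not the authors' code):

```python
def chronological_split(interactions, train_frac=0.70, val_frac=0.15):
    """Split a temporal edge list chronologically (no shuffling):
    earliest 70% for training, next 15% for validation, last 15% for test."""
    events = sorted(interactions, key=lambda e: e[2])  # sort by timestamp
    n = len(events)
    train_end = int(n * train_frac)
    val_end = int(n * (train_frac + val_frac))
    return events[:train_end], events[train_end:val_end], events[val_end:]

# Example: 20 interactions with increasing timestamps
edges = [(i, i + 1, float(i)) for i in range(20)]
train, val, test = chronological_split(edges)
print(len(train), len(val), len(test))  # → 14 3 3
```

Sorting before slicing guarantees that every training interaction precedes every validation and test interaction, which is what makes the evaluation a genuine forecast of future links.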
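The hyperparameter choices in the last row can be made concrete with a short sketch. The report states d_R = 10·log(2E) without giving the log base, so a natural log and rounding up are assumed here; likewise, one-decade steps are assumed for the λ grid over the quoted 10^-4 to 10^-7 range:

```python
import math

def representation_dim(num_interactions: int) -> int:
    """d_R = 10 * log(2E); natural log and ceiling are assumptions,
    since the report does not specify the base or rounding."""
    return math.ceil(10 * math.log(2 * num_interactions))

# Assumed grid for the time-decay weight λ: one value per decade.
lambda_grid = [10 ** -p for p in range(4, 8)]  # [1e-4, 1e-5, 1e-6, 1e-7]

print(representation_dim(100_000))  # e.g. a dataset with 100k interactions → 123
```

Note that d_R grows only logarithmically with the number of interactions E, so the per-node sketches stay small even on the larger benchmark datasets.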