Long Range Propagation on Continuous-Time Dynamic Graphs

Authors: Alessio Gravina, Giulio Lovisotto, Claudio Gallicchio, Davide Bacciu, Claas Grohnfeldt

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this paper, we show how CTAN's (i) long-range modeling capabilities are substantiated by theoretical findings and how (ii) its empirical performance on synthetic long-range benchmarks and real-world benchmarks is superior to other methods. Our results motivate CTAN's ability to propagate long-range information in C-TDGs as well as the inclusion of long-range tasks as part of temporal graph models' evaluation.
Researcher Affiliation | Collaboration | (1) Department of Computer Science, University of Pisa, Pisa, Italy; (2) Huawei Technologies, Munich, Germany.
Pseudocode | No | The paper provides mathematical formulations and descriptions of its methods but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor does it present structured steps in a code-like format.
Open Source Code | Yes | We release the long-range benchmarks and the code implementing our methodology and reproducing our analysis at https://github.com/gravins/non-dissipative-propagation-CTDGs.
Open Datasets | Yes | For the C-TDG benchmarks we consider four well-known datasets proposed by Kumar et al. (2019), Wikipedia, Reddit, LastFM, and MOOC, to assess model performance in real-world settings, with the task of future link prediction (Kumar et al., 2019).
Dataset Splits | Yes | For all the datasets, we considered the same chronological split into train/val/test with ratios 70%-15%-15%, as proposed by Xu et al. (2020). (A minimal loading and splitting sketch follows the table.)
Hardware Specification | Yes | Each model is run with an embedding dimension equal to 100 on an Intel(R) Xeon(R) Gold 6278C CPU @ 2.60GHz.
Software Dependencies | No | The paper mentions implementing methods and using an optimizer (Adam), but does not provide specific version numbers for any software libraries or dependencies (e.g., PyTorch version, Python version, scikit-learn version).
Experiment Setup | Yes | We perform hyper-parameter tuning via grid search, considering a fixed parameter budget based on the number of graph convolutional layers (GCLs). Specifically, for the maximum number of GCLs in the grid, we select the embedding dimension so that the total number of parameters matches the budget; this embedding dimension is then used across every other configuration. We report more detailed information on each task in their respective subsections. Detailed information about hyper-parameter grids and model training is in Appendix E. (A parameter-budget sketch follows the table.)
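
To make the data setup concrete, the sketch below loads the four JODIE datasets and applies the 70%/15%/15% chronological split quoted above. It relies on PyTorch Geometric's JODIEDataset loader and TemporalData.train_val_test_split, which place the split boundaries at quantiles of the event times; this is an illustrative reconstruction, not the authors' released pipeline (their repository is linked in the table).

```python
# Minimal sketch, assuming PyTorch Geometric is installed; the authors'
# actual data pipeline lives in their repository and may differ.
from torch_geometric.datasets import JODIEDataset

for name in ["Wikipedia", "Reddit", "LastFM", "MOOC"]:
    data = JODIEDataset(root="data", name=name)[0]  # a TemporalData object
    # The split is chronological: boundaries are quantiles of event times.
    train, val, test = data.train_val_test_split(val_ratio=0.15, test_ratio=0.15)
    print(name, train.num_events, val.num_events, test.num_events)
```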
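
The fixed-parameter-budget protocol can likewise be sketched: find the embedding dimension at which the deepest configuration (the maximum number of GCLs) reaches the budget, then reuse that dimension for every shallower configuration in the grid. Here build_model, max_gcls, and budget are hypothetical placeholders, not names from the paper or its code.

```python
# Minimal sketch of budget-matched embedding-dimension selection.
# `build_model(num_layers, dim)` is a hypothetical factory returning a
# torch.nn.Module; it stands in for whatever model constructor is used.
import torch

def count_params(model: torch.nn.Module) -> int:
    # Total number of trainable parameters.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

def dim_for_budget(build_model, max_gcls: int, budget: int) -> int:
    # Grow the embedding dimension until the deepest model would exceed
    # the budget; the last dimension that fits is then reused across
    # every other (shallower) configuration in the grid.
    dim = 1
    while count_params(build_model(max_gcls, dim + 1)) <= budget:
        dim += 1
    return dim
```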