On the Equivalence Between Temporal and Static Equivariant Graph Representations
Authors: Jianfei Gao, Bruno Ribeiro
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate a simple architecture based on our time-then-graph framework on one synthetic dataset and five different real-world datasets. We also evaluate eight state-of-the-art temporal graph representation baselines on the same tasks. Each experiment is repeated ten times with different random initialization. |
| Researcher Affiliation | Academia | Department of Computer Science, Purdue University, West Lafayette, IN 47906, USA. |
| Pseudocode | No | The paper describes methods formally using mathematical equations but does not include any pseudocode or algorithm blocks. |
| Open Source Code | Yes | Source code (footnote 1). |
| Open Datasets | Yes | We evaluate a simple architecture based on our time-then-graph framework on one synthetic dataset and five different real-world datasets. ... Please refer to Appendix B.4 for an in-depth description of our datasets, whose general statistics are shown in Table 1. |
| Dataset Splits | Yes | In all experiments, datasets are split into 70% for training, 10% for validation, and 20% for test. (A split sketch follows this table.) |
| Hardware Specification | Yes | We collect the peak GPU memory and average training time per minibatch from all tasks utilizing GPU resources on a GeForce RTX 2080 Ti. (A measurement sketch follows this table.) |
| Software Dependencies | No | The paper refers to common deep learning architectures and components but does not provide specific software or library version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | For each learning rate configuration, we run 10 times and collect the corresponding mean performance, and select the best configuration according to the mean performance on the validation set. ... For the simplest task DynCSL, we train all methods for 30 epochs. For the largest task Brain10, we train for 200 epochs to ensure convergence of all methods. On the PeMS and COVID datasets, we train for 100 epochs. (A configuration-selection sketch follows this table.) |
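
The dataset-split row reports a 70/10/20 train/validation/test split. A minimal sketch of such a split is given below; the paper does not state whether the boundaries are chronological or random, so the index-based partition and the `split_indices` helper are illustrative assumptions only.

```python
# Hypothetical 70/10/20 split over dataset examples (illustrative only; the
# paper does not specify how the split boundaries are drawn).
def split_indices(n_examples, train_frac=0.7, valid_frac=0.1):
    """Return (train, valid, test) index lists covering n_examples items."""
    n_train = int(n_examples * train_frac)
    n_valid = int(n_examples * valid_frac)
    indices = list(range(n_examples))
    train = indices[:n_train]
    valid = indices[n_train:n_train + n_valid]
    test = indices[n_train + n_valid:]  # remaining ~20%
    return train, valid, test

train_idx, valid_idx, test_idx = split_indices(1000)
print(len(train_idx), len(valid_idx), len(test_idx))  # 700 100 200
```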
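
The hardware row reports peak GPU memory and average training time per minibatch on a GeForce RTX 2080 Ti. One way to collect such numbers in PyTorch is sketched below; `model`, `loader`, `optimizer`, and `loss_fn` are placeholders for the task-specific components, not the authors' actual training code.

```python
import time
import torch

def measure_training_cost(model, loader, optimizer, loss_fn, device="cuda"):
    """Return (peak GPU memory in MiB, mean wall-clock seconds per minibatch)."""
    model.to(device)
    torch.cuda.reset_peak_memory_stats(device)
    step_times = []
    for inputs, targets in loader:
        inputs, targets = inputs.to(device), targets.to(device)
        torch.cuda.synchronize(device)
        start = time.perf_counter()
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()
        torch.cuda.synchronize(device)  # wait for GPU work before stopping the clock
        step_times.append(time.perf_counter() - start)
    peak_mem_mib = torch.cuda.max_memory_allocated(device) / 2**20
    return peak_mem_mib, sum(step_times) / len(step_times)
```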
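
The experiment-setup row describes a learning-rate sweep in which each configuration is run 10 times and the configuration with the best mean validation performance is kept, with per-dataset epoch budgets (30 for DynCSL, 200 for Brain10, 100 for PeMS and COVID). A sketch of that selection loop follows; `train_and_evaluate` is a hypothetical stand-in for the authors' training routine, and the sketch assumes a higher-is-better validation metric.

```python
import statistics

# Epoch budgets quoted from the paper's setup description.
EPOCHS = {"DynCSL": 30, "Brain10": 200, "PeMS": 100, "COVID": 100}

def select_learning_rate(dataset, learning_rates, train_and_evaluate, n_runs=10):
    """Pick the learning rate with the best mean validation score over n_runs seeds."""
    mean_scores = {}
    for lr in learning_rates:
        scores = [
            train_and_evaluate(dataset, lr=lr, epochs=EPOCHS[dataset], seed=seed)
            for seed in range(n_runs)
        ]
        mean_scores[lr] = statistics.mean(scores)
    # Assumes higher validation score is better; use min() instead for error metrics.
    return max(mean_scores, key=mean_scores.get)
```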