Adaptive Data Augmentation on Temporal Graphs
Authors: Yiwei Wang, Yujun Cai, Yuxuan Liang, Henghui Ding, Changhu Wang, Siddharth Bhatia, Bryan Hooi
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results on standard datasets show that MeTA yields significant gains for the popular TGN models on edge prediction and node classification in an efficient manner. We evaluate our MeTA on edge prediction and node classification tasks using the standard temporal graph datasets: Reddit [3], Wikipedia [20], MOOC [16]. We measure its performance through the metrics: test accuracy, average precision (AP), and the area under the ROC curve (AUC), under inductive and transductive settings. Overall, MeTA achieves substantial improvements for popular TGN models [25, 39] and enhances them to outperform the baseline methods. |
| Researcher Affiliation | Collaboration | Yiwei Wang (1), Yujun Cai (2), Yuxuan Liang (1), Henghui Ding (3), Changhu Wang (3), Siddharth Bhatia (1), Bryan Hooi (1). 1: National University of Singapore; 2: Nanyang Technological University; 3: ByteDance |
| Pseudocode | Yes | Overall, we visualize our MeTA method in Fig. 2 and summarize it in Alg. 1 (see Appendix). |
| Open Source Code | No | The paper does not provide any concrete access information (e.g., a repository link, an explicit statement of code release, or mention of code in supplementary materials) for the methodology described. |
| Open Datasets | Yes | We use three standard temporal graph datasets: MOOC [16], Reddit [3], and Wikipedia [20] for evaluation. The statistics of these datasets are shown in Table 1. |
| Dataset Splits | Yes | We use a chronological train-validation-test split with a ratio of 70%-15%-15% following [25, 39]. (A sketch of such a split appears after the table.) |
| Hardware Specification | Yes | Time is the training time until convergence using a Linux server with an Intel(R) Xeon(R) E5-1650 v4 @ 3.60GHz CPU and a GeForce GTX 1080 Ti GPU. |
| Software Dependencies | No | The paper mentions common components like RNNs (LSTM or GRU) but does not provide specific software dependencies (e.g., library names with version numbers like PyTorch 1.x or Python 3.x) used for implementation. |
| Experiment Setup | Yes | For the hyper-parameters of our MeTA, we set the number of memory levels as L = 2, the data augmentation magnitudes as p1 = 0.1, p2 = 0.8, and the memory transition period as T = 10, by default. Note that these settings are fixed for all experiments unless specifically indicated to be changed. (See the configuration sketch after the table.) |
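
The chronological 70%-15%-15% split quoted in the Dataset Splits row is straightforward to reproduce. Below is a minimal sketch, assuming interactions are stored as arrays with per-edge timestamps; the function name `chronological_split` and the quantile-based cutoffs are illustrative, not the authors' exact implementation.

```python
# Minimal sketch of a chronological 70%-15%-15% split over temporal edges.
# Assumes one timestamp per interaction; earliest 70% -> train, next 15% ->
# validation, last 15% -> test. Names and quantile logic are assumptions.
import numpy as np

def chronological_split(ts, val_ratio=0.15, test_ratio=0.15):
    """Return (train, val, test) edge indices split by timestamp."""
    val_cut, test_cut = np.quantile(ts, [1 - val_ratio - test_ratio,
                                         1 - test_ratio])
    train_idx = np.where(ts <= val_cut)[0]
    val_idx = np.where((ts > val_cut) & (ts <= test_cut))[0]
    test_idx = np.where(ts > test_cut)[0]
    return train_idx, val_idx, test_idx

# Example: 10,000 interactions with increasing timestamps.
ts = np.sort(np.random.rand(10_000))
train_idx, val_idx, test_idx = chronological_split(ts)
print(len(train_idx), len(val_idx), len(test_idx))  # ~7000, ~1500, ~1500
```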
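
For completeness, here is a hedged sketch pairing the defaults reported in the Experiment Setup row (L = 2, p1 = 0.1, p2 = 0.8, T = 10) with the edge-prediction metrics the paper names (AP and AUC). The `MetaConfig` dataclass and its field names are assumptions for illustration, not the authors' code; the metric calls use scikit-learn.

```python
# Hedged sketch: MetaConfig mirrors the reported defaults; the class and
# field names are hypothetical. Metrics follow the paper's AP/AUC reporting.
from dataclasses import dataclass, field
from typing import List

import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

@dataclass
class MetaConfig:
    num_memory_levels: int = 2                       # L
    aug_magnitudes: List[float] = field(
        default_factory=lambda: [0.1, 0.8])          # p1, p2 per memory level
    memory_transition_period: int = 10               # T

def edge_prediction_metrics(y_true, y_score):
    """AP and ROC-AUC, the edge-prediction metrics reported in the paper."""
    return {
        "ap": average_precision_score(y_true, y_score),
        "auc": roc_auc_score(y_true, y_score),
    }

# Toy usage on dummy labels/scores.
y_true = np.array([1, 0, 1, 1, 0])
y_score = np.array([0.9, 0.2, 0.8, 0.6, 0.3])
print(MetaConfig())
print(edge_prediction_metrics(y_true, y_score))
```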