Direct Embedding of Temporal Network Edges via Time-Decayed Line Graphs
Authors: Sudhanshu Chanpuriya, Ryan A. Rossi, Sungchul Kim, Tong Yu, Jane Hoffswell, Nedim Lipka, Shunan Guo, Cameron N Musco
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results on real-world networks demonstrate our method's efficacy and efficiency on both link classification and prediction. |
| Researcher Affiliation | Collaboration | 1University of Massachusetts Amherst, {schanpuriya,cmusco}@cs.umass.edu 2Adobe Research, {ryrossi,sukim,tyu,jhoffs,lipka,sguo}@adobe.com |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | We release code in the form of a Jupyter notebook (Pérez & Granger, 2007) demo which is available at github.com/schariya/tdlg. |
| Open Datasets | Yes | We evaluate our method on two kinds of tasks, edge classification and temporal link prediction, on a collection of five real-world temporal network datasets. The statistics for these networks are provided in Table 2, but we defer discussion of these datasets to Appendix A.2. |
| Dataset Splits | No | We make random 70%/30% splits of training/test data, and report test AUC of binary classification across 10 trials with 10 random splits. The paper specifies train/test splits but does not explicitly mention a separate validation set or split for hyperparameter tuning. |
| Hardware Specification | Yes | All experiments are run on an Xeon Gold 6130 CPU and Tesla v100 16GB GPU |
| Software Dependencies | No | We use the scikit-learn (Pedregosa et al., 2011) implementation of logistic regression. The paper mentions software tools like scikit-learn and Jupyter notebook, but does not provide specific version numbers for these or other dependencies required for reproducibility. |
| Experiment Setup | Yes | For all embedding methods, we use the scikit-learn (Pedregosa et al., 2011) implementation of logistic regression; we increase the maximum iterations to 10³, and, since the edge classes are generally imbalanced across the datasets, we set the class weight option to balanced, which adjusts loss weights inversely in proportion to class frequency. We otherwise keep default options. For our TDLG method, we set the time scale hyperparameter σₜ as a ratio of the standard deviation of the edge times; letting this standard deviation be σ_T, we use σₜ = 10⁻¹ σ_T, which is chosen by informal tuning. |
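The evaluation protocol described in the table (random 70%/30% splits, balanced-class logistic regression with 10³ maximum iterations, test AUC averaged over 10 trials, and σₜ set to one tenth of the standard deviation of the edge times) can be sketched as follows. This is a minimal illustration using synthetic placeholder data: the embedding matrix `X`, labels `y`, and `edge_times` are hypothetical stand-ins for the paper's actual edge embeddings and datasets, not the authors' released code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the paper's edge embeddings and binary labels.
X = rng.normal(size=(1000, 16))
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)

# Time-scale hyperparameter: one tenth of the std. dev. of the edge times.
edge_times = rng.uniform(0.0, 100.0, size=1000)
sigma_t = 0.1 * edge_times.std()

aucs = []
for trial in range(10):
    # Random 70%/30% train/test split, re-drawn each trial.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=trial
    )
    # Balanced class weights handle the label imbalance noted in the paper.
    clf = LogisticRegression(max_iter=10**3, class_weight="balanced")
    clf.fit(X_tr, y_tr)
    aucs.append(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

print(f"mean test AUC over {len(aucs)} trials: {np.mean(aucs):.3f}")
```

Averaging AUC over independent re-splits, rather than reporting a single split, reduces the variance that a lone 70/30 partition would introduce on smaller networks.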