Inductive representation learning on temporal graphs
Authors: Da Xu, Chuanwei Ruan, Evren Korpeoglu, Sushant Kumar, Kannan Achan
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method with transductive and inductive tasks under temporal settings with two benchmark and one industrial dataset. Our TGAT model compares favorably to state-of-the-art baselines as well as the previous temporal graph embedding approaches. |
| Researcher Affiliation | Industry | Walmart Labs, Sunnyvale, CA 94086, USA; {Da.Xu,Chuanwei.Ruan,EKorpeoglu,SKumar4,KAchan}@walmartlabs.com |
| Pseudocode | No | The paper describes the architecture and algorithms using textual explanations and mathematical formulas, but it does not include explicit pseudocode blocks or algorithm listings. |
| Open Source Code | No | The paper does not provide an explicit statement about the release of its own source code or a direct link to a code repository for the proposed TGAT method. It only refers to implementations of baseline models. |
| Open Datasets | Yes | Reddit dataset: We use the data from active users and their posts under subreddits... (http://snap.stanford.edu/jodie/reddit.csv). Wikipedia dataset: We use the data from top edited pages and active users... (http://snap.stanford.edu/jodie/wikipedia.csv). (A loading sketch follows the table.) |
| Dataset Splits | Yes | We do the chronological train-validation-test split with 70%-15%-15% according to node interaction timestamps. (A split sketch follows the table.) |
| Hardware Specification | Yes | Node2vec using the official C code on a 16-core Linux server with 500 Gb memory. All the deep learning models are trained on a machine with one Tesla V100 GPU. |
| Software Dependencies | No | The paper mentions software like 'PyTorch Geometric library', 'PyTorch', and 'official C code'/'official python code' for baselines, but does not provide specific version numbers for any of these software dependencies. |
| Experiment Setup | Yes | We use the time-sensitive link prediction loss function for training the l-layer TGAT network... We fix the node embedding dimension and the time encoding dimension to be the original feature dimension for simplicity, and then select the number of TGAT layers from {1,2,3}, the number of attention heads from {1,2,3,4,5}... we use neighborhood dropout (selected among p = {0.1, 0.3, 0.5})... During training, we use 0.0001 as learning rate for Reddit and Wikipedia dataset and 0.001 for the industrial dataset, with Glorot initialization and the Adam SGD optimizer. Using two TGAT layers and two attention heads with dropout rate as 0.1 give the best validation performance. (Configuration sketches follow the table.) |
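
The two public datasets are distributed as single JODIE-format CSV dumps at the URLs above. A minimal loading sketch follows; the column layout (user id, item id, timestamp, state label, then a flat edge-feature vector) is the standard JODIE release format and should be verified against the file's header row, since the paper itself does not document it:

```python
# Sketch: load a JODIE interaction dump (reddit.csv or wikipedia.csv).
# Assumes the usual JODIE layout: user_id, item_id, timestamp, state_label,
# followed by a comma-separated edge-feature vector.
import urllib.request

import numpy as np
import pandas as pd

URL = "http://snap.stanford.edu/jodie/wikipedia.csv"  # or .../reddit.csv

def load_jodie(url: str):
    raw = urllib.request.urlopen(url).read().decode("utf-8")  # files are large
    rows, feats = [], []
    for line in raw.strip().split("\n")[1:]:  # skip the header row
        parts = line.split(",")
        rows.append((int(parts[0]), int(parts[1]),
                     float(parts[2]), int(float(parts[3]))))
        feats.append([float(x) for x in parts[4:]])  # per-event edge features
    df = pd.DataFrame(rows, columns=["user_id", "item_id",
                                     "timestamp", "state_label"])
    return df, np.asarray(feats, dtype=np.float32)

interactions, edge_feats = load_jodie(URL)
```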
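The chronological 70%-15%-15% split can be reproduced by cutting at timestamp quantiles; the quantile construction below is a common reading of "split according to node interaction timestamps", not a procedure the paper spells out:

```python
import numpy as np

def chronological_split(timestamps: np.ndarray, train: float = 0.70,
                        val: float = 0.15):
    """Boolean masks for a chronological train/validation/test split."""
    t_train = np.quantile(timestamps, train)       # 70th-percentile cutoff
    t_val = np.quantile(timestamps, train + val)   # 85th-percentile cutoff
    train_mask = timestamps <= t_train
    val_mask = (timestamps > t_train) & (timestamps <= t_val)
    test_mask = timestamps > t_val
    return train_mask, val_mask, test_mask
```

For example, `chronological_split(interactions["timestamp"].to_numpy())` yields the three masks over the loaded interaction table.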
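The Experiment Setup quote fixes a "time encoding dimension"; TGAT's functional time encoding maps a time delta to learnable harmonic features. The sketch below follows the paper's sqrt(1/d) · [cos(ω_i t), sin(ω_i t)] form, but the frequency initialization is an assumption:

```python
import torch
import torch.nn as nn

class TimeEncode(nn.Module):
    """Functional time encoding in the spirit of TGAT: t is mapped to
    sqrt(1/d) * [cos(w_1 t), sin(w_1 t), ..., cos(w_d t), sin(w_d t)]
    with learnable frequencies w_i. Frequency init is a heuristic here."""

    def __init__(self, dim: int):
        super().__init__()
        self.dim = dim
        # Log-spaced initial frequencies (10^0 down to 10^-9); an assumption.
        self.w = nn.Parameter(torch.logspace(0, -9, dim))

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        phase = t.unsqueeze(-1) * self.w           # (batch, dim)
        enc = torch.cat([torch.cos(phase), torch.sin(phase)], dim=-1)
        return enc / self.dim ** 0.5               # (batch, 2 * dim)
```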
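The reported best configuration (two TGAT layers, two attention heads, neighborhood dropout 0.1, learning rate 1e-4 on Reddit/Wikipedia, Glorot initialization, Adam) maps onto PyTorch as follows. Since the authors release no code, `TGATStandIn` is a placeholder module used only to make the hyperparameter wiring concrete, not the paper's architecture:

```python
import torch
import torch.nn as nn

class TGATStandIn(nn.Module):
    """Placeholder for the unreleased TGAT model: one self-attention block
    per layer, with neighborhood dropout approximated by attention dropout."""

    def __init__(self, dim: int = 172, num_layers: int = 2,
                 num_heads: int = 2, dropout: float = 0.1):
        super().__init__()
        # 172 matches the JODIE edge-feature width; adjust per dataset.
        self.layers = nn.ModuleList(
            nn.MultiheadAttention(dim, num_heads, dropout=dropout)
            for _ in range(num_layers)
        )

model = TGATStandIn()

# Glorot (Xavier) initialization for all weight matrices, as reported.
for p in model.parameters():
    if p.dim() > 1:
        nn.init.xavier_uniform_(p)

# Adam with lr 1e-4 for Reddit/Wikipedia (the paper reports 1e-3 for the
# industrial dataset).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```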