Towards Open Temporal Graph Neural Networks
Authors: Kaituo Feng, Changsheng Li, Xiaolu Zhang, Jun Zhou
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on three real-world datasets of different domains demonstrate the superiority of our method, compared to the baselines. |
| Researcher Affiliation | Collaboration | Kaituo Feng Beijing Institute of Technology kaituofeng@gmail.com Changsheng Li Beijing Institute of Technology lcs@bit.edu.cn Xiaolu Zhang Ant Group yueyin.zxl@antfin.com Jun Zhou Ant Group jun.zhoujun@antfin.com |
| Pseudocode | Yes | Algorithm 2 OTGNet: Open Temporal Graph Neural Networks |
| Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the described methodology is open or publicly available. |
| Open Datasets | Yes | We construct three real-world datasets to evaluate our method: Reddit (Hamilton et al., 2017), Yelp (Sankar et al., 2020), Taobao (Du et al., 2019). |
| Dataset Splits | Yes | For each task, we use 80% nodes for training, 10% nodes for validation, 10% nodes for testing. |
| Hardware Specification | Yes | We perform our experiments using a GeForce RTX 3090 Ti GPU. |
| Software Dependencies | No | The paper mentions using the 'Adam optimizer' but does not specify version numbers for any software, libraries, or frameworks used (e.g., Python, PyTorch, TensorFlow). |
| Experiment Setup | Yes | For each task, we use 80% of nodes for training, 10% for validation, and 10% for testing. We use the Adam optimizer for training with learning rate η = 0.0001 on the Reddit dataset, η = 0.005 on the Yelp dataset, and η = 0.001 on the Taobao dataset. For the Reddit and Yelp datasets, we train each task for 500 epochs; for the Taobao dataset, 100 epochs. We set the dropout rate to 0.5 on all datasets. The node classification head is a two-layer MLP with hidden size 128. The number of selected triad pairs per class M is set to 10 on all datasets. The sub-network extracting class-agnostic information is a two-layer MLP with hidden size 100. |
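The Experiment Setup row above can be condensed into a small configuration sketch for anyone attempting a reproduction. Since the paper releases no code, all names and structure here are our own illustrative assumptions; only the numeric values come from the paper.

```python
# Hypothetical reproduction config for OTGNet's reported training setup.
# Variable names are ours, not the authors'; values are from the paper.

DATASET_CONFIGS = {
    # dataset-specific learning rate and epochs per task
    "Reddit": {"lr": 1e-4, "epochs_per_task": 500},
    "Yelp":   {"lr": 5e-3, "epochs_per_task": 500},
    "Taobao": {"lr": 1e-3, "epochs_per_task": 100},
}

COMMON = {
    "optimizer": "Adam",
    "dropout": 0.5,
    "cls_head": "2-layer MLP, hidden size 128",
    "triad_pairs_per_class": 10,        # M in the paper
    "class_agnostic_mlp_hidden": 100,   # 2-layer MLP sub-network
    "split": (0.8, 0.1, 0.1),           # train / val / test nodes per task
}

def training_config(dataset: str) -> dict:
    """Merge the shared settings with the dataset-specific ones."""
    cfg = dict(COMMON)
    cfg.update(DATASET_CONFIGS[dataset])
    return cfg
```

For example, `training_config("Taobao")` yields the Adam optimizer with learning rate 0.001 and 100 epochs per task, alongside the shared dropout and split settings.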