Deep Temporal Graph Clustering

Authors: Meng Liu, Yue Liu, Ke Liang, Wenxuan Tu, Siwei Wang, Sihang Zhou, Xinwang Liu

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To verify the superiority of the proposed framework TGC, we conduct extensive experiments. The experimental results show that temporal graph clustering enables more flexibility in finding a balance between time and space requirements, and our framework can effectively improve the performance of existing temporal graph learning methods.
Researcher Affiliation | Collaboration | 1 National University of Defense Technology, Changsha, China; 2 Intelligent Game and Decision Lab, Beijing, China
Pseudocode | No | The paper describes its method using mathematical formulations and descriptive text, but it does not provide any structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code is released: https://github.com/MGitHubL/Deep-Temporal-Graph-Clustering.
Open Datasets | Yes | DBLP (Zuo et al., 2018) is a co-author graph from the DBLP website... Brain (Preti et al., 2017) is a human brain tissue connectivity graph... Patent (Hall et al., 2001) is a patent citation graph of US patents. School (Mastrandrea et al., 2015) is a high school dataset... arXivAI and arXivCS (Wang et al., 2020) are two public citation graphs from the arXiv website... Their original data are from the OGB benchmark (Wang et al., 2020)...
Dataset Splits | No | The paper does not explicitly provide specific details about training/validation/test dataset splits, such as percentages or sample counts for a distinct validation set.
Hardware Specification | Yes | Our proposed TGC framework is implemented with PyTorch, and all models run on NVIDIA RTX 3070Ti GPUs (8GB), 64GB RAM, and a 3.2GHz Intel i9-12900KF CPU.
Software Dependencies | No | The paper mentions that the framework is "implemented with PyTorch" but does not specify the version number of PyTorch or any other software dependencies with their corresponding versions.
Experiment Setup | Yes | We utilize Adam as the optimizer and set the hyper-parameters embedding dimension size d, batch size, historical sequence length l, negative sampling size Q, and learning rate to 128, 1024, 3, 5, and 0.01, respectively. We set the epoch number T = 200 on all datasets.
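For concreteness, the reported hyper-parameters translate into a small PyTorch training skeleton. This is a minimal sketch under stated assumptions, not the authors' implementation: the embedding-table "encoder", the NUM_NODES constant, and the empty training loop are hypothetical placeholders, and only the numeric values come from the quoted setup.

```python
import torch
from torch.optim import Adam

# Hyper-parameter values as reported in the paper's experiment setup.
EMBED_DIM = 128       # embedding dimension size d
BATCH_SIZE = 1024     # batch size
HIST_LEN = 3          # historical sequence length l
NEG_SAMPLES = 5       # negative sampling size Q
LEARNING_RATE = 0.01  # learning rate
EPOCHS = 200          # epoch number T, used on all datasets

# Placeholder encoder: a plain node-embedding table standing in for the
# temporal graph encoder in the released repository.
NUM_NODES = 10_000  # hypothetical; depends on the dataset
model = torch.nn.Embedding(NUM_NODES, EMBED_DIM)

# The paper reports Adam as the optimizer.
optimizer = Adam(model.parameters(), lr=LEARNING_RATE)

for epoch in range(EPOCHS):
    # A real training step would draw interaction batches of size BATCH_SIZE,
    # take the last HIST_LEN neighbors as each node's historical sequence,
    # and sample NEG_SAMPLES negatives per interaction; see the released code.
    pass
```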