Explaining Temporal Graph Models through an Explorer-Navigator Framework

Authors: Wenwen Xia, Mincai Lai, Caihua Shan, Yao Zhang, Xinnan Dai, Xiang Li, Dongsheng Li

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments to evaluate the performance of T-GNNExplainer. Experimental results demonstrate that T-GNNExplainer can achieve superior performance with up to 50% improvement in Area under Fidelity-Sparsity Curve.
Researcher Affiliation | Collaboration | ¹Shanghai Jiao Tong University, ²ShanghaiTech University, ³Microsoft Research Asia, ⁴Fudan University, ⁵East China Normal University
Pseudocode | Yes | We present the pseudo-code in Appendix.
Open Source Code | Yes | The code and datasets are attached in the supplementary.
Open Datasets | Yes | Wikipedia² and Reddit³... ²http://snap.stanford.edu/jodie/wikipedia.csv ³http://snap.stanford.edu/jodie/reddit.csv
Dataset Splits | Yes | We train both TGAT and TGN with a 70%, 15%, and 15% splitting scheme of datasets based on timestamps.
Hardware Specification | Yes | We use a machine with an RTX 2080 GPU and a 48-core Intel(R) Xeon(R) CPU @ 2.2 GHz.
Software Dependencies | No | The paper describes model architectures and hyperparameters, but does not specify software dependencies such as libraries or frameworks with version numbers (e.g., PyTorch 1.9, TensorFlow 2.x).
Experiment Setup | Yes | We set the exploration parameter λ to 5 and the rollout number to 500 in the explorer. Following the same setting in TGAT and TGN, we adopt a two-layer attention architecture and harmonic encoding for timestamps. We train both TGAT and TGN with a 70%, 15%, and 15% splitting scheme of datasets based on timestamps. For all methods, we limit the number of candidate events to 25 and randomly sample 500 events in the test dataset as target events for the explanation. More hyper-parameters are listed in Appendix.
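The 70%/15%/15% timestamp-based split quoted in the Dataset Splits and Experiment Setup rows can be sketched as below. This is an illustrative sketch only: the function name and the quantile-based cutoffs are assumptions, not the paper's actual splitting code.

```python
import numpy as np

def temporal_split(timestamps, train_frac=0.70, val_frac=0.15):
    """Split event indices into train/val/test by timestamp cutoffs.

    `timestamps` is a 1-D array of event times. Cutoffs are chosen so that
    roughly the earliest 70% of events form the training set, the next 15%
    validation, and the latest 15% test (hypothetical utility, for
    illustration of a timestamp-based split).
    """
    ts = np.asarray(timestamps, dtype=float)
    t_train = np.quantile(ts, train_frac)            # cutoff after ~70% of events
    t_val = np.quantile(ts, train_frac + val_frac)   # cutoff after ~85% of events
    train = np.where(ts <= t_train)[0]
    val = np.where((ts > t_train) & (ts <= t_val))[0]
    test = np.where(ts > t_val)[0]
    return train, val, test
```

Splitting on timestamps rather than at random keeps all test events strictly later than training events, which is the standard protocol for temporal graph models such as TGAT and TGN.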
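The Area under the Fidelity-Sparsity Curve reported in the Research Type row can be computed with simple trapezoidal integration over (sparsity, fidelity) points. The function name and input conventions here are hypothetical; the paper's exact metric implementation is in its supplementary code.

```python
import numpy as np

def fidelity_sparsity_auc(sparsity, fidelity):
    """Area under a fidelity-sparsity curve via the trapezoidal rule.

    `sparsity` must be sorted in ascending order, with `fidelity` giving
    the explanation fidelity at each sparsity level (assumed inputs, for
    illustration of the metric).
    """
    s = np.asarray(sparsity, dtype=float)
    f = np.asarray(fidelity, dtype=float)
    # Trapezoidal rule: sum of mean adjacent heights times interval widths.
    return float(np.sum((f[1:] + f[:-1]) * np.diff(s)) / 2.0)
```

A larger area indicates that the explainer sustains high fidelity even as explanations are forced to be sparser, which is why the paper summarizes fidelity-sparsity trade-offs with this single number.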