Causality-Inspired Spatial-Temporal Explanations for Dynamic Graph Neural Networks
Authors: Kesen Zhao, Liang Zhang
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments have been conducted on both synthetic and real-world datasets, where our approach yields substantial improvements, thereby demonstrating significant superiority. |
| Researcher Affiliation | Academia | Kesen Zhao, City University of Hong Kong, Hong Kong, China (kesenzhao2-c@my.cityu.edu.hk); Liang Zhang, Shenzhen Research Institute of Big Data, Guangdong, China (zhangliang@sribd.cn) |
| Pseudocode | No | No pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | Yes | The code and the dataset benchmarks are available at https://github.com/kesenzhao/DyGNNExplainer |
| Open Datasets | Yes | Elliptic: http://www.kaggle.com/ellipticco/elliptic-data-set ... The code and the dataset benchmarks are available at https://github.com/kesenzhao/DyGNNExplainer |
| Dataset Splits | Yes | We divide the dataset into a training set and a test set with a ratio of 8:2, which is a common setting in previous works. (A minimal split sketch follows the table.) |
| Hardware Specification | Yes | All experiments are conducted on an NVIDIA Tesla V100S GPU |
| Software Dependencies | No | Only the Adam optimizer was mentioned without a specific version number. No other software dependencies with version numbers were provided. |
| Experiment Setup | Yes | For the VGAE, we apply a two-layer GCN with output dimensions [32, 64, 128] and [16, 32, 64] in the encoder. The max time step T is set as 5. In the contrastive loss, the temperature coefficient τ and the weight parameters α1 and α2 are set from [0.2, 0.5, 0.8]. In the final optimization objective, the loss-function weight parameters λ1, λ2, λ3, and λ4 are set from [0.2, 0.4, 0.6, 0.8, 1], and the best performance is obtained with λ1 = 1, λ2 = 0.4, λ3 = 0.2, and λ4 = 0.2. We trained the explainers using the Adam optimizer (Kingma & Ba, 2014) with a learning rate from [1e-2, 1e-3, 1e-4] and batch size 64. (A hedged configuration sketch follows the table.) |
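The 8:2 train/test split reported in the Dataset Splits row is straightforward to reproduce. The sketch below is a minimal illustration, assuming a list of dynamic-graph instances and scikit-learn's `train_test_split`; the variable names and the random seed are placeholders, not taken from the authors' code.

```python
# Minimal sketch of an 8:2 train/test split, assuming scikit-learn is available.
# `graphs` and `labels` are placeholders for the dynamic-graph instances and
# their class labels; the random seed is arbitrary, not from the paper.
from sklearn.model_selection import train_test_split

graphs = list(range(1000))            # stand-in for the dataset's graph instances
labels = [i % 2 for i in graphs]      # stand-in binary labels

train_graphs, test_graphs, train_labels, test_labels = train_test_split(
    graphs, labels, test_size=0.2, random_state=42, stratify=labels
)
print(len(train_graphs), len(test_graphs))  # 800 200
```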
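The hyperparameters quoted in the Experiment Setup row can be collected into a single configuration, and the two-layer GCN encoder for the VGAE can be sketched as below. This is an assumption-based illustration using torch_geometric's `GCNConv` and `VGAE`; the chosen dimensions (32 hidden, 16 latent), the placeholder input dimension, and the module and variable names are ours, not the authors' released implementation.

```python
# Hedged sketch of the reported training configuration.
# The VGAE/GCNConv layout and all names are assumptions; only the numeric
# values come from the Experiment Setup row above.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, VGAE


class Encoder(nn.Module):
    """Two-layer GCN encoder returning mean and log-std for the VGAE."""

    def __init__(self, in_dim, hidden_dim=32, latent_dim=16):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv_mu = GCNConv(hidden_dim, latent_dim)
        self.conv_logstd = GCNConv(hidden_dim, latent_dim)

    def forward(self, x, edge_index):
        h = torch.relu(self.conv1(x, edge_index))
        return self.conv_mu(h, edge_index), self.conv_logstd(h, edge_index)


# Values quoted from the paper; the grids are searched, the lambdas are the
# best-performing setting reported by the authors.
config = {
    "max_time_step": 5,
    "tau_alpha_grid": [0.2, 0.5, 0.8],     # temperature tau and weights alpha1, alpha2
    "lambda_best": {"l1": 1.0, "l2": 0.4, "l3": 0.2, "l4": 0.2},
    "lr_grid": [1e-2, 1e-3, 1e-4],
    "batch_size": 64,
}

model = VGAE(Encoder(in_dim=64))           # in_dim is a placeholder
optimizer = torch.optim.Adam(model.parameters(), lr=config["lr_grid"][1])
```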