Scaling Up Dynamic Graph Representation Learning via Spiking Neural Networks

Authors: Jintang Li, Zhouxin Yu, Zulun Zhu, Liang Chen, Qi Yu, Zibin Zheng, Sheng Tian, Ruofan Wu, Changhua Meng

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on three large real-world temporal graph datasets demonstrate that SpikeNet outperforms strong baselines on the temporal node classification task with lower computational costs.
Researcher Affiliation | Collaboration | Jintang Li¹, Zhouxin Yu¹, Zulun Zhu², Liang Chen¹*, Qi Yu², Zibin Zheng¹, Sheng Tian³, Ruofan Wu³, Changhua Meng³ (¹Sun Yat-sen University, ²Rochester Institute of Technology, ³Ant Group)
Pseudocode | No | The paper includes equations and a high-level overview figure (Figure 2) of the SpikeNet framework, but it does not present any formal pseudocode blocks or algorithms. (A hedged illustrative sketch of the underlying spiking-neuron update appears after this table.)
Open Source Code | No | The paper does not contain any explicit statement about releasing the source code, nor does it provide a link to a code repository for the methodology described.
Open Datasets | Yes | In this section, we conduct experiments on three large real-world graph datasets: DBLP, Tmall (Lu et al. 2019), and Patent (Hall, Jaffe, and Trajtenberg 2001). The dataset statistics are listed in Table 1.
Dataset Splits | Yes | We follow (Lu et al. 2019) and examine the performance when different sizes of training data are used, i.e., 40%, 60%, and 80%, including 5% for validation. (A split sketch appears after this table.)
Hardware Specification | Yes | The experiments were run on a Titan RTX GPU with the same batch size (except for EvolveGCN, which is trained full-batch) for a fair comparison.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions, or pinned library versions).
Experiment Setup | Yes | We conduct temporal node classification on DBLP and Tmall by varying the values of τth and γ over {1.0, 0.9, 0.8, 0.7, 0.6} and {0.0, 0.1, 0.2, 0.3, 0.4}, respectively. We vary the smooth factor α over {0.5, 1.0, 2.0, 5.0, 10.0} to study its effect on the surrogate function σ(·). (A grid-search and surrogate-gradient sketch appears after this table.)
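
Since the paper presents its method only through equations and an overview figure, the following is a minimal, hypothetical sketch of the leaky integrate-and-fire (LIF) update that spiking models of this kind build on. The function name `lif_update`, the parameter defaults, and the hard-reset rule are illustrative assumptions, not the paper's exact formulation (the τth and γ swept in the experiment setup plausibly parameterize an adaptive threshold, a detail not reproduced here).

```python
import torch

def lif_update(v, x, tau=2.0, v_th=1.0, v_reset=0.0):
    """One hypothetical LIF step: leak, integrate, fire, reset."""
    v = v + (1.0 / tau) * (x - (v - v_reset))  # leaky integration of input current x
    spike = (v >= v_th).float()                # emit a spike where the threshold is crossed
    v = v * (1.0 - spike) + v_reset * spike    # hard reset for neurons that fired
    return spike, v

# One step for four neurons driven by different input currents.
v = torch.zeros(4)
spike, v = lif_update(v, torch.tensor([0.5, 1.5, 2.5, 4.0]))
print(spike)  # tensor([0., 0., 1., 1.])
```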
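
As a concrete reading of the split protocol quoted in the Dataset Splits row, here is a hypothetical sketch that reserves 40/60/80% of nodes for training and 5% for validation, with the remainder for testing. The quote is ambiguous about whether the 5% validation share comes out of the training portion or the full node set; this sketch takes it from the full set, and the helper name `split_nodes` is invented.

```python
import numpy as np

def split_nodes(num_nodes, train_ratio, val_ratio=0.05, seed=0):
    """Shuffle node indices and cut them into train/val/test blocks."""
    idx = np.random.default_rng(seed).permutation(num_nodes)
    n_train = int(train_ratio * num_nodes)
    n_val = int(val_ratio * num_nodes)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

for ratio in (0.4, 0.6, 0.8):
    train, val, test = split_nodes(10_000, ratio)
    print(ratio, len(train), len(val), len(test))
# 0.4 4000 500 5500
# 0.6 6000 500 3500
# 0.8 8000 500 1500
```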
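
The Experiment Setup row sweeps τth, γ, and the smooth factor α of the surrogate function σ(·). A common choice for σ is a scaled sigmoid whose derivative stands in for the non-differentiable spike in the backward pass; the sketch below assumes that choice (the paper may use a different surrogate), and `train_and_eval` is a hypothetical stand-in for the actual training loop.

```python
import itertools
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; sigmoid-derivative gradient backward."""

    @staticmethod
    def forward(ctx, v, alpha):
        ctx.save_for_backward(v)
        ctx.alpha = alpha
        return (v >= 0).float()  # non-differentiable spike

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        s = torch.sigmoid(ctx.alpha * v)
        # d/dv sigmoid(alpha * v) = alpha * s * (1 - s); no gradient for alpha
        return grad_output * ctx.alpha * s * (1 - s), None

# Hyperparameter grids matching the quoted setup.
tau_ths = [1.0, 0.9, 0.8, 0.7, 0.6]
gammas  = [0.0, 0.1, 0.2, 0.3, 0.4]
alphas  = [0.5, 1.0, 2.0, 5.0, 10.0]

for tau_th, gamma in itertools.product(tau_ths, gammas):
    ...  # train_and_eval(tau_th=tau_th, gamma=gamma)
for alpha in alphas:
    ...  # train_and_eval(alpha=alpha)
```

Because the Heaviside step has zero gradient almost everywhere, the smooth surrogate is what makes end-to-end backpropagation through spikes possible; larger α brings σ closer to the true step at the cost of sharper, less stable gradients, which is presumably why the setup sweeps it.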