GraphPulse: Topological representations for temporal graph property prediction
Authors: Kiarash Shamsi, Farimah Poursafaei, Shenyang Huang, Bao Tran Gia Ngo, Baris Coskunuzer, Cuneyt Gurcan Akcora
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experimentation, we demonstrate that our model enhances the ROC-AUC metric by 10.2% in comparison to the top-performing state-of-the-art method across various temporal networks. |
| Researcher Affiliation | Collaboration | (1) Department of Computer Science, University of Manitoba; (2) Mila Quebec AI Institute; (3) School of Computer Science, McGill University; (4) University of Texas at Dallas; (5) AI Institute, University of Central Florida |
| Pseudocode | No | The paper describes the methodology in prose and with flowcharts (Figure 3) but does not include any explicit pseudocode blocks or algorithm listings. |
| Open Source Code | Yes | We provide the implementation of GraphPulse at https://github.com/kiarashamsi/GraphPulse. |
| Open Datasets | Yes | We perform experiments on Math Overflow (Paranjape et al., 2017) and Reddit-Body (Kumar et al., 2018) datasets, and seven ERC20 token networks that we have extracted from the Ethereum blockchain. The datasets used in this study are publicly available |
| Dataset Splits | Yes | Based on the chronological order, the graphs are divided into 80% training and 20% testing data... For all methods, we utilized a chronological 80%/20% split of the graph snapshot sequence as our train-validation and test data, respectively. (A split sketch is given below the table.) |
| Hardware Specification | Yes | We ran all experiments on a Dell PowerEdge R630, featuring an Intel Xeon E5-2650 v3 processor (10 cores, 2.30 GHz, 20 MB cache) and 192 GB of RAM (DDR4, 2133 MHz). |
| Software Dependencies | No | The paper mentions software components like 'Adam optimizer' and 'LSTM+GRU model' but does not specify their versions or the versions of underlying deep learning frameworks (e.g., PyTorch, TensorFlow) or programming languages. |
| Experiment Setup | Yes | We set the Mapper hyperparameters as cls = 5, n_cubes = 2, and perc_overlap = 0.4. GIN and TDA-GIN models use a Graph Isomorphism Network with 64 hidden units followed by a target output dimension of two. Raw RNN and TDA RNN models utilize LSTM and GRU layers with an Adam optimizer and a learning rate of 1e-4. A hybrid LSTM-GRU model processes sequences in a (7,3) and (7,5) format for input, respectively. We set the final embedding dimension as 16. For HTGN, the number of historical windows in the HTA module is set to 5. (Mapper and LSTM-GRU configuration sketches are given below the table.) |
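
The Dataset Splits row quotes a chronological 80%/20% split of the graph snapshot sequence. The following is a minimal sketch of such a split, assuming snapshots are already ordered by time; the function name `chronological_split` and the snapshot representation are illustrative, not taken from the GraphPulse repository.

```python
# Minimal sketch of a chronological 80/20 split over an ordered snapshot sequence.
# `snapshots` is a hypothetical time-ordered list of graph snapshots; the actual
# loading and split code in the GraphPulse repository may differ.
def chronological_split(snapshots, train_frac=0.8):
    """Split a time-ordered sequence into train/validation and test portions."""
    cut = int(len(snapshots) * train_frac)
    return snapshots[:cut], snapshots[cut:]

# Example: 50 weekly snapshots -> first 40 for training/validation, last 10 for testing.
train_snaps, test_snaps = chronological_split(list(range(50)))
assert len(train_snaps) == 40 and len(test_snaps) == 10
```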
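The Experiment Setup row lists Mapper hyperparameters (cls = 5, n_cubes = 2, perc_overlap = 0.4). The sketch below expresses that configuration with the kepler-mapper library; that the authors use kepler-mapper, that `cls` maps to a KMeans cluster count, and the placeholder feature matrix `X` are all assumptions made for illustration.

```python
# Hedged sketch of a Mapper construction with the quoted hyperparameters
# (n_cubes=2, perc_overlap=0.4, and cls=5 interpreted as 5 clusters).
import numpy as np
import kmapper as km
from sklearn.cluster import KMeans

X = np.random.rand(100, 8)          # placeholder node-feature matrix for one snapshot
mapper = km.KeplerMapper()
lens = mapper.fit_transform(X)      # default projection of the point cloud
graph = mapper.map(
    lens,
    X,
    cover=km.Cover(n_cubes=2, perc_overlap=0.4),  # hyperparameters quoted above
    clusterer=KMeans(n_clusters=5, n_init=10),    # assuming cls=5 means 5 clusters
)
print(len(graph["nodes"]), "Mapper nodes")
```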
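The same row mentions a hybrid LSTM-GRU sequence model trained with Adam at a learning rate of 1e-4 and a final embedding dimension of 16. Below is a hedged PyTorch sketch of one such model; the layer sizes, the output head, and the reading of the (7,3)/(7,5) format as sequence length 7 with 3 or 5 features per step are assumptions, not the authors' exact architecture.

```python
# Hedged sketch of a hybrid LSTM -> GRU sequence model with a 16-dimensional
# hidden/embedding size, trained with Adam at lr=1e-4.
import torch
import torch.nn as nn

class HybridLSTMGRU(nn.Module):
    def __init__(self, in_dim=3, hidden_dim=16, out_dim=2):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden_dim, batch_first=True)
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, out_dim)   # two-class property prediction

    def forward(self, x):                  # x: (batch, seq_len, in_dim)
        h, _ = self.lstm(x)
        h, _ = self.gru(h)
        return self.head(h[:, -1])         # use the last time step's embedding

model = HybridLSTMGRU(in_dim=3)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
logits = model(torch.randn(8, 7, 3))       # a batch of 8 sequences of length 7
print(logits.shape)                        # torch.Size([8, 2])
```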