Union Subgraph Neural Networks
Authors: Jiaxing Xu, Aihu Zhang, Qingtian Bian, Vijay Prakash Dwivedi, Yiping Ke
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on 18 benchmarks of both graph-level and node-level tasks demonstrate that UnionSNN outperforms state-of-the-art baseline models, with competitive computational efficiency. |
| Researcher Affiliation | Academia | Nanyang Technological University, Singapore |
| Pseudocode | No | The paper describes the model using mathematical equations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/AngusMonroe/UnionSNN. |
| Open Datasets | Yes | For graph classification, we use 10 benchmark datasets. Eight of them were selected from the TUDataset (Kersting et al. 2016), including MUTAG, PROTEINS, ENZYMES, DD, FRANKENSTEIN (denoted as FRANK in our tables), Tox21, NCI1 and NCI109. The other two datasets OGBG-MOLHIV and OGBG-MOLBBBP were selected from Open Graph Benchmark (Hu et al. 2020). |
| Dataset Splits | Yes | "Time cost (hours) for a single run with 10-fold CV, including training, validation, test (excluding preprocessing)." (Table 12 caption) and "Graph classification results (average accuracy ± standard deviation) over 10-fold CV." (Table 3 caption). |
| Hardware Specification | No | The paper does not provide specific details on the hardware (e.g., CPU, GPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies or libraries used for implementation (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | No | "The implementation details of our experiments are available in our arXiv version (Xu et al. 2023)." This indicates that comprehensive setup details, such as hyperparameters, are not in the provided document. |
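The evaluation protocol cited in the table is 10-fold cross-validation with train/validation/test phases per fold. The following is a minimal sketch of such a split, not the paper's actual pipeline (which lives in its repository); the use of stratified folds and the dummy binary labels are assumptions for illustration only.

```python
# Sketch of a 10-fold CV split as referenced in the table ("10-fold CV").
# Stratification is an assumption; the paper does not specify it.
import numpy as np
from sklearn.model_selection import StratifiedKFold

def ten_fold_cv(labels, seed=0):
    """Yield (train_idx, test_idx) index pairs for 10-fold stratified CV."""
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    indices = np.arange(len(labels))
    for train_idx, test_idx in skf.split(indices, labels):
        yield train_idx, test_idx

# Example: 188 graphs with binary labels (MUTAG's size), dummy labels here.
labels = np.array([i % 2 for i in range(188)])
folds = list(ten_fold_cv(labels))
assert len(folds) == 10
# Every graph appears in exactly one test fold across the 10 folds.
all_test = np.concatenate([test for _, test in folds])
assert sorted(all_test.tolist()) == list(range(188))
```

Per fold, the reported accuracy would be computed on the test indices, with the mean and standard deviation over the 10 folds yielding the "average accuracy ± standard deviation" figures quoted from Table 3.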