Learning to Count Isomorphisms with Graph Neural Networks
Authors: Xingtong Yu, Zemin Liu, Yuan Fang, Xinming Zhang
AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we empirically evaluate the proposed model Count-GNN in comparison to the state of the art. We conduct the evaluation on four datasets shown in Table 1. |
| Researcher Affiliation | Academia | Xingtong Yu¹*, Zemin Liu²*, Yuan Fang³, Xinming Zhang¹ (¹ University of Science and Technology of China, China; ² National University of Singapore, Singapore; ³ Singapore Management University, Singapore) |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | The paper does not provide concrete access to source code (specific repository link, explicit code release statement, or code in supplementary materials) for the methodology described. |
| Open Datasets | Yes | In particular, SMALL and LARGE are two synthetic datasets, which are generated by the query and graph generators presented by a previous study (Liu et al. 2020). On the other hand, MUTAG (Zhang et al. 2018) and OGB-PPA (Hu et al. 2020) are two real-world datasets. |
| Dataset Splits | Yes | For the SMALL and LARGE datasets, we randomly sample 5000 triples for training, 1000 for validation, and the rest for testing. For MUTAG, due to its small size, we randomly sample 1000 triples for training, 100 for validation, and the rest for testing. For OGB-PPA, we divide the triples into training, validation and testing sets with a proportion of 4:1:5. (A split sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | We report the settings of Count-GNN in Appendix E. The dimension of edge embeddings is set to 16, and we apply 4 edge-centric GNN layers. For training, we use Adam as the optimizer with a learning rate of 0.001. The batch sizes for SMALL, LARGE, MUTAG and OGB-PPA are 1024, 512, 128 and 256, respectively. We apply a weight decay of 0.00001, with λ = 0.0001 and μ = 0.000001. We train for 100 epochs and use early stopping with a patience of 100 for SMALL and LARGE and 20 for MUTAG and OGB-PPA. (A configuration sketch follows the table.) |
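The split protocol reported above is easy to reproduce from the numbers alone. Below is a minimal sketch of the two sampling schemes, assuming each dataset is a Python list of (query, graph, count) triples; the function names and the fixed seed are illustrative, not taken from the authors' code.

```python
import random

def split_by_count(triples, n_train, n_val, seed=0):
    """Shuffle triples, take fixed-size train/val slices; the rest is test.

    Covers SMALL/LARGE (5000 train, 1000 val) and MUTAG (1000 train, 100 val).
    """
    rng = random.Random(seed)
    shuffled = list(triples)
    rng.shuffle(shuffled)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

def split_by_ratio(triples, ratios=(4, 1, 5), seed=0):
    """Proportional split; the paper uses 4:1:5 for OGB-PPA."""
    rng = random.Random(seed)
    shuffled = list(triples)
    rng.shuffle(shuffled)
    total = sum(ratios)
    n_train = len(shuffled) * ratios[0] // total
    n_val = len(shuffled) * ratios[1] // total
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])
```

Note that the paper does not state a random seed, so the exact split membership cannot be reproduced even though the split sizes can.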
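The hyperparameters quoted above likewise pin down the training configuration. The sketch below assumes PyTorch; a trivial placeholder module stands in for the 4-layer edge-centric Count-GNN, which is not publicly released, and the `evaluate_epoch` callback is hypothetical. Only the numeric settings come from the paper.

```python
import torch
import torch.nn as nn

# Numeric settings as reported (detailed in the paper's Appendix E).
CFG = {
    "edge_dim": 16,        # dimension of edge embeddings
    "num_layers": 4,       # number of edge-centric GNN layers
    "lr": 1e-3,            # Adam learning rate
    "weight_decay": 1e-5,
    "lambda": 1e-4,        # λ in the training objective
    "mu": 1e-6,            # μ in the training objective
    "epochs": 100,
    "batch_size": {"SMALL": 1024, "LARGE": 512, "MUTAG": 128, "OGB-PPA": 256},
    "patience": {"SMALL": 100, "LARGE": 100, "MUTAG": 20, "OGB-PPA": 20},
}

model = nn.Linear(CFG["edge_dim"], 1)  # placeholder, not the actual Count-GNN
optimizer = torch.optim.Adam(
    model.parameters(), lr=CFG["lr"], weight_decay=CFG["weight_decay"]
)

def train(dataset_name, evaluate_epoch):
    """Run up to 100 epochs with early stopping on validation loss.

    `evaluate_epoch(epoch)` is a hypothetical callback that trains one epoch
    and returns that epoch's validation loss.
    """
    best, wait = float("inf"), 0
    for epoch in range(CFG["epochs"]):
        val_loss = evaluate_epoch(epoch)
        if val_loss < best:
            best, wait = val_loss, 0
        else:
            wait += 1
            if wait >= CFG["patience"][dataset_name]:
                break  # patience is 100 for SMALL/LARGE, 20 for MUTAG/OGB-PPA
    return best
```

λ and μ enter the paper's loss function rather than the optimizer, so the sketch records them in the config without applying them.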