An Empirical Study of Realized GNN Expressiveness
Authors: Yanbo Wang, Muhan Zhang
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiment gives the first thorough measurement of the realized expressiveness of those state-of-the-art beyond-1-WL GNN models and reveals the gap between theoretical and realized expressiveness. We synthetically test 23 models with higher-than-1-WL expressiveness on BREC. |
| Researcher Affiliation | Academia | 1Institute of Artificial Intelligence, Peking University, Beijing, China. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks clearly labeled as 'Pseudocode' or 'Algorithm'. |
| Open Source Code | Yes | Dataset and evaluation codes are released at: https://github.com/GraphPKU/BREC. |
| Open Datasets | Yes | To overcome the limitations of previous datasets for a more meaningful empirical evaluation of realized expressiveness... we first propose BREC, a new expressiveness dataset... Dataset and evaluation codes are released at: https://github.com/GraphPKU/BREC. |
| Dataset Splits | Yes | Our core idea is to measure models' practical separating power directly. Thus BREC is organized in pairs, where each pair is individually tested (i.e., we train an individual GNN for each pair) to determine whether a GNN can distinguish them. |
| Hardware Specification | Yes | All experiments were performed on a machine equipped with an Intel Core i9-10980XE CPU, an NVIDIA RTX4090 graphics card, and 256GB of RAM. |
| Software Dependencies | No | The paper mentions software components like 'Adam optimizer', 'Siamese network design', and 'cosine similarity loss function' but does not specify their version numbers or the programming language/framework versions used (e.g., Python, PyTorch versions) for replication. |
| Experiment Setup | Yes | We use the Adam optimizer with a learning rate searched from {1e-3, 1e-4, 1e-5}, weight decay selected from {1e-3, 1e-4, 1e-5}, and batch size chosen from {8, 16, 32}. We incorporate an early stopping strategy, which halts training when the loss reaches a small value. The detailed hyperparameter settings for each method are provided in Table 10. |
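
The "Dataset Splits" and "Experiment Setup" rows together describe the evaluation protocol: one GNN is trained per graph pair with a Siamese design, a cosine-similarity loss, an Adam optimizer searched over the quoted grid, and early stopping once the loss is small. The snippet below is a minimal sketch of what such a per-pair loop might look like; it is not taken from the released BREC code (https://github.com/GraphPKU/BREC). `GNNModel`, `load_brec_pairs`, the epoch budget, and the similarity threshold are hypothetical placeholders, and the batch-size grid is only listed, not exercised, in this single-pair sketch.

```python
import itertools
import torch
import torch.nn.functional as F

# Hypothetical placeholders -- not the actual BREC API.
from my_models import GNNModel          # any beyond-1-WL GNN under test (assumed)
from my_data import load_brec_pairs     # yields (graph_a, graph_b) pairs (assumed)

# Hyperparameter grid quoted from the paper's experiment setup.
LEARNING_RATES = [1e-3, 1e-4, 1e-5]
WEIGHT_DECAYS = [1e-3, 1e-4, 1e-5]
BATCH_SIZES = [8, 16, 32]               # quoted grid; unused in this single-pair sketch

def train_on_pair(graph_a, graph_b, lr, weight_decay, max_epochs=100, loss_eps=1e-4):
    """Train one GNN on a single BREC pair (Siamese-style, cosine-similarity loss),
    stopping early once the loss reaches a small value."""
    model = GNNModel()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
    for _ in range(max_epochs):
        optimizer.zero_grad()
        emb_a, emb_b = model(graph_a), model(graph_b)
        # Push the two graph embeddings apart via cosine similarity.
        loss = F.cosine_similarity(emb_a, emb_b, dim=-1).mean()
        loss.backward()
        optimizer.step()
        if loss.item() < loss_eps:       # early stopping on small loss
            break
    return model

def evaluate_brec():
    """Per-pair protocol: a separate model is trained for every graph pair."""
    distinguished = 0
    pairs = list(load_brec_pairs())
    for graph_a, graph_b in pairs:
        # Light grid search over the quoted hyperparameters (selection criterion assumed).
        for lr, wd in itertools.product(LEARNING_RATES, WEIGHT_DECAYS):
            model = train_on_pair(graph_a, graph_b, lr, wd)
            with torch.no_grad():
                sim = F.cosine_similarity(model(graph_a), model(graph_b), dim=-1).mean()
            if sim.item() < 0.99:        # threshold is illustrative, not the paper's test
                distinguished += 1
                break
    print(f"distinguished {distinguished} / {len(pairs)} pairs")
```

The per-pair loop mirrors the quoted protocol of training an individual GNN for each pair; the paper's actual distinguishing criterion, epoch budget, and model internals are replaced here with illustrative stand-ins.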