Weisfeiler and Lehman Go Topological: Message Passing Simplicial Networks
Authors: Cristian Bodnar, Fabrizio Frasca, Yu Guang Wang, Nina Otter, Guido Montúfar, Pietro Liò, Michael Bronstein
ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically support our theoretical claims by showing that MPSNs can distinguish challenging strongly regular graphs for which GNNs fail and, when equipped with orientation equivariant layers, they can improve classification accuracy in oriented SCs compared to a GNN baseline. |
| Researcher Affiliation | Collaboration | (1) Department of Computer Science and Technology, University of Cambridge, UK; (2) Twitter, UK; (3) Department of Computing, Imperial College London, UK |
| Pseudocode | No | The paper contains mathematical equations and descriptions of procedures, but no explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper does not include any explicit statement about releasing source code or provide a link to a code repository. |
| Open Datasets | Yes | Finally, we study the practical impact of considering higher-order interactions via (non-oriented) clique-complexes and report results for a few popular graph classification tasks commonly used for benchmarking GNNs (Morris et al., 2020a). |
| Dataset Splits | Yes | The dataset contains 1000 train trajectories and 200 test trajectories. (...) The dataset has 160 train trajectories and 40 test trajectories. (...) We employ a SIN model similar to that employed in the SR graph experiments. We follow the same experimental setting and evaluation procedure described in Xu et al. (2019b). Accordingly, we report the best mean test accuracy computed in a 10-fold cross-validation fashion. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions software like PyTorch and GUDHI, but it does not specify exact version numbers for any software dependencies required to reproduce the experiments. |
| Experiment Setup | Yes | All models are trained for 200 epochs using the Adam optimizer (Kingma & Ba, 2015) with an initial learning rate of 1e-3, cosine decay, and a batch size of 32. We use Batch Normalization (Ioffe & Szegedy, 2015) after each linear layer. The output of the final layer is aggregated via sum pooling and passed to a linear layer followed by softmax activation. All activation functions are ReLU. For the SR graph experiments, SIN employs 4 layers, a hidden dimension of 64, and the update function in Eq. 6. For the remaining experiments, MPSN and GNN baselines use 3 layers, a hidden dimension of 16, and the update function in Eq. 6. More details are in Appendix F. |
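
The Experiment Setup row above describes the training configuration but no runnable code is released. Below is a minimal PyTorch sketch of that configuration as quoted (Adam with an initial learning rate of 1e-3, cosine decay over 200 epochs, batch size 32, BatchNorm after linear layers, ReLU activations, and sum pooling into a final linear classifier). The `ToyModel` encoder and the dummy data are hypothetical stand-ins, not the paper's MPSN/SIN architecture or datasets.

```python
# Sketch of the quoted training setup; the model and data are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class ToyModel(nn.Module):
    """Hypothetical stand-in encoder: linear -> BatchNorm -> ReLU,
    followed by sum pooling and a linear classifier, mirroring the
    readout described in the Experiment Setup row."""
    def __init__(self, in_dim=16, hidden=64, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.BatchNorm1d(hidden),
            nn.ReLU(),
        )
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, x):
        # x: (batch, set_size, in_dim); sum pooling over the set dimension
        h = self.encoder(x.flatten(0, 1)).view(x.size(0), x.size(1), -1)
        return self.classifier(h.sum(dim=1))

model = ToyModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)
criterion = nn.CrossEntropyLoss()

# Dummy data purely for illustration.
x = torch.randn(320, 10, 16)
y = torch.randint(0, 2, (320,))
loader = DataLoader(TensorDataset(x, y), batch_size=32, shuffle=True)

for epoch in range(200):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()
    scheduler.step()
```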
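
The Dataset Splits row notes that graph classification results follow the evaluation procedure of Xu et al. (2019b): 10-fold cross-validation, reporting the best mean test accuracy across folds. A minimal sketch of that evaluation loop is below; `train_and_eval` is a hypothetical helper that trains one fold and returns per-epoch test accuracies, and the stratified splitting is an assumption consistent with common practice for these benchmarks.

```python
# Sketch of the 10-fold cross-validation protocol described above.
import numpy as np
from sklearn.model_selection import StratifiedKFold

def cross_validated_best_accuracy(dataset, labels, train_and_eval,
                                  epochs=200, folds=10, seed=0):
    skf = StratifiedKFold(n_splits=folds, shuffle=True, random_state=seed)
    per_fold_curves = []
    for train_idx, test_idx in skf.split(np.zeros(len(labels)), labels):
        # acc_curve: one test accuracy per epoch for this fold
        acc_curve = train_and_eval(dataset, train_idx, test_idx, epochs)
        per_fold_curves.append(acc_curve)
    mean_curve = np.mean(per_fold_curves, axis=0)  # mean accuracy per epoch
    return mean_curve.max()  # best mean accuracy over all epochs
```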