Equivariant Subgraph Aggregation Networks
Authors: Beatrice Bevilacqua, Fabrizio Frasca, Derek Lim, Balasubramaniam Srinivasan, Chen Cai, Gopinath Balamurugan, Michael M. Bronstein, Haggai Maron
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | A comprehensive set of experiments on real and synthetic datasets demonstrates that our framework improves the expressive power and overall performance of popular GNN architectures. |
| Researcher Affiliation | Collaboration | Beatrice Bevilacqua (Purdue University, bbevilac@purdue.edu); Fabrizio Frasca (Imperial College London & Twitter, ffrasca@twitter.com); Derek Lim (MIT CSAIL, dereklim@mit.edu); Balasubramaniam Srinivasan (Purdue University, bsriniv@purdue.edu); Chen Cai (UCSD CSE, c1cai@ucsd.edu); Gopinath Balamurugan (University of Tuebingen, gbm0998@gmail.com); Michael M. Bronstein (Imperial College London & Twitter, mbronstein@twitter.com); Haggai Maron (NVIDIA Research, hmaron@nvidia.com) |
| Pseudocode | Yes | Algorithm 1: WL Test (a minimal 1-WL sketch is given after the table) |
| Open Source Code | Yes | Our code is also available. |
| Open Datasets | Yes | We conducted experiments on thirteen graph classification datasets originating from five data repositories: (1) RNI (Abboud et al., 2020) and CSL (Murphy et al., 2019; Dwivedi et al., 2020)... (2) TUD repository (Morris et al., 2020a)... (3) Open Graph Benchmark (Hu et al., 2020) and (4) ZINC12k (Dwivedi et al., 2020). |
| Dataset Splits | Yes | we conducted 10-fold cross validation and reported the validation performances at the epoch achieving the highest averaged validation accuracy across all the folds. (An epoch-selection sketch is given after the table.) |
| Hardware Specification | Yes | We implemented our approach using the PyG framework (Fey & Lenssen, 2019) and ran the experiments on NVIDIA DGX V100 stations. |
| Software Dependencies | No | The paper mentions using 'PyG framework (Fey & Lenssen, 2019)' and 'Weights and Biases framework (Biewald, 2020)' but does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes | We used Adam optimizer with learning rate decayed by a factor of 0.5 every 50 epochs. The training is stopped after 350 epochs. As for DS-GNN, we implemented R_subgraphs with summation over node features... We tuned the batch size in {32, 128}, the embedding dimension of the MLPs in {16, 32} and the initial learning rate in {0.01, 0.001}. (A training-loop sketch is given after the table.) |
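The pseudocode row refers to the paper's Algorithm 1, the Weisfeiler-Leman (WL) test. Below is a minimal sketch of 1-WL color refinement, not the paper's algorithm listing: it runs jointly over a list of graphs so colors stay comparable, assumes unlabeled nodes given as adjacency dicts, and uses illustrative names such as `wl_colors`.

```python
from collections import Counter
from itertools import count

def wl_colors(graphs, num_iters=3):
    """1-WL color refinement run jointly over a list of graphs.

    Each graph is a dict mapping node -> list of neighbour nodes.
    Returns one color histogram (Counter) per graph.
    """
    # All nodes start with the same color (no initial node labels assumed).
    colors = [{v: 0 for v in g} for g in graphs]
    fresh = count(1)
    for _ in range(num_iters):
        palette = {}  # shared signature -> color table, so colors are comparable across graphs
        new_colors = []
        for g, col in zip(graphs, colors):
            refined = {}
            for v, neighbours in g.items():
                # Signature: own color plus the sorted multiset of neighbour colors.
                sig = (col[v], tuple(sorted(col[u] for u in neighbours)))
                if sig not in palette:
                    palette[sig] = next(fresh)
                refined[v] = palette[sig]
            new_colors.append(refined)
        colors = new_colors
    return [Counter(col.values()) for col in colors]

# Classic failure case motivating the paper: two triangles vs. one hexagon.
# Both are 2-regular, so 1-WL assigns identical color histograms to them.
triangle_pair = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
hexagon = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(wl_colors([triangle_pair, hexagon]))  # identical histograms: 1-WL cannot distinguish them
```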
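The dataset-splits row quotes a selection protocol where the reported metric is the validation performance at the single epoch with the highest validation accuracy averaged over the 10 folds. A small sketch of that selection step, assuming per-fold, per-epoch accuracies have already been logged into a 2D array; `select_epoch` and the dummy data are illustrative, not from the paper's code.

```python
import numpy as np

def select_epoch(val_acc):
    """Given val_acc[fold, epoch] from 10-fold cross-validation, report the
    mean and std accuracy at the epoch whose fold-averaged accuracy is highest."""
    mean_per_epoch = val_acc.mean(axis=0)        # average accuracy across folds, per epoch
    best_epoch = int(mean_per_epoch.argmax())    # epoch with the best averaged accuracy
    mean = val_acc[:, best_epoch].mean()
    std = val_acc[:, best_epoch].std()
    return best_epoch, mean, std

# Illustrative usage with random numbers standing in for logged accuracies.
rng = np.random.default_rng(0)
dummy = rng.uniform(0.6, 0.9, size=(10, 350))    # 10 folds x 350 epochs
epoch, mean, std = select_epoch(dummy)
print(f"best epoch {epoch}: {mean:.3f} +/- {std:.3f}")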
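The experiment-setup row specifies Adam with the learning rate halved every 50 epochs, training stopped after 350 epochs, and a small hyperparameter grid. A minimal PyTorch sketch of that schedule and grid follows; the `model` and `loader` arguments are placeholders, not the paper's DS-GNN implementation.

```python
import itertools
import torch

# Hyperparameter grid quoted in the setup (batch size, MLP embedding dim, initial lr).
GRID = {
    "batch_size": [32, 128],
    "embedding_dim": [16, 32],
    "lr": [0.01, 0.001],
}

def train_one_config(model, loader, lr, num_epochs=350):
    """Adam with the learning rate decayed by 0.5 every 50 epochs, stopping after
    350 epochs, as quoted above. `model` and `loader` are assumed placeholders."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(num_epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
        scheduler.step()  # halve the learning rate every 50 epochs
    return model

# Enumerating the full grid (2 x 2 x 2 = 8 configurations).
for values in itertools.product(*GRID.values()):
    config = dict(zip(GRID.keys(), values))
    print(config)
```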