Rethinking pooling in graph neural networks
Authors: Diego Mesquita, Amauri Souza, Samuel Kaski
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this paper, we study the extent to which local pooling plays a role in GNNs. In particular, we choose representative models that are popular or claim to achieve state-of-the-art performances and simplify their pooling operators by eliminating any clustering-enforcing component. We either apply randomized cluster assignments or operate on complementary graphs. Surprisingly, the empirical results show that the non-local GNN variants exhibit comparable, if not superior, performance to the original methods in all experiments. [...] We use four graph-level prediction tasks as running examples: predicting the constrained solubility of molecules (ZINC, [20]), classifying chemical compounds regarding their activity against lung cancer (NCI1, [40]); categorizing ego-networks of actors w.r.t. the genre of the movies in which they collaborated (IMDB-B, [45]); and classifying handwritten digits (Superpixels MNIST, [1, 10]). (A code sketch of the randomized cluster assignment idea appears after the table.) |
| Researcher Affiliation | Academia | Diego Mesquita (1), Amauri H. Souza (2), Samuel Kaski (1,3); (1) Aalto University, (2) Federal Institute of Ceará, (3) University of Manchester; {diego.mesquita, samuel.kaski}@aalto.fi, amauriholanda@ifce.edu.br |
| Pseudocode | No | The paper contains mathematical equations but no explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | All methods were implemented in PyTorch [12, 33] and our code is available at https://github.com/AaltoPML/Rethinking-pooling-in-GNNs. |
| Open Datasets | Yes | We use four graph-level prediction tasks as running examples: predicting the constrained solubility of molecules (ZINC, [20]), classifying chemical compounds regarding their activity against lung cancer (NCI1, [40]); categorizing ego-networks of actors w.r.t. the genre of the movies in which they collaborated (IMDB-B, [45]); and classifying handwritten digits (Superpixels MNIST, [1, 10]). |
| Dataset Splits | Yes | We split each dataset into train (80%), validation (10%) and test (10%) data. (See the split sketch after the table.) |
| Hardware Specification | No | The paper mentions 'computational resources provided by the Aalto Science-IT Project' but does not provide specific hardware details such as GPU or CPU models used for experiments. |
| Software Dependencies | No | The paper states 'All methods were implemented in PyTorch' but does not provide specific version numbers for PyTorch or any other software libraries or dependencies. |
| Experiment Setup | Yes | We train all models with Adam [22] and apply learning rate decay, ranging from an initial 10⁻³ down to 10⁻⁵, with a decay ratio of 0.5 and patience of 10 epochs. Also, we use early stopping based on the validation accuracy. (A sketch of this optimizer and scheduler configuration appears after the table.) |
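
The Research Type row quotes the paper's central ablation: replacing clustering-enforcing pooling with randomized cluster assignments. Below is a minimal sketch of what such a step could look like for a DiffPool-style operator; the function name `random_pool`, the uniform sampling of the assignment matrix, and the dense-adjacency interface are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def random_pool(x, adj, num_clusters, generator=None):
    """Coarsen a graph with a *random* soft cluster assignment.

    x:   (num_nodes, feat_dim) node-feature matrix
    adj: (num_nodes, num_nodes) dense adjacency matrix

    Sketch only: S is sampled uniformly instead of being produced by a
    clustering-enforcing GNN, mirroring the 'randomized cluster
    assignments' variant described in the abstract.
    """
    num_nodes = x.size(0)
    # Random soft assignment in place of a learned S = softmax(GNN(A, X)).
    s = torch.rand(num_nodes, num_clusters, generator=generator)
    s = torch.softmax(s, dim=-1)
    # Standard DiffPool-style coarsening: X' = S^T X, A' = S^T A S.
    return s.t() @ x, s.t() @ adj @ s
```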
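
For the Dataset Splits row, here is a sketch of an 80/10/10 split over graph indices, assuming a simple random shuffle; the quoted text gives only the ratios, so the shuffling procedure and fixed seed are assumptions.

```python
import torch

def split_indices(num_graphs, seed=0):
    """Shuffle graph indices and split 80/10/10 into train/val/test.

    The fixed seed and the use of torch.randperm are assumptions for
    reproducibility; the paper's excerpt only specifies the ratios.
    """
    perm = torch.randperm(num_graphs, generator=torch.Generator().manual_seed(seed))
    n_train = int(0.8 * num_graphs)
    n_val = int(0.1 * num_graphs)
    return perm[:n_train], perm[n_train:n_train + n_val], perm[n_train + n_val:]
```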
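
Finally, the Experiment Setup row maps directly onto a standard PyTorch optimizer/scheduler configuration. The sketch below uses `ReduceLROnPlateau`, which matches the quoted decay ratio (0.5), patience (10 epochs), and learning-rate range (10⁻³ down to 10⁻⁵); the scheduler class itself is an assumption, since the paper does not name it.

```python
import torch

def make_optimizer_and_scheduler(model):
    """Adam with plateau-based learning-rate decay, per the quoted setup.

    ReduceLROnPlateau is an assumed choice consistent with 'decay ratio
    of 0.5 and patience of 10 epochs'; mode='max' because the monitored
    quantity (validation accuracy) should be maximized.
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="max", factor=0.5, patience=10, min_lr=1e-5)
    return optimizer, scheduler
```

After each epoch, one would call `scheduler.step(val_accuracy)`; the early stopping on validation accuracy mentioned in the quote would be tracked separately, and its patience is not given in the excerpt.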