Spectral Clustering with Graph Neural Networks for Graph Pooling
Authors: Filippo Maria Bianchi, Daniele Grattarola, Cesare Alippi
ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We consider both supervised and unsupervised tasks to compare MinCutPool with traditional SC and with other GNN pooling strategies. The Appendix provides further details on the experiments and a schematic depiction of the architectures used in each task. In addition, the Appendix reports an additional experiment on supervised graph regression. |
| Researcher Affiliation | Academia | Filippo Maria Bianchi* (1), Daniele Grattarola* (2), Cesare Alippi (2, 3). 1: NORCE, the Norwegian Research Centre, Norway; 2: Faculty of Informatics, Università della Svizzera italiana, Lugano, Switzerland; 3: DEIB, Politecnico di Milano, Milano, Italy. Correspondence to: Filippo Maria Bianchi <filippombianchi@gmail.com>. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures). |
| Open Source Code | Yes | The implementation of MinCutPool is available both in Spektral (https://graphneural.network/layers/pooling/#mincutpool) and PyTorch Geometric (https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#torch_geometric.nn.dense.mincut_pool.dense_mincut_pool); a usage sketch of the PyTorch Geometric operator follows the table. |
| Open Datasets | Yes | We cluster the nodes of three citation networks: Cora, Citeseer, and Pubmed. We test the models on different graph classification datasets. For featureless graphs, we used the node degree information and the clustering coefficient as surrogate node features. We evaluate model performance with a 10-fold train/test split, using 10% of the training set in each fold as validation for early stopping. |
| Dataset Splits | Yes | We evaluate model performance with a 10-fold train/test split, using 10% of the training set in each fold as validation for early stopping (a sketch of this split protocol follows the table). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper states 'The implementation of MinCutPool is available both in Spektral and PyTorch Geometric' but does not specify version numbers for these or any other software components. |
| Experiment Setup | No | The paper mentions some configuration details like dropping half the nodes and using a fixed network architecture and early stopping, but it lacks specific hyperparameters such as learning rate, batch size, optimizer settings, or explicit training schedules. |
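To make the Open Source Code pointer concrete, below is a minimal usage sketch of the `dense_mincut_pool` operator from PyTorch Geometric linked above. The tensor sizes, the random toy inputs, and the single `Linear` assignment layer are illustrative assumptions, not the authors' architecture or hyperparameters.

```python
# Hedged sketch: minimal call to PyTorch Geometric's dense_mincut_pool.
# Shapes and the Linear assignment layer are assumptions for illustration.
import torch
from torch_geometric.nn import dense_mincut_pool

B, N, F, K = 8, 30, 16, 5              # batch size, nodes, features, clusters (assumed)
x = torch.randn(B, N, F)               # dense node features
adj = torch.rand(B, N, N)
adj = (adj + adj.transpose(1, 2)) / 2  # symmetrize so the adjacency is undirected

# Cluster-assignment logits S come from a trainable layer on the node features;
# a single Linear layer stands in here for the paper's assignment MLP.
assign = torch.nn.Linear(F, K)
s = assign(x)                          # (B, N, K); softmax is applied inside the op

x_pool, adj_pool, mincut_loss, ortho_loss = dense_mincut_pool(x, adj, s)
print(x_pool.shape, adj_pool.shape)    # (B, K, F) and (B, K, K)
```

The two returned auxiliary terms (the cut and orthogonality losses) are intended to be added to the downstream task loss, consistent with the paper's formulation of MinCutPool as a differentiable relaxation of spectral clustering.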
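The Dataset Splits row quotes a 10-fold train/test protocol with 10% of each fold's training portion held out for validation and early stopping. The sketch below shows one way to generate such splits with scikit-learn; the dataset size, random seeds, and the placeholder training call are assumptions for illustration, not the authors' code.

```python
# Hedged sketch of the quoted evaluation protocol: 10-fold train/test split,
# with 10% of each fold's training indices reserved for early stopping.
import numpy as np
from sklearn.model_selection import KFold, train_test_split

n_graphs = 1000                                   # assumed dataset size
indices = np.arange(n_graphs)

kf = KFold(n_splits=10, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kf.split(indices)):
    # Hold out 10% of this fold's training set as a validation split
    tr_idx, val_idx = train_test_split(train_idx, test_size=0.1, random_state=0)
    print(f"fold {fold}: train={len(tr_idx)}, val={len(val_idx)}, test={len(test_idx)}")
    # train_and_evaluate(tr_idx, val_idx, test_idx)  # hypothetical call
```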