Self-Attention Graph Pooling

Authors: Junhyun Lee, Inyeop Lee, Jaewoo Kang

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results demonstrate that our method achieves superior graph classification performance on the benchmark datasets using a reasonable number of parameters.
Researcher Affiliation | Academia | Department of Computer Science and Engineering, Korea University, Seoul, Korea.
Pseudocode | No | The paper contains mathematical equations and figures illustrating the model architecture, but no explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | The code is available on Github: https://github.com/inyeoplee77/SAGPool
Open Datasets | Yes | Five datasets with a large number of graphs (> 1k) were selected from the benchmark datasets (Kersting et al., 2016). The statistics of the datasets are summarized in Table 1. (See the dataset-loading sketch below the table.)
Dataset Splits | Yes | In our experiments, we evaluated the pooling methods over 20 random seeds using 10-fold cross-validation. A total of 200 test results were used to obtain the final accuracy of each method on each dataset. 10 percent of the training data was used for validation during training. (See the split sketch below the table.)
Hardware Specification | Yes | Our experiments were run on an NVIDIA Titan Xp GPU.
Software Dependencies | No | We implemented all the baselines and SAGPool using PyTorch (Paszke et al., 2017) and the geometric deep learning extension library provided by Fey & Lenssen. While software names are mentioned, specific version numbers are not provided. (See the version-check sketch below the table.)
Experiment Setup | Yes | We used the Adam optimizer (Kingma & Ba, 2014), an early stopping criterion, patience, and a hyperparameter selection strategy for both the global pooling architecture and the hierarchical pooling architecture. We stopped training if the validation loss did not improve for 50 epochs, with a maximum of 100k epochs... The optimal hyperparameters were obtained by grid search; the search ranges are summarized in Table 2. (See the training-loop sketch below the table.)
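
The Open Datasets row points to the public graph classification benchmarks of Kersting et al. (2016). As a hedged sketch, these collections can be loaded through PyTorch Geometric's TUDataset class; the dataset name 'PROTEINS' below is only an illustrative choice and is not a claim about the paper's Table 1.

```python
# Illustrative sketch: loading one of the public benchmark collections
# (Kersting et al., 2016) via PyTorch Geometric. The name 'PROTEINS' is
# an assumption for demonstration purposes only.
from torch_geometric.datasets import TUDataset
from torch_geometric.loader import DataLoader  # torch_geometric.data.DataLoader in older releases

dataset = TUDataset(root='data/TUDataset', name='PROTEINS')
print(f'{len(dataset)} graphs, {dataset.num_classes} classes, '
      f'{dataset.num_features} node features')

loader = DataLoader(dataset, batch_size=128, shuffle=True)
```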
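The Dataset Splits row describes 10-fold cross-validation repeated over 20 random seeds (20 x 10 = 200 test results), with 10 percent of the training data held out for validation. A minimal sketch of that protocol, assuming scikit-learn splitters and a user-supplied train_and_evaluate routine (both are assumptions, not part of the authors' code):

```python
# Sketch of the evaluation protocol: 10-fold CV repeated over 20 seeds,
# with 10% of each training fold held out for validation.
import numpy as np
from sklearn.model_selection import StratifiedKFold, train_test_split

def evaluate(labels, train_and_evaluate, num_seeds=20, num_folds=10):
    accuracies = []
    indices = np.arange(len(labels))
    for seed in range(num_seeds):
        skf = StratifiedKFold(n_splits=num_folds, shuffle=True, random_state=seed)
        for train_idx, test_idx in skf.split(indices, labels):
            # hold out 10% of the training fold for validation
            train_idx, val_idx = train_test_split(
                train_idx, test_size=0.1, random_state=seed,
                stratify=labels[train_idx])
            accuracies.append(train_and_evaluate(train_idx, val_idx, test_idx))
    # 20 seeds x 10 folds = 200 test results averaged per dataset
    return np.mean(accuracies), np.std(accuracies)
```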
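The Software Dependencies row notes that PyTorch and the PyTorch Geometric extension library are named without version numbers. A small sketch for recording the versions actually installed when re-running the code (the versions used by the authors remain unknown):

```python
# Record the installed versions of the packages named in the paper;
# the paper itself does not pin versions.
import torch
import torch_geometric

print('torch:', torch.__version__)
print('torch_geometric:', torch_geometric.__version__)
```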
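The Experiment Setup row quotes Adam optimization, early stopping when the validation loss has not improved for 50 epochs, and a cap of 100k epochs. A minimal sketch of such a loop, assuming a PyTorch Geometric style model and data loaders; the learning rate and other hyperparameters below are placeholders, since the paper selects them by grid search over the ranges in its Table 2.

```python
# Sketch of the quoted training setup: Adam, patience-based early
# stopping (50 epochs), and a maximum of 100k epochs. Model, loaders,
# and learning rate are illustrative placeholders.
import torch

def train(model, train_loader, val_loader, lr=5e-4, patience=50, max_epochs=100_000):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.CrossEntropyLoss()
    best_val, epochs_without_improvement = float('inf'), 0

    for epoch in range(max_epochs):
        model.train()
        for batch in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(batch), batch.y)
            loss.backward()
            optimizer.step()

        model.eval()
        with torch.no_grad():
            val_loss = sum(criterion(model(batch), batch.y).item()
                           for batch in val_loader)

        if val_loss < best_val:
            best_val, epochs_without_improvement = val_loss, 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break  # early stopping on stalled validation loss
    return model
```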