ASAP: Adaptive Structure Aware Pooling for Learning Hierarchical Graph Representations
Authors: Ekagra Ranjan, Soumya Sanyal, Partha Talukdar
AAAI 2020, pp. 5470-5477
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments on multiple datasets and theoretical analysis, we motivate our choice of the components used in ASAP. Our experimental results show that combining existing GNN architectures with ASAP leads to state-of-the-art results on multiple graph classification benchmarks. |
| Researcher Affiliation | Academia | (1) Indian Institute of Technology, Guwahati; (2) Indian Institute of Science, Bangalore |
| Pseudocode | Yes | Please refer to Appendix Sec. I for a pseudo code of the working of ASAP. |
| Open Source Code | Yes | We make the source code of ASAP available to encourage reproducible research: https://github.com/malllabiisc/ASAP |
| Open Datasets | Yes | D&D (Shervashidze et al. 2011; Dobson and Doig 2003) and PROTEINS (Dobson and Doig 2003; Borgwardt et al. 2005) are datasets containing proteins as graphs. NCI1 (Wale, Watson, and Karypis 2008) and NCI109 are datasets for anticancer activity classification. FRANKENSTEIN (Orsini, Frasconi, and De Raedt 2015) contains molecules as graphs for mutagen classification. |
| Dataset Splits | Yes | Following SAGPool (Lee, Lee, and Kang 2019), we conduct our experiments using 10-fold cross-validation and report the average accuracy over 20 random seeds. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used for running its experiments, such as specific GPU or CPU models. |
| Software Dependencies | No | The paper does not provide specific version numbers for key software components or libraries used for the experiments. |
| Experiment Setup | Yes | For ASAP, we choose k = 0.5 and h = 1 to be consistent with baselines. Following SAGPool (Lee, Lee, and Kang 2019), we conduct our experiments using 10-fold cross-validation and report the average accuracy over 20 random seeds. Please refer to Appendix Sec. A for further details on hyperparameter tuning. (Illustrative sketches of this protocol and of the k = 0.5 pooling ratio follow the table.) |
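The Dataset Splits and Experiment Setup rows describe the evaluation protocol: 10-fold cross-validation with accuracy averaged over 20 random seeds. The sketch below shows one plausible way to wire this up; `train_and_evaluate` is a hypothetical placeholder for training and scoring a model on one fold, and re-drawing the folds for every seed is an assumption here, since the table does not say whether the folds are fixed or reshuffled.

```python
# Sketch of a 10-fold CV x 20-seed evaluation loop (assumptions noted above).
import numpy as np
from sklearn.model_selection import StratifiedKFold


def mean_cv_accuracy(graphs, labels, train_and_evaluate, n_seeds=20, n_folds=10):
    """Average test accuracy over all (seed, fold) combinations."""
    accuracies = []
    for seed in range(n_seeds):
        # Hypothetical choice: reshuffle the folds with each seed.
        skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
        for train_idx, test_idx in skf.split(np.zeros(len(labels)), labels):
            # train_and_evaluate is user-supplied: trains on train_idx and
            # returns the test accuracy on test_idx.
            accuracies.append(train_and_evaluate(graphs, labels, train_idx, test_idx, seed))
    return float(np.mean(accuracies))
```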
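For the k = 0.5 setting quoted under Experiment Setup, PyTorch Geometric ships an `ASAPooling` layer (distinct from the authors' repository linked above) whose `ratio` argument plays the role of k. The model below is a minimal illustrative sketch assuming that layer's usual interface; the layer sizes and overall architecture are not the authors' exact configuration.

```python
# Minimal graph-classification model with one ASAP pooling stage (illustrative).
import torch
import torch.nn.functional as F
from torch_geometric.nn import ASAPooling, GCNConv, global_mean_pool


class ASAPNet(torch.nn.Module):
    def __init__(self, in_channels, hidden_channels, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_channels, hidden_channels)
        self.pool1 = ASAPooling(hidden_channels, ratio=0.5)  # keep half the clusters (k = 0.5)
        self.conv2 = GCNConv(hidden_channels, hidden_channels)
        self.lin = torch.nn.Linear(hidden_channels, num_classes)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        # ASAPooling returns the coarsened graph: node features, connectivity,
        # edge weights, batch vector, and indices of the retained clusters.
        x, edge_index, edge_weight, batch, perm = self.pool1(x, edge_index, batch=batch)
        x = F.relu(self.conv2(x, edge_index, edge_weight))
        x = global_mean_pool(x, batch)  # graph-level readout
        return self.lin(x)
```

The h = 1 neighbourhood size mentioned in the Experiment Setup row is not exposed as a parameter in this sketch and is left out.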