Topological Pooling on Graphs
Authors: Yuzhou Chen, Yulia R. Gel
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on 11 diverse benchmark datasets against 18 baseline models in conjunction with graph classification tasks indicate that Wit-TopoPool significantly outperforms all competitors across all datasets. |
| Researcher Affiliation | Academia | Yuzhou Chen1, Yulia R. Gel2,3 1Department of Computer and Information Sciences, Temple University 2Department of Mathematical Sciences, University of Texas at Dallas 3National Science Foundation |
| Pseudocode | No | The paper does not contain explicitly labeled pseudocode or algorithm blocks. It describes methodologies using mathematical formulas and text. |
| Open Source Code | Yes | The source code is available at https://github.com/topologicalpooling/TopologicalPool.git. |
| Open Datasets | Yes | We validate Wit-TopoPool on graph classification tasks using the following 11 real-world graph datasets (for further details, please refer to Appendix B): (i) 3 chemical compound datasets: MUTAG, BZR, and COX2, where graphs represent chemical compounds, nodes are different atoms, and edges are chemical bonds; (ii) 5 molecular compound datasets: PROTEINS, PTC_MR, PTC_MM, PTC_FM, and PTC_FR, where nodes are secondary structure elements and edge existence between two nodes implies that the nodes are adjacent in an amino acid sequence or three nearest-neighbor interactions; (iii) 2 internet movie databases: IMDB-BINARY (IMDB-B) and IMDB-MULTI (IMDB-M), where nodes are actors/actresses and there is an edge if the two people appear in the same movie; and (iv) 1 Reddit (an online aggregation and discussion website) discussion threads dataset: REDDIT-BINARY (REDDIT-B), where nodes are Reddit users and edges are direct replies in the discussion threads. |
| Dataset Splits | Yes | For all graphs, we use different random seeds for 90/10 random training/test split. |
| Hardware Specification | Yes | We conduct our experiments on two NVIDIA GeForce RTX 3090 GPU cards with 24GB memory. |
| Software Dependencies | No | The paper mentions using Adam optimizer and cross-entropy loss function but does not specify software versions for libraries like PyTorch, TensorFlow, or Python. |
| Experiment Setup | Yes | Wit-TopoPool is trained end-to-end by using the Adam optimizer, and the optimal trainable weight matrices are learned by minimizing the cross-entropy loss function. The tuning of Wit-TopoPool on each dataset is done via grid hyperparameter configuration search over a fixed set of choices, and the same cross-validation setup is used to tune baselines. In our experiments, for all datasets, we set the grid size of WPI to 5 × 5, and the MLP has 2 layers, where BatchNorm and Dropout with dropout ratio p_drop ∈ {0, 0.1, ..., 0.5} are applied after the first layer of the MLP. |
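The training setup quoted in the Experiment Setup row (a 2-layer MLP readout with BatchNorm and Dropout after the first layer, trained with Adam and cross-entropy) can be sketched as follows. This is a minimal illustration assuming PyTorch; the dimensions, learning rate, and the `MLPHead`/`train_step` names are placeholders, not taken from the paper or its released code.

```python
# Hedged sketch of the described training setup: 2-layer MLP with
# BatchNorm + Dropout after the first layer, Adam, cross-entropy loss.
# Input x stands in for pooled graph representations; all sizes are illustrative.
import torch
import torch.nn as nn

class MLPHead(nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.BatchNorm1d(hidden_dim),   # BatchNorm after the first layer
            nn.ReLU(),
            nn.Dropout(p_drop),           # p_drop tuned over {0, 0.1, ..., 0.5}
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def train_step(model, optimizer, criterion, x, y):
    """One end-to-end gradient step minimizing cross-entropy."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()

model = MLPHead(in_dim=64, hidden_dim=32, num_classes=2, p_drop=0.1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
x = torch.randn(16, 64)               # placeholder pooled features
y = torch.randint(0, 2, (16,))        # placeholder binary graph labels
loss = train_step(model, optimizer, criterion, x, y)
```

A grid search as described would simply loop this training over the candidate `p_drop` values (and any other hyperparameters) under the same cross-validation splits.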