Boosting Graph Pooling with Persistent Homology

Authors: Chaolong Ying, Xinjian Zhao, Tianshu Yu

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimentally, we apply our mechanism to a collection of graph pooling methods and observe consistent and substantial performance gain over several popular datasets, demonstrating its wide applicability and flexibility. In the experiments, we evaluate the benefits of persistent homology on several state-of-the-art graph pooling methods, with the goal of answering the following questions:
Researcher Affiliation | Academia | Chaolong Ying, Xinjian Zhao, Tianshu Yu; School of Data Science, The Chinese University of Hong Kong, Shenzhen; {chaolongying,xinjianzhao1}@link.cuhk.edu.cn, yutianshu@cuhk.edu.cn
Pseudocode | No | The paper describes methods and processes through text and equations but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Code is open-sourced at https://github.com/LOGO-CUHKSZ/TIP.git.
Open Datasets | Yes | To evaluate the capabilities of our model across diverse domains, we assess its performance on a variety of graph datasets commonly used in graph-related tasks. We select several benchmarks from the TU datasets [32], OGB datasets [20], and the ZINC dataset [43]. We use the default dataset settings from the PyG library.
Dataset Splits | Yes | In the graph classification task, all datasets are split into train (80%), validation (10%), and test (10%) data. (A minimal PyG loading-and-split sketch follows the table.)
Hardware Specification | Yes | The experiments are conducted using an AMD EPYC 7542 CPU and a single NVIDIA 3090 GPU.
Software Dependencies | No | The paper mentions "All the methods are implemented using PyTorch and PyG [37, 9]." but does not specify version numbers for PyTorch or PyG, which are crucial for reproducibility.
Experiment Setup | Yes | Hyperparameters. For dense pooling methods, the pooling ratio is chosen from [0.1, 0.5], the number of pooling layers is 2, and the hidden dimension is selected from {32, 64}. For the Graclus method we use 2 pooling layers, while for TopK we use 3 pooling layers with a pooling ratio of 0.8. The batch size for all models is uniformly set to 20, and the maximum number of training epochs is 1000. Following the evaluation protocol in [50, 30], we train all models using the Adam optimizer [24] and implement a learning rate decay mechanism, reducing the learning rate from 10^-3 to 10^-5 with a decay ratio of 0.5 and a patience of 10 epochs. Additionally, we use early stopping based on the validation accuracy with a patience of 50 epochs. (A minimal training-schedule sketch follows the table.)
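
The "Open Datasets" and "Dataset Splits" rows describe loading standard PyG benchmarks and an 80/10/10 random split with a batch size of 20. The sketch below illustrates one way this could look in PyTorch Geometric; the dataset name ("PROTEINS") and the random split are assumptions for illustration, since the paper only states that default PyG settings are used.

```python
# Hedged sketch: load a TU benchmark with PyG defaults and split 80/10/10.
import torch
from torch_geometric.datasets import TUDataset
from torch_geometric.loader import DataLoader

dataset = TUDataset(root="data/TU", name="PROTEINS")  # illustrative TU benchmark

n = len(dataset)
perm = torch.randperm(n)                        # random permutation of graph indices
train_idx = perm[: int(0.8 * n)]                # 80% train
val_idx = perm[int(0.8 * n): int(0.9 * n)]      # 10% validation
test_idx = perm[int(0.9 * n):]                  # 10% test

# Batch size of 20, matching the reported hyperparameters.
train_loader = DataLoader(dataset[train_idx], batch_size=20, shuffle=True)
val_loader = DataLoader(dataset[val_idx], batch_size=20)
test_loader = DataLoader(dataset[test_idx], batch_size=20)
```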
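The "Experiment Setup" row specifies Adam, a learning rate decayed from 10^-3 to 10^-5 with factor 0.5 and patience 10, early stopping on validation accuracy with patience 50, and at most 1000 epochs. The following is a minimal sketch of that schedule using standard PyTorch components; `model`, `train_one_epoch`, and `evaluate` are hypothetical placeholders (the paper does not publish its training loop in the text), and the loaders are assumed to come from the previous sketch.

```python
# Hedged sketch of the reported optimization schedule (not the authors' code).
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # `model` is a placeholder
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="max", factor=0.5, patience=10, min_lr=1e-5)

best_val_acc, epochs_since_best = 0.0, 0
for epoch in range(1000):                              # max 1000 epochs
    train_one_epoch(model, train_loader, optimizer)    # hypothetical helper
    val_acc = evaluate(model, val_loader)              # hypothetical helper
    scheduler.step(val_acc)                            # halve LR after 10 stagnant epochs

    if val_acc > best_val_acc:
        best_val_acc, epochs_since_best = val_acc, 0
    else:
        epochs_since_best += 1
        if epochs_since_best >= 50:                    # early stopping, patience 50
            break
```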