Graph Filtration Kernels

Authors: Till Schulz, Pascal Welke, Stefan Wrobel

AAAI 2022, pp. 8196-8203

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We empirically validate the expressive power of our graph kernels and show significant improvements over state-of-the-art graph kernels in terms of predictive performance on various real-world benchmark datasets." "We empirically validate our theoretical findings on the expressive power of our kernels and furthermore provide experiments on real-world benchmark datasets which show a favorable performance of our approach compared to state-of-the-art graph kernels."
Researcher Affiliation | Academia | Till Schulz (1), Pascal Welke (1), Stefan Wrobel (1,2,3); (1) Universität Bonn, Germany; (2) Fraunhofer IAIS, Sankt Augustin, Germany; (3) Fraunhofer Center for Machine Learning, Sankt Augustin, Germany; {schulzth, welke, wrobel}@cs.uni-bonn.de
Pseudocode | No | The paper describes algorithms and methods in prose and mathematical notation but does not contain explicitly labeled pseudocode or algorithm blocks. (An illustrative Weisfeiler-Lehman refinement sketch is given after the table.)
Open Source Code | Yes | Available at https://github.com/mlai-bonn/wl-filtration-kernel
Open Datasets | Yes | The experiments are conducted on the well-established molecular datasets DHFR, NCI1 and PTC-MR (obtained from Morris et al. 2020) as well as the large network benchmark datasets IMDB-BINARY (obtained from Morris et al. 2020) and EGO-1 to EGO-4. (See the dataset-loading sketch after the table.)
Dataset Splits | Yes | We measure the accuracies obtained by support vector machines (SVM) using a 10-fold stratified cross-validation. A grid search over sets of kernel-specific parameters is used for optimal training. We perform 10 such cross-validations and report the mean and standard deviation. (See the cross-validation sketch after the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, or other libraries).
Experiment Setup | No | The paper mentions a "grid search over sets of kernel specific parameters" for optimal training but does not provide the specific hyperparameter values or ranges used in the main text.
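
The Pseudocode row notes that the paper gives no explicit algorithm blocks. For reference only, below is a minimal, illustrative sketch of standard 1-dimensional Weisfeiler-Lehman (WL) color refinement, the relabeling subroutine that WL-based kernels such as this one build on. It is not the authors' implementation (see their repository for that); the function name wl_refine and the toy graph are made up for illustration.

```python
from collections import Counter

def wl_refine(adjacency, labels, iterations=2):
    """Illustrative 1-WL color refinement (not the authors' code).

    adjacency: list of neighbor lists; labels: initial node labels.
    Returns one label histogram per iteration; WL-based kernels typically
    compare graphs through such per-iteration label histograms.
    """
    histograms = [Counter(labels)]
    for _ in range(iterations):
        # Signature of a node: its own label plus the sorted multiset of neighbor labels.
        signatures = [
            (labels[v], tuple(sorted(labels[u] for u in neighbors)))
            for v, neighbors in enumerate(adjacency)
        ]
        # Compress distinct signatures to fresh integer labels.
        table = {sig: i for i, sig in enumerate(sorted(set(signatures)))}
        labels = [table[sig] for sig in signatures]
        histograms.append(Counter(labels))
    return histograms

# Toy example: a path graph 0 - 1 - 2 with identical initial labels.
print(wl_refine([[1], [0, 2], [1]], labels=[0, 0, 0]))
```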
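
The Open Datasets row lists benchmarks from the collection of Morris et al. (2020). Below is a hedged sketch of one possible way to fetch the named molecular and IMDB datasets; the use of torch_geometric is an assumption about tooling, not something the paper specifies, and the EGO-1 to EGO-4 datasets are not part of this loader and would need to be obtained separately (e.g., from the authors' repository).

```python
# Sketch only: one common way to obtain TUDataset benchmarks (Morris et al. 2020).
# The choice of torch_geometric is an assumption; the paper does not name a loader.
from torch_geometric.datasets import TUDataset

for name in ["DHFR", "NCI1", "PTC_MR", "IMDB-BINARY"]:
    dataset = TUDataset(root="data/TUDataset", name=name)
    print(f"{name}: {len(dataset)} graphs, {dataset.num_classes} classes")
# EGO-1 to EGO-4 are not in this collection and must be obtained separately.
```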
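
The Dataset Splits row describes SVM accuracies under 10-fold stratified cross-validation, repeated 10 times, with a grid search over kernel-specific parameters. The sketch below shows one reasonable reading of that protocol in scikit-learn, assuming a precomputed graph-kernel Gram matrix; the parameter grid (here only the SVM cost C) and the inner 5-fold search are placeholders, not the settings used in the paper.

```python
# Sketch of the evaluation protocol from the Dataset Splits row, assuming a
# precomputed Gram matrix K (n x n) and class labels y. The C grid is a placeholder.
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold, cross_val_score
from sklearn.svm import SVC

def evaluate_kernel(K, y):
    """Return mean and std accuracy over 10 repetitions of 10-fold stratified CV."""
    # Inner grid search tunes the SVM cost; kernel-specific parameters would
    # normally be tuned as well (omitted here, since the paper does not list them).
    model = GridSearchCV(
        SVC(kernel="precomputed"),
        param_grid={"C": [10.0 ** k for k in range(-3, 4)]},  # placeholder grid
        cv=5,
    )
    outer_cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
    scores = cross_val_score(model, K, y, cv=outer_cv)
    return scores.mean(), scores.std()
```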