Weisfeiler-Lehman Meets Gromov-Wasserstein

Authors: Samantha Chen, Sunhyuk Lim, Facundo Memoli, Zhengchao Wan, Yusu Wang

ICML 2022

Reproducibility assessment (variable, result, and supporting LLM response):
Research Type: Experimental
LLM Response: "6. Experimental Results. We provide some results showing the effectiveness of our WL distance in terms of comparing graphs. We conduct both 1-NN and SVM graph classification experiments and evaluate the performance of both our lower bound, d^(k)_WL,LB, and our WL distance, d^(k)_WL, against the WWL kernel/distance (Togninalli et al., 2019), the WL kernel, and the WL optimal assignment (WL-OA) kernel (Kriege et al., 2016)."

Researcher Affiliation: Academia
LLM Response: "1 Department of Computer Science and Engineering, University of California San Diego, La Jolla, California, USA; 2 Max Planck Institute for Mathematics in the Sciences, Leipzig, Saxony, Germany; 3 Department of Mathematics and Department of Computer Science and Engineering, The Ohio State University, Columbus, Ohio, USA; 4 Halıcıoğlu Data Science Institute, University of California San Diego, La Jolla, California, USA."

Pseudocode: Yes
LLM Response: "Algorithm 1: d^(k)_WL computation"

Open Source Code: Yes
LLM Response: "Our code is available at https://github.com/chens5/WLdistance"

Open Datasets: Yes
LLM Response: "We use several publicly available graph benchmark datasets from TUDatasets (Morris et al., 2020)."

Dataset Splits: No
LLM Response: The paper mentions running 1-NN and SVM graph classification experiments and cross-validating parameters, but it does not explicitly detail the train/validation/test splits (e.g., percentages or sample counts) needed for reproducibility.

Hardware Specification: No
LLM Response: No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments are provided.

Software Dependencies: No
LLM Response: The paper references other kernels and methods by citation but does not list specific software dependencies with version numbers (e.g., programming languages, libraries, or frameworks with their exact versions).

Experiment Setup: Yes
LLM Response: "For all of our experiments, we use q = 0.6 to transform every graph G into the Markov chain X_q(G). ... Additionally, for the SVM method, we cross-validate the parameter C ∈ {10^-3, ..., 10^3} and the parameter γ ∈ {10^-3, ..., 10^3}."
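The evaluation protocol described above (1-NN on a precomputed graph distance matrix, plus an SVM with C and γ cross-validated over {10^-3, ..., 10^3}) can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the random distance matrix stands in for pairwise d^(k)_WL values, and the kernel construction exp(-γ·d) is an assumed Laplacian-style choice.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder symmetric distance matrix for 40 "graphs" (zero diagonal).
# In practice this would hold d^(k)_WL between every pair of graphs.
n = 40
A = rng.random((n, n))
D = (A + A.T) / 2
np.fill_diagonal(D, 0.0)
y = rng.integers(0, 2, size=n)  # placeholder binary class labels

idx = np.arange(n)
train, test = train_test_split(idx, test_size=0.25, random_state=0, stratify=y)

# 1-NN classification directly on the precomputed distance matrix:
# rows index query graphs, columns index the training graphs.
knn = KNeighborsClassifier(n_neighbors=1, metric="precomputed")
knn.fit(D[np.ix_(train, train)], y[train])
knn_acc = knn.score(D[np.ix_(test, train)], y[test])

# SVM on an assumed kernel exp(-gamma * d), cross-validating C and gamma
# over the grid {10^-3, ..., 10^3} from the setup description.
grid = {"C": [10.0**e for e in range(-3, 4)]}
best = None
for gamma in (10.0**e for e in range(-3, 4)):
    K = np.exp(-gamma * D)  # sklearn slices precomputed kernels per CV fold
    svm = GridSearchCV(SVC(kernel="precomputed"), grid, cv=3)
    svm.fit(K[np.ix_(train, train)], y[train])
    acc = svm.score(K[np.ix_(test, train)], y[test])
    if best is None or acc > best[0]:
        best = (acc, gamma, svm.best_params_["C"])

print(knn_acc, best[0])
```

With random distances the accuracies are near chance; the point is only the mechanics: both classifiers consume precomputed pairwise information, so only the distance/kernel matrices (not raw graphs) are needed at evaluation time.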