A Gromov-Wasserstein Geometric View of Spectrum-Preserving Graph Coarsening

Authors: Yifan Chen, Rentian Yao, Yun Yang, Jie Chen

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The study includes a set of experiments to support the theory and method, including approximating the GW distance, preserving the graph spectrum, classifying graphs using spectral information, and performing regression using graph convolutional networks. Code is available at https://github.com/ychen-stat-ml/GW-Graph-Coarsening.
Researcher Affiliation | Collaboration | 1. Hong Kong Baptist University; 2. University of Illinois Urbana-Champaign; 3. MIT-IBM Watson AI Lab, IBM Research.
Pseudocode | Yes | Algorithm 1: Kernel graph coarsening (KGC).
Open Source Code | Yes | Code is available at https://github.com/ychen-stat-ml/GW-Graph-Coarsening.
Open Datasets | Yes | We evaluate graph coarsening methods, including ours, on eight benchmark graph datasets: MUTAG (Debnath et al., 1991; Kriege & Mutzel, 2012), PTC (Helma et al., 2001), PROTEINS (Borgwardt et al., 2005; Schomburg et al., 2004), MSRC (Neumann et al., 2016), IMDB (Yanardag & Vishwanathan, 2015), Tumblr (Oettershagen et al., 2020), AQSOL (Sorkun et al., 2019; Dwivedi et al., 2020), and ZINC (Irwin et al., 2012).
Dataset Splits | Yes | The authors apply scaffold splitting (Hu et al., 2020) to the AQSOL dataset in the ratio 8:1:1, yielding 7831, 996, and 996 samples for the train, validation, and test sets (an illustrative split sketch follows the table).
Hardware Specification | Yes | The algorithms tested are all implemented in unoptimized Python code and run with one core of a server CPU (Intel(R) Xeon(R) Gold 6240R CPU @ 2.40GHz) on Ubuntu 18.04.
Software Dependencies | No | The paper mentions 'unoptimized Python code' and 'Ubuntu 18.04'. It also mentions the OT package POT (Flamary et al., 2021, Python Optimal Transport). However, it does not specify version numbers for Python, POT, or any other libraries used (a minimal POT usage sketch follows the table).
Experiment Setup | Yes | For the learning rate strategy across all GCN models, we follow the existing setting: the initial learning rate is 1 × 10^-3, the reduce factor is set to 0.5, and the stopping learning rate is 1 × 10^-5. Also, all the GCN models tested in our experiments share the same architecture: the network has 4 layers and 108,442 tunable parameters. (A hedged learning-rate schedule sketch follows the table.)
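
The 8:1:1 AQSOL split reported in the Dataset Splits row can be illustrated with a plain index split. This is only a sketch of the proportions under stated assumptions: the paper uses scaffold splitting (Hu et al., 2020), which groups molecules by scaffold before assigning them to folds, so the helper `split_811` below is a hypothetical name and the exact counts will not match the authors' procedure.

```python
# Illustrative 8:1:1 index split. The paper's actual protocol is scaffold
# splitting (Hu et al., 2020), which partitions molecules by scaffold first;
# that step is omitted here, so the resulting counts will differ slightly
# from the reported 7831 / 996 / 996.
import numpy as np

def split_811(n_samples, seed=0):
    """Return (train, val, test) index arrays in roughly 8:1:1 proportions."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(0.8 * n_samples)
    n_val = int(0.1 * n_samples)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_idx, val_idx, test_idx = split_811(7831 + 996 + 996)
print(len(train_idx), len(val_idx), len(test_idx))
```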
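
The Software Dependencies row mentions the OT package POT (Flamary et al., 2021). The snippet below is a minimal sketch of computing a Gromov-Wasserstein coupling with POT; the random point clouds and pairwise-distance structure matrices are stand-ins chosen for illustration and do not reproduce the paper's specific graph representations or kernels.

```python
# Minimal sketch of a Gromov-Wasserstein computation with POT.
# The inputs below are generic stand-ins, not the paper's graph kernels.
import numpy as np
import ot

rng = np.random.default_rng(0)
X1 = rng.standard_normal((30, 2))   # stand-in "graph" 1 as a point cloud
X2 = rng.standard_normal((20, 2))   # stand-in "graph" 2 as a point cloud

C1 = ot.dist(X1, X1)                # intra-structure cost matrices
C2 = ot.dist(X2, X2)
p = ot.unif(C1.shape[0])            # uniform node weights
q = ot.unif(C2.shape[0])

# Coupling matrix and the corresponding GW discrepancy value.
T = ot.gromov.gromov_wasserstein(C1, C2, p, q, loss_fun='square_loss')
gw_val = ot.gromov.gromov_wasserstein2(C1, C2, p, q, loss_fun='square_loss')
print(T.shape, gw_val)
```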
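
The Experiment Setup row reports the learning-rate strategy: start at 1 × 10^-3, reduce by a factor of 0.5, and stop once the rate reaches 1 × 10^-5. Below is a hedged PyTorch sketch of such a schedule using `ReduceLROnPlateau`; the placeholder model, the `patience` value, and the dummy training loop are assumptions for illustration, not the paper's 4-layer GCN setup.

```python
# Hedged sketch of the reported learning-rate strategy: start at 1e-3,
# halve on plateau (factor 0.5), stop once the rate drops below 1e-5.
# The model, validation loss, and patience value are placeholders.
import torch

model = torch.nn.Linear(16, 1)  # placeholder for the GCN
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.5, patience=10)

stop_lr = 1e-5
for epoch in range(1000):
    val_loss = torch.rand(1).item()  # placeholder validation loss
    scheduler.step(val_loss)         # halve the LR when val_loss plateaus
    current_lr = optimizer.param_groups[0]['lr']
    if current_lr < stop_lr:         # stopping criterion on the learning rate
        break
```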