Graph Coarsening with Neural Networks

Authors: Chen Cai, Dingkang Wang, Yusu Wang

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Through extensive experiments on both synthetic and real networks, we demonstrate that our method significantly improves common graph coarsening methods under various metrics, reduction ratios, graph sizes, and graph types.
Researcher Affiliation | Academia | University of California, San Diego, c1cai@ucsd.edu; Ohio State University, wang.6150@osu.edu; University of California, San Diego, yusuwang@ucsd.edu
Pseudocode | Yes | Algorithm 1: Iterative algorithm for edge weight optimization
Open Source Code | No | The paper does not provide an explicit statement about, or link to, open-source code for the described methodology.
Open Datasets | Yes | We use the following synthetic graphs: Erdős-Rényi graphs (ER), Barabási-Albert graphs (BA), Watts-Strogatz graphs (WS), and random geometric graphs (GEO). ... We test on five real networks: Shape, PubMed, Coauthor-CS (CS), Coauthor-Physics (Physics), and Flickr (the largest, with 89k vertices). ... PubMed (Sen et al., 2008) ... Flickr (Zeng et al., 2019)
Dataset Splits | Yes | We randomly sample 25 graphs of size {512, 612, 712, ..., 2912} from different generative models. If a graph is disconnected, we keep its largest component. We train GOREN on the first 5 graphs, use 5 of the remaining 20 graphs as the validation set, and use the remaining 15 as test graphs.
Hardware Specification | Yes | All experiments are performed on a single machine with an Intel Xeon CPU E5-2630 v4 @ 2.20GHz × 40 and 64GB RAM.
Software Dependencies | No | The paper mentions software such as PyTorch and PyTorch Geometric (with citations) and the Adam optimizer, but does not provide explicit version numbers for these dependencies (e.g., PyTorch 1.x).
Experiment Setup | Yes | All models are trained with the Adam optimizer with a learning rate of 0.001 and batch size 600. ... We set the number of layers to be 3 and the embedding dimension to be 50. ... epochs: 50 for synthetic graphs and 30 for real networks. ... walk length: 5000 for real networks. ... number of eigenvectors k: 40 for synthetic graphs and 200 for real networks. ... batch size: 600 ... learning rate: 0.001
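The four synthetic graph families quoted above (ER, BA, WS, GEO) all have standard generators in NetworkX. The sketch below shows how such graphs could be sampled, including the largest-connected-component step the report describes; the specific generator parameters (edge probability, attachment count, radius, etc.) are illustrative assumptions, as the report does not state the values used.

```python
import networkx as nx

def sample_synthetic_graph(model, n, seed=0):
    """Sample one graph of size n from the named generative model.

    NOTE: generator parameters (p, m, k, radius) are assumptions for
    illustration; the report does not quote the values the paper used.
    """
    if model == "ER":    # Erdős-Rényi
        g = nx.erdos_renyi_graph(n, p=0.01, seed=seed)
    elif model == "BA":  # Barabási-Albert
        g = nx.barabasi_albert_graph(n, m=4, seed=seed)
    elif model == "WS":  # Watts-Strogatz
        g = nx.watts_strogatz_graph(n, k=6, p=0.1, seed=seed)
    elif model == "GEO": # random geometric graph
        g = nx.random_geometric_graph(n, radius=0.1, seed=seed)
    else:
        raise ValueError(f"unknown model: {model}")
    # As stated in the report: if the graph is disconnected,
    # keep only the largest connected component.
    largest = max(nx.connected_components(g), key=len)
    return g.subgraph(largest).copy()
```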
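The dataset split quoted in the table (25 sampled graphs; 5 train, 5 validation, 15 test) can be sketched as follows. Which 5 of the remaining 20 graphs form the validation set is not specified in the report, so this sketch simply takes the next 5 in order.

```python
def split_graphs(graphs):
    """Split 25 sampled graphs as described in the report:
    first 5 for training, 5 of the remaining 20 for validation,
    and the last 15 for testing.

    NOTE: taking the *next* 5 in order as validation is an assumption;
    the report only says "5 graphs from the rest 20 graphs".
    """
    assert len(graphs) == 25
    train, rest = graphs[:5], graphs[5:]
    val, test = rest[:5], rest[5:]
    return train, val, test

# Graph sizes {512, 612, ..., 2912} — 25 sizes in steps of 100.
sizes = list(range(512, 3000, 100))
```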