UGC: Universal Graph Coarsening

Authors: Mohit Kataria, Sandeep Kumar, Jayadeva

NeurIPS 2024

Each entry below gives a reproducibility variable, the assessed result, and the supporting LLM response.
Research Type: Experimental
LLM Response: Results on benchmark datasets demonstrate that UGC preserves spectral similarity while coarsening. In comparison to existing methods, UGC is 4× to 15× faster, has lower eigen-error, and yields superior performance on downstream processing tasks even at 70% coarsening ratios.
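The paper's exact eigen-error definition is not quoted above, so the following is a minimal sketch of one common way to measure spectral similarity: compare the smallest k eigenvalues of the normalized Laplacians of the original and coarsened graphs. Function names and the metric itself are illustrative assumptions, not the authors' code.

```python
# Sketch (assumption): relative deviation between the smallest-k Laplacian
# eigenvalues of a graph and its coarsened version. This illustrates the
# idea of "eigen-error", not the paper's exact definition.
import numpy as np
import networkx as nx

def laplacian_eigenvalues(g: nx.Graph, k: int) -> np.ndarray:
    """Smallest k eigenvalues of the normalized Laplacian of g."""
    lap = nx.normalized_laplacian_matrix(g).toarray()
    return np.linalg.eigvalsh(lap)[:k]

def eigen_error(original: nx.Graph, coarsened: nx.Graph, k: int = 10) -> float:
    ev_o = laplacian_eigenvalues(original, k)
    ev_c = laplacian_eigenvalues(coarsened, k)
    # Skip the trivial zero eigenvalue shared by all graph Laplacians.
    return float(np.abs(ev_o[1:] - ev_c[1:]).sum() / np.abs(ev_o[1:]).sum())
```
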
Researcher Affiliation: Academia
LLM Response: Yardi School of Artificial Intelligence; Department of Electrical Engineering; Bharti School of Telecommunication Technology and Management, Indian Institute of Technology Delhi.
Pseudocode: Yes
LLM Response: The pseudocode for UGC is listed in Algorithm 1.
Open Source Code: No
LLM Response: The paper states "Code is available at UGC" in a footnote, but does not provide a direct, unambiguous link or specific access instructions within the paper itself, so it does not satisfy the "concrete access" requirement.
Open Datasets: Yes
LLM Response: Our experiments cover widely adopted benchmarks, including Cora, Citeseer, Pubmed [36], CS, Physics [37], and DBLP [38]. Additionally, UGC effectively coarsens large datasets like Flickr, Yelp [39], and Reddit [40], which were previously challenging for existing techniques. We also present datasets like Squirrel, Chameleon, Texas, Film, and Wisconsin [11, 12, 16, 17], characterized by dominant heterophilic factors.
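The paper does not state which loaders were used; as an assumption, most of the cited benchmarks are available through PyTorch Geometric. A minimal sketch of loading a few of them (dataset and root names are illustrative):

```python
# Sketch (assumption): loading some of the cited benchmarks with
# PyTorch Geometric; the paper does not specify its data-loading tooling.
from torch_geometric.datasets import Planetoid, Coauthor, WikipediaNetwork

cora = Planetoid(root="data/Planetoid", name="Cora")[0]            # homophilic citation graph
physics = Coauthor(root="data/Coauthor", name="Physics")[0]        # co-authorship graph
squirrel = WikipediaNetwork(root="data/Wiki", name="squirrel")[0]  # heterophilic web graph

print(cora.num_nodes, cora.num_edges)
```
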
Dataset Splits: Yes
LLM Response: We randomly split the data 60%/20%/20% into training, validation, and test sets.
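The paper's actual splitting code is not shown, so here is a minimal sketch of a 60/20/20 random split over node indices matching the ratios quoted above; the function name and seed are illustrative.

```python
# Sketch (assumption): a 60/20/20 random node split, matching the quoted
# ratios; not the authors' code.
import numpy as np

def random_split(num_nodes: int, seed: int = 0):
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)
    n_train = int(0.6 * num_nodes)
    n_val = int(0.2 * num_nodes)
    train = perm[:n_train]
    val = perm[n_train:n_train + n_val]
    test = perm[n_train + n_val:]
    return train, val, test

train_idx, val_idx, test_idx = random_split(2708)  # e.g. Cora has 2708 nodes
```
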
Hardware Specification: Yes
LLM Response: All experiments for this work were performed on a desktop with an Intel Xeon W-295 CPU and 64 GB of RAM, using a Python environment.
Software Dependencies: No
LLM Response: The paper mentions using a 'Python environment' and lists GNN models (GCN, GraphSAGE, GIN, GAT) but does not provide specific version numbers for any software libraries or dependencies.
Experiment Setup: Yes
LLM Response: We employed a single-hidden-layer GCN model with standard hyperparameter values [13] (see Appendix H) for the node-classification task.

Table 8: GNN model parameters.
MODEL      HIDDEN LAYERS  LEARNING RATE  DECAY   EPOCHS
GCN        {64, 64}       0.003          0.0005  500
GRAPHSAGE  {64, 64}       0.003          0.0005  500
GIN        {64, 64}       0.003          0.0005  500
GAT        {64, 64}       0.003          0.0005  500
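A hedged sketch of a GCN matching the Table 8 settings (hidden size 64, learning rate 0.003, weight decay 0.0005, 500 epochs), built with PyTorch Geometric's GCNConv; the model class and training loop are illustrative, not the authors' implementation.

```python
# Sketch (assumption): a two-layer GCN and training loop mirroring the
# Table 8 hyperparameters; not the authors' code.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GCN(torch.nn.Module):
    def __init__(self, in_dim: int, hidden: int, num_classes: int):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

def train(model, data, epochs: int = 500):
    # lr and weight decay taken from Table 8.
    opt = torch.optim.Adam(model.parameters(), lr=0.003, weight_decay=0.0005)
    for _ in range(epochs):
        model.train()
        opt.zero_grad()
        out = model(data.x, data.edge_index)
        loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
        loss.backward()
        opt.step()
```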